Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Feed aggregator

What Models Should be Used to Create Requirements in an Agile Project?

Software Requirements Blog - Seilevel.com - Thu, 08/07/2014 - 17:00
This is a question that came up repeatedly in the classes I have taught these past few months.  My answer is always the same – “any and all models that are needed to define the problem space and derive the requirements.”  The reaction to this is almost always identical – “but I thought we ONLY […]
Categories: Requirements

A Valuable Sprint Review (a.k.a. Demo): How To

Xebia Blog - Thu, 08/07/2014 - 10:05

A valuable Sprint Review (from now on in this blog referred to as Demo) can be built in three steps. It starts during the Sprint planning session with agreeing on and understanding the user stories on the Sprint backlog. Then, during the Sprint, the team constantly exchanges ideas and results of the realisation of the stories. Finally, during the demo itself, the Product Owner and the rest of the team demo the stories to the stakeholders to show the value delivered and to invite feedback.

Planning for a good demo

During the planning session, it is imperative that the Product Owner and the rest of the team understand the stories that will be picked up. This sounds obvious, but it is often not the case. Stories might be so technical that the Product Owner is disconnected, or so high level that it is hard to determine what needs to be done.

Make sure stories are formulated from the perspective of an end-user of the functionality that will be delivered. This could be an actual user, a system that picks up whatever result is created or any other manifestation of who or what will use the result of the story.

Also take care to get the acceptance criteria clear. This way it will be clear to developers what to build, to testers what to test for and to designers what to design. It will also help the Product Owner get a better idea of what is in scope and what might have to be defined in a new or separate user story.

It is important that everyone understands the context in which the story ‘lives’. What part of the system is touched (end-to-end is preferred but not always possible), which parties are affected by the change, what prerequisites are needed, etc.

Building for a great demo

When the whole team stays in constant contact about intermediate results and decisions taken while the value of each story is being created, everyone will be able to add to that value and be aware of what the result of the story will be. It is very important that 'the whole team' includes the Product Owner. When the PO sees the intermediate results, she or he can already form an image of what the end result will be like. The PO can also contact stakeholders that might have an opinion about what is being created and, when needed, adjust the end result to match expectations.

Delivering a valuable demo

In the demo, the Product Owner should present to the stakeholders the value of each user story that has been delivered. So, per story, explain what has changed from the perspective of the end-user and have the rest of the team show this. Also, when stories are not done, explain which (sub-)functionality is not yet finished. Make sure to ask for feedback from the end-user or other stakeholders on what is demonstrated.

Conclusion

The value of the demo depends largely on the cooperation of the entire team. When the Product Owner and the rest of the team work together on understanding what will be delivered and help each other to get the most value from each story delivered, the demo will be focused, valuable and fun.

Seven Deadly Sins of Metrics Programs: Sloth

Sloth plagues many measurement programs as they age. As time goes by, it is easy for practitioners to drift away from the passionate pursuit of transforming data into knowledge. Sloth in measurement programs is typically not caused by laziness. Leaders of measurement groups begin as true believers, full of energy. However, over time, many programs fall prey to wandering relevance. When relevance is allowed to waver, it is very difficult to maintain the same level of energy as when the program was new and shiny. Relevance can slip away if measurement goals are not periodically challenged and validated. An overall reduction in energy can occur even when goals are synchronized, if there is a conflict between any of the stakeholder classes (the measurement team, management or the measured) over how the data will be used and analyzed. Your energy will also wane if your work results in public floggings or fire drills (at the very least it will make you unpopular).

The drift into sloth may reflect a metrics palette that is not relevant to the organization’s business and is therefore unlikely to produce revelations that create excitement and interest. This can cause a cascade of further issues. Few metrics programs begin life by selecting irrelevant metrics, except by mistake; however, over time relevance can wander as goals and organizational needs change. Without consistent review, relevance will wane and it will be easy for metrics personnel to lose interest and become indifferent and disengaged.

To avoid sloth due to drifting goals, or to reclaim your program from it, synchronize measurement goals with the organization’s goals periodically. I suggest mapping each measurement goal and measure to the organization’s goals. If a direct link can’t be traced, replace the measure. Note: measurement goals should be reviewed and validated any time a significant management change occurs.

When usage is the culprit, your job is to counsel all stakeholders on proper usage. However, if management wants to use measurement as a stick, it is their prerogative. Your prerogative is to change fields or to act out and accept the consequences. If the usage is a driver for lack of energy, you probably failed much earlier in the measurement program and turning the ship will be very difficult. Remember that it pays to spend time counseling the organization about how to use measurement data from day one rather than getting trapped in a reactionary mode.

The same symptoms occur when management is either disinterested (not engaged and not disposed positively or negatively toward the topic) or has become uninterested (disengaged). The distinction between disinterested and uninterested is important because the solutions are different. Disinterest requires marketing to give stakeholders a reason to care and to feel connected. A stakeholder that has become uninterested needs to be re-engaged by providing information that makes their decisions matter. Whatever the reason for actively disengaging or losing interest, losing passion for metrics will sap the vitality of your program and begin a death spiral. Keep your metrics relevant, and that relevance will provide protection against waning interest. Metrics professionals should ensure there is an explicit linkage between the metrics palette and the business goals of the organization. Periodically audit your metrics program. As part of the audit, map the linkages between each metric and the organization’s business goals. Make sure you are passionate about what you do. Sharing your passion for developing knowledge and illustrating truth will help generate a community of need and support.

Synchronizing goals, making metrics relevant and instilling passion may not immunize your metrics program against failure, but they will certainly stave off the deadly sin of sloth. If you can’t generate passion, or can’t generate the information and knowledge that make the program relevant, consider a new position, because in the long run not making the change isn’t really an option.


Categories: Process Management

Scrumban

Xebia Blog - Wed, 08/06/2014 - 19:19

Scrum has become THE revolution in the world of software development. The main philosophy behind scrum is accepting that a problem cannot be fully understood or defined at the start; scrum focuses on maximizing the team's ability to deliver quickly and respond to emerging requirements. It was truly refreshing at a time when projects were ruled by procedure and MS Project planning. Because of scrum:

  1. Projects can deliver what the customer needs, not just what he thought he wanted.
  2. Teams are efficient. They work as a unit to reach a common goal.
  3. We have better project roles (like a product owner and scrum master), ceremonies (like daily stand-ups, grooming) and a scrumboard.

But the central question is: "are we there yet"? And the answer is: "No!". We can optimize scrum by mixing it with kanban, which leads to scrumban.

A kanban introduction for scrummers

Whereas scrum is a software development framework in the widest sense of the term, kanban is a method. It does not, for instance, define ceremonies and project roles. There are two main principles in kanban I would like to highlight:

  1. Each column on the kanban board represents a step in the workflow. So, instead of the lanes 'todo', 'in progress' and 'done' as in scrum, you have 'defining', 'developing', 'testing' and 'deploying'. That is a more full-stack view; a task has a wider lifecycle. This concept is also called 'from concept to cash': from user research and strategic planning to data center operations and product support.
  2. Another principle of kanban is that it limits WIP (work in progress). An example of a WIP limit is limiting the number of cards allowed in each column. The advantage is that it reveals bottlenecks dynamically. Because of the WIP limit, kanban is a pull mechanism: for instance, a tester can only pick up the next work item if there are items available in the done column of the development lane and the WIP limit of the test lane isn't exceeded.

After all, kanban is incredibly simple, but at the same time incredibly powerful.
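To make the WIP limit and the pull mechanism concrete, here is a minimal, hypothetical sketch of two board columns; the class and method names are invented for illustration and are not taken from any kanban tool.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A deliberately simplified model of two workflow steps on a kanban board.
// An item may only be pulled into a column when that column's WIP limit allows it.
class Column {
    private final String name;
    private final int wipLimit;
    private final Deque<String> inProgress = new ArrayDeque<>();
    private final Deque<String> done = new ArrayDeque<>();

    Column(String name, int wipLimit) {
        this.name = name;
        this.wipLimit = wipLimit;
    }

    // Pull the oldest finished item from the upstream column, but only if our own
    // WIP limit has not been reached. This is what makes kanban a pull system:
    // when nothing can move, the bottleneck becomes visible on the board.
    boolean pullFrom(Column upstream) {
        if (inProgress.size() >= wipLimit || upstream.done.isEmpty()) {
            return false;
        }
        inProgress.addLast(upstream.done.removeFirst());
        return true;
    }

    void finishOldest() {
        if (!inProgress.isEmpty()) {
            done.addLast(inProgress.removeFirst());
        }
    }

    void start(String item) {
        inProgress.addLast(item); // used here only to seed the example board
    }

    @Override
    public String toString() {
        return name + " in-progress=" + inProgress + " done=" + done;
    }
}

public class KanbanSketch {
    public static void main(String[] args) {
        Column development = new Column("development", 3);
        Column test = new Column("test", 2);

        development.start("story-1");
        development.finishOldest();                      // story-1 is ready to be pulled
        System.out.println(test.pullFrom(development));  // true: test WIP limit not reached
        System.out.println(test);                        // test in-progress=[story-1] done=[]
    }
}
```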

What's wrong with scrum?

  1. The reason we moved to scrum is that we did not want the waterfall approach anymore. But, in fact, each sprint in scrum has become a mini waterfall. In each sprint teams plan, design, develop and test. At the end, the product owner reviews the completed work and decides which of the stories are shippable and ready for production. Those sprints can result in a staccato flow, which can be exhausting. With kanban we can make the way of working more agile, with the goal of a more continuous flow. Compare it with running a marathon: you don't run it as a series of 200-meter sprints, but at a constant pace.
  2. Scrum is a push mechanism and therefore 'pushes' the quality out of your product. When a sprint backlog is defined, the team is asked for a commitment. Whatever happens, the team must satisfy its commitment, and at the end of the sprint the product owner must grant 'decharge' (formal approval), or else the team has failed. No team wants to fail publicly, so most of the time, at the end of the sprint, teams take shortcuts to meet the deadline. Those shortcuts kill quality! Asking for commitment is like not trusting the intrinsic motivation of the team. The real commitment is visible during each standup: team members have to tell each other what they did the day before, and if someone is working on a story for too long, another team member will push back.
  3. One of the reasons we do scrum is that it is better to start immediately instead of doing an estimation and a feasibility study upfront: the estimation at the start is not reliable, the feasibility study is mostly a waste of time, and the project almost always gets executed once the study is completed anyway. But aren't we making the same mistake in scrum with the grooming and 'ready' sessions, which cause a lot of overhead? The first source of overhead during grooming is that we estimate with relative precision. It is in a developer's nature to argue about story points: is it 3, 5, 8 or maybe 1 point? That is waste. You should only talk about story sizes: large, medium or small. A more precise estimation is a waste of time, because there are too many external factors. Second, during grooming we do a mini feasibility study. The team thinks about the direction of the solution, which is fine. But most of the time it takes two or three sprints before the story is realized, and with all the weekends of beer in between we have already forgotten the solution. So one smart guy says 'yes, let's document it', but that is an inefficient fix for the real problem: there is too much time between the grooming and the realization.

Scrumban: the board of kanban

A scrumban board

The first column in a scrumban board is reserved for the backlog, where stories are ordered by their respective priority. There are several rules for the kanban backlog:

  1. It is the product owner's responsibility to fill this lane with stories and keep it steadily supplied. The product owner must avoid creating or analyzing too many stories, because this is waste and it conflicts with the just-in-time principle of scrumban. Therefore the scrumban board has a WIP limit of 5 backlog stories.
  2. Assure the necessary level of analysis before starting development. Each story must be analyzed with a minimum effort. That should be done in the Weekly Time Box (WTB), which will be discussed later on.
  3. The backlog should be event-driven with an order point.
  4. Prioritization-on-demand. The ideal work planning process should always provide the team with the best things to work on next, no more and no less.

Next to the backlog column is the tasking column, which should always contain at least one tasked story (a minimum WIP limit). If this isn't the case, the team tasks a story after the standup to satisfy this condition: a story is picked up from the backlog and tasked by the team. Tip: put the task cards behind the story card. The next columns are the realization columns. Each team is free to add, remove or change columns so they suit the business. In the realization columns there is a maximum number of stories that may be worked on (a maximum WIP limit). If that limit has not been reached, a story can be pulled from the tasking column and unfolded onto the 'to implement' lane. Now the team can work on the tasks of the story. Each task that is implemented can be moved to the 'ready' lane. When all of the tasks of a story are done, the story can be moved to the next lane. When the story and its tasks are ready, the cards can be moved to the bottom right of the board, so a new horizontal lane becomes available for the next story.

Scrumban: the ceremonies of scrum

With scrumban we have only two types of meetings: the daily standup and the Weekly Timeblock. The Weekly Timeblock is a recurring meeting used for multiple purposes. It should be scheduled in the middle of the week. The big advantage of the Weekly Timeblock is that developers are pulled away from their work only once a week (instead of the various meetings in scrum).

The Weekly Timeblock contains three parts. First, there is a demo of the work done. Second, there is a retrospective on the development process of the last week. Third, the team previews upcoming work items. The team tries to understand the intent of each item and provides feedback. The only size indication the team has to give is whether a story is small, medium or large. Avoid using poker cards/story points, which are too fine-grained and too vague.

Conclusion

Scrumban is a mix of the scrum ceremonies and the kanban method. With scrumban we:

  1. Introduce the Weekly Timeblock (WTB). The Weekly Timeblock should be around 4 hours, and besides it and the daily standup there are no other meetings.
  2. Give each story a wider lifecycle: 'from concept to cash'.
  3. Change the scrumboard to reflect the company's workflow, and replace the push principle of a sprint with a pull mechanism.

Tokutek White Paper: A Comparison of Log-Structured Merge (LSM) and Fractal Tree Indexing

What data structure does your database use? It's not something the typical database user spends much time pondering. Since data structure is destiny, what data structure your database uses is a key point to consider in your selection process.

We know CouchDB uses a modified B+ tree. We've learned a lot of fascinating details over the years about the use of Log-structured merge-trees in Cassandra, HBase and LevelDB. So B+ trees and LSMs seem familiar by now.
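For readers who haven't dug into those internals, here is a deliberately oversimplified sketch of the core LSM idea: writes land in an in-memory sorted buffer that is periodically flushed to immutable sorted runs, and reads consult the buffer and then the runs from newest to oldest. It is illustrative only; Cassandra, HBase and LevelDB add write-ahead logs, bloom filters and compaction on top of this.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// A toy log-structured merge store: writes go to an in-memory sorted buffer (the
// "memtable"); when it fills up, it is frozen as an immutable sorted run. Reads
// check the memtable first, then the runs from newest to oldest. Real LSM engines
// add write-ahead logging, bloom filters and background compaction of the runs.
public class ToyLsmStore {
    private final int memtableLimit;
    private TreeMap<String, String> memtable = new TreeMap<>();
    private final List<TreeMap<String, String>> runs = new ArrayList<>(); // newest last

    public ToyLsmStore(int memtableLimit) {
        this.memtableLimit = memtableLimit;
    }

    public void put(String key, String value) {
        memtable.put(key, value);
        if (memtable.size() >= memtableLimit) {
            runs.add(memtable);           // flush: the buffer becomes an immutable run
            memtable = new TreeMap<>();
        }
    }

    public String get(String key) {
        if (memtable.containsKey(key)) {
            return memtable.get(key);
        }
        for (int i = runs.size() - 1; i >= 0; i--) {      // the newest run wins
            String value = runs.get(i).get(key);
            if (value != null) {
                return value;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        ToyLsmStore store = new ToyLsmStore(2);
        store.put("a", "1");
        store.put("b", "2");                 // triggers a flush to the first run
        store.put("a", "3");                 // newer value shadows the flushed one
        System.out.println(store.get("a"));  // 3
        System.out.println(store.get("b"));  // 2
    }
}
```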

What may not be so familiar is Tokutek's Fractal Tree Indexing technology that is supposed to be even better than B+ trees and LSMs.

As a comparison between Fractal Tree Indexing and LSMs, Bradley Kuszmaul, Chief Architect at Tokutek, has written a detailed paper, a must read for the algorithmically inclined or someone interested in database internals: A Comparison of Log-Structured Merge (LSM) and Fractal Tree Indexing.

Here's a quick intro to Fractal Tree (FT) indexes:

Categories: Architecture

How to Avoid Three Big Estimation Traps Posted

I sent a Pragmatic Manager email last week, How to Avoid Three Big Estimation Traps. If you subscribed, you’d have seen it already. (That was a not-so-subtle hint to subscribe :-)

If you’re not sure of the value of being on yet-another-email list, browse the back issues. You can see I’m consistent. Not about the day I send the Pragmatic Manager email. I can’t make myself be that consistent. I provide you some great content. I tell you where I’m speaking. I let you know where you can read my writing, and how to find more of my work. That’s it.

In any case, take a look at How to Avoid Three Big Estimation Traps. I bet you’ll like it!

Categories: Project Management

Kanban should be the default choice for DevOps teams

Xebia Blog - Wed, 08/06/2014 - 14:59

We had a successful workshop on DevOpsDays 2014. Our main point was that Kanban should be the default choice for DevOps teams. The presentation can be downloaded here.

DevOpsDays 2014 was a success

On the 19th, 20th & 21st of June 2014 the second edition of DevOpsDays Amsterdam was held in Pakhuis De Zwijger in Amsterdam. This year I was asked to teach a course there on Kanban for DevOps. At the 2013 edition I also gave a presentation about this subject and it was nice to be invited back to this great event.

With the Open Source mindset in mind I teamed up with Maarten Hoppen and Bas van Oudenaarde. Our message: Kanban should be the default choice for DevOps teams.

The response to this workshop was very positive and because we received a lot of great feedback I thought I’d share the slide deck. The presentation assumes you are working in an environment where DevOps might work or is already being implemented. 

Main points of the presentation

DevOps is about Culture, Automation, Measurement and Sharing (CAMS). These four values require a way of working that looks past existing processes, handovers and role descriptions. 

The Kanban Method is about looking at your organization in a different way, from the point of view of:

  • Sustainability: by shaping demand and limiting Work in Progress
  • Service-Orientation: by creating an SLA based on past results and data
  • Survivability: by creating an improvement mindset in the organization to respond to rapidly changing environments

These three different ways of looking at your organization make the Kanban Method an extremely powerful solution for DevOps.

If you are interested to learn more about the workshop, check out the slides here:

http://www.slideshare.net/jsonnevelt/kanban-bootcamp-devopsdays-2014

Quote of the Day

Herding Cats - Glen Alleman - Wed, 08/06/2014 - 14:38

No data hath meaning apart from their context. ~unknown

Categories: Project Management

Seven Deadly Sins of Metrics Programs: Wrath

Wrath is the inordinate and uncontrolled feeling of hatred and anger. I suspect that you conjure a picture of someone striking out with potentially catastrophic results. When applied to measurement, wrath is the use of data in a negative or self-destructive manner (rather than an act of wrathful measurement). Very few people are moved to measure by wrath; rather, they are moved by wrath to use measurement badly. Wrath causes people to act in a manner that might not be in their own or the organization’s best interest. Both scenarios are bad. Data, and the information (good or bad) derived from that data, can be used as a weapon in a manner that destroys the credibility of the program and the measurement practitioners.

Anger impairs one’s ability to process information and to exert cognitive control over one’s behavior. An angry person may lose his or her objectivity, empathy, prudence or thoughtfulness and may cause harm to others. Actions driven by extreme anger are easily recognized by observers, but rarely by those perpetrating the behavior. This is what it means to be blind with rage. There is no room in the workplace for rage. Protect your measurement program and your career by staying in control. When confronted with scenarios that induce rage, you need to learn how to step back and see the whole situation. Being mad or angry is fine if those emotions do not cloud your judgment. Teaching yourself to always see things more calmly will help you realize the harm that you are causing to yourself and others through rage. I once saw a CIO fly off the handle when a project shared its measurement dashboard, reporting that it was behind schedule, defects were above projections and the number of potential risks was rising. The uncontrolled rant was awe-inspiring; however, the CIO lost the support of his senior leaders and within a month he was gone. Control puts you in a position to react in a more rational manner.

Measurement data and the information derived from that data deliver the ability to understand why things happen: why a project is late, why a project costs what it does or even why a specific level of quality was achieved. Measurement is a tool for taking action to improve how work is done. What it should not be is a weapon of indiscriminate destruction. Acting in a rage changes all of that. When you strike out in an uncontrolled manner you have transformed that data into a weapon with very little guidance. Think of the difference between the indiscriminate nature of a land mine and the precision of the phasers of the Starship Enterprise. Wrath turns a potentially valuable tool into something far less reliable. For example, a purposeful misrepresentation of the meaning of data can lead a team or organization to make wrong decisions. Other examples include errors of omission (leaving out salient facts) or inclusion (including irrelevant data that changes the conclusions drawn from the data). Whether omission or inclusion, poor use of data erodes the value of the measurement program through politicization or by placing doubt about the value of measurement in people’s minds. Remember that all analysis requires interpretation; however, interpretations are generally based on the assumption that people will act logically and consistently. That includes your behavior. Analysis based on an obviously false assumption just to make a point does no one any good in the long run. For example, assuming productivity is constant across all sizes of projects so that you can show a project under-performed, just to get back at someone, will destroy your credibility even if you win the argument. Be true to the data or be the cause of a failure in trust.

Do not confuse passion and rage; they are not the same. You must have passion to be effective, but what you can’t do is lose control of your emotions to the point that you stop thinking before you act. The deadly sin of wrath is a reflection of bad behavior: if you let wrath affect your behavior, you will begin a spiral that ends with a failure of trust.


Categories: Process Management

PMI Professional in Business Analysis

Software Requirements Blog - Seilevel.com - Tue, 08/05/2014 - 17:00
PMI is offering a new certification, and this one is aimed at business analysts. Called the PMI Professional in Business Analysis (PMI-PBA), PMI is recognizing the growth of business analysts in the industry, as well as the role that BAs play. Intrigued, I decided to look more into it, as I’m still a few hours […]
Categories: Requirements

Sponsored Post: Apple, Gawker, FoundationDB, Monitis, Cie Games, BattleCry, Surge, Cloudant, CopperEgg, Logentries, Couchbase, MongoDB, BlueStripe, AiScaler, Aerospike, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Apple has multiple openings. Changing the world is all in a day's work at Apple. Imagine what you could do here. 
    • Software Developer in Test. The iOS Systems team is looking for a Quality Assurance engineer. In this role you will be expected to work hand-in-hand with the software engineering team to find and diagnose software defects. The ideal candidate will also seek out ways to further automate all aspects of our existing process. This is a highly technical role and requires in-depth knowledge of both white-box and black-box testing methodologies. Please apply here
    • Senior Software Engineer -iOS Systems. Do you love building highly scalable, distributed web applications? Does the idea of a fast-paced environment make your heart leap? Do you want your technical abilities to be challenged every day, and for your work to make a difference in the lives of millions of people? If so, the iOS Systems Carrier Services team is looking for a talented software engineer who is not afraid to share knowledge, think outside the box, and question assumptions. Please apply here.
    • Software Engineering Manager, IS&T WWDR Dev Systems. The WWDR development team is seeking a hands-on engineering manager with a passion for building large-scale, high-performance applications. The successful candidate will be collaborating with Worldwide Developer Relations (WWDR) and various engineering teams throughout Apple. You will lead a team of talented engineers to define and build large-scale web services and applications. Please apply here.
    • C++ Senior Developer and Architect- Maps. The Maps Team is looking for a senior developer and architect to support and grow some of the core backend services that support Apple Map's Front End Services. Ideal candidate would have experience with system architecture, as well as the design, implementation, and testing of individual components but also be comfortable with multiple scripting languages. Please apply here.

  • Systems & Networking Lead at Gawker. We are looking for someone to take the initiative on the lowest layers of the Kinja platform. All the way down to power and up through hardware, networking, load-balancing, provisioning and base-configuration. The goal for this quarter is a roughly 30% capacity expansion, and the goal for next quarter will be a rolling CentOS7 upgrade as well as to planning/quoting/pitching our 2015 footprint and budget. For the full job spec and to apply, click here: http://grnh.se/t8rfbw

  • Cie Games, small indie developer and publisher in LA, is looking for rock star Senior Game Java programmers to join our team! We need devs with extensive experience building scalable server-side code for games or commercial-quality applications that are rich in functionality. We offer competitive comp, great benefits, interesting projects, and exceptional growth opportunities. Check us out at http://www.ciegames.com/careers.

  • BattleCry, the newest ZeniMax studio in Austin, is seeking a qualified Front End Web Engineer to help create and maintain our web presence for AAA online games. This includes the game accounts web site, enhancing the studio website, our web and mobile-based storefront, and the front end for support tools. http://jobs.zenimax.com/requisitions/view/540

  • FoundationDB is seeking outstanding developers to join our growing team and help us build the next generation of transactional database technology. You will work with a team of exceptional engineers with backgrounds from top CS programs and successful startups. We don’t just write software. We build our own simulations, test tools, and even languages to write better software. We are well-funded, offer competitive salaries and option grants. Interested? You can learn more here.

  • UI Engineer. AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data. AppDynamics, a leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of the software that manages application architectures. Apply here.
Fun and Informative Events
  • OmniTI has a reputation for scalable web applications and architectures, but we still lean on our friends and peers to see how things can be done better. Surge started as the brainchild of our employees wanting to bring the best and brightest in Web Operations to our own backyard. Now in its fifth year, Surge has become the conference on scalability and performance. Early Bird rate in effect until 7/24!
Cool Products and Services
  • Couchbase, MongoDB and DataStax: Compared. Find out which database delivers great read/write latency while scaling well with both read-intensive and balanced workloads. Get the initial results here: http://info.couchbase.com/2014-Benchmark-Showdown-Results-LP.html.

  • Now track your log activities with Log Monitor and be on the safe side! Monitor any type of log file and proactively define potential issues that could hurt your business' performance. Detect your log changes for: Error messages, Server connection failures, DNS errors, Potential malicious activity, and much more. Improve your systems and behaviour with Log Monitor.

  • The NoSQL "Family Tree" from Cloudant explains the NoSQL product landscape using an infographic. The highlights: NoSQL arose from "Big Data" (before it was called "Big Data"); NoSQL is not "One Size Fits All"; Vendor-driven versus Community-driven NoSQL.  Create a free Cloudant account and start the NoSQL goodness

  • Finally, log management and analytics can be easy, accessible across your team, and provide deep insights into data that matters across the business - from development, to operations, to business analytics. Create your free Logentries account here.

  • CopperEgg. Simple, Affordable Cloud Monitoring. CopperEgg gives you instant visibility into all of your cloud-hosted servers and applications. Cloud monitoring has never been so easy: lightweight, elastic monitoring; root cause analysis; data visualization; smart alerts. Get Started Now.

  • Whitepaper Clarifies ACID Support in Aerospike. In our latest whitepaper, author and Aerospike VP of Engineering & Operations, Srini Srinivasan, defines ACID support in Aerospike, and explains how Aerospike maintains high consistency by using techniques to reduce the possibility of partitions.  Read the whitepaper: http://www.aerospike.com/docs/architecture/assets/AerospikeACIDSupport.pdf.

  • BlueStripe FactFinder Express is the ultimate tool for server monitoring and solving performance problems. Monitor URL response times and see if the problem is the application, a back-end call, a disk, or OS resources.

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Cloud deployable. Free instant trial, no sign-up required.  http://aiscaler.com/

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

Material design in the 2014 Google I/O app

Android Developers Blog - Tue, 08/05/2014 - 16:39
By Roman Nurik, lead designer for the Google I/O Android App

Every year for Google I/O, we publish an Android app for the conference that serves two purposes. First, it serves as a companion for conference attendees and those tuning in from home, with a personalized schedule, a browsing interface for talks, and more. Second, and arguably more importantly, it serves as a reference demo for Android design and development best practices.

Last week, we announced that the Google I/O 2014 app source code is now available, so you can go check out how we implemented some of the features and design details you got to play with during the conference. In this post, I’ll share a glimpse into some of our design thinking for this year’s app.

On the design front, this year’s I/O app uses the new material design approach and features of the Android L Developer Preview to present content in a rational, consistent, adaptive and beautiful way. Let’s take a look at some of the design decisions and outcomes that informed the design of the app.

Surfaces and shadows

In material design, surfaces and shadows play an important role in conveying the structure of your app. The material design spec outlines a set of layout principles that helps guide decisions like when and where shadows should appear. As an example, here are some of the iterations we went through for the schedule screen:

Screenshots: first, second and third iterations of the schedule screen.

The first iteration was problematic for a number of reasons. First, the single shadow below the app bar conveyed that there were two “sheets” of paper: one for the app bar and another for the tabs and screen contents. The bottom sheet was too complex: the “ink” that represents the contents of a sheet should be pretty simple; here ink was doing too much work, and the result was visual noise. An alternative could be to make the tabs a third sheet, sitting between the app bar and content, but too much layering can also be distracting.

The second and third iterations were stronger, creating a clear separation between chrome and content, and letting the ink focus on painting text, icons, and accent strips.

Another area where the concept of “surfaces” played a role was in our details page. In our first release, as you scroll the details screen, the top banner fades from the session image to the session color, and the photo scrolls at half the speed beneath the session title, producing a parallax effect. Our concern was that this design bent the physics of material design too far. It’s as if the text was sliding along a piece of paper whose transparency changed throughout the animation.

A better approach, which we introduced in the app update on June 25th, was to introduce a new, shorter surface on which the title text was printed. This surface has a consistent color and opacity. Before scrolling, it’s adjacent to the sheet containing the body text, forming a seam. As you scroll, this surface (and the floating action button attached to it) rises above the body text sheet, allowing the body text to scroll beneath it.

This aligns much better with the physics in the world of material design, and the end result is a more coherent visual, interaction and motion story for users. (See the code: Fragment, Layout XML)
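The app's actual implementation is in the Fragment and layout linked above. As a rough, hypothetical illustration of the same idea (the view names and the elevation value are invented), the title surface can be counter-translated once the body scrolls past it and given a translationZ so its shadow falls on the body sheet:

```java
import android.view.View;
import android.view.ViewTreeObserver;
import android.widget.ScrollView;

final class RisingSurfaceEffect {
    private RisingSurfaceEffect() {}

    // Sketch only (API 21 / L preview; view names are placeholders): once the body
    // text scrolls past the title surface, counter-translate the surface so it stays
    // docked, and raise its translationZ so its shadow falls on the body sheet.
    static void attach(final ScrollView scrollView, final View titleSurface, final View fab) {
        scrollView.getViewTreeObserver().addOnScrollChangedListener(
                new ViewTreeObserver.OnScrollChangedListener() {
                    @Override
                    public void onScrollChanged() {
                        int scrollY = scrollView.getScrollY();
                        int dockOffset = titleSurface.getTop();       // scroll distance at which the surface docks
                        int pin = Math.max(0, scrollY - dockOffset);  // how far past the dock point we are
                        titleSurface.setTranslationY(pin);            // keep the surface pinned from here on
                        titleSurface.setTranslationZ(pin > 0 ? 8f : 0f); // lift it above the body sheet
                        fab.setTranslationY(pin);                     // the floating action button rides along
                    }
                });
    }
}
```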

Color

A key principle of material design is also that interfaces should be “bold, graphic, intentional” and that the foundational elements of print-based design should guide visual treatments. Let’s take a look at two such elements: color and margins.

In material design, UI element color palettes generally consist of one primary and one accent color. Large color fields (like the app bar background) take on the main 500 shade of the primary color, while smaller areas like the status bar use a darker shade, e.g. 700.

The accent color is used more subtly throughout the app, to call attention to key elements. The resulting juxtaposition of a tamer primary color and a brighter accent gives apps a bold, colorful look without overwhelming the app’s actual content.

In the I/O app, we chose two accents, used in various situations. Most accents were Pink 500, while the more conservative Light Blue 500 was a better fit for the Add to Schedule button, which was often adjacent to session colors. (See the code: XML color definitions, Theme XML)
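The app's real color definitions live in the XML linked above. As a hypothetical sketch of the general pattern (the palette values here are generic material colors, not the I/O app's), the primary 500 shade goes on the app bar, the darker 700 shade on the status bar, and the accent is reserved for small, high-emphasis elements:

```java
import android.app.Activity;
import android.content.res.ColorStateList;
import android.graphics.Color;
import android.view.View;
import android.view.WindowManager;
import android.widget.Toolbar;

final class ThemeColors {
    private ThemeColors() {}

    // Hypothetical palette in the spirit of the post: a primary color for large
    // chrome surfaces, its darker 700 shade for the status bar, and a brighter
    // accent that is used sparingly.
    static final int PRIMARY = Color.parseColor("#3F51B5");       // Indigo 500
    static final int PRIMARY_DARK = Color.parseColor("#303F9F");  // Indigo 700
    static final int ACCENT = Color.parseColor("#E91E63");        // Pink 500

    static void apply(Activity activity, Toolbar appBar, View keyActionButton) {
        // Large color field: the app bar takes the main 500 shade of the primary color.
        appBar.setBackgroundColor(PRIMARY);

        // Smaller system area: the status bar takes the darker 700 shade (API 21).
        activity.getWindow().addFlags(
                WindowManager.LayoutParams.FLAG_DRAWS_SYSTEM_BAR_BACKGROUNDS);
        activity.getWindow().setStatusBarColor(PRIMARY_DARK);

        // The accent is used sparingly, to call attention to a key element.
        keyActionButton.setBackgroundTintList(ColorStateList.valueOf(ACCENT));
    }
}
```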

And speaking of session colors, we color each session’s detail screen based on the session’s primary topic. We used the base material design color palette with minor tweaks to ensure consistent brightness and optimal contrast with the floating action button and session images.

Below is an excerpt from our final session color palette exploration file.

Session colors, with the floating action button juxtaposed to evaluate contrast. Desaturated session colors, to evaluate brightness consistency across the palette.

Margins

Another important “traditional print design” element that we thought about was margins, and more specifically keylines. While we’d already been accustomed to using a 4dp grid for vertical sizing (buttons and simple list items were 48dp, the standard action bar was 56dp, etc.), guidance on keylines was new in material design. Particularly, aligning titles and other textual items to keyline 2 (72dp on phones and 80dp on tablets) immediately instilled a clean, print-like rhythm to our screens, and allowed for very fast scanning of information on a screen. Gestalt principles, for the win!

Grids

Another key principle in material design is “one adaptive design”:

A single underlying design system organizes interactions and space. Each device reflects a different view of the same underlying system. Each view is tailored to the size and interaction appropriate for that device. Colors, iconography, hierarchy, and spatial relationships remain constant.

Now, many of the screens in the I/O app represent collections of sessions. For presenting collections, material design offers a number of containers: cards, lists, and grids. We originally thought to use cards to represent session items, but since we’re mostly showing homogenous content, we deemed cards inappropriate for our use case. The shadows and rounded edges of the cards would add too much visual clutter, and wouldn’t aid in visually grouping content. An adaptive grid was a better choice here; we could vary the number of columns based on screen size (see the code), and we were free to integrate text and images in places where we needed to conserve space.
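The app's own column logic is in the code linked above; a rough sketch of the adaptive idea (the helper and the 160dp target cell width are invented) is to derive the column count from the available width rather than hard-coding it per device:

```java
import android.content.res.Resources;
import android.util.TypedValue;
import android.widget.GridView;

final class AdaptiveGrid {
    private AdaptiveGrid() {}

    // Derive the number of grid columns from the available width, so phones,
    // 7" tablets and 10" tablets all get different views of the same design.
    static void applyAdaptiveColumns(GridView grid, float targetCellWidthDp) {
        Resources res = grid.getResources();
        float targetCellWidthPx = TypedValue.applyDimension(
                TypedValue.COMPLEX_UNIT_DIP, targetCellWidthDp, res.getDisplayMetrics());
        int availableWidthPx = res.getDisplayMetrics().widthPixels;
        int columns = Math.max(1, Math.round(availableWidthPx / targetCellWidthPx));
        grid.setNumColumns(columns);
    }
}

// Usage (hypothetical): AdaptiveGrid.applyAdaptiveColumns(sessionsGrid, 160f);
```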

Delightful details

Two of the little details we spent a lot of time perfecting in the app, especially with the L Developer Preview, were touch ripples and the Add to Schedule floating action button.

We used both the clipped and unclipped ripple styles throughout the app, and made sure to customize the ripple color to ensure the ripples were visible (but still subtle) regardless of the background. (See the code: Light ripples, Dark ripples)
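The app defines its ripples in XML, as linked above; the same two variants can be sketched in code (helper names and colors are invented) to show the bounded versus unbounded styles and a custom ripple color:

```java
import android.content.res.ColorStateList;
import android.graphics.drawable.ColorDrawable;
import android.graphics.drawable.RippleDrawable;
import android.view.View;

final class Ripples {
    private Ripples() {}

    // Bounded ripple: with a content drawable and no extra mask, the ripple is
    // clipped to the content bounds (the usual case for list rows and cards).
    static void applyClippedRipple(View view, int backgroundColor, int rippleColor) {
        RippleDrawable ripple = new RippleDrawable(
                ColorStateList.valueOf(rippleColor),
                new ColorDrawable(backgroundColor),
                null);
        view.setBackground(ripple);
    }

    // Unbounded ripple: with no content and no mask, the ripple radiates past the
    // view bounds (used for borderless buttons such as app bar icons).
    static void applyUnclippedRipple(View view, int rippleColor) {
        view.setBackground(new RippleDrawable(
                ColorStateList.valueOf(rippleColor), null, null));
    }
}

// Usage (hypothetical): Ripples.applyClippedRipple(row, 0xFFFFFFFF, 0x40000000);
```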

But one of our favorite details in the app is the floating action button that toggles whether a session shows up in your personalized schedule or not:

We used a number of new API methods in the L preview (along with a fallback implementation) to ensure this felt right:

  1. View.setOutline and setClipToOutline for circle-clipping and dynamic shadow rendering.
  2. android:stateListAnimator to lift the button toward your finger on press (increase the drop shadow)
  3. RippleDrawable for ink touch feedback on press
  4. ViewAnimationUtils.createCircularReveal for the blue/white background state reveal
  5. AnimatedStateListDrawable to define the frame animations for changes to icon states (from checked to unchecked)

The end result is a delightful and whimsical UI element that we’re really proud of, and hope that you can draw inspiration from or simply drop into your own apps.
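As a rough sketch of how those pieces fit together, using the L Developer Preview method names listed above (some changed slightly in the final API 21 release; the view and radius values here are invented):

```java
import android.animation.Animator;
import android.graphics.Outline;
import android.view.View;
import android.view.ViewAnimationUtils;

final class CheckmarkFab {
    private CheckmarkFab() {}

    // Clip the button to a circle so it casts a round, dynamically rendered shadow.
    // View.setOutline is the L preview method named in the post; the final API 21
    // release replaced it with View.setOutlineProvider.
    static void makeCircular(View fab) {
        Outline outline = new Outline();
        outline.setOval(0, 0, fab.getWidth(), fab.getHeight());
        fab.setOutline(outline);
        fab.setClipToOutline(true);
    }

    // Reveal the new background state (e.g. blue to white) with a circular
    // animation when the "add to schedule" state is toggled.
    static void revealNewState(View revealLayer) {
        int cx = revealLayer.getWidth() / 2;
        int cy = revealLayer.getHeight() / 2;
        float endRadius = Math.max(revealLayer.getWidth(), revealLayer.getHeight()) / 2f;
        Animator reveal = ViewAnimationUtils.createCircularReveal(
                revealLayer, cx, cy, 0f, endRadius);
        reveal.start();
    }
}
```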

What’s next?

And speaking of dropping code into your own apps, remember that all the source behind the app, including L Developer Preview features and fallback code paths, is now available, so go check it out to see how we implemented these designs.

We hope this post has given you some ideas for how you can use material design to build beautiful Android apps that make the most of the platform. Stay tuned for more posts related to this year’s I/O app open source release over the coming weeks to get even more great ideas for ways to deliver the best experience to your users.


Join the discussion on
+Google Design


Categories: Programming

Quote of the Day

Herding Cats - Glen Alleman - Tue, 08/05/2014 - 16:34

A point of view can be a dangerous luxury when substituted for insight and understanding.
~ Marshall McLuhan

Categories: Project Management

Equality for All Agile Team Members?

Mike Cohn's Blog - Tue, 08/05/2014 - 15:00

"Liberté, égalité, fraternité" is the national motto of France, and originated during the French Revolution. And while freedom, equality, and brotherhood are great ideals for a country, I'm not sure about one of them for agile teams: Equality.

I'm frequently asked if agile means that everyone is equal on an agile team. The feeling is that self-organization means everyone should be equal--including that junior intern that started yesterday.

Fortunately, self-organization does not require everyone to be equal. In fact, self-organization requires the opposite: there must be differences between the agents who are self-organizing.

In her Ph.D. dissertation on self-organization, Glenda Eoyang wrote that self-organization requires three things: a container, differences and transforming exchanges. This is known as the CDE model, for “Containers, Differences, and Exchanges.”

Differences are necessary for self-organization to occur because otherwise, any organization that emerged would be random. There would be no advantage to one agent (person) doing something rather than someone else if all agents were identical. So, differences are necessary. And, fortunately, when dealing with humans, differences abound.

So, no, becoming good at agile does not require everyone on a team to be treated equally. Each team member should get the respect he or she deserves. When an experienced team member with a track record of giving solid advice says something will be difficult, others should consider that opinion.

When that junior intern we hired yesterday says the same, team members should listen politely but give only minor credence to the opinion.

Another interpretation of equality is that all team members do the same work – meaning everyone in agile becomes a generalist and we have no room for specialists. That’s an equality myth that I’ll address next week.

In the comments section below, let me know what you think, and how your team works. Is everyone equal on your team?

Azure: Virtual Machine, Machine Learning, IoT Event Ingestion, Mobile, SQL, Redis, SDK Improvements

ScottGu's Blog - Scott Guthrie - Tue, 08/05/2014 - 07:28

This past month we’ve released a number of great enhancements to Microsoft Azure.  These include:

  • Virtual Machines: Preview Portal Support as well as SharePoint Farm Creation
  • Machine Learning: Public preview of the new Azure Machine Learning service
  • Event Hub: Public preview of new Azure Event Ingestion Service
  • Mobile Services: General Availability of .NET support, SignalR support
  • Notification Hubs: Price Reductions and New Features
  • SQL Database: New Geo-Restore, Geo-Replication and Auditing support
  • Redis Cache: Larger Cache Sizes
  • Storage: Support for Zone Redundant Storage
  • SDK: Tons of great VS and SDK improvements

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

Virtual Machines: Support in the new Azure Preview portal

We previewed the new Azure Preview Portal at the //Build conference earlier this year.  It brings together all of your Azure resources in a single management portal, and makes it easy to build cloud applications on the Azure platform using our new Azure Resource Manager (which enables you to manage multiple Azure resources as a single application).  The initial preview of the portal supported Web Sites, SQL Databases, Storage, and Visual Studio Online resources.

This past month we’ve extended the preview portal to also now support Virtual Machines.  You can create standalone VMs using the portal, or group multiple VMs (and PaaS services) together into a Resource Group and manage them as a single logical entity. You can use the preview portal to get deep insights into billing and monitoring of these resources, and customize the portal to view the data however you want.  If you are an existing Azure customer you can start using the new portal today: http://portal.azure.com.

Below is a screen-shot of the new portal in action.  The service dashboard showing service/region health can be seen in the top-left of the portal, along with billing data about my subscriptions – both make it really easy for you to see the health and usage of your services in Azure.  In the screen-shot below I have a single VM running named “scottguvstest” – and clicking the tile for it displays a “blade” of additional details about it to the right – including integrated performance monitoring usage data:

The initial “blade” for a VM provides a summary view of common metrics about it.  You can click any of the tiles to get even more detailed information as well. 

For example, below I’ve clicked the CPU monitoring tile in my VM, which brought up a Metric blade with even more details about CPU utilization over the last few days.  I’ve then clicked the “Add Alert” command within it to set up an automatic alert that will trigger (and send an email to me) any time the CPU of the VM goes above 95%:

In the screen-shot below, I’ve clicked the “Usage” tile within the VM blade, which displays details about the different VM sizes available – and what each VM size provides in terms of CPU, memory, disk IOPS and other capabilities.  Changing the size of the VM being used is as simple as clicking another of the pricing tiles within the portal – no redeployment of the VM required:

SharePoint Farm support via the Azure Gallery

Built into the Azure Preview Portal is a new “Azure Gallery” that provides an easy way to deploy a wide variety of VM images and online services.  VM images in the Azure Gallery include Windows Server, SQL Server, SharePoint Server, Ubuntu, Oracle, and Barracuda images. 

Last month, we also enabled a new “SharePoint Server Farm” gallery item.  It enables you to easily configure and deploy a highly-available SharePoint Server Farm consisting of multiple VM images (databases, web servers, domain controllers, etc) in only minutes.  It provides the easiest way to create and configure SharePoint farms anywhere:

Over the next few months you’ll see even more items show up in the gallery – enabling a variety of additional new scenarios.  Try out the ones in the gallery today by visiting the new Azure portal: http://portal.azure.com/

Machine Learning: Preview of new Machine Learning Service for Azure

Last month we delivered the public preview of our new Microsoft Azure Machine Learning service, a game changing service that enables your applications and systems to significantly improve your organization’s understanding across vast amounts of data. Azure Machine Learning (Azure ML) is a fully managed cloud service with no software to install, no hardware to manage, and no OS versions or development environments to grapple with. Armed with nothing but a browser, data scientists can log into Azure and start developing Machine Learning models from any location, and from any device.

ML Studio, an integrated development environment for Machine Learning, lets you set up experiments as simple data flow graphs, with an easy-to-use drag, drop and connect paradigm. Data scientists can use it to avoid programming a large number of common tasks, allowing them to focus on experiment design and iteration. A collection of best-of-breed algorithms developed by Microsoft Research comes built in, as does support for custom R code – and over 350 open source R packages can be used securely within Azure ML today.

Azure ML also makes it simple to create production deployments at scale in the cloud. Pre-trained Machine Learning models can be incorporated into a scoring workflow and, with a few clicks, a new cloud-hosted REST API can be created.

Azure ML makes the incredible potential of Machine Learning accessible both to startups and large enterprises. Startups are now able to immediately apply machine learning to their applications. Larger enterprises are able to unleash the latent value in their big data to generate significantly more revenue and efficiencies. Above all, the speed of iteration and experimentation that is now possible will allow for rapid innovation and pave the way for intelligence in cloud-connected devices all around us.

Getting Started

Getting started with the Azure Machine Learning Service is easy.  Within the current Azure Portal simply choose New->Data Services->Machine Learning to create your first ML service today:

Subscribe to the Machine Learning Team Blog to learn more about the Azure Machine Learning service.  And visit our Azure Machine Learning documentation center to watch videos and explore tutorials on how to get started immediately.

Event Hub: Preview of new Azure Event Ingestion Service

Today’s connected world is defined by big data.  Big data may originate from connected cars and thermostats that produce telemetry data every few minutes, application performance counters that generate events every second or mobile apps that capture telemetry for every user’s individual action. The rapid proliferation of connected devices raises challenges due to the variety of platforms and protocols involved.  Connecting these disparate data sources while handling the scale of the aggregate stream is a significant challenge. 

I’m happy to announce the public preview of a significant new Azure service: Event Hub. Event Hub is a highly scalable pub-sub ingestor capable of elastic scale to handle millions of events per second from millions of connected devices so that you can process and analyze the massive amounts of data produced by your connected devices and applications. With this new service, we now provide an easy way for you to provision capacity for ingesting events from a variety of sources, and over a variety of protocols in a secure manner. Event Hub supports a variety of partitioning modes to enable parallelism and scale in your downstream processing tier while preserving the order of events on a per device basis.

Creating an Event Hub

You can easily create a new instance of Event Hub from the Azure Management Portal by clicking New->App Services->Service Bus->Event Hub. During the Preview, Event Hub service is available in a limited number of regions (East US 2, West Europe, Southeast Asia) and requires that you first create a new Service Bus Namespace.
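Once a namespace and hub exist, events can be published from almost any platform over plain HTTPS. The sketch below is hypothetical: the namespace, hub name, payload and SAS token are placeholders, and the exact REST path, API version parameter and required headers should be verified against the Event Hubs documentation.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class EventHubSendSketch {
    public static void main(String[] args) throws Exception {
        // Placeholders: substitute your own namespace, event hub name and a valid
        // Shared Access Signature token for a rule that has Send permission.
        String namespace = "mynamespace";
        String eventHubName = "myeventhub";
        String sasToken = "SharedAccessSignature sr=...&sig=...&se=...&skn=...";

        // A single telemetry event; Event Hub treats the body as opaque bytes.
        String body = "{\"deviceId\":\"thermostat-42\",\"temperature\":21.5}";
        URL url = new URL("https://" + namespace + ".servicebus.windows.net/"
                + eventHubName + "/messages");

        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("POST");
        connection.setDoOutput(true);
        connection.setRequestProperty("Authorization", sasToken);
        connection.setRequestProperty("Content-Type", "application/json");

        try (OutputStream out = connection.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // 201 Created indicates the event was accepted by the hub.
        System.out.println("Response code: " + connection.getResponseCode());
        connection.disconnect();
    }
}
```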

Learn More

Try out the new Event Hub service and give us your feedback! For more information, visit the links below:

Mobile Services: General Availability of .NET Support, SignalR and Offline Sync

A few months ago I announced a preview of Mobile Services with .NET backend support. Today I am excited to announce the general availability of the Mobile Services .NET offering, which makes it an incredibly attractive choice for developers building mobile-facing backend APIs using .NET.  Using Mobile Services you can now:

  • Quickly add a fully featured backend to your iOS, Android, Windows, Windows Phone, HTML or cross-platform Xamarin, Sencha, or PhoneGap app, leveraging ASP.NET Web API, Mobile Services, and corresponding Mobile Services client SDKs.
  • Publish any existing ASP.NET Web API to Azure and have Mobile Services monitor and manage your Web API controllers for you.
  • Take advantage of built-in mobile capabilities like push notifications, real-time notifications with SignalR, enterprise sign-on with Azure Active Directory, social auth, and offline data sync for occasionally connected scenarios. You can also take full advantage of Web API features like OData controllers, and 3rd party Web API-based frameworks like Breeze.
  • Have your mobile app’s users login via Azure Active Directory and securely access enterprise assets such as SharePoint and Office 365. In addition, we've also enabled seamless connectivity to on-premises assets, so you can reach databases and web services that are not exposed to the Internet and behind your company’s firewall.
  • Build, test, and debug your Mobile Services .NET backend using Visual Studio running locally on your machine or remotely in Azure.

You can learn more about Mobile Services .NET from this blog post, and the Mobile Services documentation center.

Real-time Push with Mobile Services and SignalR

We recently released an update to our Mobile Services .NET backend support which enables you to use ASP.NET SignalR for real-time, bi-directional communications with your mobile applications. SignalR will use WebSockets under the covers when it's available, and fall back to other “techniques” (i.e. HTTP hacks) when it isn't. Regardless of the mode, your application code stays the same.

The SignalR integration with Azure Mobile Services includes:

  • Turnkey Web API Integration: Send messages to your connected SignalR applications from any Web API controller or scheduled job – we automatically give you access to SignalR Hubs from the ApiServices context.
  • Unified Authentication: Protect your SignalR Hubs the same way you protect any of your Mobile Service Web API controllers using a simple AuthorizeLevel attribute.
  • Automatic Scale-out: When scaling out your Azure Mobile Service using multiple front-ends, we automatically scale out SignalR using Azure Service Bus as the backplane for sync’ing between the front-ends. You don’t need to do anything to scale your SignalR Hubs.

Learn more about the SignalR capability in Mobile Services from Henrik’s blog.

Mobile Services Offline Sync support for Xamarin and native iOS apps

I've blogged earlier about the new Offline Sync feature in Mobile Services, which provides a lightweight, cross-platform way for applications to work with data even when they are offline / disconnected from the network. At that time we released Offline Sync support for Windows Phone and Windows Store apps.
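For the managed (Windows and Xamarin) client, the basic pattern looks roughly like the sketch below. The service URL, application key, and table type are placeholders, and the PullAsync overload shown here may differ between SDK versions, so treat the exact signatures as assumptions to verify against the client documentation.

```csharp
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;
using Microsoft.WindowsAzure.MobileServices.SQLiteStore;
using Microsoft.WindowsAzure.MobileServices.Sync;

public class TodoItem
{
    public string Id { get; set; }
    public string Text { get; set; }
    public bool Complete { get; set; }
}

public class TodoSync
{
    // Placeholder service URL and application key.
    private readonly MobileServiceClient client =
        new MobileServiceClient("https://yourservice.azure-mobile.net/", "your-application-key");

    private IMobileServiceSyncTable<TodoItem> todoTable;

    public async Task InitializeAsync()
    {
        // Define a local SQLite store that mirrors the remote table.
        var store = new MobileServiceSQLiteStore("localstore.db");
        store.DefineTable<TodoItem>();
        await client.SyncContext.InitializeAsync(store);
        todoTable = client.GetSyncTable<TodoItem>();
    }

    public async Task SyncAsync()
    {
        // Push local changes up, then pull server changes down.
        await client.SyncContext.PushAsync();
        await todoTable.PullAsync(todoTable.CreateQuery()); // overloads vary by SDK version
    }
}
```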

Today we are also introducing a preview of Mobile Services Offline Sync for native iOS apps, as well as Xamarin.iOS, and Xamarin.Android.

Mobile Services Accelerators

I’m pleased to also introduce our new Mobile Services Accelerators, which are feature-complete sample apps that demonstrate how to leverage the new enterprise features of the Mobile Services platform in an end-to-end scenario. We will have two accelerator apps for you today, available as source code as well as published in the app store.

These apps leverage the Mobile Services .NET backend support to authenticate employees with Azure Active Directory, store data securely, work with data offline, and deliver reminders via push notifications. We hope your teams will find these apps useful as reference material. Stay tuned, as more accelerators are coming!

Notification Hubs: Price reductions and new features

The Azure Notification Hubs service enables large-scale, cross-platform push notifications from any server backend running on-premises or in the cloud.  It supports a variety of mobile devices including iOS, Android, Windows, Kindle Fire, and Nokia X. I am excited to announce several great updates to Azure Notification Hubs today:

  • Price reduction. We are reducing the Notification Hubs price by up to 40x to accommodate a wider range of customer scenarios. With the new price (effective September 1st), customers can send 1 million mobile push notifications per month for free, and pay $1 per additional million pushes using our new Basic tier. Visit the Notification Hubs pricing page for more details.
  • Scheduled Push. You can now use Notification Hubs to schedule individual and broadcast push notifications at certain times of the day (a minimal sketch follows this list). For example, you can use this feature to schedule announcements to be delivered in the morning to your customers.  We include support to enable this no matter which time zone your customers are in.
  • Bulk Registration management. You can now send bulk jobs to create, update or export millions of mobile device registrations at a time with a single API call. This is useful if you are moving from an old push notification system to Notification Hubs, or to import user segments from a 3rd party analytics system.
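For instance, scheduling a broadcast Windows toast from a .NET backend looks roughly like this. The connection string and hub name are placeholders, and the ScheduleNotificationAsync call reflects my understanding of the Notification Hubs .NET SDK at the time, so treat the exact method name and overload as an assumption to verify against the SDK documentation.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Notifications; // Notification Hubs .NET SDK

public class ScheduledPushSample
{
    public static async Task SendMorningAnnouncementAsync()
    {
        // Placeholder connection string and hub name for your own hub.
        var hub = NotificationHubClient.CreateClientFromConnectionString(
            "<DefaultFullSharedAccessSignature connection string>", "myhub");

        var toast = @"<toast><visual><binding template=""ToastText01"">" +
                    @"<text id=""1"">Good morning!</text></binding></visual></toast>";

        // Schedule the broadcast for 8:00 AM tomorrow (server local time in this sketch).
        var deliveryTime = DateTimeOffset.Now.Date.AddDays(1).AddHours(8);
        await hub.ScheduleNotificationAsync(new WindowsNotification(toast), deliveryTime);
    }
}
```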

You can learn more about Azure Notification Hubs at the developer center.

SQL Databases: New Geo-Restore, Geo-Replication and Auditing support

In April 2014, we first previewed our new SQL Database service tiers: Basic, Standard, and Premium. Today, I’m excited to announce the addition of more features to the preview:

  • Geo-restore: Designed for emergency data recovery when you need it most, geo-restore allows you to recover a database to any Azure region. Geo-restore uses geo-redundant Azure blob storage for automatic database backups and is available for Basic, Standard, and Premium databases in the Windows Azure Management Portal and REST APIs.
  • Geo-replication: You can now configure your SQL Databases to use our built-in geo-replication support that enables you to setup an asynchronously replicated secondary SQL Database that can be failed over to in the event of disaster.  Geo-replication is available for Standard and Premium databases, and can be configured via the Windows Azure Management portal and REST APIs. You can get more information about Azure SQL Database Business Continuity and geo-replication here and here.
  • Auditing: Our new auditing capability tracks and logs events that occur in your database and provides dashboard views and reports that enable you to get insights into these events. You can use auditing to streamline compliance-related activities, gain knowledge about what is happening in your database, and identify trends, discrepancies and anomalies. Audit events are also written to an audit log which is stored in a user-designated Azure storage account.  Auditing is now available for all Basic, Standard, and Premium databases.

You can learn even more about these new features here.

Redis Cache: Large Cache Sizes, Six New Regions, Redis MaxMemory Policy Support

This past May, we launched the public preview of the new Azure Redis Cache service. This cache service gives you the ability to use a secure, dedicated Redis cache, managed as a service by Microsoft. Using the new Cache service, you get to leverage the rich feature set and ecosystem provided by Redis, and reliable hosting and monitoring from Microsoft.
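As a quick reminder of how the cache is used from .NET, here is a minimal sketch with the StackExchange.Redis client; the cache endpoint and access key are placeholders for your own cache.

```csharp
using System;
using StackExchange.Redis; // recommended .NET client for the Azure Redis Cache

class RedisCacheSample
{
    static void Main()
    {
        // Placeholder endpoint and access key.
        var connection = ConnectionMultiplexer.Connect(
            "yourcache.redis.cache.windows.net,ssl=true,password=<your-key>");
        IDatabase cache = connection.GetDatabase();

        // Simple set/get round trip with a 30-minute expiry.
        cache.StringSet("greeting", "Hello from Azure Redis Cache", TimeSpan.FromMinutes(30));
        Console.WriteLine(cache.StringGet("greeting"));
    }
}
```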

Last month we updated the service with the following features:

  • Support for larger cache sizes. We now support the following sizes: 250 MB, 1 GB, 2.5 GB, 6 GB, 13 GB and 26 GB. 
  • Support for six new Azure Regions. The full list of supported regions can be found in the Azure Regions page.
  • Support for configuring Redis MaxMemory policy

For more information on the Azure Redis Cache, check out this blog post: Lap around Azure Redis Cache.

Storage: Support for Zone Redundant Storage

We are happy to introduce a new Azure Storage account offering: Zone Redundant Storage (ZRS).

ZRS replicates your data across 2 to 3 facilities, either within a single Azure region or across two Azure regions. If your storage account has ZRS enabled, then your data is durable even in the case where one of the datacenter facilities hosting your data suffers a catastrophic issue. ZRS is also more cost-efficient than the existing Geo Redundant Storage (GRS) offering we have today.

You can create a ZRS storage account by simply choosing the ZRS option under the replication dropdown in the Azure Management Portal.


You can find more information on pricing for ZRS at http://azure.microsoft.com/en-us/pricing/details/storage/.

Azure SDK: WebSites, Mobile, Virtual Machines, Storage and Cloud Service Enhancements

Earlier today we released Update 3 of Visual Studio 2013 as well as the new Azure SDK 2.4 release.  These updates contain a ton of great new features that make it even easier to build solutions in the cloud using Azure.  Today’s updates include:

Visual Studio Update 3

  • Websites: Publish WebJobs from Console or Web projects.
  • Mobile Services: Create a Dev/Test environment in the cloud when creating Mobile Services projects. Use the Push Notification Wizard with .NET Mobile Services.
  • Notification Hubs: View and manage device registrations.

Azure SDK 2.4

  • Virtual Machines: Remote debug 32-bit Virtual Machines. Configure Virtual Machines, including installation & configuration of dynamic extensions (e.g. anti-malware, Puppet, Chef and custom script). Create Virtual Machine snapshots of the disk state.
  • Storage: View Storage activity logs for diagnostics. Provision Read-Access Geo-redundant Storage from Visual Studio.
  • Cloud Services: Emulator Express is the default option for new projects (Full Emulator is deprecated). Configure new networking capabilities in the service model.

You can learn all about the updates from the Azure team’s SDK announcement blog post.

Summary

This most recent release of Azure includes a bunch of great features that enable you to build even better cloud solutions.  If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

Seven Deadly Sins of Metrics Programs: Envy


Don’t trip the runner next to you just to win.

The results of software measurement can be held up as a badge of honor. It is not uncommon for a CIO, department manager, project manager or even technical lead to hold up the performance of their projects in front of others, engendering envy from other projects. Envy is a feeling of discontent and resentment aroused by, and in conjunction with, desire for the possessions or qualities of another. Measurement is a spotlight that can focus others' envy if the situation is right. That can occur when bonuses are tied to measurement and when the assignment and staffing of projects is driven by unknown factors. There are two major types of metrics-based envy: one must be addressed at the personnel level and the second must be addressed organizationally.

Envy can be caused when the metrics of projects managed by others in your peer group (real or perceived) are held up as examples to be emulated.  The active component of envy at this level is triggered by a social comparison that threatens a person’s self-image, and can be exacerbated when the attributes that impact performance are outside of the team’s control. The type or complexity of the work coming to a team is generally not negotiable. Teams that get the really tough problems will generally not have the highest productivity even though they may have solved an intractable business problem. Envy generated by this type of problem translates into a variety of harmful behaviors. In benign cases, we might just pass it off as office politics (which everybody loves, not), or in a worst-case scenario it could generate a self-destructive spiral of negative behavior which is not helpful to anyone.  Typical envy-driven behaviors to watch for include loss of will, poor communication, withdrawal and hiding.  While the amateur psychologist in me would be happy to pontificate on the personal side of envy, I am self-aware enough to know that I shouldn’t.  If you have fallen into the trap of envy, get professional help. If you manage a person that is falling into this hole, get them help or get them out of the organization.

The other category of triggers is organizational.  These are the triggers that, as managers, we have more control over and an obligation to address.  As leaders we have a chance to mold the organizational culture to be supportive of efficiency and effectiveness.  Cultures and environments can facilitate and foster both good and bad behaviors.  Cultures that value individual competition above collaboration create an atmosphere where envy will flourish. This acts as a feedback loop that further deepens silos and the possibility of envy. For example, Sid may feel that Joe always gets the best recruits and that he is powerless to change the equation (for whatever reason), therefore he can’t compete.  Envy may cause him to focus on stealing Joe’s recruits rather than coaching his own. This culture can disrupt communication and collaboration and create silos. In this type of environment even positive behaviors, such as displaying measurement data, can act as a feedback loop that deepens the competitive culture rather than generating collaboration and communication.  Typical behaviors generated by envy triggered by organizational issues include those noted earlier, as well as outright sabotage of projects and careers (tripping the runner next to you so you can win) and, just as bad, the pursuit of individual goals at the expense of the overall business goals.

Measurement programs can take the lead in developing a culture where teams can perform, be recognized for that performance and then share the lessons that delivered that performance when it is truly special. An important way to understand what type of performance really should be held up and emulated is based on the work of W. Edwards Deming. In his seminal work Out of the Crisis, Deming suggested that only variation caused by special causes should be specifically reviewed, rather than normal or common cause performance. Understanding and using the concepts of common and special causes of variation as tools in your analysis will help ground your message in a reality that focuses on where specific performance is different enough to be studied. Common cause variation is generated by outcomes that are within the capability of the system, whereas special cause outcomes represent performance outside the normal capacity of the system. In every case, performance outside of the norm should be studied and, where positive, held up for others to emulate. By focusing your spotlight on these outcomes you have the opportunity to identify new cutting-edge ideas as well as ideas that should be avoided.  Another technique for fostering collaboration (an environment where envy is less likely to take root) is to invite all parties to participate in the analysis of measurement data using tools such as a wiki. The measurement group should provide the first wave of analysis, then let the stakeholders participate in shaping the final analysis, using the crowdsourcing techniques made famous by Jimmy Wales and Wikipedia.  Getting everyone involved creates a learning environment that uses measurement not only as a tool to generate information, but also as a tool to shape the environment and channel the corporate culture.
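To make the distinction concrete, a control-chart style check is often used: treat anything inside limits derived from a baseline as common cause, and only investigate the points that fall outside. The sketch below is a deliberately simplified illustration using mean plus or minus three standard deviations computed from hypothetical baseline data; real control charts usually derive limits from moving ranges, so treat both the numbers and the method as assumptions for demonstration only.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative only: flag observations outside baseline mean +/- 3 standard deviations
// as candidate special-cause variation; everything else is treated as common cause.
class ControlLimitSketch
{
    static void Main()
    {
        // Hypothetical baseline productivity figures for one team.
        var baseline = new List<double> { 10.2, 9.8, 10.5, 9.9, 10.1, 10.0 };
        double mean = baseline.Average();
        double stdDev = Math.Sqrt(baseline.Average(x => Math.Pow(x - mean, 2)));
        double upper = mean + 3 * stdDev;
        double lower = mean - 3 * stdDev;

        // New observations to classify against the baseline limits.
        var newObservations = new[] { 9.7, 14.7 };
        foreach (var value in newObservations)
        {
            string verdict = (value > upper || value < lower)
                ? "special cause - worth studying"
                : "common cause - within system capability";
            Console.WriteLine($"{value,6:F1}  {verdict}");
        }
    }
}
```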

Measurement and measurement programs don’t cause the sin of envy.  People and organizational cultures foster this sin in equal measure. Done correctly, measurement programs can act as a tool to tame the excesses that lead to this sin. However, the converse is also true: done incorrectly or poorly, measurement ceases to be a positive tool and becomes part of the problem.  Measurement that fosters transparency and collaboration will help an organization communicate, grow and improve.


Categories: Process Management

What Do We Mean When We Say "Agile Community?"

Herding Cats - Glen Alleman - Mon, 08/04/2014 - 17:29

In a recent Skype conversation around agile, estimating, Little's Law and the #NoEstimates hashtag, the term agile community was used. My first reaction was: whose agile community? The community of sole contributors? The community of $1B weapons systems, and everything in between?

My thoughts go back to the presentation below, which lays out a spectrum of project management processes built around agile. My experience starts with item 5 on that spectrum (literally, since I have time in that aircraft). My software development management experience goes all the way to the end of the spectrum, and my aircraft experience extends to item 25.

Paradigm of agile project management from Glen Alleman

So when I hear the agile community, which one is it? The recent Agile Conference looks like it was attended by sole contributors or small team members. But there are other agile communities where I work.

And guidance for deploying agile.

So Now Back To The Core Issues

If you're a sole contributor and have a customer sitting nearby, estimating your cost, schedule, and technology outcomes is likely of little value. If you're at the other end, say the flight avionics systems for the 777, then the level of rigor, formality, and reporting is different. Both use agile. Not all in the same way, but both write code using the principles of agile development.

12 agile principles

No credible management process would or should object to these principles and practices. To do so means Doing Stupid Things on Purpose. So many of the motivators for not doing something are actually bad management. "Let's not estimate, because estimates are misused" is my favorite DDSTOP example.

Here's an example of how to connect the dots between these principles and practices in a more formal business management process - in this case Earned Value Management. 

Ev+agile=success (final v2) from Glen Alleman

So when we hear the agile community, and those representing the agile community, which community is that?

There is a crass American term used in our domain.

When you see dysfunction, see something you don't understand, or see something that is counter to your paradigm - Follow the Money.

This is the basis of the microeconomics of writing software for money. What is considered waste or even evil in one domain is a critical success factor in another domain.

Ask some simple questions to establish this domain:

  • What's the Value At Risk? 
  • Are you subject to any governance process?
  • Is this project considered mission critical in any way? Projects that are not mission critical have little need to be successful on time, on budget, with the needed capabilities.

In The End

Can we have any meaningful discussion about any topic in the absence of a domain and context? Especially when that topic is driven by Value at Risk, Governance, and business processes?

I'd say it is incumbent on those making a suggestion, for example

No Estimates

To show in what domain this statement is applicable, how we would recognize its applicability outside the domain of those making the suggestion, how we could test the suggestion to see if it is applicable, and most important, what are the conditions that allow the suggestion to work in those domains?

  • How can decisions be made in the absence of knowing the impact of those decisions? This is a violation of the principles of microeconomics.
  • How can we spend other people's money and not inform them about the probability of the total cost, delivery date, and confidence in delivering the needed capabilities?
  • How can we plan for needed capacity in the absence of knowing what Done looks like in some unit of measure meaningful to the decision maker - not just the solution provider?
Related articles: Agile and the Federal Government | Agile as a Systems Engineering Paradigm | Can Enterprise Agile Be Bottom Up? | What Software Domain Do You Work In? | More #NoEstimates | Using Agile Effectively in DoD Environments | Is Your Organization Ready for Agile? | Domain and Context Are King, Then Comes Process and Experience
Categories: Project Management

Tumblr: Hashing Your Way to Handling 23,000 Blog Requests per Second

This is a guest post by Michael Schenck, SRE Staff Engineer at Tumblr.

At Tumblr, blogs (or tumblelogs) are one of our most highly trafficked faces on the internet.  One of the most convenient aspects of tumblelogs is their highly cacheable nature, which is fantastic because of the high views/post ratio the Tumblr network offers our users.  That said, it's not entirely trivial to scale out the perimeter proxy tier, let alone the caching tier, necessary for serving all of those requests.
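To give a sense of the general idea behind hashing requests onto a cache tier (an illustrative sketch only, not Tumblr's actual implementation, which the full article describes), a consistent-hash ring lets the same blog name land on the same cache node even as hosts are added or removed:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

// Minimal consistent-hash ring: each cache host gets several virtual nodes on a ring
// keyed by a hash; a blog name is routed to the first node at or after its own hash,
// so requests for the same blog keep hitting the same cache.
class ConsistentHashRing
{
    private readonly SortedDictionary<uint, string> ring = new SortedDictionary<uint, string>();

    public ConsistentHashRing(IEnumerable<string> hosts, int virtualNodes = 100)
    {
        foreach (var host in hosts)
            for (int i = 0; i < virtualNodes; i++)
                ring[Hash($"{host}#{i}")] = host;
    }

    public string GetNode(string key)
    {
        uint h = Hash(key);
        // First virtual node at or after the key's hash; wrap around if needed.
        foreach (var entry in ring)
            if (entry.Key >= h) return entry.Value;
        return ring.First().Value;
    }

    private static uint Hash(string input)
    {
        using (var md5 = MD5.Create())
        {
            byte[] digest = md5.ComputeHash(Encoding.UTF8.GetBytes(input));
            return BitConverter.ToUInt32(digest, 0);
        }
    }
}

class Demo
{
    static void Main()
    {
        var ring = new ConsistentHashRing(new[] { "cache-01", "cache-02", "cache-03" });
        Console.WriteLine(ring.GetNode("staff.tumblr.com")); // always the same node
    }
}
```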

This article describes the architecture of the portion of our perimeter responsible for blog serving, one of our more highly trafficked perimeter end-points.

Here's how we do it.

Categories: Architecture

I Hate Astronomers

Making the Complex Simple - John Sonmez - Mon, 08/04/2014 - 16:00

The majority is not always right. If you get nothing else from this post but a reminder of this fact, then I’ll feel that I have done my job. The society we live in today is more connected than ever. It is easier and easier to share ideas and communicate. Right now […]

The post I Hate Astronomers appeared first on Simple Programmer.

Categories: Programming