
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Social Intelligence and 95 Articles to Give You an Unfair Advantage

Social Intelligence is hot.

I added a new category at Sources of Insight to put the power of Social Intelligence at your fingertips:

Social Intelligence

(Note that you can get to Social Intelligence from the menu under “More Topics …”)

I wanted a simple category to capture and consolidate the wealth of insights around interpersonal communication, relationships, conflict, influence, negotiation, and more. There are 95 articles in this category, and growing, covering everything from forging friendships to dealing with people you can't stand to building better relationships with your boss.

According to Wikipedia, “Social intelligence is the capacity to effectively negotiate complex social relationships and environments.”

There's a great book on Social Intelligence by Daniel Goleman:

Social Intelligence, The New Science of Human Relationships

According to Goleman, “We are constantly engaged in a ‘neural ballet’ that connects our brain to the brains of those around us.”

Goleman says:

“Our reactions to others, and theirs to us, have a far-reaching biological impact, sending out cascades of hormones that regulate everything from our hearts to our immune systems, making good relationships act like vitamins—and bad relationships like poisons. We can ‘catch’ other people’s emotions the way we catch a cold, and the consequences of isolation or relentless social stress can be life-shortening. Goleman explains the surprising accuracy of first impressions, the basis of charisma and emotional power, the complexity of sexual attraction, and how we detect lies. He describes the ‘dark side’ of social intelligence, from narcissism to Machiavellianism and psychopathy. He also reveals our astonishing capacity for ‘mindsight,’ as well as the tragedy of those, like autistic children, whose mindsight is impaired.”

According to the Leadership Lab for Corporate Social Innovation by Dr. Claus Otto Scharmer (MIT OpenCourseWare), there is a relational shift:

The Rise of the Network Society

And, of course, Social is taking off as a hot technology in the Enterprise arena. It’s changing the game, and changing how people innovate, communicate, and collaborate.

Here is a sampling of some of my Social Intelligence articles to get you started:

5 Conversations to Have with Your Boss
6 Styles Under Stress
10 Types of Difficult People
Antiheckler Technique
Ask, Mirror, Paraphrase and Prime
Cooperative Controversy Over Competitive Controversy
Coping with Power-Clutchers, Paranoids and Perfectionists
Dealing with People You Can't Stand
Expectation Management
How To Consistently Build a Winning Team
How To Deal With Criticism
How Do You Choose a Friend?
How To Repair a Broken Work Relationship
Mutual Purpose
Superordinate Goals
The Lens of Human Understanding
The Politically Competent Leader, The Political Analyst, and the Consensus Builder
Work on Me First

If you really want to dive in here, you can browse the full collection at:

Social Intelligence

Enjoy, and may the power of Social Intelligence be with you.

Categories: Architecture, Programming

Spring Cleaning

Spring flowers

In my home it is traditional every spring to thoroughly clean our house, yard and even our office.  Spring cleaning is different than a normal cleaning.  Everything gets touched, sorted and perhaps even thrown away. When we are done it always amazes me to step back and see the stuff that has accumulated since our last spring cleaning that is no longer needed. The same spring-cleaning concept can be applied to the processes that you use at work.

  1. Convene a small team. Consider using a Three Amigos-like process consisting of a developer, tester and process or business analyst. The small team will reduce the time needed to come to a consensus and the inclusion of multiple disciplines will help make sure that important steps don’t get “cleaned up.”
  2. Map your actual processes. A simple process map showing steps with their inputs and outputs will be useful for focusing the spring cleaning on what is actually being done rather what is supposed to be happening.
  3. Review your actual process against the organizational standard or what everyone thinks ought to be happening.
    1. Identify steps that have been added to the process. Ask if the added steps can be removed. In many cases, process steps are added to prevent a specific mistake or oversight. I recently saw a process with a weekly budget review signoff because in an earlier release the team had gone over budget. The added step introduced two additional hours of overhead to collect and validate signatures (the data already existed).
    2. Review each step in the process to determine whether there are simpler ways to accomplish the same result. In the example of the weekly budget review, we removed the step and put a simple budget burn down chart on the wall in the team room, which took approximately five minutes to update every week.
  4. Review the process change recommendations with the rest of the project team. I like convening a lunch session to review the changes and to share a common meal.
  5. Implement the process changes based on the review and monitor the results.
  6. Calculate and monitor the project’s burden rate. The burden rate is a simple metric: the ratio of time spent on testing, review, sign-off and management to total time. The burden rate represents the overhead being expended to manage the project and to ensure quality. If you could construct a perfect engineering process the burden rate would be zero; however, perfect is not possible. Spring cleaning should reduce the burden rate. I recommend reviewing the burden rate periodically during retrospectives so that overhead does not creep back into the process. A minimal sketch of the calculation follows the list.
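
Here is a minimal sketch of that burden-rate arithmetic in TypeScript; the activity categories and hour values are illustrative placeholders, not figures from any real project:

```typescript
// Burden rate: time spent on testing, review, sign-off and management
// divided by total project time for the period. All numbers are illustrative.
interface TimeByActivity {
  development: number; // hours
  testing: number;
  review: number;
  signOff: number;
  management: number;
}

function burdenRate(t: TimeByActivity): number {
  const overhead = t.testing + t.review + t.signOff + t.management;
  const total = t.development + overhead;
  return total === 0 ? 0 : overhead / total;
}

// Example week: 120 hours of development, 50 hours of overhead activity.
const week: TimeByActivity = { development: 120, testing: 25, review: 10, signOff: 5, management: 10 };
console.log(`Burden rate: ${(burdenRate(week) * 100).toFixed(1)}%`); // ~29.4%
```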

Spring cleaning is a tradition in many of the colder climates. When the days grow warmer and longer all the extra stuff that has accumulated over the winter becomes obvious and a bit oppressive. Cleaning out what isn’t needed lifts the spirits; process spring cleaning serves the same purpose. Get rid of steps that don’t add value and simplify how you work.  A process spring cleaning will lift your team’s spirits and help them deliver more value. Spring cleaning is part of a virtuous cycle.


Categories: Process Management

Many Explorers Don't Make It Home

Herding Cats - Glen Alleman - Sat, 04/12/2014 - 22:15

When I hear the phrase we're exploring I'm reminded that, in fact, many who explore without a plan, without measures of progress against that plan, without a risk management Plan-B for getting home when things go wrong, and without sufficient resources to survive the trip - come home empty handed, or many times don't come home at all. Exploring without these items is called wandering around in the wilderness looking for something to eat.

Here's a simple tale about an actual explorer, Ernest Shackleton, whose failure and near death on his first expedition to the South Pole (under ADM Scott) informed his attempt to reach the Pole a second time, only to experience failure again. In the first expedition, preparation was weak, management was inconsistent, and there was no actual strategy and no Plan-B. The second attempt, without Scott, was well planned, well provisioned, and well staffed. When trouble started, Plan-B and then Plan-C were put in place and executed. 

What's the Point About Managing Projects?

So when we hear we're exploring and there is no destination in mind, or named problem to be solved, or even a description of possible root causes of the un-named problem, remember Shackleton's first trip and Scott's mismanagement of that exploration. On the second trip Shackleton had estimated what he would need, what route he would take, what skills his crew needed, what Plan-B would be, and even Plan-C when that didn't work. Most of all, he estimated the probability of success to be high enough that it was worth the risk to reach the South Pole and return to tell about it. The Polar expedition for Shackleton was a project, planned and executed with credible estimates of every step along the way, including the possibility that everything could go wrong - and it did. Through his leadership, they lived to tell about it.

Related articles:
  • Performance-Based Project Management(sm) Released
  • Agile as a Systems Engineering Paradigm
  • 1909 - Ernest Shackleton, leading the Nimrod Expedition to the South Pole, plants the British flag 97 miles (156 km) from the South Pole, the furthest anyone had ever reached at that time.
  • Black Swans and "They Never Saw It Coming"
  • 3 Impediments To Actual Improvement in the Presence of Dysfunction
Categories: Project Management

The Industry Life Cycle

I’m a fan of simple models that help you see things you might otherwise miss, or that help explain how things work, or that simply show you a good lens for looking at the world around you.

Here’s a simple Industry Life Cycle model that I found in Professor Jason Davis’ class, Technology Strategy (MIT’s OpenCourseWare).

[Figure: Industry Life Cycle model]

It’s a simple backdrop and that’s good.  It’s good because there is a lot of complexity in the transitions, and there are many big ideas that all build on top of this simple frame.

Sometimes the most important thing to do with a model is to use it as a map.

What stage is your industry in?

Categories: Architecture, Programming

Get Up And Code 049: Overcoming Your Fear of Failure

Making the Complex Simple - John Sonmez - Sat, 04/12/2014 - 15:00

Failure is not something to be feared. Instead, you should fully embrace it and recognize it as the only way to reach long-term success. In this episode, I talk about failure and why you need to learn to overcome it. Full transcript below:

John: Hey, everyone. Welcome back to Get Up and CODE. […]

The post Get Up And Code 049: Overcoming Your Fear of Failure appeared first on Simple Programmer.

Categories: Programming

An Architecture Aware VsVars.ps1

DevHawk - Harry Pierson - Sat, 04/12/2014 - 02:28

Like many in the Microsoft dev community, I’m a heavy user of Visual Studio and Powershell. And so, of course, I’ve been a heavy user of Chris Tavares’ vsvars32.ps1 script. However, recently I needed the ability to specify my desired processor architecture when setting up a VS command line session. Unfortunately, Chris’s script wraps vsvars32.bat which only supports generating 32-bit apps. Luckily, VC++ includes a vcvarsall.bat script that lets you specify processor architecture. So I updated my local copy of vsvars.ps1 to use vcvarsall.bat under the hood and added an -x64 switch to enable setting up a 64-bit command line environment. Vcvarsall.bat supports a variety of additional options, but 64-bit support is all I needed so that’s all I added. I didn’t change the name of the script because there’s WAY too much muscle memory associated with typing “vsvars” to bother changing that now.

If you want it, you can get my architecture aware version of vsvars.ps1 from my OneDrive here: http://1drv.ms/1kf8g9I.

Categories: Architecture, Programming

The Final Factors in the Success of Agile Implementations

 

Agile like cooking is about people.

Over the past few weeks I have been asking friends and colleagues to formally answer the following question:

What are the top reasons you think an organization succeeds in implementing Agile?

We have already covered the top six reasons organizations succeed with Agile.

  1. Senior Management
  2. Engagement, Early Feedback (Tied)
  3. Trust, Adaptable Culture, Coaching (Tied)

The folks that participated in this survey are from a highly experienced cohort of process improvement personnel, testers or developers. Completing the top eleven success factors in the survey are the areas of process discipline, team size, capable people and appropriate training.

Process discipline reflects the team’s capability to follow and improve the organization’s standard processes. For example, if Scrum were the standard Agile project management process, you would expect that the team would follow the standard practices of Scrum. The team would use retrospectives to tailor that process based on data gathered through experience, within the limits the organization established. Process discipline is required for a group of people to work together to solve a common problem without tripping over themselves. Without process discipline it will be difficult for team members to predict how other team members will behave, requiring them to build in contingency.

Team size influences the efficiency and effectiveness of Agile techniques. Agile teams typically have five to nine members. Team size is the sum of the entire core team: product owner, Scrum Master and the development members. Many of the collaborative techniques typically used in Agile don’t work well when team sizes expand.  For example, large teams tend to have difficulty completing standup meetings in a reasonable period of time, which causes participants to become bored and inattentive. When team members start to check out, command and control management techniques are generally substituted for Agile techniques and principles.

Capable people are required to apply Agile processes. This was the most obvious success factor and probably the one that is least specific to Agile. Capable people are a requirement for any type of work. Both personal capability and engagement (an earlier success factor) are required for Agile to prosper in an organization.

Appropriate training is required to apply Agile. Training should be provided not only to the core team, but to all of the stakeholders that will be impacted by the team’s new behavior. Training generally needs to be nuanced. Business stakeholders will have different training and knowledge needs than IT support teams. On a cautionary note, many organizations confuse presentations with training. Training for Agile needs to embrace adult learning concepts that include an explanation and hands-on practice before trainees are asked to use the material. In order to be most effective, training should be deployed just prior to the time of need (just-in-time training).

Success implementing and using Agile requires that teams keep their eye on the ball, both in terms of using the process and in terms of delivering value. Process discipline, team size, capable people and training all revolve around people, and it bears repeating that people are at the center of the Agile world.


Categories: Process Management

Brokered Component Wake On Callback Demo Video

DevHawk - Harry Pierson - Fri, 04/11/2014 - 23:43

As you might imagine, I had a pretty amazing time @ Build. The only thing that went wrong all week was when one of my demos in my session failed. It was a pretty cool demo – the brokered WinRT component fires an event which wakes up a suspended WinRT app for a few seconds to process the event. However, I had shut off toast notifications on my machine, which messed up the demo. So here, for your enjoyment, is a short 3 minute video of the working demo.

Categories: Architecture, Programming

New user and sequence based segments in the Core Reporting API

Google Code Blog - Fri, 04/11/2014 - 18:40
By Nick Mihailovksi, Product Manager, Google Analytics API Team

Cross-posted from the Google Analytics Blog

Segmentation is one of the most powerful analysis techniques in Google Analytics. It's core to understanding your users, and allows you to make better marketing decisions. Using segmentation, you can uncover new insights such as:
  • How loyalty impacts content consumption
  • How search terms vary by region
  • How conversion rates differ across demographics
Last year, we announced a new version of segments that included a number of new features.
Today, we've added this powerful functionality to the Google Analytics Core Reporting API. Here's an overview of the new capabilities we added:
User Segmentation

Previously, advanced segments were solely based on sessions. With the new functionality in the API, you can now define user-based segments to answer questions like "How many users had more than $1,000 in revenue across all transactions in the date range?"

Example: &segment=users::condition::ga:transactionRevenue>1000
Try it in the Query Explorer.

Sequence-based Segments

Sequence-based segments provide an easy way to segment users based on a series of interactions. With the API, you can now define segments to answer questions like "How many users started at page 1, then later, in a different session, made a transaction?"
Example: segment=users::sequence::ga:pagePath==/shop/search;->>perHit::ga:transactionRevenue>10

Try it in the Query Explorer.
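
As a rough illustration of how these segments plug into a query, here is a sketch against the v3 Core Reporting API data endpoint; the view ID, date range, metrics and access token are placeholders, and obtaining the OAuth token is outside the scope of the sketch:

```typescript
// Query the Core Reporting API (v3) using the user-based segment shown above.
// All identifiers below (view ID, dates, token) are placeholder values.
const params = new URLSearchParams({
  ids: "ga:XXXXXXXX",                       // your view (profile) ID
  "start-date": "2014-03-01",
  "end-date": "2014-03-31",
  metrics: "ga:users,ga:transactionRevenue",
  segment: "users::condition::ga:transactionRevenue>1000",
});

async function runReport(accessToken: string): Promise<void> {
  const response = await fetch(
    `https://www.googleapis.com/analytics/v3/data/ga?${params}`,
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );
  const report = await response.json();
  console.log(report.totalsForAllResults); // aggregate metric values
  console.log(report.rows);                // one row per result
}
```
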
New Operators

To simplify building segments, we added a number of new operators for filtering on dimensions whose values are numbers and for limiting metric values within ranges. Additionally, we updated segment definitions in the Management API segments collection.
Partner Solutions

Padicode, one of our Google Analytics Technology Partners, used the new sequence-based segments API feature in their funnel analysis product they call PadiTrack.
PadiTrack allows Google Analytics customers to create ad-hoc funnels to identify user flow bottlenecks. By fixing these bottlenecks, customers can improve performance, and increase overall conversion rate.
The tool is easy to use and allows customers to define an ad-hoc sequence of steps. The tool uses the Google Analytics API to report how many users completed, or abandoned, each step.
[Image: Funnel Analysis Report in PadiTrack]
According to Claudiu Murariu, founder of Padicode, "For us, the new API has opened the gates for advanced reporting outside the Google Analytics interface. The ability to be able to do a quick query and find out how many people added a product to the shopping cart and at a later time purchased the products, allows managers, analysts and marketers to easily understand completion and abandonment rates. Now, analysis is about people and not abstract terms such as visits."
The PadiTrack conversion funnel analysis tool is free to use. Learn more about PadiTrack on their website.
Resources
We're looking forward to seeing what people build using this powerful new functionality.
Nick is the Lead Product Manager for Core Google Analytics, including the Google Analytics APIs. Nick loves and eats data for lunch, and in his spare time he likes to travel around the world.

Posted by Louis Gray, Googler
Categories: Programming

Stuff The Internet Says On Scalability For April 11th, 2014

Hey, it's HighScalability time:

(Image credit: Daly and Newton/Getty Images)
DNA nanobots deliver drugs in living cockroaches which have as much compute power as a Commodore 64
  • 40,000: # of people it takes to colonize a star system; 600,000: servers vulnerable to heartbleed
  • Quotable Quotes:
    • @laurencetratt: High frequency traders paid $300M to reduce New York <-> Chicago network transfers from 17 -> 13ms. 
    • @talios: People read http://highscalability.com  for sexual arousal - jim webber #cm14
    • @viktorklang: LOL RT @giltene 2nd QOTD: @mjpt777 “Some people say ‘Thread affinity is for sissies.’Those people don’t make money.”
    • @pbailis: Reminder: eventual consistency is orthogonal to durability and data loss as long as you correctly resolve update conflicts.
    • @codinghorror: Converting a low-volume educational Discourse instance from Heroku at ~$152.50/month to Digital Ocean at $10/month.
    • @FrancescoC: Scary post on kids who can't tell the diff. between atomicity & eventual consistency architecting bitcoin exchanges 
    • @jboner: "Protocols are a lot harder to get right than APIs, and most people can't get APIs right" -  @daveathomas at #reactconf
    • @vitaliyk: “Redundancy is ambiguous because it seems like a waste if nothing unusual happens. Except that something unusual happens—usually.” @nntaleb
    • Blazes: Asynchrony * partial failure is hard.
    • David Rosenthal: I have long thought that the fundamental challenge facing system architects is to build systems that fail gradually, progressively, and slowly enough for remedial action to be effective, all the while emitting alarming noises to attract attention to impending collapse. 
    • Brian Wilson: Moral of the story: design for failure and buy the cheapest components you can. :-)

  • Just damn. DNA nanobots deliver drugs in living cockroaches: Levner and his colleagues at Bar Ilan University in Ramat-Gan, Israel, made the nanobots by exploiting the binding properties of DNA. When it meets a certain kind of protein, DNA unravels into two complementary strands. By creating particular sequences, the strands can be made to unravel on contact with specific molecules – say, those on a diseased cell. When the molecule unravels, out drops the package wrapped inside.

  • Remember those studies where a gorilla walks through the middle of a basketball game and most people don't notice? Attention blindness. 1000 eyeballs doesn't mean anything will be seen. That's human nature. Heartbleed -- another horrible, horrible, open-source FAIL.

  • Remember the Content Addressable Web? Your kids won't. The mobile web vs apps is another front on the battle between open and closed systems.

  • In Public Cloud Instance Pricing Wars - Detailed Context and Analysis Adrian Cockcroft takes a deep stab at making sense of the recent price cuts by Google, Amazon, and Microsoft. AWS users should migrate to the new m3, r3, c3 instances; AWS and Google instance prices are essentially the same for similar specs; Microsoft doesn't have the latest Intel CPUs and isn't pricing against like spec'ed machines;  IBM Softlayer pricing is still higher; Moore's law dictates price curves going forward.

  • Seth Lloyd: Quantum Machine Learning - QM algorithms are a win because they give exponential speedups on BigData problems. The mathematical structure of QM, because a wave can be at two places at once, is that the states of QM systems are in fact vectors in high dimensional vector spaces. The kind of transformations that happen when particles of light bounce off CDs, for example, are linear transformations on these high dimensional vector spaces. Quantum computing is the effort to exploit quantum systems to allow these linear transformations to perform the kind of calculations we want to perform. Or something like that. 

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge...

Categories: Architecture

IT Governance and Decision Rights

Herding Cats - Glen Alleman - Fri, 04/11/2014 - 16:15

In the discussions around #NoEstimates, it's finally dawned on me - after walking the book shelf in the office - that the conversation is split across a chasm: governance based organizations and non-governance based organizations. 

The same is the case for product development organizations - those producing a software product for sale or providing a service in exchange for money. There are governance based product organizations and non-governance based product organizations. 

I can't say how those are differentiated, but there is broad research on the topic of governance and business success using IT. The book on the left is a start. In this book there is a study of 300 enterprises around the world, with the following...

Companies with effective IT governance have profits that are 20% higher than other companies pursuing similar strategies. One viable explanation for this differential is that IT governance specifies accountabilities for IT-related business outcomes and helps companies align their IT investments with their business priorities. But IT governance is a mystery to key decision makers at most companies. Our research indicates that, on average, only 38% of senior managers in a company know how IT is governed. Ignorance is not bliss. In our research senior management awareness of IT governance processes proved to be the single best indicator of governance effectiveness with top performing firms having 60, 70 or 80% of senior executives aware of how IT is governed. Taking the time at senior management levels to design, implement, and communicate IT governance processes is worth the trouble—it pays off.

IT Governance is a decision rights and accountability framework for encouraging desirable behaviours in the use of IT. And I'd add the creation of IT, IT products, and IT services. Since IT is a broad domain, let's exclude development effort for things like games, phone apps, plugins, and in general items that have low value at risk. This doesn't mean they don't have high revenue, but the investment value is low. So when they don't produce their desired beneficial outcome, the degree of loss is low as well.

Assessment of IT Governance focuses on four objectives:

  1. Cost effectiveness of IT
  2. Effective use of IT for asset utilization
  3. Effective use of IT for growth
  4. Effective use of IT for business flexibility

In all four, Weill and Ross provide guidance for assessing the capabilities of IT. In all four Cost is considered a critical success factor.

Without knowing the cost of a decision, the choices presented by that decision cannot be assessed. So when we hear #NoEstimates is about making decisions, ask if those decisions are being made in a governance based organization.

Then ask the question: who has the decision rights to make those decisions? Who needs to know the cost of the value produced by the firm in exchange for that cost? The developers, the management of the development team, the business management of the firm, the customers of the firm?

The three dependent variables of all projects are schedule, cost, and technical performance of produced capabilities (a wrapper term for everything NOT time and cost). The value at risk is a good starting point for deciding whether to apply governance processes. If you fix one of these variables - say budget (which is a place holder for cost until the actuals arrive) - then the other two (time and technical) are now free to vary. Estimating their behaviour will be needed to assure the ROI meets the business goals. In the governance paradigm, these three dependent variables are part of the decision making process. Not knowing one or more puts at risk the value produced by the project or work effort. It's this value at risk that is the key to determining why, how, and when to estimate. 

What are you willing to lose (risk)? If you don't need to know when you'll be done, or what you'll be able to produce on a planned day, or what that will cost, or to determine the ROI (return on investment), ROA (return on asset value), or ROE (return on equity) to some level of confidence to support your decision making - then estimating is a waste of time.

If, on the other hand, the firm or customers you write software for in exchange for money have an interest in knowing any or all of those answers to support their decision making, you'll likely have to estimate time, cost, and produced capabilities to answer their questions.

It's not about you (the consumer of money). To find out who it is about, follow the money; they'll tell you if they need an estimate or not.

Related articles:
  • Back To The Future
  • Why Johnny Can't Estimate or the Dystopia of Poor Estimating Practices
  • Danger Will Robinson
  • Some more answers to the estimating questions
  • Capabilities Based Planning, Use Cases, and Agile
  • The Value of Making An Estimate
Categories: Project Management

Monoidal Event Sourcing

Think Before Coding - Jérémie Chassaing - Fri, 04/11/2014 - 03:34

I could not sleep… 3am and this idea…

 

Event sourcing is about fold but there is no monoid around !

 

What’s a monoid ?

 

First, for those that didn’t have a chance to see Cyrille Martaire’s fabulous explanation with beer glasses, or read the great series of posts on F# for fun and profit, here is a quick recap:

 

We need a set.

Let’s take a simple one, positive integers.

 

And an operation, let say + which takes two positive integers.

 

It returns… a positive integer. The operation is said to be closed on the set.

 

Something interesting is that 3 + 8 + 2 = 13 = (3 + 8) + 2 = 3 + (8 + 2).

This is associativity: (x + y) + z = x + (y + z)

 

The last interesting thing is 0, the neutral element:

x + 0 = 0 + x = x

 

(N,+,0) is a monoid.

 

Let say it again:

(S, *, ø) is a monoid if

  • * is closed on S  (* : S –> S –> S)
  • * is associative ( (x * y) * z = x * (y * z) )
  • ø is the neutral element of * ( x * ø = ø * x = x )

warning: it doesn’t need to be commutative so x * y can be different from y * x !

 

Some famous monoids:

(int, +, 0)

(int, *, 1)

(lists, concat, empty list)

(strings, concat, empty string)
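
For readers who prefer code to notation, here is a minimal TypeScript sketch of the same idea; the interface name and instances are mine, not from the post:

```typescript
// A monoid packages a type, a closed associative operation, and a neutral element.
interface Monoid<T> {
  combine: (x: T, y: T) => T; // closed on T and associative
  empty: T;                   // neutral element
}

const sum: Monoid<number> = { combine: (x, y) => x + y, empty: 0 };
const product: Monoid<number> = { combine: (x, y) => x * y, empty: 1 };
const concat: Monoid<string> = { combine: (x, y) => x + y, empty: "" };

// Folding with a monoid: associativity means the grouping of combines doesn't matter.
const fold = <T>(m: Monoid<T>, xs: T[]): T => xs.reduce(m.combine, m.empty);

console.log(fold(sum, [3, 8, 2]));          // 13
console.log(fold(concat, ["a", "b", "c"])); // "abc"
```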

 

The link with Event Sourcing

 

Event sourcing is based on an application function apply : State –> Event –> State, which returns the new state based on previous state and an event.

 

Current state is then:

fold apply emptyState events

 

(for those using C# Linq, fold is the same as .Aggregate)

 

Which is great because higher level functions and all…

 

But fold is even more powerful with monoids ! For integers, fold is called sum, and the cool thing is that it’s associative !

 

With a monoid you can fold subsets, then combine them together afterwards (still in the right order). This is what gives map reduce its full power: move code to the data. Combine in place, then combine results. As long as you have monoids, it works.

 

But apply will not be part of a monoid. It’s not closed on a set.

 

To make it closed on a set it should have the following signature:

 

apply: State –> State –> State, so we should maybe rename the function combine.

 

Let’s dream

 

Let’s imagine we have a combine operation closed on State.

 

Now, event sourcing goes from:

 

decide: Command –> State –> Event list

apply: State –> Event –> State

 

to:

decide: Command –> State –> Event list

convert: Event –> State

combine: State –> State –> State

 

the previous apply function is then just:

apply state event = combine state (convert event)

 

and fold distribution gives:

 

fold apply emptyState events = fold combine emptyState (map convert events) 

 

(where map applies the convert function to each item of the events list, as does .Select in Linq)

 

The great advantage here is that we map then fold which is another name for reduce !

 

Application of events can be done in parallel by chunks and then combined !
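
Here is a small sketch of that decide/convert/combine shape, using an invented account-balance state where the combine operation is just addition; the event and state types are illustrative, not from the post:

```typescript
// Illustrative event and state types -- not from the post.
type Evt =
  | { kind: "Deposited"; amount: number }
  | { kind: "Withdrawn"; amount: number };

type State = { balance: number };          // a running "delta", a monoid under +
const emptyState: State = { balance: 0 };

// convert: Event -> State (a delta)
const convert = (e: Evt): State =>
  e.kind === "Deposited" ? { balance: e.amount } : { balance: -e.amount };

// combine: State -> State -> State (closed, associative, emptyState neutral)
const combine = (a: State, b: State): State => ({ balance: a.balance + b.balance });

// The usual apply is recovered as combine(state, convert(event)).
const apply = (s: State, e: Evt): State => combine(s, convert(e));

const events: Evt[] = [
  { kind: "Deposited", amount: 100 },
  { kind: "Withdrawn", amount: 30 },
  { kind: "Deposited", amount: 5 },
];

// fold apply emptyState events == fold combine emptyState (map convert events)
const direct = events.reduce(apply, emptyState);

// Chunked version: fold each chunk independently (could run in parallel),
// then combine the partial results in order.
const chunks = [events.slice(0, 2), events.slice(2)];
const partials = chunks.map((c) => c.map(convert).reduce(combine, emptyState));
const chunked = partials.reduce(combine, emptyState);

console.log(direct.balance, chunked.balance); // 75 75
```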

 

Is it just a dream ?

 

Surely not.

 

Most of the tricky decisions have been taken in the decide function, which didn’t change. The apply function usually just sets state members to values present in the event, or increments/decrements values, or adds items to a list… No business decision is taken in the apply function, and most of the primitive types in state members are already monoids under their usual operations…

 

And a group (tuple, list…) of monoids is also a monoid under a simple composition:

if (N1,*,n1) and (N2,¤,n2) are monoids then N1 × N2 is a monoid with an operator <*> ( (x1,x2) <*> (y1,y2) = (x1*y1, x2¤y2) ) and a neutral element (n1,n2)…
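
Sketched in the same TypeScript notation as the earlier monoid example, that product construction might look like this (the Monoid interface is repeated so the snippet stands alone):

```typescript
interface Monoid<T> {
  combine: (x: T, y: T) => T;
  empty: T;
}

// If (N1, *, n1) and (N2, ¤, n2) are monoids, the pair type N1 × N2 is too:
// combine component-wise, with (n1, n2) as the neutral element.
const pair = <A, B>(ma: Monoid<A>, mb: Monoid<B>): Monoid<[A, B]> => ({
  combine: ([x1, x2], [y1, y2]) => [ma.combine(x1, y1), mb.combine(x2, y2)],
  empty: [ma.empty, mb.empty],
});

// Example: a state made of a count and a log is itself a monoid.
const sum: Monoid<number> = { combine: (x, y) => x + y, empty: 0 };
const concat: Monoid<string> = { combine: (x, y) => x + y, empty: "" };
const countAndLog = pair(sum, concat);

console.log(countAndLog.combine([2, "ab"], [3, "cd"])); // [ 5, 'abcd' ]
```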

 

To view it more easily, the convert function converts an event to a Delta, a difference of the State.

 

Those delta can then be aggregated/folded to make a bigger delta.

It can then be applied to an initial value to get the result !

 

 

The idea seems quite interesting and I never read anything about this… If anyone knows prior study of the subject, I’m interested.

 

Next time we’ll see how to make monoids for some common patterns we can find in the apply function, to use them in the convert function.

Categories: Architecture, Requirements

Checklist for Process Improvement Success

I spend lots of time in airports.

I get around. Once upon a time in my life that might have been an epithet, but now it reflects a wide exposure to what works, what doesn’t work and what is clearly a cop out. I suggest that there are five requirements for a successful process improvement program, or five attributes that give a program a chance of success. They are:

  1. The best and brightest
  2. Understanding of change management
  3. The wish to change
  4. A commitment to change in dollars and cents
  5. Recognition that implementation matters
The best change programs are staffed by people that have a solid track record of success, both technically and in business terms. The right candidates will have a high follow-ability quotient. Follow-ability is a combination of a number of attributes, including optimism, success, collaboration, vision and leadership.

Change management is a structured approach to shifting/transitioning individuals, teams, and organizations from a current state to a desired future state. It requires planning for selling and promoting ideas, then supporting the nascent changes until they have achieved critical mass. Make sure your process improvement group has someone trained in change management. Skills will include sales, promotion, branding, communication and organizational politics.

I have observed many change programs that were created to check a box. There was no real impetus within the organization to change. The organization must want to change, to become something different, or the best a change program can do is put lipstick on a pig.

Having a commitment to change in dollars and cents is critical. I suggest funding the process improvement program by having each of the affected groups contribute the needed budget. The word contribute means that they can choose not to renew funding if they do not get what they need. The funding linkage ensures that the funding groups stay involved and, at the same time, makes sure the process improvement team recognizes their customer.

Finally, the recognition that implementation matters is the capstone. Process changes, unlike spaghetti, cannot be tossed against the wall to determine if they are done. How a process change is implemented will determine whether it sticks. An implementation plan that integrates with the organizational change management plan is part of the price of admission for a successful change.
Categories: Process Management

Paper: Scalable Atomic Visibility with RAMP Transactions - Scale Linearly to 100 Servers

We are not yet at the End of History for database theory as Peter Bailis and the Database Group at UC Berkeley continue to prove with a great companion blog post to their new paper: Scalable Atomic Visibility with RAMP Transactions. I like the approach of pairing a blog post with a paper. A paper almost by definition is formal, but a blog post can help put a paper in context and give it some heart.

From the abstract:

Databases can provide scalability by partitioning data across several servers. However, multi-partition, multi-operation transactional access is often expensive, employing coordination-intensive locking, validation, or scheduling mechanisms. Accordingly, many real-world systems avoid mechanisms that provide useful semantics for multi-partition operations. This leads to incorrect behavior for a large class of applications including secondary indexing, foreign key enforcement, and materialized view maintenance. In this work, we identify a new isolation model—Read Atomic (RA) isolation—that matches the requirements of these use cases by ensuring atomic visibility: either all or none of each transaction’s updates are observed by other transactions. We present algorithms for Read Atomic Multi-Partition (RAMP) transactions that enforce atomic visibility while offering excellent scalability, guaranteed commit despite partial failures (via synchronization independence), and minimized communication between servers (via partition independence). These RAMP transactions correctly mediate atomic visibility of updates and provide readers with snapshot access to database state by using limited multi-versioning and by allowing clients to independently resolve non-atomic reads. We demonstrate that, in contrast with existing algorithms, RAMP transactions incur limited overhead—even under high contention—and scale linearly to 100 servers.

What is RAMP?

Categories: Architecture

5 Questions That Need Answers for Project Success

Herding Cats - Glen Alleman - Thu, 04/10/2014 - 16:40

5 Success Questions

These 5 questions need credible answers in units of measure meaningful to the decision makers.

  1. Capabilities Based Planning starts with a Concept of Operations for each deliverable and the system as a whole. The ConOps describes how these capabilities will be used and what benefits will be produced as a result. This question is answered through Measures of Effectiveness (MOE) for each capability. MOEs are operational measures of success related to the achievement of the mission or operational objectives, evaluated in an operational environment under a specific set of conditions.
  2. Technical and Operational requirements fulfill the needed capabilities. These requirements are assessed through their Measures of Performance (MOP), which characterize the physical or functional attributes relating the system operation, measured or estimated under specific conditions.
  3. The Integrated Master Plan and Integrated Master Schedule describes the increasing maturity of the project deliverables through Technical Performance Measures (TPM) to determine how well each deliverables and the resulting system elements satisfy or are expected to satisfy a technical requirement or goal in support of the MOEs and MOPs.
  4. Assessing progress to plan starts with Earned Value Management on a periodic basis. Physical Percent Complete is determined through adherence to the planned TPMs at the time of assessment and the adjustment of Earned Value (BCWP) to forecast future performance, the Estimate at Completion for cost, and the Estimated Completion Duration for the schedule. (A minimal sketch of this arithmetic follows the list.)
  5. For each key planned deliverable and the work activities to produce it, Risks and their handling strategies are in place to adjust future performance assessment. Irreducible risks, such as duration and cost are handled with margin. Reducible risks are handled with risk retirement plans. Compliance with the risk buy down plan becomes a fifth assessment of progress to plan.
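
As a minimal sketch of the earned-value arithmetic mentioned in item 4, using the standard EVM formulas with purely illustrative numbers:

```typescript
// Standard earned-value formulas; every input value below is illustrative.
const BAC = 1_000_000;                        // Budget At Completion ($)
const physicalPercentComplete = 0.40;         // assessed against the planned TPMs
const BCWP = physicalPercentComplete * BAC;   // Earned Value = 400,000
const BCWS = 500_000;                         // Planned Value to date
const ACWP = 450_000;                         // Actual Cost to date

const CPI = BCWP / ACWP;                      // Cost Performance Index ~0.89
const SPI = BCWP / BCWS;                      // Schedule Performance Index 0.80
const EAC = BAC / CPI;                        // Estimate At Completion ~1,125,000

console.log({ CPI: CPI.toFixed(2), SPI: SPI.toFixed(2), EAC: Math.round(EAC) });
```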

What Does All This Mean?

With these top level questions, many approaches are available, no matter what the domain or technology. But in the end, if we don't have answers, the probability of success will be reduced.

  1. If we don't have some idea of what DONE looks like in measures of effectiveness, then the project itself is in jeopardy from day one. The only way out is to pay money during the project to discover what DONE looks like. Agile does essentially this, but there are other ways. In all ways, knowing where we are going is mandatory. Exploring is the same as wandering around looking for a solution. If the customer is paying for this, the project is likely an R&D project. Even then the "D" part of R&D has a goal to discover something useful for those paying.
  2. When we separate capabilities based planning from requirements elicitation, we are freed up to be creative, emergent, agile, and to maximize our investments. The technical and operational requirements can then be traced to the needed capabilities. This approach sets the stage for validation of each requirement, answering the question - why do we have this requirement? There is always mention of feature bloat; this is an approach to test each requirement for its contribution to the business or mission benefit of the project.
  3. The paradigm of Integrated Master Planning (IMP) provides the framework for assessing the increasing maturity of the project's deliverables. The IMP is the strategy for producing these deliverables along their path of increasing maturity to the final deliverables. Terms like preliminary, final, reviewed, accepted, installed, delivered, available, etc. are statements about the increasing maturity.
  4. Measuring physical percent complete starts with the exit criteria for all packages of work on the project. This is a paradigm of agile, but more importantly it has always been the paradigm in the defense and space domain. The foundation of this is Systems Engineering, which is just starting to appear in enterprise IT projects.
  5. Risk Management is how adults manage projects - Tim Lister. This says it all. If you don't have a risk register, an assessment of the probability of occurrence, impacts, handling plans, and residual risk and its impact - those risks are not going away. Each risk has to be monetized and its handling included in the budget. 

 

Categories: Project Management

Why You Need To Start Writing

Making the Complex Simple - John Sonmez - Thu, 04/10/2014 - 16:00

I used to hate writing. It always felt like such a strain to put my ideas on paper or in a document. Why not just say what I thought? In this video, I’ll tell you why I changed my mind about writing and why I think writing is one of the best things you can […]

The post Why You Need To Start Writing appeared first on Simple Programmer.

Categories: Programming

You Make It Happen (I Go Where People Send Me)

NOOP.NL - Jurgen Appelo - Thu, 04/10/2014 - 08:59
Management 3.0 Book Tour

I don’t decide which countries I go to. You do.

My new one-day workshop follows an important principle:
I go where people send me.
Selecting countries and cities
Every week I get questions such as “Will your book tour come to Argentina?” “When will you visit China?” and “Why are you not planning for Norway?”

And every time my answer is the same: I go where people send me. The backlog of countries is based on the readers of my mailing list

The post You Make It Happen (I Go Where People Send Me) appeared first on NOOP.NL.

Categories: Project Management

Trust, Adaptable Culture and Coaches and the Success of Agile Implementation

Adaptable culture, not adaptable hair…

So far we have discussed three of the top factors for successful Agile implementations:

  1. Senior Management
  2. (Tied) Engagement and Early Feedback

Tied for fourth place in the list of success factors are trust, adaptable culture and coaching.

Trust was one of the surprises on the list. Trust, in this situation, means that all of the players needed to deliver value, including the team, stakeholders and management, should exhibit predictable behavior. From the team’s perspective there needs to be trust that they won’t be held to other process standards to judge how they deliver if they adopt Agile processes. From a stakeholder and management perspective there needs to be trust that a team will live up to the commitments they make.

An adaptable culture reflects an organization’s ability to make and accept change.  I had expected this to be higher on the list.  Implementing Agile generally requires that an organization make a substantial change to how people are managed and how work is delivered.  Those changes typically impact not only the project team, but also the business departments served by the project. Organizations that do not adapt to change well rarely make the jump into Agile painlessly. Organizations that have problems adapting will need to spend significantly more effort on organizational change management.

Coaches help teams, stakeholders and other leaders within an organization learn how to be Agile. Being Agile requires some combination of knowing specific techniques and embracing a set of organizational principles. Even in more mature Agile organizations, coaches bring new ideas to the table, different perspectives and a shot of energy. That shot of energy is important to implementing Agile and then for holding on to those new behaviors until they become muscle memory.

Change in organizations is rarely easy. Those being asked to change rarely perceive the change as being for the better, which makes trust very difficult. Adopting Agile requires building trust between teams, the business and IT management, and vice versa. Coaching is a powerful tool to help even adaptable organizations build trust and embrace Agile as a mechanism to deliver value and as a set of principles for managing work.


Categories: Process Management

Build a map infographic with Google Maps & JavaScript

Google Code Blog - Wed, 04/09/2014 - 18:30
Cross-posted from the Geo Developers Blog

By Mark McDonald, Google GeoDevelopers Team

We recently announced the launch of the data layer in the Google Maps JavaScript API, including support for GeoJSON and declarative styling.  Today we’d like to share a technical overview explaining how you can create great looking data visualizations using Google Maps.

Here’s our end goal. Click through to interact with the live version.
Data provided by the Census Bureau Data API but is not endorsed or certified by the Census Bureau.

The map loads data from two sources: the shape outlines (polygons) are loaded from a public Google Maps Engine table and we query the US Census API for the population data.  You can use the controls above the map to select a category of data to display (the "census variable"). The display is then updated to show a choropleth map shading the various US regions in proportion to the values recorded in the census.

How it works

When the map loads, it first queries the Google Maps Engine API to retrieve the polygons defining the US state boundaries and render them using the loadGeoJson method. The controls on the map are used to select a data source and then execute a query against the US Census Data API for the specified variable.
Note: At the time of writing, the data layer and functions described here require you to use the experimental (3.exp) version of the Maps API.

Loading polygons from Maps Engine

The Maps Engine API's Table.Features list method returns resources in GeoJSON format, so the API response can be loaded directly using loadGeoJson. For more information on how to use Maps Engine to read public data tables, check out the developer guide.
The only trick in the code below is setting the idPropertyName for the data that is loaded. When we load the census data we'll need a way to connect it with the Maps Engine data based on some common key. In this case we're using the 'STATE' property.
Importing data from the US Census API

The US Census Bureau provides an API for querying data in a number of ways. This post will not describe the Census API, other than to say that the data is returned in JSON format. We use the state ID, provided in the 2nd column, to look up the existing state data (using the lookupId method of google.maps.Data) and update it with the census data (using the setProperty method of google.maps.Data).
Styling the data

Data can be styled through the use of a Data.StyleOptions object or through a function that returns a Data.StyleOptions object. Here we create a choropleth map by applying a gradient to each polygon in the dataset based on the value in the census data.

In addition to the coloring, we've created an interactive element by adding events that respond to mouse activity. When you hover your mouse cursor (or finger) over a region with data, the border becomes heavier and the data card is updated with the selected value.
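
A condensed sketch of the pieces described above, written against the Maps JavaScript API data layer; the GeoJSON URL, census property name and color ramp are placeholders, and the snippet assumes the Maps API script (and its TypeScript typings) are already loaded on the page:

```typescript
// Placeholders throughout: the GeoJSON URL stands in for the Maps Engine table,
// and "census_value" / the color ramp are invented for illustration.
declare const map: google.maps.Map; // created elsewhere on the page

// 1. Load the state polygons, keyed by the shared 'STATE' property.
map.data.loadGeoJson("https://example.com/us-states.geojson", {
  idPropertyName: "STATE",
});

// 2. Merge census values onto the matching polygons (state ID in column 2),
//    called with the parsed rows of the Census API response.
function attachCensusValues(rows: [string, string][]): void {
  for (const [value, stateId] of rows) {
    const feature = map.data.getFeatureById(stateId);
    if (feature) feature.setProperty("census_value", Number(value));
  }
}

// 3. Style each polygon from its census value (a crude two-color ramp).
map.data.setStyle((feature) => {
  const value = Number(feature.getProperty("census_value") ?? 0);
  const intensity = Math.min(255, Math.round(value / 100_000));
  return {
    fillColor: `rgb(${255 - intensity}, ${intensity}, 100)`,
    fillOpacity: 0.7,
    strokeWeight: 0.5,
  };
});

// 4. Heavier border on hover, reverted on mouseout.
map.data.addListener("mouseover", (e: google.maps.Data.MouseEvent) => {
  map.data.overrideStyle(e.feature, { strokeWeight: 3 });
});
map.data.addListener("mouseout", () => map.data.revertStyle());
```
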
We’ve also used a custom basemap style in this example to provide some contrast to the colorful data. 
Check out Google Maps Engine if you need somewhere to store your geospatial data in the cloud, as we’ve done here. If you have any questions on using these features, check out the docs for the data layer and the Maps Engine API or head over to Stack Overflow and ask there. You can also check out this article’s permanent home, where the interactive version lives.

Posted by Louis Gray, Googler
Categories: Programming

Dart improves async and server-side performance

Google Code Blog - Wed, 04/09/2014 - 17:40
Cross-posted from the Chromium Blog

By Anders Johnsen, Google Chrome Team

Today's release of the Dart SDK version 1.3 includes a 2x performance improvement for asynchronous Dart code combined with server-side I/O operations. This puts Dart in the same league as popular server-side runtimes and allows you to build high-performance server-side Dart VM apps.

We measured request-per-second improvements using three simple HTTP benchmarks: Hello, File, and JSON. Hello, which improved by 130%, provides a measure for how many basic connections an HTTP server can handle, by simply measuring an HTTP server responding with a fixed string. The File benchmark, which simulates the server accessing and serving static content, improved by nearly 30%. Finally, as a proxy for performance of REST apps, the JSON benchmark nearly doubled in throughput. In addition to great performance, another benefit of using Dart on the server is that it allows you to use the same language and libraries on both the client and server, reducing mental context switches and improving code reuse.

The data for the chart above was collected on an Ubuntu 12.04.4 LTS machine with 8GB RAM and an Intel(R) Core(TM) i5-2400 CPU, running a single-isolate server on Dart VM version 1.1.3, 1.2.0 and 1.3.0-dev.7.5.
The source for the benchmarks is available.
We are excited about these initial results, and we anticipate continued improvements for server-side Dart VM apps. If you're interested in learning how to build a web server with Dart, check out the new Write HTTP Clients and Servers tutorial and explore the programmer's guide to command-line apps with Dart. We hope to see what you build in our Dartisans G+ community.

Anders Johnsen is a software engineer on the Chrome team, working in the Aarhus, Denmark office. He helps Dart run in the cloud.

Posted by Louis Gray, Googler
Categories: Programming