
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Stockholm Syndrome and Outsourcing

Cultural disconnects are a major contributor to problems in outsourcing.

Managing risk is one of the keys to success in an outsourcing arrangement.  There are many control mechanisms used to manage outsourcing deals, ranging from full-scale contract offices, PMOs, metrics, scorecard reporting, audits and CMMI assessments to on-site oversight teams.  In real life these are typically applied in combination.

Cultural disconnects are a major contributor to problems in outsourcing, and they increase as the distance between client and outsourcer grows.  This sounds like a truism, yet examples abound even today as organizations fall prey to misunderstandings driven by cultural disconnects.  These misunderstandings range from differences in semantics to deep-seated cultural differences.

A tool to manage and minimize this type of disconnect is to co-locate an on-site account management team with the outsourcer.  This type of arrangement provides an avenue to mitigate cultural differences, to translate both intent and words and, above all, to build trust.  All positives; however, darker possibilities exist and, as personal observation proves, they do happen!

On-site management of outsourced projects provides a number of impressive benefits that other forms of control do not.  The first and most important of these is a simple visible presence that reinforces that the work is important.  Secondly, an onsite presence can provide a bridge between cultures (both organizational and sociological).  Further, a presence provides a mechanism for translation, and for ironing out semantic differences quickly and efficiently.  Finally, an onsite presence is a basis for building a common history and understanding, which yields trust.

A powerful tool with equally powerful drawbacks: what happens when an onsite lead or team loses perspective?  I participated in an assessment of a team that supported a group of outsourced applications.  During initial discussions it was impossible to determine who worked for the outsourcer and who worked for the onsite account management.  Collaboration, you might ask?  True, but only if the arrangement is structured that way and all parties perceive it that way, which was not the case.  The on-site team had lost perspective and aligned themselves with the organization they were overseeing, an application of the ‘Stockholm Syndrome’.

When an onsite team gets too close, they lose perspective and begin to believe they are part of the company they are supposed to interface with.  When perspective is lost, who will advocate for the project, and how will a critical point of translation be interpreted?  Even if the closeness is merely an appearance, it will be difficult for others to understand how to act.

How do the best make co-located teams work?  The best observed application of the technique begins with the sourcer deploying a cross-functional team.  The skills required include project management, business analysis and systems analysis.  The very best include personnel with both facilitation and negotiation skills (negotiation is more typical).  All team members require a strong sense of loyalty to their company.  Note that if the work is 'offshored' (not just outsourced), then the team members must have command of the local language.  The rationale for teams rather than an individual is two-fold: first, a team can field more skills; second, a team is far less likely to "go native" than an individual (teams create their own support structure).  Note that using teams is a best practice only if the amount of work supports it.  Smaller outsourcing agreements may not have that luxury, which means they must roll all of these skills into a single individual.

Despite the downside risks, co-locating sourcer and outsourcer teams of any size is a best practice.  How organizations structure their co-location program to keep the personnel fresh and useful is what separates the wheat from the chaff.  Observed tactical best practices to maintain the crispness of on-site teams include:

  • Rotation of personnel (not everyone at once unless there is only one person) reinforces the attachment to the parent company.  A secondary form of rotation includes making a spot on the team a step in a job progression.
  • Leveraging PPQA reviews provides an assessment of whether the outsourcer's processes are being followed.  Non-compliances are identified and an action plan is put in place for remediation.
  • External audits, using models such as the CMMI, ITIL or ISO Standards, provide a far more formal reading of whether processes are followed (typically with more consequences if they are not).

On-site teams are a best practice for reducing the risk of an outsourcing agreement, but it is a best practice with a downside unless the teams are carefully managed.


Categories: Process Management

What I Don't Know Can Actually Hurt Me

Herding Cats - Glen Alleman - 10 hours 16 min ago

[Image: ostrich with its head in the sand]

The notion of not knowing the impact of decisions, choices and approaches is like putting your head in the sand because you don't like the answer or don't want to know the answer.

In our domain there are three common root causes of program difficulties when a program overruns its budget, shows up late, or the delivered product doesn't work as required:

  1. They couldn't know - the source of the problem was in fact unknowable.
  2. They didn't know - the source of the problem was knowable, but the effort to discover it was never made.
  3. They didn't want to know - the source of the problem was there, but if it had been made visible the project would have been canceled or never started.

When I read about decision making processes like ...

[Screenshot of the decision-making statements referred to below]

I'm struck by the 3rd statement. Knowing something about the cost of a decision, the outcome of an investment, the risks, the scope impact, or the progress toward future value requires you to make estimates. Since all project variables are random variables that interact in random ways - technically non-stationary stochastic processes - knowing the impact of any decision involving these variables means estimating the three core variables of all projects, shown below.

When I hear the suggestion that decisions can be made in the absence of those estimates, I think of the ostrich and ask the following:

  • Can I decide about some future outcome without estimating the impact of that outcome?
  • If I'm going to invest - spend money - can I do the ROI calculation in the absence of an estimate of the cost or the value, since neither of those is known on day one?
  • Risk - by its very definition - is an uncertainty. These uncertainties are probabilistic outcomes of future events. Estimates are needed.

If there are ways to decide these things without estimates, let's hear them. Each of the three variables and each of their drivers is a random variable whose value is not known in the present, but can only be known through an estimate of its possible future values.

[Screenshot: the three core variables of all projects]
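
To make the estimating point concrete, here is a minimal sketch (mine, not from the original post) of what estimating those random variables can look like in practice: a Monte Carlo simulation that samples normalized cost, duration and delivered scope from assumed triangular distributions and reports the chance of meeting all three targets. Every distribution and threshold below is an illustrative assumption.

using System;

class EstimateSketch
{
    static readonly Random Rng = new Random();

    // Sample from a triangular distribution with the given minimum,
    // most likely and maximum values.
    static double Triangular(double min, double mode, double max)
    {
        double u = Rng.NextDouble();
        double f = (mode - min) / (max - min);
        return u < f
            ? min + Math.Sqrt(u * (max - min) * (mode - min))
            : max - Math.Sqrt((1 - u) * (max - min) * (max - mode));
    }

    static void Main()
    {
        const int trials = 100000;
        int successes = 0;

        for (int i = 0; i < trials; i++)
        {
            // Normalized against the plan: 1.0 means "exactly as planned".
            double cost     = Triangular(0.8, 1.0, 1.6);
            double duration = Triangular(0.9, 1.0, 1.5);
            double scope    = Triangular(0.7, 1.0, 1.1); // fraction of scope delivered

            // "Success" = within 10% of budget, 20% of schedule, >= 90% of scope.
            if (cost <= 1.1 && duration <= 1.2 && scope >= 0.9)
                successes++;
        }

        Console.WriteLine("Estimated probability of success: {0:P1}",
            (double)successes / trials);
    }
}

Running this a few times makes the point in the post visible: the answer is a probability distribution, not a single number, and it only exists because each variable was estimated.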

Related articles:
  • Resources for Moving Beyond the "Estimating Fallacy"
  • Making Estimates For Your Project Require Discipline, Skill, and Experience
  • Four Critical Elements of Project Success
  • First Comes Theory, Then Comes Practice
  • How to Deal With Complexity In Software Projects?
  • Statistical Process Control The Basis of All Modern Control Systems
  • Let's Stop Guessing and Learn How to Estimate
  • The Value of Information
  • Why is Statistical Thinking Hard?
Categories: Project Management

Stuff The Internet Says On Scalability For August 22nd, 2014

Hey, it's HighScalability time:


Exterminate! 1,024 small, mobile, three-legged machines that can move and communicate using infrared laser beams.
  • 1.6 billion: facts in Google's Knowledge Vault built by bots; 100: lightning strikes every second
  • Quotable Quotes:
    • @stevendborrelli: There's a common feeling here at #MesosCon that we at the beginning of a massive shift in the way we manage infrastructure.
    • @deanwampler: 2000 machine service will see > 10 machine crashes per day. Failure is normal. (Google) #Mesoscon
    • @peakscale: "not everything revolves around docker" /booted from room immediately
    • @deanwampler: Twitter has most of their critical infrastructure on Mesos, O(10^4) machines, O(10^5) tasks, O(10^0) SREs supporting it. #Mesoscon
    • @adrianco: Dig yourself a big data hole, then drown in your data lake...
    • bbulkow:  I saw huge Go uptake at OSCON. I met one guy doing log processing easily at 1M records per minute on a single amazon instance, and knew it would scale.
    • @julian_dunn: clearly, running Netflix on a mainframe would have avoided this problem

  • Programming is the new way in an old tradition of using new ideas to explain old mysteries. Take the new Theory of Everything, doesn't it sound a lot like OO programming?: According to constructor theory, the most fundamental components of reality are entities—“constructors”—that perform particular tasks, accompanied by a set of laws that define which tasks are actually possible for a constructor to carry out. Then there's Our Mathematical Universe, which posits that the attributes of objects are the objects: all physical properties of an electron, say, can be described mathematically; therefore, to him, an electron is itself a mathematical structure. Any data modeler knows how faulty this conceit is. We only model our view relative to a problem, not universally. Modelers also have another intuition: that all attributes arise out of relationships between entities and that entities may themselves not have attributes. So maybe physics and programming have something to do with each other after all?

  • Love this. Multi-Datacenter Cassandra on 32 Raspberry Pi’s. Over the top lobby theatrics is a signature Silicon Valley move.

  • Computation is all around us. Jellyfish Use Novel Search Strategy: instead of using a consistent Lévy walk approach, barrel jellyfish also employ a bouncing technique to locate prey. These large jellies ride the currents to a new depth in search of food. If a meal is not located in the new location, the creature rides the currents back to its original location. 

  • Two months early. 300k under budget. Building a custom CMS using a Javascript based Single Page App (SPA), a Clojure back end, a set of small Clojure based micro services sitting on top of MongoDB, hosted in Rackspace.

  • While Twitter may not fight against the impersonation of certain Journalism professors, it does fight spam with a large sword. Here's how that sword of righteousness was forged: Fighting spam with BotMaker. The main challenge: applying rules defined using their own rule language with a low latency. Spam is detected in three stages: real-time, before the tweet enters the system; near real-time, on the write path; periodic, in the background. The result: a 40% reduction in spam and faster response time to new spam attacks.

  • An architecture of small apps. A PHP/Symfony CMS called Megatron takes 10 seconds to render a page. Pervasive slowness leads to constant problems with cache clearing, timeouts, server spin ups and downs, cache warmup. What to do? As an answer an internal Yammer conversation on different options is shared. The major issue is dumping their CMS for a microservices based approach. Interesting discussion that covers a lot of ground.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Neo4j: LOAD CSV – Handling empty columns

Mark Needham - 15 hours 47 min ago

A common problem that people encounter when trying to import CSV files into Neo4j using Cypher’s LOAD CSV command is how to handle empty or ‘null’ entries in said files.

For example let’s try and import the following file which has 3 columns, 1 populated, 2 empty:

$ cat /tmp/foo.csv
a,b,c
mark,,

load csv with headers from "file:/tmp/foo.csv" as row
MERGE (p:Person {a: row.a})
SET p.b = row.b, p.c = row.c
RETURN p

When we execute that query we’ll see that our Person node has properties ‘b’ and ‘c’ with no value:

==> +-----------------------------+
==> | p                           |
==> +-----------------------------+
==> | Node[5]{a:"mark",b:"",c:""} |
==> +-----------------------------+
==> 1 row
==> Nodes created: 1
==> Properties set: 3
==> Labels added: 1
==> 26 ms

That isn’t what we want – we don’t want those properties to be set unless they have a value.

To achieve this we need to introduce a conditional when setting the ‘b’ and ‘c’ properties. We’ll assume that ‘a’ is always present as that’s the key for our Person nodes.

The following query will do what we want:

load csv with headers from "file:/tmp/foo.csv" as row
MERGE (p:Person {a: row.a})
FOREACH(ignoreMe IN CASE WHEN trim(row.b) <> "" THEN [1] ELSE [] END | SET p.b = row.b)
FOREACH(ignoreMe IN CASE WHEN trim(row.c) <> "" THEN [1] ELSE [] END | SET p.c = row.c)
RETURN p

Since there’s no if or else statements in cypher we create our own conditional statement by using FOREACH. If there’s a value in the CSV column then we’ll loop once and set the property and if not we won’t loop at all and therefore no property will be set.

==> +-------------------+
==> | p                 |
==> +-------------------+
==> | Node[4]{a:"mark"} |
==> +-------------------+
==> 1 row
==> Nodes created: 1
==> Properties set: 1
==> Labels added: 1
Categories: Programming

R: Rook – Hello world example – ‘Cannot find a suitable app in file’

Mark Needham - 17 hours 33 min ago

I’ve been playing around with the Rook library and struggled a bit getting a basic Hello World application up and running so I thought I should document it for future me.

I wanted to spin up a web server using Rook and serve a page with the text ‘Hello World’. I started with the following code:

library(Rook)
s <- Rhttpd$new()
 
s$add(name='MyApp',app='helloworld.R')
s$start()
s$browse("MyApp")

where helloworld.R contained the following code:

function(env){ 
  list(
    status=200,
    headers = list(
      'Content-Type' = 'text/html'
    ),
    body = paste('<h1>Hello World!</h1>')
  )
}

Unfortunately that failed on the ‘s$add’ line with the following error message:

> s$add(name='MyApp',app='helloworld.R')
Error in .Object$initialize(...) : 
  Cannot find a suitable app in file helloworld.R

I hadn’t realised that you actually need to assign that function to a variable ‘app’ in order for it to be picked up:

app <- function(env){ 
  list(
    status=200,
    headers = list(
      'Content-Type' = 'text/html'
    ),
    body = paste('<h1>Hello World!</h1>')
  )
}

Once I fixed that everything seemed to work as expected:

> s
Server started on 127.0.0.1:27120
[1] MyApp http://127.0.0.1:27120/custom/MyApp
 
Call browse() with an index number or name to run an application.
Categories: Programming

Vitamins, Aspirin and The Little Blue Pill


Why do some products make a bigger splash than others? I recently heard an analogy that explains why some products have naturally higher market demand than others. The analogy states that there are really only three macro classes of products: products that avoid problems, products that solve a specific pain and products that provide new functionality. It uses vitamins, aspirin and the little blue pill as metaphors for these three categories. Process improvement can easily be classified using these metaphors. As change agents, we can use this analogy as a tool to guide how we conceive and implement process changes within our organizations.

One of the failings of many software process improvement programs is that they are framed as a means of avoiding a problem. I call this the futurist point of view. The futurist point of view translates to the “vitamins” of the organizational change world. If I asked you directly, I am certain that you would say you understand the need to take precautions so that the future will be better. The big unvoiced “however” in your answer would be that it is hard to get motivated to make a change now for a nebulous payoff in the future. Just remember the last time you toyed with the idea of starting an exercise program. The linkage between a current change in behavior (taking vitamins) and future benefits is just not direct enough to create a groundswell of acceptance. Bottom line: selling potential benefits in the future is at the best of times a difficult proposition.

A few years ago I took sales training (and I am proud of it). During the training program I learned how to identify pain. Ask any professional salesperson and they will tell you that an immediate pain is an important motivator for making a sale, maybe the most important motivator. At least 99.9% of the people in the world want pain to go away when they have it, which is why aspirin is an easier sale than an exercise program for back pain. The “gotcha” is that the pain initially expressed is usually not the root cause of the pain (there are a lot of reasons for this behavior, but that is a topic for another day). The art of persuasion, sales and requirements gathering is the ability to peel back the layers until you can expose the root cause so the pain can be solved, not just masked. The ability to successfully navigate the “pain” conversation to get to the root cause without irritating the person feeling the pain is a skill not consistently found on IT project teams. Bottom line: I highly recommend a course in salesmanship for all project managers and requirements analysts; make sure your process improvement program solves current problems and always carry aspirin.

I have been shocked and amazed at how the little blue pill and other similar drugs became and stayed blockbuster drugs. Going back to our analogy, the little blue pill represents products that deliver additional functionality. The little blue pill of process improvement projects delivers the ability to do something that was not possible before, greater flexibility in how work is done and/or greater flexibility in the decision-making process. I believe that most IT personnel have a bit of a libertarian streak, which conflates flexibility and choice in how we accomplish our job with the functionality of the process. For example, having more than one way to do a retrospective and the choice of which technique to use makes a process for retrospectives better than one without options. IT personnel are problem solvers; solving problems is central to our identity. Process improvement projects that deliver functionality which makes it easier (or even possible) to deliver solutions to IT’s customers address the core needs of IT developers and IT leaders. Bottom line: make sure your process improvement projects make it possible to do work that could not be done before, or at the very least provide a more flexible, choice-driven approach.

The analogy of vitamins, aspirin and the little blue pill frames a discussion that many process improvement leaders do not have when choosing process improvement projects. I suggest that to survive when budgets are being cut, process improvement programs must deliver real benefits NOW and solve the pain IT organizations have NOW; but to be true to our nature, changes must also provide for a better future. These considerations are not just packaging or salesmanship; addressing them is central to providing real value now and in the future.

PS: Take your vitamins, carry aspirin and if you have processes that stay the same for more than four cycles seek medical attention immediately.


Categories: Process Management

Azure: New DocumentDB NoSQL Service, New Search Service, New SQL AlwaysOn VM Template, and more

ScottGu's Blog - Scott Guthrie - Thu, 08/21/2014 - 21:39

Today we released a major set of updates to Microsoft Azure. Today’s updates include:

  • DocumentDB: Preview of a New NoSQL Document Service for Azure
  • Search: Preview of a New Search-as-a-Service offering for Azure
  • Virtual Machines: Portal support for SQL Server AlwaysOn + community-driven VMs
  • Web Sites: Support for Web Jobs and Web Site processes in the Preview Portal
  • Azure Insights: General Availability of Microsoft Azure Monitoring Services Management Library
  • API Management: Support for API Management REST APIs

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

DocumentDB: Announcing a New NoSQL Document Service for Azure

I’m excited to announce the preview of our new DocumentDB service - a NoSQL document database service designed for scalable and high performance modern applications.  DocumentDB is delivered as a fully managed service (meaning you don’t have to manage any infrastructure or VMs yourself) with an enterprise grade SLA.

As a NoSQL store, DocumentDB is truly schema-free. It allows you to store and query any JSON document, regardless of schema. The service provides built-in automatic indexing support – which means you can write JSON documents to the store and immediately query them using a familiar document oriented SQL query grammar. You can optionally extend the query grammar to perform service side evaluation of user defined functions (UDFs) written in server-side JavaScript as well. 

DocumentDB is designed to linearly scale to meet the needs of your application. The DocumentDB service is purchased in capacity units, each offering a reservation of high performance storage and dedicated performance throughput. Capacity units can be easily added or removed via the Azure portal or REST based management API based on your scale needs. This allows you to elastically scale databases in fine grained increments with predictable performance and no application downtime simply by increasing or decreasing capacity units.

Over the last year, we have used DocumentDB internally within Microsoft for several high-profile services.  We now have DocumentDB databases that are each 100s of TBs in size, each processing millions of complex DocumentDB queries per day, with predictable performance of low single digit ms latency.  DocumentDB provides a great way to scale applications and solutions like this to an incredible size.

DocumentDB also enables you to tune performance further by customizing the index policies and consistency levels you want for a particular application or scenario, making it an incredibly flexible and powerful data service for your applications.   For queries and read operations, DocumentDB offers four distinct consistency levels - Strong, Bounded Staleness, Session, and Eventual. These consistency levels allow you to make sound tradeoffs between consistency and performance. Each consistency level is backed by a predictable performance level ensuring you can achieve reliable results for your application.
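
As a minimal sketch of how the consistency level comes into play (based on my reading of the .NET SDK’s DocumentClient constructor, so verify the exact overload against the version you install), the desired level can be supplied when the client is created. The endpoint and key below are placeholders:

using System;
using Microsoft.Azure.Documents;        // ConsistencyLevel enum
using Microsoft.Azure.Documents.Client; // DocumentClient, ConnectionPolicy

class ConsistencySketch
{
    // Create a client that requests Session consistency for its reads.
    // Endpoint and key are placeholders - use the values from your own account.
    static DocumentClient CreateSessionClient()
    {
        return new DocumentClient(
            new Uri("https://YOUR-ACCOUNT.documents.azure.com"),
            "YOUR-AUTH-KEY",
            new ConnectionPolicy(),
            ConsistencyLevel.Session); // or Strong, BoundedStaleness, Eventual
    }
}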

DocumentDB has made a significant bet on ubiquitous formats like JSON, HTTP and REST – which makes it easy to start taking advantage of from any Web or Mobile applications.  With today’s release we are also distributing .NET, Node.js, JavaScript and Python SDKs.  The service can also be accessed through RESTful HTTP interfaces and is simple to manage through the Azure preview portal.

Provisioning a DocumentDB account

To get started with DocumentDB you provision a new database account. To do this, use the new Azure Preview Portal (http://portal.azure.com), click the Azure gallery and select the Data, storage, cache + backup category, and locate the DocumentDB gallery item.

image

Once you select the DocumentDB item, choose the Create command to bring up the Create blade for it.

In the create blade, specify the name of the service you wish to create, the amount of capacity you wish to scale your DocumentDB instance to, and the location around the world that you want to deploy it (e.g. the West US Azure region):

image

Once provisioning is complete, you can start to manage your DocumentDB account by clicking the new instance icon on your Azure portal dashboard. 

image

The keys tile can be used to retrieve the security keys to use to access the DocumentDB service programmatically.

Developing with DocumentDB

DocumentDB provides a number of different ways to program against it. You can use the REST API directly over HTTPS, or you can choose from either the .NET, Node.js, JavaScript or Python client SDKs.

The JSON data I am going to use for this example are two families:

// AndersonFamily.json file

{

    "id": "AndersenFamily",

    "lastName": "Andersen",

    "parents": [

        { "firstName": "Thomas" },

        { "firstName": "Mary Kay" }

    ],

    "children": [

        { "firstName": "John", "gender": "male", "grade": 7 }

    ],

    "pets": [

        { "givenName": "Fluffy" }

    ],

    "address": { "country": "USA", "state": "WA", "city": "Seattle" }

}

and

// WakefieldFamily.json file

{

    "id": "WakefieldFamily",

    "parents": [

        { "familyName": "Wakefield", "givenName": "Robin" },

        { "familyName": "Miller", "givenName": "Ben" }

    ],

    "children": [

        {

            "familyName": "Wakefield",

            "givenName": "Jesse",

            "gender": "female",

            "grade": 1

        },

        {

            "familyName": "Miller",

            "givenName": "Lisa",

            "gender": "female",

            "grade": 8

        }

    ],

    "pets": [

        { "givenName": "Goofy" },

        { "givenName": "Shadow" }

    ],

    "address": { "country": "USA", "state": "NY", "county": "Manhattan", "city": "NY" }

}

Using the NuGet package manager in Visual Studio, I can search for and install the DocumentDB .NET package into any .NET application. With the URI and Authentication Keys for the DocumentDB service that I retrieved earlier from the Azure Management portal, I can then connect to the DocumentDB service I just provisioned, create a Database, create a Collection, Insert some JSON documents and immediately start querying for them:

using (var client = new DocumentClient(new Uri(endpoint), authKey))

{

    var database = new Database { Id = "ScottsDemoDB" };

    database = await client.CreateDatabaseAsync(database);

 

    var collection = new DocumentCollection { Id = "Families" };

    collection = await client.CreateDocumentCollectionAsync(database.SelfLink, collection);

 

    //DocumentDB supports strongly typed POCO objects and also dynamic objects

    dynamic andersonFamily =  JsonConvert.DeserializeObject(File.ReadAllText(@".\Data\AndersonFamily.json"));

    dynamic wakefieldFamily = JsonConvert.DeserializeObject(File.ReadAllText(@".\Data\WakefieldFamily.json"));

 

    //persist the documents in DocumentDB

    await client.CreateDocumentAsync(collection.SelfLink, andersonFamily);

    await client.CreateDocumentAsync(collection.SelfLink, wakefieldFamily);

 

    //very simple query returning the full JSON document matching a simple WHERE clause

    var query = client.CreateDocumentQuery(collection.SelfLink, "SELECT * FROM Families f WHERE f.id = 'AndersenFamily'");

    var family = query.AsEnumerable().FirstOrDefault();

 

    Console.WriteLine("The Anderson family have the following pets:");              

    foreach (var pet in family.pets)

    {

        Console.WriteLine(pet.givenName);

    }

 

    //select JUST the child record out of the Family record where the child's gender is male

    query = client.CreateDocumentQuery(collection.DocumentsLink, "SELECT * FROM c IN Families.children WHERE c.gender='male'");

    var child = query.AsEnumerable().FirstOrDefault();

 

    Console.WriteLine("The Andersons have a son named {0} in grade {1} ", child.firstName, child.grade);

 

    //cleanup test database

    await client.DeleteDatabaseAsync(database.SelfLink);

}

As you can see above – the .NET API for DocumentDB fully supports the .NET async pattern, which makes it ideal for use with applications you want to scale well. 

Server-side JavaScript Stored Procedures

If I wanted to perform some updates affecting multiple documents within a transaction, I can define a stored procedure using JavaScript that swaps pets between families. In this scenario it would be important to ensure that one family didn’t end up with all the pets and another ended up with none due to something unexpected happening. Therefore if an error occurred during the swap process, it would be crucial that the database roll back the transaction and leave things in a consistent state.  I can do this with the following stored procedure that I run within the DocumentDB service:

function SwapPets(family1Id, family2Id) {

    var context = getContext();

    var collection = context.getCollection();

    var response = context.getResponse();

 

    collection.queryDocuments(collection.getSelfLink(), 'SELECT * FROM Families f where f.id  = "' + family1Id + '"', {},

    function (err, documents, responseOptions) {

        var family1 = documents[0];

 

        collection.queryDocuments(collection.getSelfLink(), 'SELECT * FROM Families f where f.id = "' + family2Id + '"', {},

        function (err2, documents2, responseOptions2) {

            var family2 = documents2[0];

                   

            var itemSave = family1.pets;

            family1.pets = family2.pets;

            family2.pets = itemSave;

 

            collection.replaceDocument(family1._self, family1,

                function (err, docReplaced) {

                    collection.replaceDocument(family2._self, family2, {});

                });

 

            response.setBody(true);

        });

    });

}

 

If an exception is thrown in the JavaScript function, due for instance to a concurrency violation when updating a record, the transaction is reversed and the system is returned to the state it was in before the function began.

It’s easy to register the stored procedure in code like below (for example: in a deployment script or app startup code):

    //register a stored procedure

    StoredProcedure storedProcedure = new StoredProcedure

    {

        Id = "SwapPets",

        Body = File.ReadAllText(@".\JS\SwapPets.js")

    };

               

    storedProcedure = await client.CreateStoredProcedureAsync(collection.SelfLink, storedProcedure);

 

And just as easy to execute the stored procedure from within your application:

    //execute stored procedure passing in the two family documents involved in the pet swap              

    dynamic result = await client.ExecuteStoredProcedureAsync<dynamic>(storedProcedure.SelfLink, "AndersenFamily", "WakefieldFamily");

If we checked the pets now linked to the Anderson Family we’d see they have been swapped.

Learning More

It’s really easy to get started with DocumentDB and create a simple working application in a couple of minutes.  The above was but one simple example of how to start using it.  Because DocumentDB is schema-less you can use it with literally any JSON document.  Because it performs automatic indexing on every JSON document stored within it, you get screaming performance when querying those JSON documents later. Because it scales linearly with consistent performance, it is ideal for applications you think might get large.

You can learn more about DocumentDB from the new DocumentDB development center here.

Search: Announcing preview of new Search as a Service for Azure

I’m excited to announce the preview of our new Azure Search service.  Azure Search makes it easy for developers to add great search experiences to any web or mobile application.   

Azure Search provides developers with all of the features needed to build out their search experience without having to deal with the typical complexities that come with managing, tuning and scaling a real-world search service.  It is delivered as a fully managed service with an enterprise grade SLA.  We also are releasing a Free tier of the service today that enables you to use it with small-scale solutions on Azure at no cost.

Provisioning a Search Service

To get started, let’s create a new search service.  In the Azure Preview Portal (http://portal.azure.com), navigate to the Azure Gallery, and choose the Data storage, cache + backup category, and locate the Azure Search gallery item.

image

Locate the “Search” service icon and select Create to create an instance of the service:

image

You can choose from two Pricing Tier options: Standard which provides dedicated capacity for your search service, and a Free option that allows every Azure subscription to get a free small search service in a shared environment.

The standard tier can be easily scaled up or down and provides dedicated capacity guarantees to ensure that search performance is predictable for your application.  It also supports the ability to index 10s of millions of documents with lots of indexes.

The free tier is limited to 10,000 documents, up to 3 indexes and has no dedicated capacity guarantees. However it is also totally free, and provides a great way to learn and experiment with all of the features of Azure Search.

Managing your Azure Search service

After provisioning your Search service, you will land in the Search blade within the portal - which allows you to manage the service, view usage data and tune the performance of the service:

image

I can click on the Scale tile above to bring up the details of the number of resources allocated to my search service. If I had created a Standard search service, I could use this to increase the number of replicas allocated to my service to support more searches per second (or to provide higher availability) and the number of partitions to give me support for higher numbers of documents within my search service.

Creating a Search Index

Now that the search service is created, I need to create a search index that will hold the documents (data) that will be searched. To get started, I need two pieces of information from the Azure Portal: the service URL to access my Azure Search service (accessed via the Properties tile) and the Admin Key to authenticate against the service (accessed via the Keys tile).

image

Using this search service URL and admin key, I can start using the search service APIs to create an index and later upload data and issue search requests. I will be sending HTTP requests against the API using that key, so I’ll setup a .NET HttpClient object to do this as follows:

HttpClient client = new HttpClient();

client.DefaultRequestHeaders.Add("api-key", "19F1BACDCD154F4D3918504CBF24CA1F");

I’ll start by creating the search index. In this case I want an index I can use to search for contacts in my dataset, so I want searchable fields for their names and tags; I also want to track the last contact date (so I can filter or sort on that later on) and their address as a lat/long location so I can use it in filters as well. To make things easy I will be using JSON.NET (to do this, add the NuGet package to your VS project) to serialize objects to JSON.

var index = new

{

    name = "contacts",

    fields = new[]

    {

        new { name = "id", type = "Edm.String", key = true },

        new { name = "fullname", type = "Edm.String", key = false },

        new { name = "tags", type = "Collection(Edm.String)", key = false },

        new { name = "lastcontacted", type = "Edm.DateTimeOffset", key = false },

        new { name = "worklocation", type = "Edm.GeographyPoint", key = false },

    }

};

 

var response = client.PostAsync("https://scottgu-dev.search.windows.net/indexes/?api-version=2014-07-31-Preview",

                                new StringContent(JsonConvert.SerializeObject(index), Encoding.UTF8, "application/json")).Result;

response.EnsureSuccessStatusCode();

You can run this code as part of your deployment code or as part of application initialization.

Populating a Search Index

Azure Search uses a push API for indexing data. You can call this API with batches of up to 1000 documents to be indexed at a time. Since it’s your code that pushes data into the index, the original data may be anywhere: in a SQL Database in Azure, DocumentDb database, blob/table storage, etc.  You can even populate it with data stored on-premises or in a non-Azure cloud provider.

Note that indexing is rarely a one-time operation. You will probably have an initial set of data to load from your data source, but then you will want to push new documents as well as update and delete existing ones. If you use Azure Websites, this is a natural scenario for Webjobs that can run your indexing code regularly in the background.

Regardless of where you host it, the code to index data needs to pull data from the source and push it into Azure Search. In the example below I’m just making up data, but you can see how I could be using the result of a SQL or LINQ query or anything that produces a set of objects that match the index fields we identified above.

var batch = new

{

    value = new[]

    {

        new

        {

            id = "221",

            fullname = "Jay Adams",

            tags = new string[] { "work" },

            lastcontacted = DateTimeOffset.UtcNow,

            worklocation = new

            {

                type = "Point",

                coordinates = new [] { -122.131577, 47.678581 }

            }

        },

        new

        {

            id = "714",

            fullname = "Catherine Abel",

            tags = new string[] { "work", "personal" },

            lastcontacted = DateTimeOffset.UtcNow,

            worklocation = new

            {

                type = "Point",

                coordinates = new [] { -121.825579, 47.1419814}

            }

        }

    }

};

 

var response = client.PostAsync("https://scottgu-dev.search.windows.net/indexes/contacts/docs/index?api-version=2014-07-31-Preview",

                                new StringContent(JsonConvert.SerializeObject(batch), Encoding.UTF8, "application/json")).Result;

response.EnsureSuccessStatusCode();

Searching an Index

After creating an index and populating it with data, I can now issue search requests against the index. Searches are simple HTTP GET requests against the index, and responses contain the data we originally uploaded as well as accompanying scoring information.

I can do a simple search by executing the code below, where searchText is a string containing the user input, something like abel work for example:

var response = client.GetAsync("https://scottgu-dev.search.windows.net/indexes/contacts/docs?api-version=2014-07-31-Preview&search=" + Uri.EscapeDataString(searchText)).Result;

response.EnsureSuccessStatusCode();

 

dynamic results = JsonConvert.DeserializeObject(response.Content.ReadAsStringAsync().Result);

 

foreach (var result in results.value)

{

    Console.WriteLine("FullName:" + result.fullname + " score:" + (double)result["@search.score"]);

}

Learning More

The above is just a simple scenario of what you can do.  There are a lot of other things we could do with searches. For example, I can use query string options to filter, sort, project and page over the results. I can use hit-highlighting and faceting to create a richer way to navigate results and suggestions to implement auto-complete within my web or mobile UI.
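
As a rough illustration of those options, the request below adds OData-style $filter, $orderby and paging parameters to the same docs endpoint used earlier, reusing the HttpClient (‘client’) created above. The filter and orderby expressions are my own examples; verify the exact syntax supported by your API version against the Azure Search documentation.

// Sketch: filter, sort and page results from the 'contacts' index.
// The filter/orderby expressions are illustrative - check the Azure Search
// documentation for the exact OData syntax supported by your API version.
var url = "https://scottgu-dev.search.windows.net/indexes/contacts/docs" +
          "?api-version=2014-07-31-Preview" +
          "&search=" + Uri.EscapeDataString("work") +
          "&$filter=" + Uri.EscapeDataString("lastcontacted gt 2014-01-01T00:00:00Z") +
          "&$orderby=" + Uri.EscapeDataString("lastcontacted desc") +
          "&$top=10&$skip=0";

var response = client.GetAsync(url).Result;
response.EnsureSuccessStatusCode();
Console.WriteLine(response.Content.ReadAsStringAsync().Result);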

In this example, I used the default ranking model, which uses statistics of the indexed text and search string to compute scores. You can also author your own scoring profiles that model scores in ways that match the needs of your application.

Check out the Azure Search documentation for more details on how to get started, and some of the more advanced use-cases you can take advantage of.  With the free tier now available at no cost to every Azure subscriber, there is no longer any reason not to have Search fully integrated within your applications.

Virtual Machines: Support for SQL Server AlwaysOn, VM Depot images

Last month we added support for managing VMs within the Azure Preview Portal (http://portal.azure.com).  We also released built-in portal support that enables you to easily create multi-VM SharePoint Server Farms as well as a slew of additional Azure Certified VM images.  You can learn more about these updates in my last blog post.

Today, I’m excited to announce new support for automatically deploying SQL Server VMs with AlwaysOn configured, as well as integrated portal support for community supported VM Depot images.

SQL Server AlwaysOn Template

AlwaysOn Availability Groups, released in SQL Server 2012 and enhanced in SQL Server 2014, guarantee high availability for mission-critical workloads. Last year we started supporting SQL Availability Groups on Azure Infrastructure Services. In such a configuration, two SQL replicas (primary and secondary), each in its own Azure VM, are configured for automatic failover, and a listener (DNS name) is configured for client connectivity. Other components required are a file share witness to guarantee quorum in the configuration to avoid “split brain” scenarios, and a domain controller to join all VMs to the same domain. The SQL as well as the domain controller replicas are each deployed to an availability set to ensure they are in different Azure failure and upgrade domains.

Prior to today’s release, setting up the Availability Group configuration could be tedious and time consuming. We have dramatically simplified this experience through a new SQL Server AlwaysOn template in the Azure Gallery. This template fully automates the configuration of a highly available SQL Server deployment on Azure Infrastructure Services using an Availability Group.

You can find the template by navigating to the Azure Gallery within the Azure Preview Portal (http://portal.azure.com), selecting the Virtual Machine category on the left and selecting the SQL Server 2014 AlwaysOn gallery item. In the gallery details page, select Create. All you need is to provide some basic configuration information such as the administrator credentials for the VMs and the rest of the settings are defaulted for you. You may consider changing the defaults for Listener name as this is what your applications will use to connect to SQL Server.

image

Upon creation, 5 VMs are created in the resource group: 2 VMs for the SQL Server replicas, 2 VMs for the Domain Controller replicas, and 1 VM for the file share witness.

Once created, you can RDP to one of the SQL Server VMs to see the Availability Group configuration as depicted below:

image

Try out the SQL Server AlwaysOn template in the Azure Preview Portal today and give us your feedback!

VM Depot in Azure Gallery

Community-driven VM Depot images have been supported on the Azure platform for a couple of years now. But prior to today’s release they weren’t fully integrated into the mainline user experience.

Today, I’m excited to announce that we have integrated community VMs  into the Azure Preview Portal and the Azure gallery. With this release, you will find close to 300 pre-configured Virtual Machine images for Microsoft Azure.

Using these images, fully functional Virtual Machines can be deployed in the Preview Portal in minutes and customized for specific use cases. Starting from base operating system distributions (such as Debian, Ubuntu, CentOS, Suse and FreeBSD) through developer stacks (such as LAMP, Ruby on Rails, Node and Django), to complete applications (such as Wordpress, Drupal and Apache Solr), there is something for everyone in VM Depot.

Try out the VM Depot images in the Azure gallery from within the Virtual Machine category.

Web Sites: WebJobs and Process Management in the Preview Portal

Starting with today’s Azure release, Web Site WebJobs are now supported in the Azure Preview Portal.  You can also now drill into your Web Sites and monitor the health of any processes running within them (both to host your web code as well as your web jobs).

Web Site WebJobs

Using WebJobs, you can now run any code within your Azure Web Sites – and do so in a way that is readily parallelizable, globally scalable, and complete with remote debugging, full VS support and an optional SDK to facilitate authoring. For more information about the power of WebJobs, visit Azure WebJobs recommended resources.
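
To give a sense of how small a WebJob can be, here is a minimal sketch of a Continuous WebJob using the optional WebJobs SDK mentioned above: a console app whose function runs whenever a message lands on an Azure Storage queue. The queue name is a placeholder and the connection strings are expected in configuration; treat the exact attribute and class names as something to confirm against the current SDK documentation.

using System.IO;
using Microsoft.Azure.WebJobs; // the optional WebJobs SDK mentioned above

public class Program
{
    static void Main()
    {
        // JobHost reads the storage connection strings from configuration,
        // then listens for triggers and dispatches them to matching functions.
        var host = new JobHost();
        host.RunAndBlock();
    }
}

public class Functions
{
    // Runs whenever a message appears on the 'orders' queue (placeholder name).
    public static void ProcessOrder([QueueTrigger("orders")] string message, TextWriter log)
    {
        log.WriteLine("Processing order message: " + message);
    }
}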

With today’s Azure release, we now support two types of WebJobs: On Demand and Continuous.  To use WebJobs in the preview portal, navigate to your web site and select the WebJobs tile within the Web Site blade. Notice that the part also now shows the count of WebJobs available.

image

By drilling into the tile, you can view existing WebJobs as well as create new OnDemand or Continuous WebJobs. Scheduled WebJobs are not yet supported in the preview portal, but expect to see this in the near future.

Web Site Processes

I’m excited to announce a new feature in the Azure Web Sites experience in the Preview Portal: Websites Processes. Using Websites Processes you can enumerate the different instances of your site, browse through the different processes on each instance, and even drill down to the handles and modules associated with each process. You can then check for detailed information like version, language and more.

image

In addition, you also get rich monitoring for CPU, Working Set and Thread count at the process level.  Just like with Task Manager for Windows, data collection begins when you open the Websites Processes blade, and stops when you close it.

image

This feature is especially useful when your site has been scaled out and is misbehaving in some specific instances but not in others. You can quickly identify runaway processes, find open file handles, and even kill a specific process instance.

Monitoring and Management SDK: Programmatic Access to Monitoring Data

The Azure Management Portal provides built-in monitoring and management support that makes it easy for you to track the health of your applications and solutions deployed within Azure.

If you want to programmatically access monitoring and management features in Azure, you can also now use our .NET SDK from Nuget. We are releasing this SDK to general availability today, so you can now use it for your production services!

For example, if you want to build your own custom dashboard that shows metric data from across your services, you can get that metric data via the SDK:

// Create the metrics client by obtaining the certificate with the specified thumbprint.

MetricsClient metricsClient = new MetricsClient(new CertificateCloudCredentials(SubscriptionId, GetStoreCertificate(Thumbprint)));

 

// Build the resource ID string.

string resourceId = ResourceIdBuilder.BuildWebSiteResourceId("webtest-group-WestUSwebspace", "webtests-site");

 

// Get the metric definitions.

MetricDefinitionCollection metricDefinitions = metricsClient.MetricDefinitions.List(resourceId, null, null).MetricDefinitionCollection;

 

// Display the available metric definitions.

Console.WriteLine("Choose metrics (comma separated) to list:");

int count = 0;

foreach (MetricDefinition metricDefinition in metricDefinitions.Value)

{

    Console.WriteLine(count + ":" + metricDefinition.DisplayName);

    count++;

}

 

// Ask the user which metrics they are interested in.

var desiredMetrics = Console.ReadLine().Split(',').Select(x =>  metricDefinitions.Value.ToArray()[Convert.ToInt32(x.Trim())]);

 

// Get the metric values for the last 20 minutes.

MetricValueSetCollection values = metricsClient.MetricValues.List(

    resourceId,

    desiredMetrics.Select(x => x.Name).ToList(),

    "",

    desiredMetrics.First().MetricAvailabilities.Select(x => x.TimeGrain).Min(),

    DateTime.UtcNow - TimeSpan.FromMinutes(20),

    DateTime.UtcNow

).MetricValueSetCollection;

 

// Display the metric values to the user.

foreach (MetricValueSet valueSet in values.Value )

{

    Console.WriteLine(valueSet.DisplayName + " for the past 20 minutes:");

    foreach (MetricValue metricValue in valueSet.MetricValues)

    {

        Console.WriteLine(metricValue.Timestamp + "\t" + metricValue.Average);

    }

}

 

Console.Write("Press any key to continue:");

Console.ReadKey();

We support metrics for a variety of services with the monitoring SDK:

Service | Typical metrics | Frequencies
Cloud services | CPU, Network, Disk | 5 min, 1 hr, 12 hrs
Virtual machines | CPU, Network, Disk | 5 min, 1 hr, 12 hrs
Websites | Requests, Errors, Memory, Response time, Data out | 1 min, 1 hr
Mobile Services | API Calls, Data Out, SQL performance | 1 hr
Storage | Requests, Success rate, End2End latency | 1 min, 1 hr
Service Bus | Messages, Errors, Queue length, Requests | 5 min
HDInsight | Containers, Apps running | 15 min

If you’d like to manage advanced autoscale settings that aren’t possible to do in the Portal, you can also do that via the SDK. For example, you can construct autoscale based on custom metrics – you can autoscale by anything that is returned from MetricDefinitions.

All of the documentation on the SDK is available on MSDN.

API Management: Support for Services REST API

We launched the Azure API Management service into preview in May of this year.  The API Management service enables  customers to quickly and securely publish APIs to partners, the public development community, and even internal developers.

Today, I’m excited to announce the availability of the API Management REST API which opens up a large number of new scenarios. It can be used to manage APIs, products, subscriptions, users and groups in addition to accessing your API analytics. In fact, virtually any management operation available in the Management API Portal is now accessible programmatically - opening up a host of integration and automation scenarios, including directly monetizing an API with your commerce provider of choice, taking over user or subscription management, automating API deployments and more.

We've even provided an additional SAS (Shared Access Signature) security option. An integrated experience in the publisher portal allows you to generate SAS tokens - so securely calling your API service couldn’t be easier. In just three easy steps:

  1. Enable the API on the System Settings page on the Publisher Portal
  2. Acquire a time-limited access token either manually or programmatically
  3. Start sending requests to the API, providing the token with every request, as in the sketch below
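
Here is a minimal sketch of step 3, assuming the token is sent in an Authorization header the way the Publisher Portal’s generated examples show. The management endpoint URL, API path, api-version value and token are all placeholders to replace with the values from your own service.

using System;
using System.Net.Http;

class ApiManagementSketch
{
    static void Main()
    {
        // All values below are placeholders - copy the real management endpoint,
        // api-version and SAS token from your API Management Publisher Portal.
        var client = new HttpClient
        {
            BaseAddress = new Uri("https://YOUR-SERVICE.management.azure-api.net")
        };
        client.DefaultRequestHeaders.Add(
            "Authorization", "SharedAccessSignature YOUR-GENERATED-SAS-TOKEN");

        // Example call: list the APIs defined in the service.
        var response = client.GetAsync("/apis?api-version=YOUR-API-VERSION").Result;
        response.EnsureSuccessStatusCode();
        Console.WriteLine(response.Content.ReadAsStringAsync().Result);
    }
}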

image 

See the REST API reference for full details.

Delegation of user registration and product subscription

The new API Management REST API makes it easy to automate and integrate other processes with API management. Many customers integrating in this way already have a user account system and would prefer to use this existing resource, instead of the built-in functionality provided by the Developer Portal. This feature, called Delegation, enables your existing website or backend to own the user data, manage subscriptions and seamlessly integrate with API Management's dynamically generated API documentation.

image

It's easy to enable Delegation: in the Publisher Portal navigate to the Delegation section and enable Delegated Sign-in and Sign up, provide the endpoint URL and validation key and you're good to go. For more details, check out the how-to guide.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have an Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

Product Ownership

Software Requirements Blog - Seilevel.com - Thu, 08/21/2014 - 17:00
When a company decides, for whatever reason, that Agile is the right idea for them and they are going to go all out to switch over to Scrum as a methodology, the person they usually look to as taking on the new Product Owner roles (POs) are the existing Product Managers (PdMs). This makes all […]
Categories: Requirements

How to move your files to Google Drive

Google Code Blog - Thu, 08/21/2014 - 17:00
Posted by Chuck Coulson, Drive Technology Partnerships, Google

Google Drive for Work is a new premium offering for businesses that includes unlimited storage, advanced audit reporting and new security controls and features, such as encryption at rest.

If you're getting ready to move your company to Drive, one of the first things on your mind is how to migrate all your existing files with as little hassle as possible. It's easy to migrate your files by uploading them directly to Drive or using the Drive Sync client. But, what if you have files stored elsewhere that you want to consolidate? Or what if you want to migrate multiple users at once? Many independent software vendors (ISVs) have built solutions to help organizations migrate their files from different File Sync and Share (FSS) solutions, local hard drives and other data sources. Here are some of the options available for you to use:
  • Cloud Migrator, by Cloud Technology Solutions, migrates user accounts and files to Google Drive and other Google Apps services. (website, blog post)
  • Cloudsfer, by Tzunami, transfers files from Box, Dropbox and Microsoft OneDrive to Google Drive. (website)
  • Migrator for Google Apps, by Backupify, migrates and consolidates personal Google Drive or other Google Apps for Business accounts into a single domain. (website, blog post)
  • Mover migrates data from 23 cloud services providers, web services, and databases into Google Drive. (website, blog post)
  • Nava Certus, by LinkGard, provides a migration and synchronization solution for on-premise and cloud-based storage platforms, including Dropbox, Microsoft OneDrive, Amazon S3, as well as local file systems. (website, blog post)
  • SkySync, by Portal Architects, integrates existing on-site storage systems as well as other cloud storage providers to Google Drive. (website, blog post)
These are just a few companies that offer migration solutions. Please visit the Google Apps Marketplace for a complete listing of tools and offerings that add value to the Google Apps platform.
Categories: Programming

The Plague of Methods and Frameworks

NOOP.NL - Jurgen Appelo - Thu, 08/21/2014 - 16:33
Methods and Frameworks

I know of no industry in the world that is as infested with methods and frameworks as the software business. Whether it’s RUP, XP, Scrum, AUP, DAD, or SAFe, it seems IT businesses are always looking for yet another method or framework that they can “implement” next month.

The post The Plague of Methods and Frameworks appeared first on NOOP.NL.

Categories: Project Management

Software Development Linkopedia August 2014

From the Editor of Methods & Tools - Thu, 08/21/2014 - 14:48
Here is our monthly selection of interesting knowledge material on programming, software testing and project management. This month you will find some interesting information and opinions about Agile retrospectives, software architecture, software developer psychology, software testing in Agile teams, quality code and the (funny) history of programming.
  • Web site: Fun Retrospectives
  • Blog: How to make software architecture decisions?
  • Blog: Cognitive Biases in Software Engineering
  • Blog: How the Other Half Works: an Adventure in the Low Status of Software Engineers
  • Blog: Confessions of an ex-developer
  • Article: Tearing Down the Walls – Embedding QA in a TDD/Pairing and ...

Is pairing for everybody?

Actively Lazy - Thu, 08/21/2014 - 08:11

Pair programming is a great way to share knowledge. But every developer is different, does pairing work for everyone?

Pairing helps a team normalise its knowledge – what one person knows, everyone else learns through pairing: keyboard shortcuts, techniques, practices, third party libraries as well as the details of the source code you’re working in. This pushes up the average level of the team and stops knowledge becoming siloed.

Pairing also helps with discipline: it’s a lot harder to argue that you don’t need a unit test when there’s someone sitting next to you, literally acting as your conscience. It’s also a lot harder to just do the quick and dirty hack to get on to the next task, when the person sitting next to you has taken control of the keyboard to stop you committing war crimes against the source code.

The biggest problem most teams face is basically one of communication: coordinating, in detail, the activities of a team of developers is difficult. Ideally, every developer would know everything that is going on across the team – but this clearly isn’t practical. Instead, we have to draw boundaries to make it easier to reason about the system as a whole, without knowing the whole system to the same level of detail. I’ll create an API, some boundary layer, and we each work to our own side of it. I’ll create the service, you sort out the user interface. I’ll sort out the network protocol, you sort out the application layer. You have to introduce an architectural boundary to simplify the communication and coordination. Your architecture immediately reflects the relationships of the developers building it.

Whereas on teams that pair, these boundaries can be softer. They still happen, but because pairs rotate you see both sides of any boundary, so it doesn't become a black box you don't know about and can't change. One day I'm writing the user interface code, the next I'm writing the service layer that feeds it. This is how you spot inconsistencies and opportunities to fix the architecture and take advantage of implementation details on both sides. Otherwise this communication is hard. Continuous pair rotation means you can get close to the ideal that each developer knows, broadly, what is happening everywhere.

However, let’s be honest: pairing isn’t for everyone. I’ve worked with some people who were great at pairing, who were a pleasure to work with. People who had no problem explaining their thought process and no ego to get bruised when you point out the fatal flaw in their idea. People who spot when you’ve lost the train of thought and pick up where you drifted off from.

A good pairing session becomes very social. A team that is pairing can sound very noisy. It can be one of the hardest things to get used to when you start pairing: I seem to spend my entire day arguing and talking. When are we gonna get on and write some damned code? But that just highlights how little of the job is actually typing in source code. Most of the day is figuring out which change to make and where. A single line of code can take hours of arguing to get right and in the right place.

But programming tends to attract people who are less sociable than others – and let’s face it, we’re a pretty anti-social bunch: I spend my entire day negotiating with a machine that works in 1s and 0s. Not for me the subtle nuances of human communication, it either compiles or it doesn’t. I don’t have to negotiate or try and out politick the compiler. I don’t have to deal with the compiler having “one of those days” (well, I say that, sometimes I swear…). I don’t have to take the compiler to one side and offer comforting words because its cat died. I don’t have to worry about hurting the compiler’s feelings because I made the same mistake for the hundredth time: “yes of course I’m listening to you, no I’m not just ignoring you. Of course I value your opinions, dear. But seriously, this is definitely an IList of TFoo!”

So it’s no surprise that among the great variety of programmers you meet, some are extrovert characters who relish the social, human side of working in a team of people, building software. As well as the introvert characters who relish the quiet, private, intellectual challenge of crafting an elegant solution to a fiendish problem.

And so to pairing: any team will end up with a mixture of characters. The extroverts will tend to enjoy pairing, while the introverts will tend to find it harder and seek to avoid it. This isn't necessarily a question of education or persuasion: the benefits are relatively intangible, and more introverted developers may find the whole process less enjoyable than working solo. It sounds trite, but happy developers are productive developers. There's no point doing anything that makes some of your peers unhappy. All teams need to agree rules. For example, some people like eating really smelly food in an open plan office. Good teams tend to agree rules about this kind of behaviour; everyone agrees that small sacrifices for an individual make a big difference for team harmony.

However, how do you resolve a difference of opinion with pairing? As a team decision, pairing is a bit all or nothing. Either we agree to pair on everything, so there’s no code ownership, regular rotation and we learn from each other. Or we don’t, and we each become responsible for our own dominion. We can’t agree that those that want to pair will go into the pairing room so as not to upset everyone else.

One option is to simply require that everyone on your team has to love pairing. I don’t know about you: hiring good people is hard. The last thing I want to do is start excluding people who could otherwise be productive. Isn’t it better to at least have somebody doing something, even if they’re not pairing?

Another option is to force developers to pair, even if they find it difficult or uncomfortable. But is that really going to be productive? Building resentment and unhappiness is not going to create a high performance team. Of course, the other extreme is just as likely to cause upset: if you stop all pairing, then those that want to will feel resentful and unhappy.

And what about the middle ground? Can you have a team where some people pair while others work on their own? It seems inevitable that Conway’s law will come into play: the structure of the software will reflect the structure of the team. It’s very difficult for there to be overlap between developers working on their own and developers that are pairing. For exactly the same reason it’s difficult for a group of individual developers to overlap on the same area of code at the same time: you’ll necessarily introduce some architectural boundary to ease coordination.

This means you still end up with a collection of silos, some owned by individual developers, some owned by a group of developers. Does this give you the best compromise? Or the worst of both worlds?

What’s your experience? What have you tried? What worked, what didn’t?


Categories: Programming, Testing & QA

Two lessons on haproxy checks and swap space

Agile Testing - Grig Gheorghiu - Thu, 08/21/2014 - 02:03
Let's assume you want to host a Wordpress site which is not going to get a lot of traffic. You want to use EC2 for this. You still want as much fault tolerance as you can get at a decent price, so you create an Elastic Load Balancer endpoint which points to 2 (smallish) EC2 instances running haproxy, with each haproxy instance pointing in turn to 2 (not-so-smallish) EC2 instances running Wordpress (Apache + MySQL). 
You choose to run haproxy behind the ELB because it gives you more flexibility in terms of load balancing algorithms, health checks, redirections etc. Within haproxy, one of the Wordpress servers is marked as a backup for the other, so it only gets hit by haproxy when the primary one goes down. On this secondary Wordpress instance you set up MySQL to be a slave of the primary instance's MySQL.
Here are two things (at least) that you need to make sure you have in this scenario:
1) Make sure you specify the httpchk option in haproxy.cfg, otherwise the primary server will not be marked as down even if Apache goes down. So you should have something like:
backend servers-http
  server s1 10.0.1.1:80 weight 1 maxconn 5000 check port 80
  server s2 10.0.1.2:80 backup weight 1 maxconn 5000 check port 80
  option httpchk GET /
2) Make sure you have swap space in case the memory on the Wordpress instances gets exhausted, in which case random processes will be killed by the OOM killer (and one of those processes can be mysqld). By default, there is no swap space when you spin up an Ubuntu EC2 instance. Here's how to set up a 2 GB swapfile:
dd if=/dev/zero of=/swapfile1 bs=1024 count=2097152
mkswap /swapfile1
chmod 0600 /swapfile1
swapon /swapfile1
echo "/swapfile1 swap swap defaults 0 0" >> /etc/fstab
I hope these two things will help you if you're not already doing them ;-)

Which Software Size Metric Should I Use? Part 3

The fourth step in our checklist for selecting a size metric is an evaluation of the temporal component. This step focuses your evaluation on answering the question, “Is the metric available when you need it?” When you need to know how big a project is depends on what you intend to do with the data (that goal thing again). The majority of goals can be viewed as either estimation related (forward view) or measurement related (historical view). Different sizing metrics can be initially applied at different times during a project’s life. For example, Use Case Points can’t be developed until Use Cases are developed, and lines of code can’t be counted until you are deep into construction, or at the very earliest, in technical design.

[Figure 4]

The major dichotomy is between estimation needs and measurement needs. As Figure 4 suggests, determining size from requirements (or earlier) will require focusing on functional metrics. Functional metrics can be applied earlier in the process (regardless of methodology) because they are based on a higher level of abstraction that is more closely aligned with the business description of the project. Developing estimates or sizing later in the development process opens up the possibility of using more physical metrics, which are more closely aligned with how developers view their work.


Categories: Process Management

No News is Bad News - Feedback Must Create Steering Signal from Plan †

Herding Cats - Glen Alleman - Wed, 08/20/2014 - 22:41

Project Manager: How's the project progressing to plan?

Developers: We're spending money, consuming resources, producing outputs that the customer likes. 

Project Manager: I was more interested in what's our performance against our planned spend, planned resource consumption, and planned outputs of value to the customer? 

Developers: What do you mean? We didn't estimate any of that, we're managing this project with #NoEstimates. You know, that new alternative to estimates for making decisions in software development. That is, ways to make decisions with "No Estimates" of the impacts of those decisions, or of our work, on the future cost, schedule, or technical performance. You know, where we can use Decision making frameworks for projects that do not require estimates, or apply Investment models for software projects that do not require estimates, and have our project management methods for risk management, scope management, and progress reporting not require any of those annoying estimates. 'Cause we kinda suck at them anyway, so we just decided that instead of learning how to estimate, we'll just not estimate and get back to coding.

Project Manager: Oh, you mean that approach of managing other people's money that violates the principles of software microeconomics with Open Loop Control - where our organization makes business decisions on the allocation of our limited resources without examining how those decisions affect the supply and demand of our resources. You do know about those resources? Like money, people, and time?

Developers: Yea, we don't need any of that mumbo jumbo microeconomics that we all learned in school, since we didn't pay attention in that boring statistics and probability class that tried to teach us that all variables on a project are actually random variables, and that we should know something about their behaviour in the future if we're going to have a hope in hell of ever managing this project in the presence of uncertainty about those values.

Project Manager: What's that smell? Maybe we'd better start rearranging the deck chairs on our ship here real soon, 'cause I smell an iceberg getting closer.

No project can be managed to successful closure in the absence of steering targets defined at periodic intervals for the expenditure of cost, schedule, and technical performance. Knowing what those steering targets should be requires estimating their values, then measuring the actual values to develop the needed steering signal - the variance between plan and actual.
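
As a rough sketch of that feedback loop (my illustration, not from the original post, with invented numbers), the steering signal at each interval is simply the difference between the planned and actual values:

# Illustration only: steering signal = variance between plan and actual,
# sampled at periodic intervals (the numbers are made up for the example).
planned_spend = [100, 200, 300, 400]   # cumulative planned cost per period
actual_spend = [110, 230, 310, 420]    # cumulative actual cost per period

for period, (plan, actual) in enumerate(zip(planned_spend, actual_spend), start=1):
    variance = plan - actual           # negative means we are over the planned spend
    print("Period %d: plan=%d, actual=%d, variance=%d" % (period, plan, actual, variance))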

The only way out of the need to estimate those intermediate steering targets is to straight line the budget, schedule, and needed technical performance - from start to end, then measure the actual performance. 

Like the intended route of the Titanic, our project does not proceed in a straight line, so that idea is a non-starter. And like the Titanic, our project cannot confuse the intended speed with the actual speed, just like we can't confuse the budget - the total planned crossing time - with the actual cost - the actual total crossing time.


Without those pesky intermediate targets to steer toward - targets created by estimating the needed cost, the needed scheduled arrival date, and the needed capabilities on the needed date for the needed cost - we're managing the project Open Loop, driving in a straight line, never knowing what will pop up in our path.

Say goodbye to Kate, Leonardo, you're gonna get wet.

† Full attribution for the inspiration for this post comes from the very useful blog by Gene Hughson

Related articles:
  • Staying on Plan Means Closed Loop Control
  • How Not To Make Decisions Using Bad Estimates
  • Control Systems - Their Misuse and Abuse
  • More Than Target Needed To Steer Toward Project Success
  • Why Project Management is a Control System
Categories: Project Management

Help! Too Many Incidents! - Capacity Assignment Policy In Agile Teams

Xebia Blog - Wed, 08/20/2014 - 22:26

As an Agile coach, scrum master, product owner, or team member you probably have been in the situation before in which more work is thrown at the team than the team has capacity to resolve.

In case of work that is already known, this is basically a scheduling problem: determining the optimal order in which the team should complete the work so as to maximise business value and outcome. This typically applies to the case in which a team is working to build or extend a new product.

The other interesting case is e.g. operational teams that work on items that arrive in an ad hoc way. Examples include production incidents. Work arrives ad hoc and the product owner needs to allocate a certain capacity of the team to certain types of incidents. E.g. should the team work on database related issues, or on front-end related issues?

If the team has more than enough capacity the answer is easy: solve them all! This blog will show how to determine what capacity of the team is best allocated to what type of incident.

What are we trying to solve?

Before going into details, let's define what problem we want to solve.

Assume that the team recognises various types of incidents, e.g. database related, GUI related, perhaps some more. Each type of incident will have an associated average resolution time. Also, each type will arrive at the team at a certain rate, the input rate. E.g. database related incidents arrive 3 times per month, whereas GUI related incidents occur 4 times per week. Finally, each incident type will have different operational costs assigned to it. The effect of database related incidents might be that 30 users are unable to work. GUI related incidents e.g. affect only part of the application affecting a few users.

At any time, the team has a backlog of incidents to resolve. An operational cost is associated with this backlog, and it is this operational cost that we want to minimise.

What makes this problem interesting is that we want to minimise this cost under the constraint of having limited number of resources, or capacity. The product owner may wish to deliberately ignore GUI type of incidents and let the team work on database related incidents. Or assign 20% of the capacity to GUI related and 80% of the available capacity to database related incidents?

Types of Work

For each type of work we define the input rate, production rate, cost rate, waiting time, and average resolution time:

 \lambda_i = \text{average input rate for type '$i$'},

 C_i = \text{operational cost rate for type '$i$'},

 x_i = \text{average resolution time for type '$i$'},

 w_i = \text{average waiting time for type '$i$'},

 s_i = \text{average time spent in the system for type '$i$'},

 \mu_i = \text{average production rate for type '$i$'}

Some items get resolved and spend the time  s_i = x_i + w_i  in the system. Other items never get resolved and spend time  s_i = w_i  in the system.

In the previous blog Little's Law in 3D the average total operational cost is expressed as:

 \text{Average operational cost for type '$i$'} = \frac{1}{2} \lambda_i C_i \overline{S_i(S_i+T)}

To get the total cost we need to sum this over all work types 'i'.
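
In other words (just restating the per-type formula above as a sum over all work item types):

 \text{Average total operational cost} = \sum_i \frac{1}{2} \lambda_i C_i \overline{S_i(S_i+T)}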

System

The process for work items is that they enter the system (team) as soon as they are found or detected. From the moment they are found, these items contribute to the total operational cost. This stops as soon as they are resolved. For some of them the product owner decides that the team will start working on them. At the point that the team starts working on an item, the waiting time w_i is known, and on average the item spends a further time x_i before it is resolved.

As the team has limited resources, they cannot work on all the items. Over time the average time spent in the system will increase. As shown in the previous blog Why Little's Law Works...Always Little's Law still applies when we consider a finite time interval.

This process is depicted below:

[Figure: flow of work items - the 'orange' area holds items waiting to be picked up, the 'green' area holds the items the team is working on]

 \overline{M} = \text{fixed team capacity},

 \overline{M_i} = \text{team capacity allocated to working on problems of type '$i$'},

 \overline{N} = \text{total number of items in the system}

The total number of items allowed in the 'green' area is restricted by the team's capacity. The team may set a WiP limit to enforce this. In contrast the number of items in the 'orange' area is not constrained: incidents flow into the system as they are found and leave the system only after they have been resolved.

Without going into the details, the total operational cost can be rewritten in terms of x_i and w_i:

(1)  \text{Average operational cost for type '$i$'} = \frac{1}{2} \lambda_i C_i \overline{w_i(w_i+T)} + \mu_i C_i \overline{x_i} \,\, \overline{w_i} + \frac{1}{2} \mu_i C_i \overline{x_i(x_i+T)}

What are we trying to solve? Again.

Now that I have described the system and defined exactly what I mean by the variables, I will refine what exactly we will be solving.

Find M_i such that this will minimise (1) under the constraint that the team has a fixed and limited capacity.

Important note

The system we are considering is not stable. Therefore we need to be careful when applying and using Little's Law. To circumvent necessary conditions for Little's Law to hold, I will consider the average total operational cost over a finite time interval. This means that we will minimise the average of the cost over the time interval from start to a certain time. As the accumulated cost increases over time the average is not the same as the cost at the end of the time interval.

Note: For our optimisation problem to make sense the system needs to be unstable. For a stable system it follows from Little's Law that the average input rate for type 'i' is equal to the average production rate for type 'i'. In that case there is no optimisation problem, since we cannot choose those to be different. The ability to choose them differently is the essence of our optimisation problem.

Little's Law

At this point Little's Law provides a few relations between the variables  M, M_i, N, w_i, x_i, \mu_i, \lambda_i . These relations we can use to find what values of M_i will minimise the average total operational cost.

As described in the previous blog Little's Law in 3D Little's Law gives relations for the system as a whole, per work item type and for each subsystem. These relations are:

 \overline{N_i} = \lambda_i \,\, \overline{s_i}

 \overline{N_i} - \overline{M_i} = \lambda_i \,\, \overline{w_i}

 \overline{M_i} = \mu_i \,\, \overline{x_i}

 M_1 + M_2 + ... = M

The latter relation is not derived from Little's Law but merely states that the total capacity of the team is fixed.

Note that Little's Law also has given us relation (1) above.

Result

Again, without going into the very interesting details of the calculation I will just state the result and show how to use it to calculate the capacities to allocate to certain work item types.

First, for each work item type determine the product of the average input rate (\lambda_i) and the average resolution time (x_i). The interpretation of this is the average number of new incidents arriving while the team works on resolving an item. Put the result in a row vector and name it 'V':

(2)  V = (\lambda_1 x_1, \lambda_2 x_2, ...)

Next, add all the components of this vector and denote the sum by ||V||.

Second, multiply the result of the previous step for each item by the quotient of the average resolution time (x_i) and the cost rate (C_i). Put the result in a row vector and name it 'W':

(3)  W = (\lambda_1 x_1 \frac{x_1}{C_1}, \lambda_2 x_2 \frac{x_2}{C_2}, ...)

Again, add all components of this row vector and call this ||W||.

Then, the capacity to allocate to item of type 'k' is proportional to:

(4)  \frac{M_k}{M} \sim W_k - \frac{1}{M} (W_k ||V|| - V_k ||W||)

Here, V_k denotes the k-th component of the row vector 'V'. So, V_1 is equal to \lambda_1 x_1. Likewise for W_k.

Finally, because these should add up to 1, each of (4) is divided by the sum of all of them.
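
As a sketch of this recipe in code (my own illustration; the function and variable names are assumptions, not part of the original article):

# Illustration of the allocation recipe above.
# lam[i] = average input rate, x[i] = average resolution time,
# c[i] = operational cost rate for work item type i; m = team capacity.
def allocate_capacity(lam, x, c, m):
    v = [l * xi for l, xi in zip(lam, x)]              # (2): V_i = lambda_i * x_i
    w = [vi * xi / ci for vi, xi, ci in zip(v, x, c)]  # (3): W_i = lambda_i * x_i * x_i / C_i
    norm_v, norm_w = sum(v), sum(w)                    # ||V|| and ||W||
    raw = [wi - (wi * norm_v - vi * norm_w) / m        # (4), before normalisation
           for vi, wi in zip(v, w)]
    total = sum(raw)                                   # the raw values sum to ||W||
    return [r / total for r in raw]                    # fractions of capacity, summing to 1

# With the example data of the next section (per-day units):
# database: lambda = 1/10, x = 10, C = 200; GUI: lambda = 6/5, x = 2, C = 10; M = 12
print(allocate_capacity([0.1, 1.2], [10, 2], [200, 10], 12))  # roughly [0.15, 0.85]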

Example

If this seems complicated, let's do a real calculation and see how the formulas of the previous section are applied.

Two types of incidents

As a first example consider a team that collects data on all incidents and types of work. The data collected over time includes the resolution time, the dates that the incidents occurred and the dates the issues were resolved. The product owner assigns a business value to each incident, which corresponds to the cost rate of the incident and in this case is measured in the number of (business) users affected. Any other means of assigning a cost rate will do also.

The team consists of 6 team members, and each member is allowed to work on a maximum of 2 incidents, so the team's capacity M is equal to 12.

From their data they discover that they have 2 main types of incidents. See the so-called Cycle Time Histogram below.

[Figure: Cycle Time Histogram]

The picture above shows two types of incidents, having typical average resolution times of around 2 days and 2 weeks. Analysis shows that these are related to the GUI and database components respectively. From their data the team determines that they have an average input rate of 6 per week and 2 per month respectively. The average cost rate for each type is 10 per day and 200 per day respectively.

That is, the database related issues have:  \lambda = 2 \text{ per month} = 2/20 = 1/10 \text{ per day} ,  C = 200 \text{ per day} , and resolution time  x = 2 \text{ weeks} = 10 \text{ days} . While the GUI related issues have:  \lambda = 6 \text{ per week} = 6/5 \text{ per day} ,  C = 10 \text{ per day} , and resolution time  x = 2 \text{ days} .

The row vector 'V' becomes (the product of \lambda and x):

 V = (1/10 * 10, 6/5 * 2) = (1, 12/5) ,   ||V|| = 1 + 12/5 = 17/5

The row vector 'W' becomes:

 W = (1/10 * 10 * 10/200, 6/5 * 2 * 2/10) = (1/20, 12/25) ,   ||W|| = 1/20 + 12/25 = 53/100

Putting this together we obtain the result that a percentage of the team's capacity should be allocated to resolve database related issues that is equal to:

 M_\text{database}/M \sim 1/20 - 1/12 * (1/20 * 17/5 - 1 * 53/100) = 1/20 + 1/12 * 36/100 = 1/20 + 3/100 = 8/100 = 40/500

and a percentage should be allocated to work on GUI related items that is

 M_\text{GUI}/M \sim 12/25 - 1/12 * (12/25 * 17/5 - 12/5 * 53/100) = 12/25 - 1/12 * 9/25 = 12/25 - 3/100 = 45/100 = 225/500

Summing these two we get 265/500, which equals ||W|| as expected. This means that we allocate 40/265 ~ 15% and 225/265 ~ 85% of the team's capacity to database and GUI work items respectively.

Kanban teams may define a class of service to each of these incident types and put a WiP limit on the database related incident lane of 2 cards and a WiP limit of 10 to the number of cards in the GUI related lane.

Scrum teams may allocate part of the team's velocity to user stories related to database and GUI related items based on the percentages calculated above.

Conclusion

Starting with the expression for the average total operational cost, I have shown that this leads to an interesting optimisation problem in which we want to determine the optimal allocation of a team's capacity to the different work item types, in such a way that it minimises, on average, the total operational cost present in the system.

The division of the team's capacity over the various work item types is determined by the work item types' average input rate, resolution time, and cost rate and is proportional to

(4)  \frac{M_k}{M} \sim W_k - \frac{1}{M} (W_k ||V|| - V_k ||W||)

The data needed to perform this calculation is easily gathered by teams. Teams may use a cycle time histogram to find appropriate work item types. See this article on control charts for more information.

 

BE Agile before you Become Agile

Xebia Blog - Wed, 08/20/2014 - 20:49

People dislike change. It disrupts our routines and we need to invest to adapt. We only go along if we understand why change is needed and how we benefit from it.
The key to intrinsic motivation is to experience the benefits of the change yourself, rather than having someone else explain it to you.

Agility is almost a synonym for change. It is critical to let people experience the benefits of Agility before asking them to buy into this new way of working. This post explains how to create a great Agile experience in a fun, simple, cost efficient and highly effective way. BEing agile, before BEcoming agile!

The concept of a “Company Innovation Day”

Have you seen this clip about Dan Pink's Drive? According to him, the key factors for more motivation and better performance are: autonomy, mastery and purpose.
If you have some scrum experience this might sound familiar, right? That is because these 3 things really tie in nicely with agile and scrum, for example:

Autonomy = being able to self-direct;
• Let the team plan their own work
• Let the team decide how to best solve problems

Mastery = learning, applying and mastering new skills and abilities, a.k.a. "get better at stuff";
• Retrospect and improve
• Learn, apply and master new skills to achieve goals as a team.

Purpose = understanding necessity and being as effective as possible;
• Write user stories that add value
• Define sprint goals that tie in to product- and business goals.

In the clip, the company "Atlassian" is mentioned. This is the company that makes "JIRA", one of the most popular Agile support tools. Atlassian tries to facilitate autonomy, mastery and purpose by organizing one day per quarter of “management free” innovation. They call it a “ship it day”.

Now this is cool! According to Dan, their people had fun (most important), fixed a whole array of bugs and delivered new product ideas as well. They have to ship all this in one day, again showing similarities with the time boxed scrum approach. When I first saw this, I realized that this kind of fast delivery of value is pretty much something you would like to achieve with Agile Scrum too! Doing Scrum right would feel like a continuous series of ship it days.

My own experience with innovation days

Recently I organized an innovation day with a client (for tips on how to organize yours, click here). We invited the whole department to volunteer. If you didn’t feel like it, you could just skip it and focus on sprint work. Next we promoted the day, and this resulted in a growing list of ideas coming in.
Except for the framing of the day, the formation of ideas and teams was totally self-organized and also result driven as we asked for the expected result. Ultimately we had 20 initiatives to be completed in one day.
On the day itself, almost everyone joined in and people worked hard to achieve results at the end of the day.
The day ended with presenting the results and having pizzas. Only a few ideas just missed the deadline; most were finished, including usable and fresh new stuff with direct business value. Looking at the photos of that day, it struck me that 9 out of 10 photos showed smiling faces. Sweet!

The first innovation day was concluded with an evaluation. In my opinion evaluation is essential, because this is the perfect moment to discuss deeper lessons and insights. Questions like “how can we create the innovation day energy levels during sprints?” and “how can we utilize self-organizing abilities more?” are invaluable, as they could lead to new insights, inspiration and experiments for day-to-day operations.

The value of an innovation day as a starting point for Agile

All in all, I think an innovation day is the perfect way to get people experiencing the power of Agile.
Doing the innovation day on “day one” offers huge benefits when added to standard stuff like training and games. This is because the context is real. You have a real goal, a real timebox and you need to self-organize to achieve the desired result.
People doing the work get to experience their potential and the power of doing stuff within a simplified context. Managers get to experience unleashing the human potential when they focus only on the context and environment for that day.
I can only imagine the amazement and renewed joy when people coming from a strict waterfall setting experience these possibilities. All that good stuff from just a one-day investment!

Conclusion

It would be great if you would start out an Agile change initiative with an innovation day. Get people enthusiastic and inspired (i.e. motivated for change) first and then tell them why it works and how we are going to apply the same principles in day-to-day operations. This will result in less friction and resistance and give people a better sense of where they are heading.

If you want to start doing innovation days, or if you want to share your experience, feel free to leave a comment below.

Part 2: The Cloud Does Equal High Performance

This is a guest post by Anshu Prateek, Tech Lead, DevOps at Aerospike and Rajkumar Iyer, Member of the Technical Staff at Aerospike.

In our first post we busted the myth that cloud != high performance and outlined the steps to 1 Million TPS (100% reads in RAM) on 1 Amazon EC2 instance for just $1.68/hr. In this post we evaluate the performance of 4 Amazon instances when running a 4 node Aerospike cluster in RAM with 5 different read/write workloads and show that the r3.2xlarge instance delivers the best price/performance.

Several reports have already documented the performance of distributed NoSQL databases on virtual and bare metal cloud infrastructures:

Categories: Architecture

Xamarin.Forms with Zumero

Eric.Weblog() - Eric Sink - Wed, 08/20/2014 - 16:00

I am a Xamarin fanboy, so my excitement about Xamarin.Forms is perhaps unsurprising. But I see an awful lot of potential for this technology. I want to show you some of the stuff we've been doing with Xamarin.Forms here at Zumero.

Andrew Jackson in Two Minutes

First I am going to race through this demo very quickly. Then I'll circle back around and explain things.

STEP ONE: Download ZAG and run it

Visit http://zumero.com/dev-center/zss/#zag and download the ZAG application. For this demo, I'm using ZAG on Mac OS X (but you could choose Windows or Linux) and I am targeting iOS (but you could choose Android or Windows Phone).

When you run the app, you should see something like this:

STEP TWO: New Database

Under the "File" menu, choose "New Database...". You should see this dialog:

Fill in the Server URL and DBFile exactly as shown in the screen shot (Server URL: "http://demo.zumero.com:8080". DBFile: "demo"). Click the OK button. You should see a dialog asking you where to save the local copy of the database:

Click OK. You should see "Syncing with the ZSS Server":

And when the sync is complete, at the bottom of the ZAG window, you should see "Sync result: 0 (success)".

STEP THREE: Generate

Under the "Generate" menu, find and choose the item labeled "Xamarin.Forms C#":

You will be asked three questions, and you should be able to just click OK on all three.

STEP FOUR: Open the sln file

You can quit the ZAG application now. It should have generated a folder somewhere like /Users/eric/Documents/demo.zssdemo/. Under that folder you should find a Xamarin solution tree that looks something like this:

Double-click the demo.sln file to open it in Xamarin Studio. You should see four C# projects: a Portable Class Library called "demo.Shared", plus one app target each for iOS, Android, and Windows Phone 8:

STEP FIVE: Build and run the app

If you build and run the demo.iOS app in the iPhone simulator, you should see something like this:

STEP SIX: Sync

Click the "Sync" button in the upper right. You should see:

The defaults should be fine. Just tap the "Sync Now" button. When the sync is completed, you should see a list of tables:

STEP SEVEN: Andrew Jackson

Tap the "presidents" table. In an iPad instance of the simulator, you would see this:

And tap the seventh item. You should see Andrew Jackson, the only U.S. president ever to kill someone in a duel:

What is ZAG?

ZAG is short for "ZSS App Generator". It's a desktop app which generates ready-to-build source code and build scripts for mobile apps that sync using ZSS.

We think of ZAG as a way to get people started faster. Many people come to our product without much experience in mobile app development. ZAG can be used to give them a starting point, sort of like sample code that is customized for their data.

What is Zumero for SQL Server?

Zumero for SQL Server (ZSS) is a solution for data sync between SQL Server and mobile devices.

More info from Dan Bricklin about offline in mobile apps: http://bricklin.com/offline.htm

More info about Zumero on our website: zumero.com

More info about Zumero from my previous blog entry: here

What is demo.zumero.com:8080?

This is a publicly accessible ZSS server provided so that folks can play with Zumero clients more easily. It contains some basic sample data such as U.S. presidents and the periodic table of the elements.

In real-world scenarios, a customer would use ZAG to generate their starter app after they have completed setup of their ZSS server.

What is Xamarin?

Xamarin is (IMHO) a great solution for building mobile apps.

One of the main benefits of the Xamarin platform is the ability to write the non-UI parts of your iOS/Android/WP8 apps in cross-platform code while implementing a native user interface for each mobile environment.

But I would use Xamarin even for a single-platform app, simply to get the benefits of working in .NET/C#.

More info on the Xamarin website: http://xamarin.com/

What is Xamarin.Forms?

Xamarin.Forms is Xamarin's solution for making [most of] your UI code cross-platform as well, while retaining fully native performance and feel.

In a nutshell, the coolness of Xamarin.Forms lies in the fact that in the solution generated by ZAG above, the demo.Shared project is a Portable Class Library even though it contains the entire user interface for the app.

More info on the Xamarin website: http://xamarin.com/forms

What is a Portable Class Library (PCL)?

A PCL is a .NET class library that is annotated with information about which platforms it should support. This metadata allows the tooling to enforce portability rules in both the development and the consumption of the library.

More info on Scott Hanselman's blog: http://www.hanselman.com/blog/CrossPlatformPortableClassLibrariesWithNETAreHappening.aspx

More info on the Xamarin website: http://developer.xamarin.com/guides/cross-platform/application_fundamentals/pcl/introduction_to_portable_class_libraries/

Why does ZAG generate separate projects for iOS, Android, and WinPhone?

Xamarin.Forms can make most of your UI code portable, but not all of it. The actual building of the mobile app is specific to each platform. But if you look in the code for each of those platform-specific projects, you'll see that there isn't much there.

What dependencies does the ZAG-generated app have?

The following NuGet packages will need to be retrieved:

  • Xamarin.Forms

  • SQLite-net PCL

  • SQLitePCL.raw_basic

  • Zumero

What is the "SQLite-net PCL" NuGet package?

This is the Portable Class Library (PCL) version of SQLite-net, the popular lightweight SQLite ORM by Frank Krueger (@praeclarum).

More info on GitHub: https://github.com/praeclarum/sqlite-net

More info on the NuGet website: https://www.nuget.org/packages/sqlite-net-pcl/

What is the "SQLitePCL.raw_basic" NuGet package?

SQLitePCL.raw is my Portable Class Library for accessing SQLite from C#.

More info on Github: https://github.com/ericsink/SQLitePCL.raw

More info on the NuGet website: https://www.nuget.org/packages/SQLitePCL.raw_basic/

What is SQLite?

SQLite is the most popular database for mobile devices.

More info on the SQLite website: sqlite.org

What is the "Zumero" NuGet package?

This is the Zumero Client SDK in the form of a Portable Class Library in a NuGet package.

More info on the Zumero website: http://zumero.com/dev-center/zss/

More info on the NuGet website: https://www.nuget.org/packages/Zumero/

What do Zumero's client-side SQLite files look like?

As much as possible, they look exactly like they looked in SQL Server.

  • Table and column names are the same.

  • All data values are the same (whenever possible).

  • Foreign keys in SQL Server are reconstructed as foreign keys in SQLite.

  • Since SQLite does not perform type checking, Zumero adds constraints to do so.

And so on...

What's happening in step two?

ZAG is acting as a Zumero client and synchronizing the data on the server into a local SQLite file. This file is used to obtain information about the tables and columns necessary to generate the mobile app.

The same sync is happening in step six, except then it is the mobile app performing the sync instead of ZAG.

What were those three questions in step three?

The first one is the project name:

Then ZAG wants to know the settings for your sync server. These should already be filled in with the ones you entered earlier:

Finally, ZAG is asking you where to save the source code and project files for the app to be generated:

Does a ZAG-generated app allow modifications to the data?

For the Xamarin.Forms C# target, yes. On the item detail page, you should be able to enter new values in text fields and 'Save' the changes.

But you should get a permission denied error if you try to sync those changes to our public demo server. :-)

Is ZAG generating UI stuff as XAML or as C# code?

Currently, it's XAML. You'll find the files in the 'xaml' folder in the demo.Shared project.

Does ZAG generate polished ready-to-use apps?

Oh definitely not. :-)

The output of ZAG should build and run with no errors (if it doesn't, it's a bug, and please let us know), but it's just a starting point for further development.

 

More Than 800 Videos on TVAgile.com

From the Editor of Methods & Tools - Wed, 08/20/2014 - 09:33
TVAgile.com has just passed the mark of 800 available resources with an Oredev conference presentation that discusses how to foment creative collaboration based on the tenets of improv and open spaces. TVAgile.com is a directory of videos, interviews and tutorials focused on agile software development approaches and practices: Scrum, Extreme Programming (XP), Test Driven Development (TDD), Lean Software Development, Kanban, Behavior Driven Development (BDD), Agile Requirements, Continuous Integration, Pair Programming, Refactoring, … Explore all these resources on http://www.tvagile.com/