
Software Development Blogs: Programming, Software Testing, Agile Project Management


Feed aggregator

Better search on developers.google.com

Google Code Blog - Fri, 10/31/2014 - 18:56
Posted by Aaron Karp, Product Manager, developers.google.com

We recently launched a major upgrade for the search box on developers.google.com: it now provides clickable suggestions as you type.
We hope this becomes your new favorite way to navigate developers.google.com, and to make things even easier we enabled “/” as a shortcut key for accessing the search field on any page.
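
For sites that want to offer the same affordance, a “/” shortcut takes only a few lines of code. Here is a minimal TypeScript sketch, assuming a search field selected by the placeholder #search-box (not the actual markup on developers.google.com):

// Focus the search field when "/" is pressed, unless the user is already typing.
document.addEventListener('keydown', (event: KeyboardEvent) => {
  const target = event.target as HTMLElement;
  const typing =
    target.tagName === 'INPUT' ||
    target.tagName === 'TEXTAREA' ||
    target.isContentEditable;
  if (event.key === '/' && !typing) {
    event.preventDefault(); // keep the "/" keystroke out of the newly focused field
    document.querySelector<HTMLInputElement>('#search-box')?.focus();
  }
});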

If you have any feedback on this new feature, please let us know by leaving a comment below. And we have more exciting upgrades coming soon, so stay tuned!

Categories: Programming

Stuff The Internet Says On Scalability For October 31st, 2014

Hey, it's HighScalability time:


A CT scanner without its clothes on. Sexy.
  • 255Tbps: all of the internet’s traffic on a single fiber; 864 million: daily Facebook users
  • Quotable Quotes:
    • @chr1sa: "No dominant platform-level software has emerged in the last 10 years in closed-source, proprietary form”
    • @joegravett: Homophobes boycotting Apple because of Tim Cook's brave announcement are going to lose it when they hear about Turing.
    • @CloudOfCaroline: #RICON MySQL revolutionized Scale-out. Why? Because it couldn't Scale-up. Turned a flaw into a benefit - @martenmickos
    • chris dixon: We asked for flying cars and all we got was the entire planet communicating instantly via pocket supercomputers
    • @nitsanw: "In the majority of cases, performance will be programmer bound" - Barker's Law
    • @postwait: @coda @antirez the only thing here worth repeating: we should instead be working with entire distributions (instead of mean or q(0.99))
    • Steve Johnson: inventions didn't come about in a flash of light — the veritable Eureka! moment — but were rather the result of years' worth of innovations happening across vast networks of creative minds.

  • On how Google is its own VC. cromwellian: The ads division is mostly firewalled off from the daily concerns of people developing products at Google. They supply cash to the treasury; people think up cool ideas and try to implement them. It works just like startups, where you don't always know what your business model is going to be. Gmail started as a 20% project, not as a grand plan to create an ad channel. Lots of projects and products at Google have no business model, no revenue model; the company throws money at projects and figures out later how they'll make money. People like their apps more than the web. Mobile ads are making a lot of money.

  • Hey mobile, what's for dinner? "The world," says chef Benedict Evans, who has prepared for your pleasure a fine gourmet tasting menu: Presentation: mobile is eating the world. Smart phones are now as powerful as Thor and Hercules combined. Soon everyone will have a smart phone. And when tech is fully adopted, it disappears. 

  • How much bigger is Amazon’s cloud vs. Microsoft and Google?: One estimate puts Amazon’s cloud revenue at more than $4.7 billion this year, while TBR pegs Microsoft’s public cloud IaaS revenue at $156 million and Google’s at $66 million. If those estimates are correct, then Amazon’s cloud revenue is roughly 30 times Microsoft’s and roughly 70 times Google’s.

  • Great discussion on the Accidental Tech Podcast (at about 25 minutes in) on how the time of open APIs has ended. People who made Twitter clients weren't competing with Twitter; they were helping Twitter become what it is today. For Apple, developers add value to their hardware, and since Apple makes money off the hardware this is good for Apple, because without apps Apple hardware is far less valuable. Even with its new developer focus, Twitter's interests and developers' interests are still not aligned, as Twitter is still worried about clients competing with it. Twitter doesn't want to become an infrastructure company because there's no money in it. In the not so distant past services were expected to have an open API; in essence services were acting as free infrastructure, hoping they would become popular enough that those dependent on the service could be monetized. New services these days generally don't have fully open APIs because it's hard to justify as a business case.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

hdiutil: could not access / create failed – Operation canceled

Mark Needham - Fri, 10/31/2014 - 10:45

Earlier in the year I wrote a blog post showing how to build a Mac OS X DMG file for a Java application. I recently revisited the script to package a new version of the application and ran into a frustrating error message.

I tried to run the following command to create a new DMG file from a source folder…

$ hdiutil create -volname "DemoBench" -size 100m -srcfolder dmg/ -ov -format UDZO pack.temp.dmg

…but was met with the following error message:

...could not access /Volumes/DemoBench/DemoBench.app/Contents/Resources/app/database-agent.jar - Operation canceled
 
hdiutil: create failed - Operation canceled

I was initially a bit stumped and thought maybe the flags to hdiutil had changed, but a quick look at the man page suggested that wasn’t the issue.

I decided to go back to my pre-command-line approach for creating a DMG – Disk Utility – and see if I could create it that way. This revealed the actual problem: the 100 MB volume wasn’t big enough to hold the contents of the source folder.

I increased the volume size to 150 MB (running du -sh dmg/ first would have shown how much space was actually needed)…

$ hdiutil create -volname "DemoBench" -size 150m -srcfolder dmg/ -ov -format UDZO pack.temp.dmg

and all was well:

....................................................................................................
..........................................................................
created: /Users/markneedham/projects/neo-technology/quality-tasks/park-bench/database-agent-desktop/target/pack.temp.dmg

And this post will serve as documentation to stop it catching me out next time!

Categories: Programming

Azure: Announcing New Real-time Data Streaming and Data Factory Services

ScottGu's Blog - Scott Guthrie - Fri, 10/31/2014 - 07:39

The last three weeks have been busy ones for Azure.  Two weeks ago we announced a partnership with Docker to enable great container-based development experiences on Linux, Windows Server and Microsoft Azure.

Last week we held our Cloud Day event and announced our new G-Series of Virtual Machines as well as Premium Storage offering.  The G-Series VMs provide the largest VM sizes available in the public cloud today (nearly 2x more memory than the largest AWS offering, and 4x more memory than the largest Google offering).  The new Premium Storage offering (which will work with both our D-series and G-series of VMs) will support up to 32TB of storage per VM, >50,000 IOPS of disk IO per VM, and enable sub-1ms read latency.  Combined they provide an enormous amount of power that enables you to run even bigger and better solutions in the cloud.

Earlier this week, we officially opened our new Azure Australia regions – which are our 18th and 19th Azure regions open for business around the world.  Then at TechEd Europe we announced another round of new features – including the launch of the new Azure MarketPlace, a bunch of great network improvements, our new Batch computing service, general availability of our Azure Automation service and more.

Today, I’m excited to blog about even more new services we have released this week in the Azure Data space.  These include:

  • Event Hubs: a scalable service for ingesting and storing data from websites, client apps, and IoT sensors.
  • Stream Analytics: a cost-effective event processing engine that helps uncover real-time insights from event streams.
  • Data Factory: enables better information production by orchestrating and managing diverse data and data movement.

Azure Event Hubs is now generally available, and the new Azure Stream Analytics and Data Factory services are now in public preview.

Event Hubs: Log Millions of Events per Second in Near Real Time

The Azure Event Hub service is a highly scalable telemetry ingestion service that can log millions of events per second in near real time.  You can use the Event Hub service to collect data/events from any IoT device, from any app (web, mobile, or a backend service), or via feeds like social networks.  We are using it internally within Microsoft to monitor some of our largest online systems.

Once you collect events with Event Hub you can then analyze the data using any real-time analytics system (like Apache Storm or our new Azure Stream Analytics service) and store/transform it into any data storage system (including HDInsight and Hadoop based solutions).

Event Hub is delivered as a managed service on Azure (meaning we run, scale and patch it for you and provide an enterprise SLA).  It delivers:

  • Ability to log millions of events per second in near real time
  • Elastic scaling support with the ability to scale-up/down with no interruption
  • Support for multiple protocols including support for HTTP and AMQP based events
  • Flexible authorization and throttling device policies
  • Time-based event buffering with event order preservation

The pricing model for Event Hubs is very flexible – for just $11/month you can provision a basic Event Hub with guaranteed capacity to capture 1 MB/sec of events. You can then provision as many additional capacity units as you need as your event traffic grows.

Getting Started with Capturing Events

You can create a new Event Hub using the Azure Portal or via the command-line. In the portal, choose New->App Service->Service Bus->Event Hub.

Once created, events can be sent to an Event Hub with either a strongly-typed API (e.g. .NET or Java client library) or by just sending a raw HTTP or AMQP message to the service.  Below is a simple example of how easy it is to log an IoT event to an Event Hub using just a standard HTTP post request.  Notice the Authorization header in the HTTP post – you can use this to optionally enable flexible authentication/authorization for your devices:

POST https://your-namespace.servicebus.windows.net/your-event-hub/messages?timeout=60&api-version=2014-01 HTTP/1.1
Authorization: SharedAccessSignature sr=your-namespace.servicebus.windows.net&sig=tYu8qdH563Pc96Lky0SFs5PhbGnljF7mLYQwCZmk9M0%3d&se=1403736877&skn=RootManageSharedAccessKey
Content-Type: application/atom+xml;type=entry;charset=utf-8
Host: your-namespace.servicebus.windows.net
Content-Length: 42
Expect: 100-continue

{ "DeviceId":"dev-01", "Temperature":"37.0" }

Your Event Hub can collect up to millions of messages per second like this, each storing whatever data schema you want within them, and the Event Hubs service will store them in-order for you to later read/consume.
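
If you would rather send events from code than hand-assemble the request above, the same POST can be made with a few lines of TypeScript using the standard Fetch API. This is a minimal sketch, not an official client library; the namespace, hub name and pre-generated SAS token are placeholders you would substitute with your own values:

// Post a single JSON event to an Event Hub over HTTPS (Node 18+ or a browser).
const namespace = 'your-namespace';              // placeholder
const eventHub = 'your-event-hub';               // placeholder
const sasToken = 'SharedAccessSignature sr=...'; // placeholder, pre-generated

async function sendEvent(event: object): Promise<void> {
  const url = `https://${namespace}.servicebus.windows.net/${eventHub}/messages?timeout=60&api-version=2014-01`;
  const response = await fetch(url, {
    method: 'POST',
    headers: {
      'Authorization': sasToken,
      'Content-Type': 'application/atom+xml;type=entry;charset=utf-8',
    },
    body: JSON.stringify(event),
  });
  if (!response.ok) {
    throw new Error(`Event Hub rejected the event: HTTP ${response.status}`);
  }
}

sendEvent({ DeviceId: 'dev-01', Temperature: '37.0' }).catch(console.error);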

Downstream Event Processing

Once you collect events, you no doubt want to do something with them.  Event Hubs includes an intelligent processing agent that allows for automatic partition management and load distribution across readers.  You can implement any logic you want within readers, and the data sent to the readers is delivered in the order it was sent to the Event Hub.

In addition to supporting the ability for you to write custom Event Readers, we also have two easy ways to work with pre-built stream processing systems: our new Azure Stream Analytics service and Apache Storm. Azure Stream Analytics supports stream processing directly from Event Hubs, and Microsoft has created an Event Hubs Storm Spout for use with Apache Storm clusters.

Event Hubs can collect and then hand off events/data for processing in many rich ways.

Event Hubs provides a flexible, cost-effective building block that you can use to collect and process any events or data you can stream to the cloud, with the scalability to meet whatever needs you have.

Learning More about Event Hubs

For more information about Azure Event Hubs, please review the following resources:

Stream Analytics: Distributed Stream Processing Service for Azure

I’m excited to announce the preview of our new Azure Stream Analytics service – a fully managed real-time distributed stream computation service that provides low-latency, scalable processing of streaming data in the cloud with an enterprise-grade SLA. The new Azure Stream Analytics service easily scales from small projects with just a few KB/sec of throughput to a gigabyte/sec or more of streamed data messages/events.

Our Stream Analytics pricing model enables you to run low-throughput streaming workloads continuously at low cost, and to scale up only as your business needs increase. We do this while maintaining built-in guarantees of event delivery and state management for fast recovery, which enables mission-critical business continuity.

Dramatically Simpler Developer Experience for Stream Processing Data

Stream Analytics supports a SQL-like language that dramatically lowers the level of developer expertise required to create a scalable stream processing solution. A developer can simply write a few lines of SQL to do common operations including basic filtering, temporal analysis operations, joining multiple live streams of data with other static data sources, and detecting stream patterns (or lack thereof).

This dramatically reduces the complexity and time it takes to develop, maintain and apply time-sensitive computations on real-time streams of data. Most other streaming solutions available today require you to write complex custom code, but with Azure Stream Analytics you can write simple, declarative and familiar SQL.

Fully Managed Service that is Easy to Setup

With Stream Analytics you can dramatically accelerate how quickly you can derive valuable real time insights and analytics on data from devices, sensors, infrastructure, or applications. With a few clicks in the Azure Portal, you can create a streaming pipeline, configure its inputs and outputs, and provide SQL-like queries to describe the desired stream transformations/analysis you wish to do on the data. Once running, you are able to monitor the scale/speed of your overall streaming pipeline and make adjustments to achieve the desired throughput and latency.

You can create a new Stream Analytics Job in the Azure Portal by choosing New->Data Services->Stream Analytics.

Setup Streaming Data Input

Once created, your first step will be to add a Streaming Data Input. This indicates where the data you want to perform stream processing on is coming from. From within the portal you can choose Inputs->Add An Input to launch a wizard that lets you specify it.

We can use the Azure Event Hub service to deliver a stream of data to perform processing on. If you already have an Event Hub created, you can choose it from a list populated in the wizard. You will also be asked to specify the format used to serialize incoming events in the Event Hub (e.g. JSON, CSV or Avro).

Setup Output Location

The next step in developing our Stream Analytics job is to add a Streaming Output Location. This configures where we want the output results of our stream processing pipeline to go. We can choose to output the results to Blob Storage, another Event Hub, or a SQL Database.

Note that being able to use another Event Hub as a target provides a powerful way to connect multiple streams into an overall pipeline with multiple steps.

Write Streaming Queries

Now that we have our input and output sources configured, we can write SQL queries to transform, aggregate and/or correlate the incoming input (or set of inputs when there are multiple sources) and send the results to our output target. We can do this within the portal by selecting the QUERY tab at the top.

There are a number of interesting queries you can write to process the incoming stream of data. For example, in the Event Hub section of this post I showed how you can use an HTTP POST command to submit JSON-based temperature data from an IoT device to an Event Hub, like so:

{ "DeviceId":"dev-01", "Temperature":"37.0" }

When multiple devices are streaming events simultaneously into our Event Hub like this, it would feed into our Stream Analytics job as a stream of continuous data events that look like the sequence below:
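
A hypothetical sample of that sequence (the device IDs and readings here are invented for illustration):

{ "DeviceId":"dev-01", "Temperature":"37.0" }
{ "DeviceId":"dev-02", "Temperature":"33.5" }
{ "DeviceId":"dev-01", "Temperature":"37.2" }
{ "DeviceId":"dev-03", "Temperature":"31.9" }
{ "DeviceId":"dev-02", "Temperature":"33.6" }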

Wouldn’t it be interesting to be able to analyze this data using a time-window perspective instead?  For example, it would be useful to calculate in real-time what the average temperature of each device was in the last 5 seconds of multiple readings.

With the Stream Analytics Service we can now dynamically calculate this over our incoming live stream of data just by writing a SQL query like so:

SELECT DateAdd(second,-5,System.TimeStamp) as WinStartTime, System.TimeStamp as WinEndTime, DeviceId, Avg(Temperature) as AvgTemperature, Count(*) as EventCount
    FROM input
    GROUP BY TumblingWindow(second, 5), DeviceId

Running this query in our Stream Analytics job will aggregate/transform our incoming stream of data events and write output like the sample below into the output we configured for our job (e.g. a blob storage file or a SQL Database):
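
Over the hypothetical sample events above, a single 5-second tumbling window would produce rows shaped like this (values computed from those sample events):

WinStartTime          WinEndTime            DeviceId  AvgTemperature  EventCount
2014-10-30T10:00:00Z  2014-10-30T10:00:05Z  dev-01    37.1            2
2014-10-30T10:00:00Z  2014-10-30T10:00:05Z  dev-02    33.55           2
2014-10-30T10:00:00Z  2014-10-30T10:00:05Z  dev-03    31.9            1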

The great thing about this approach is that the data is being aggregated/transformed in real time as events are streamed to us, and it scales to handle gigabytes of event data streamed per second.

Scaling your Stream Analytics Job

Once defined, you can easily monitor the activity of your Stream Analytics Jobs in the Azure Portal.

You can use the SCALE tab to dynamically increase or decrease scale capacity for your stream processing – allowing you to pay only for the compute capacity you need, and enabling you to handle jobs with gigabytes/sec of streamed data. 

Learning More about Stream Analytics Service

For more information about Stream Analytics, please review the following resources:

Data Factory: Fully managed service to build and manage information production pipelines

Organizations are increasingly looking to fully leverage all of the data available to their business.  As they do so, the data processing landscape is becoming more diverse than ever before – data is being processed across geographic locations, on-premises and cloud, across a wide variety of data types and sources (SQL, NoSQL, Hadoop, etc), and the volume of data needing to be processed is increasing exponentially. Developers today are often left writing large amounts of custom logic to deliver an information production system that can manage and co-ordinate all of this data and processing work.

To help make this process simpler, I’m excited to announce the preview of our new Azure Data Factory service – a fully managed service that makes it easy to compose data storage, processing, and data movement services into streamlined, scalable & reliable data production pipelines. Once a pipeline is deployed, Data Factory enables easy monitoring and management of it, greatly reducing operational costs. 

Easy to Get Started

The Azure Data Factory is a fully managed service. Getting started with Data Factory is simple. With a few clicks in the Azure preview portal, or via our command line operations, a developer can create a new data factory and link it to data and processing resources. From the new Azure Marketplace in the Azure Preview Portal, choose Data + Analytics –> Data Factory to create a new instance in Azure.

Orchestrating Information Production Pipelines across multiple data sources

Data Factory makes it easy to coordinate and manage data sources from a variety of locations – including ones both in the cloud and on-premises.  Support for working with data on-premises inside SQL Server, as well as Azure Blob, Tables, HDInsight Hadoop systems and SQL Databases is included in this week’s preview release. 

Access to on-premises data is supported through a data management gateway that allows for easy configuration and management of secure connections to your on-premises SQL Servers.  Data Factory balances the scale & agility provided by the cloud, Hadoop and non-relational platforms, with the management & monitoring that enterprise systems require to enable information production in a hybrid environment.

Custom Data Processing Activities using Hive, Pig and C#

This week’s preview enables data processing using Hive, Pig and custom C# code activities.  Data Factory activities can be used to clean data, anonymize/mask critical data fields, and transform the data in a wide variety of complex ways.

The Hive and Pig activities can be run on an HDInsight cluster you create, or alternatively you can allow Data Factory to fully manage the Hadoop cluster lifecycle on your behalf.  Simply author your activities, combine them into a pipeline, set an execution schedule and you’re done – no manual Hadoop cluster setup or management required. 

Built-in Information Production Monitoring and Dashboarding

Data Factory also offers an up-to-the-moment monitoring dashboard: you can deploy your data pipelines and immediately begin viewing them as part of it. Once you have created and deployed pipelines to your Data Factory you can quickly assess end-to-end data pipeline health, pinpoint issues, and take corrective action as needed.

Within the Azure Preview Portal, you get a visual layout of all of your pipelines and data inputs and outputs. You can see all the relationships and dependencies of your data pipelines across all of your sources, so you always know where data is coming from and where it is going at a glance. We also provide a historical accounting of job execution, data production status, and system health in a single monitoring dashboard.

Learning More about Data Factory

For more information about Data Factory, please review the following resources:

Other Great Data Improvements

Today’s releases make it even easier for customers to stream, process and manage the movement of data in the cloud. Over the last few months we’ve released a bunch of other great data updates that make Azure a great platform for any data need. Since August:

We released a major update of our SQL Database service, which is a relational database-as-a-service offering. The new SQL DB editions (Basic/Standard/Premium) support a 99.99% SLA, larger database sizes, dedicated performance guarantees, point-in-time recovery, new auditing features, and the ability to easily set up active geo-DR support.

We released a preview of our new DocumentDB service, which is a fully-managed, highly-scalable NoSQL document database service that supports saving and querying JSON-based data. It enables you to scale your document store linearly to any application size. Microsoft’s MSN portal was recently rewritten to use it – and stores more than 20TB of data within it.

We released our new Redis Cache service, which is a secure, dedicated Redis cache offering, managed as a service by Microsoft. Redis is a popular open-source solution that enables high-performance data types, and our Redis Cache service enables you to stand up an in-memory cache that can make the performance of any application much faster.

We released major updates to our HDInsight Hadoop service, which is a 100% Apache Hadoop-based service in the cloud. We have also added built-in support for using two popular frameworks in the Hadoop ecosystem: Apache HBase and Apache Storm.

We released a preview of our new Search-As-A-Service offering, which provides a managed search offering based on ElasticSearch that you can easily integrate into any Web or Mobile Application.  It enables you to build search experiences over any data your application uses (including data in SQLDB, DocDB, Hadoop and more).

And we have released a preview of our Machine Learning service, which provides a powerful cloud-based predictive analytics service.  It is designed for both new and experienced data scientists, includes 100s of algorithms from both the open source world and Microsoft Research, and supports writing ML solutions using the popular R open-source language.

You’ll continue to see major data improvements in the months ahead – we have an exciting roadmap of improvements ahead.

Summary

Today’s Microsoft Azure release enables some great new data scenarios, and makes building applications that work with data in the cloud even easier.

If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

Words Have Power

Remember that the race does not always go to the just; however not running will ensure that you can’t win. Don’t let your words keep you from running the race.

Words have power, when used correctly — the power to sell ideas, to rally the troops and to provide motivation. Or words can be a tacit signal of resistance to change and an abrogation of responsibility. In the latter set of scenarios, perfectly good words go bad. I want to highlight three words which, when used to declare that action won’t be taken or as a tool to deny responsibility for taking action, might as well be swear words. The words, in my opinion, that are the worst offenders are ‘but’, ‘can’t’ and ‘however’. The use of any of these three words should send up a red flag that a course is being charted to a special process improvement hell for an organization.

‘But’

The ugliest word in the process improvement world, at least in the English language, is ‘but’ (not ‘butt’ with two t’s). ‘But’ is a fairly innocuous word, so why do I relegate it to such an august spot on my bad words list? Because the term is usually used to explain why the speaker (even though they know better) can’t or won’t fight for what is right. As an example, I recently participated in a discussion about involving the business as an equal partner in a project (a fairly typical discussion for an organization that is transitioning to Agile). Everyone involved thought that the concept was important, made sense and would help the IT department deliver more value to the organization, ‘but’ in their opinion, the business would not be interested in participating. Not that anyone would actually discuss having the business be involved with them or invite them to the project party. A quick probe exposed excuses like “but they do not have time, so we won’t ask” and the infamous, “but that isn’t how we do it here.” All of the reasons why they would not participate were rationalizations, intellectual smoke screens, for not taking the more difficult step of asking the business to participate in the process of delivering projects. It was too frightening to ask and risk rejection, or worse yet acceptance, and then have to cede informational power through knowledge sharing. ‘But’ is used to negate anything out of the ordinary, which gives the speaker permission not to get involved in rectifying the problem. By not working to fix the problem, the consequences belong to someone else.

‘Can’t’

A related negation word is ‘can’t’. ‘Can’t’ is generally a more personal negation word than ‘but.’ Examples of usage include ‘I can’t’ or ‘we can’t’. Generally this bad word is used to explain why someone or some group lacks the power to take action. Again like ‘but’, ‘can’t’ is used to negate what the person using the word admits is a good idea. The use of the term reflects an abrogation of responsibility and shifts the responsibility elsewhere. For example, I was discussing daily stand-ups with a colleague recently. He told me a story about a team that had stopped doing daily stand-up meetings because the product owner deemed them overhead. He quoted the Scrum Master as saying, “It is not my fault that we can’t do stand-ups because our product owner doesn’t think meetings are valuable.” In short, he is saying, “It isn’t my fault that the team is not in control of how the work is being done.” The abrogation of responsibility for the consequences of the team’s actions is what makes ‘can’t’ a bad word in this example. ‘Can’t’ reinforces the head-trash that steals power from the practitioner and makes it easy to walk away from the struggle to change rather than looking for a way to embrace change. When you empower someone else to manage your behavior, you are reinforcing your lack of power and reducing your motivation and the motivation of those around you.

‘However’

The third of this unholy trinity of negation words is ‘however’. The struggle I have with this word is that it can be used insidiously, with a false veneer of logic, to cut off debate. A number of years ago, while reviewing an organization that had decided to use Scrum and two-week iterations for projects, I was told, “we started involving the team in planning what was going to be done during the iterations, however they were not getting the work done fast enough, so we decided to tell them what they needed to do each iteration.” The use of ‘however’ suggests a cause-and-effect relationship that may or may not be true and tends to deflect discussion from the root cause of the problem. The conversation went on for some time, during which we came to the conclusion that telling the team what to do had actually made the project fare even worse. What had occurred was that responsibility for poor portfolio planning had been shifted onto the team’s shoulders.

In past essays I have discussed that our choices sometimes rob us of positional power.  The rationalization of those individual choices acts as an intellectual smokescreen to make us feel better about our lack of power. Rationalization provides a platform to keep a clinical distance from the problem. Rationalization can be a tool to avoid the passion and energy needed to generate change.

All of these unholy words can be used for good, so it might be useful to have a field guide for recognizing when they are being used in a bad way, and to avoid mistaken recognition. One easy mechanism for recognizing a poor use of ‘but’, ‘can’t’ and ‘however’ is to break the sentence or statement into three parts: everything before the unholy word, the unholy word, and everything after it. The phrase that follows the unholy word exposes all. If that phrase rejects the original and perfectly reasonable premise, or explains why it is bat poop crazy, then you have a problem. I decided to spend some of my ample time in airports collecting observations of the negation phrases people use. Some of the shareable examples I heard included:

  1. I told you so.
  2. It is not my fault.
  3. Just forget it.
  4. We tried that before.
  5. That will take too long.
  6. It doesn’t matter (passive aggressive).
  7. We don’t do it that way.
  8. My manager won’t go for it.

There were others that I heard that can’t be shared, and I am sure there are many other phrases that can be used to lull the listener into thinking that the speaker agrees before pulling the rug out from under them.

The use of negation words can be a sign that you are trying to absolve yourself from the risk of action. I would like to suggest we ban the use of these three process improvement swear words and substitute enabling phrases such as “and while it might be difficult, here is what I am going to do about it.” Our goal should be to act on blockers and issues rather than ignore them or, by saying grace over them, establish them as reality. In my opinion, acting and failing is a far better course of action than doing nothing at all and putting your head in the sand. The responsibility to act does not go away, but rather affixes more firmly to those who do nothing than to those who are trying to change the world! When you pretend not to have power you become a victim. Victims continually cede their personal and positional power to those around them. Remember that the race does not always go to the just; however not running will ensure that you can’t win. Don’t let your words keep you from running the race.


Categories: Process Management

Quote of the Day

Herding Cats - Glen Alleman - Thu, 10/30/2014 - 22:50

The most unsuccessful three years in the education of cost estimators appears to be fifth-grade arithmetic - Norman R. Augustine

That is the opening line of the Welcome section of Software Estimation: Demystifying the Black Art, by Steve McConnell.

Augustine is the former Chairman and CEO of Martin Marietta. His seminal book Augustine's Laws describes the complexities and conundrums of today's business management and offers solutions. Anyone interested in learning how complex, technology-based firms are successfully managed should read that book. As well, read McConnell's book and see if you can find the claimed proof that estimates don't work.

Because I sure can't find that proof or any mention that estimates don't work, other than for those who failed to pay attention in the 5th grade arithmetic class.

Categories: Project Management

Make your emails stand out in Inbox

Google Code Blog - Thu, 10/30/2014 - 19:30
Originally posted on the Google Apps Developers Blog

As we announced last week, Inbox is a whole new take on, well, the inbox. It’s built by the Gmail team, but it’s not Gmail—it’s a new product designed to help users succeed in today’s world of email overload and multiple devices. At the same time, Inbox can also help you as a sender by offering new tools to make your emails more interactive!

Specifically, you can now take advantage of a new feature called Highlights.

Exactly like it sounds, Highlights “highlight” or surface key information and actions from an email and display them as easy-to-see chips in the inbox. For example, if you’re an airline that sends flight confirmation emails, Highlights can surface the “Check-in for your flight” action and display live flight status information for recipients right in the user’s main list. The same can apply if you send customers hotel reservations, event details, event invitations, restaurant reservations, purchases, or other tickets. Highlights help ensure that your recipients see your messages and the important details at a glance.

To take advantage of Highlights, you can mark up your email messages to specify which details you want surfaced for your customers. This will make it possible for not only Inbox, but also Gmail, Google Now, Google Search, and Maps to interact more easily with your messages and give your recipients the best possible experience across Android, iOS and the web.

As an example, the following JSON-LD markup can be used by restaurants to send reservation confirmations to their users/customers:
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "FoodEstablishmentReservation",
  "reservationNumber": "WTA1EK",
  "reservationStatus": "http://schema.org/Confirmed",
  . . . information about dining customer . . .
  "reservationFor": {
    "@type": "FoodEstablishment",
    "name": "Charlie’s Cafe",
    "address": {
      "@type": "PostalAddress",
      "streetAddress": "1600 Amphitheatre Parkway",
      "addressLocality": "Mountain View",
      "addressRegion": "CA",
      "postalCode": "94043",
      "addressCountry": "United States"
    },
    "telephone": "+1 650 253 0000"
  },
  "startTime": "2015-01-01T19:30:00-07:00",
  "partySize": "2"
}
</script>

When your confirmation is received, users will see a convenient Highlight with the pertinent details at the top of their Inbox, and can open the message to obtain the full details of their reservation.

Getting started is simple: read about email markup, check out more markup examples, then register at developers.google.com/gmail/markup and follow the instructions from there!


by Shalini Agarwal, Product Management, Inbox by Gmail
Categories: Programming

Start Your Digital Vision by Reenvisioning Your Customer Experience

You probably hear a lot about the Mega-Trends (Cloud, Mobile, Social, and Big Data), or the Nexus of Forces (the convergence of social, mobility, cloud and data insights patterns that drive new business scenarios), or the Mega-Trend of Mega-Trends (Internet of Things).

And you are probably hearing a lot about digital transformation and maybe even about the rise of the CDO (Chief Digital Officer.)

All of this digital transformation is about creating business change, driving business outcomes, and driving better business results.

But how do you create your digital vision and strategy?   And, where do you start?

In the book, Leading Digital: Turning Technology into Business Transformation, George Westerman, Didier Bonnet, and Andrew McAfee, share some of their lessons learned from companies that are digital masters that created their digital visions and are driving business change.

3 Perspectives of Digital Vision

When it comes to creating your digital vision, you can focus on reenvisioning the customer experience, the operational processes, or your business model.

Via Leading Digital:

“Where should you focus your digital vision? Digital visions usually take one of three perspectives: reenvisioning the customer experience, reenvisioning operational processes, or combining the previous two approaches to reenvision business models.  The approach you take should reflect your organization’s capabilities, your customer’s needs, and the nature of competition in your industry.”

Start with Your Customer Experience

One of the best places to start is with your customer experience.  After all, a business exists to create a customer.  And the success of the business is how well it creates value and serves the needs of the customer.

Via Leading Digital:

“Many organizations start by reenvisioning the way they interact with customers.  They want to make themselves easier to work with, and they want to be smarter in how they sell to (and serve) customers.  Companies start from different places when reenvisioning the customer experience.”

Transform the Relationship

You can use the waves of technologies (Cloud, Mobile, Social, Data Insights, and Internet of Things), to transform how you interact with your customers and how they experience your people, your products, and your services.

Via Leading Digital:

“Some companies aim to transform their relationships with their customers.  Adam Brotman, chief digital officer of Starbucks, shared this vision: 'Digital has to help more partners and help the company be the way we can ... tell our story, build our brand, and have a relationship with our customers.' Burberry's CEO Angela Ahrendts focused on multichannel coherence: 'We had a vision, and the vision was to be the first company who was fully digital end-to-end ... A customer will have total access to Burberry across any device, anywhere.'  Marc Menesguen, managing director of strategic marketing at cosmetics giant L'Oreal, said, 'The digital world multiplies the ways our brands can create an emotion-filled relationship with their customers.’”

Serve Your Customers in Smarter Ways

You can use technology to personalize the experience for your customers, and create better interactions along the customer experience journey.

Via Leading Digital:

“Other companies envision how they can be smarter in serving (and selling to) their customers through analytics.  Caesars started with a vision of using real-time customer information to deliver a personalized experience to each customer.  The company was able to increase customer satisfaction and profits per customer using traditional technologies.  Then, as new technologies arose, it extended the vision to include a mobile, location-based concierge in the palm of every customer's hand.”

Learn from Customer Behavior

One of the most powerful things you can now do with the combination of Cloud, Mobile, Social, Big Data and Internet of Things is gain better customer insights.  For example, you can learn from the wealth of social media insights, or you can learn through better integration and analytics of your existing customer data.

Via Leading Digital:

“Another approach is to envision how digital tools might help the company learn from customer behavior.  Commonwealth Bank of Australia sees new technologies as a key way of integrating customer inputs in its co-creation efforts.  According to CIO Ian Narev, 'We are progressively applying new technology to enable customers to play a greater part in product design.  That helps us create more intuitive products and services, readily understandable to our customers and more tailored to their individual needs.'”

Change Customers’ Lives

If you focus on high-value activities, you can create breakthroughs in the daily lives of your customers.

Via Leading Digital:

“Finally, some companies are extending their visions beyond influencing customer experience to actually changing customers' lives.  For instance, Novartis CEO Joseph Jimenez wrote of this potential: ‘The technologies we use in our daily lives, such as smart phones and tablet devices, could make a real difference in helping patients to manage their own health.  We are exploring ways to use these tools to improve compliance rates and enable health-care professionals to monitor patient progress remotely.’”

If you want to change the world, one of the best places to start is right from wherever you are.

With a Cloud and a dream, what can you do to change the world?

You Might Also Like

10 High-Value Activities in the Enterprise

Cloud Changes the Game from Deployment to Adoption

The Future of Jobs

Management Innovation is at the Top of the Innovation Stack

McKinsey on Unleashing the Value of Big Data

Categories: Architecture, Programming

Aligning User Expectations with Business Objectives

Software Requirements Blog - Seilevel.com - Thu, 10/30/2014 - 17:00
Projects with clearly defined business objectives can and do fail when they deliver functionality that syncs closely with those objectives but does not meet user expectations. This may seem counterintuitive at first blush, since the primary purpose of any enterprise software development effort is to deliver tangible financial […]
Categories: Requirements

Are You Living In The Past Or Future?

Making the Complex Simple - John Sonmez - Thu, 10/30/2014 - 15:00

In this video I talk about the importance of living in the present moment and give some tips on how you can make sure you aren’t living in the past or future.

The post Are You Living In The Past Or Future? appeared first on Simple Programmer.

Categories: Programming

#Workout – Premium Print Edition

NOOP.NL - Jurgen Appelo - Thu, 10/30/2014 - 14:46

The new Management 3.0 #Workout book is already available in PDF, Kindle and ePUB versions. And in just a few weeks, readers will be able to get the Premium Print Edition as well. Nearly 500 pages of high-quality, full-color paper, with great design, awesome photos, crisp illustrations, and concrete practices. It will be the most colorful and practical management book ever published!

The post #Workout – Premium Print Edition appeared first on NOOP.NL.

Categories: Project Management

Why Trust is Hard

Herding Cats - Glen Alleman - Thu, 10/30/2014 - 14:06

Hugh MacLeod's art for Zappos provides the foundation for trust in that environment.

If I'm the head of HR, I'm responsible for filling the desks at my company with amazing employees. I can hold people to all the right standards. But ultimately I can't control what they do. This is why hiring for culture works. What Zappos does is radical because it trusts. It says "Go do the best job you can do for the customer, without policy". And leaves employees to come up with human solutions. Something it turns out they're quite good at, if given the chance.

Now let's take another domain, one I'm very familiar with - fault-tolerant process control systems: software and support hardware applied to the emergency shutdown of exothermic chemical reactors (the ones that make the unleaded gasoline for our cars), nuclear reactors and conventional fired power generation, gas turbine controls, and other machines that must work properly. And a similar domain, DO-178C flight control systems, which must equally work without fail and provide all the needed capabilities on day one.

At Zappos, the HR Director describes a work environment where employees are free to do the best job they can for the customer. In the domains above, employees also work to do the best job for the customer they can, but flight safety, life safety, and equipment safety are also part of that best job. In other domains we work in, doing the best job for the customer means processing, with extremely low error rates, transactions worth hundreds of millions of dollars in the enterprise IT paradigm: medical insurance provider services, HHS enrollment, enterprise IT in a variety of domains.

Zappos can recover from an error; the other domains can't. Nonrecoverable errors there mean serious loss of revenue, or even loss of life. I come from those domains, and they inform my view of the software development world - a world where fail-safe, fault-tolerant software is the basis of business success.

So when we hear about the freedom to fail early and fail often in the absence of a domain or context, care is needed. Without a domain and context, it is difficult to assess the credibility of any concept, let alone one that is untested outside of personal anecdote. It comes down to Trust alone or Trust But Verify. I would guarantee that Zappos has some of the verify process as well. It is doubtful employees are left to do anything they wish for their customer, for the simple reason that there is a business governance process at any firm, no matter the size. Behavior, even full-trust behavior, fits inside that governance process.
Categories: Project Management

Splitting User Stories: Alternate Patterns

Too many things going on will lead to less attention to any one subject.

Splitting user stories is an important tool to help teams in a number of ways, ranging from improving the flow of stories through the development process to improving the team's understanding of what is required to deliver the story. In almost every case, smaller is better. We have identified a number of techniques for splitting user stories and a framework for evaluating those splits. Additional splitting techniques include:

  1. And/Or Removal: User stories that include “and” or “or” typically reflect compound thoughts. This is an indication that the story is an epic, which will be too large to complete in a single sprint. Split the stories to eliminate instances of “and” and “or”. An example of a story with an “and / or” problem is: As a project manager I want to be able to review and approve time and expenses logged to my projects to ensure accurate reporting and billing. Stories could be constructed separately for reviewing time accounting, approving time accounting, reviewing expenses and approving expenses. Simplicity reduces the potential for confusion.
  2. Simple/Complex: Complexity makes a story harder to complete, and therefore the story will take longer to deliver compared to a similarly-sized simple story. Splitting can be used to isolate functionality that is more or less complex. Splitting based on complexity gives product owners the option of doing the simple stories first, an approach that can provide teams with insights that reduce the complexity of later stories.
  3. Splitting Non-functional Requirements: Many user stories combine functional and non-functional components. For example, consider the story “As a home brewer, I want a conversion calculator that returns results in a 40-point type display so that I can determine the alcohol level in the beer.” The story could be split to separate the functional side (conversion results) from the non-functional component (size of display). Splitting the story lets the team deliver the calculation before having to address how it is displayed.

These three patterns for splitting user stories (in addition to those noted in previous articles, including workflow, business rules, data variations, elementary processes or syntheses of patterns) are just tools for teams. Teams split stories to help them understand what they are committing to deliver, to reduce the complexity of large stories (or at the very least to isolate the hard parts) and to enhance their ability to consistently deliver value. Splitting stories increases productivity and quality, and reduces the amount of time the team spends scratching their collective heads trying to figure out what they will deliver and how they will deliver it.


Categories: Process Management

Episode 213: James Lewis on Microservices

Johannes Thönes talks to James Lewis, principal consultant at ThoughtWorks, about microservices. They discuss microservices’ recent popularity, architectural styles, deployment, size, technical decisions, and consumer-driven contracts. They also compare microservices to service-oriented architecture and wrap up the episode by talking about key figures in the microservice community and standing on the shoulders of giants. Recording […]
Categories: Programming

The fastest route between voice search and your app

Android Developers Blog - Wed, 10/29/2014 - 20:49
By Jarek Wilkiewicz, Developer Advocate, Google Search

How many lines of code will it take to let your users say Ok Google, and search for something in your app? Hardly any. Starting today, all you need is a small addition to your AndroidManifest.xml in order to connect the Google Now SEARCH_ACTION with your searchable activity:

<activity android:name=".SearchableActivity">
    <intent-filter>
        <action android:name="com.google.android.gms.actions.SEARCH_ACTION"/>
        <category android:name="android.intent.category.DEFAULT"/>
    </intent-filter>
</activity>

Once you make these changes, your app can receive the SEARCH_ACTION intent containing the SearchManager.QUERY extra with the search expression; in your activity, read it with getIntent().getStringExtra(SearchManager.QUERY).

At Google, we always look for innovative ways to help you improve mobile search and drive user engagement back to your app. For example, users can now say to the Google app: “Ok Google, search pizza on Eat24” or “Ok Google, search for hotels in Maui on TripAdvisor.”

This feature is available on English locale Android devices running Jelly Bean and above with the Google app v3.5 or greater. Last but not least, users can enable the Ok Google hot-word detection from any screen, which offers them the fastest route between their search command and your app!


Join the discussion on
+Android Developers


Categories: Programming

No map is an island: Introducing a connected JavaScript Maps API experience

Google Code Blog - Wed, 10/29/2014 - 17:52
Cross-posted from the Google Geo Developers blog

Our digital lives are increasingly connected. We research on our laptops, look up directions on our phones and even navigate with our watches. And by creating maps unique to each user and offering features such as saved places, Google Maps has been making it easier to continue these tasks as we move from device to device.

However, although maps embedded from Google Maps are now built uniquely for every Google user, most of the now two million active sites and apps using the Maps APIs are still islands. When I look for a place to eat on Zagat, I can’t see how far away it is from work. When I look at a travel map in the New York Times, I can’t save those places in order to navigate to them later.

Today we’re taking a step towards connecting these two million sites and apps by introducing a signed-in JavaScript Maps API experience and a feature called attributed save. To help illustrate, we’ve partnered with the New York Times to bring this experience to their 36 hours travel column.

A connected JavaScript Maps API

When you add &signed_in=true to the Google Maps JavaScript API source url, your end users will have the option to sign into the map with their Google account. When they do so, your users will receive a map built for them, in the context of your app. Their saved places — including home and work addresses (if set by the end user) as well as other relevant places — will appear automatically on their map, providing a layer of context that anchors your content and makes it stand out even more.
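
For example, a page could load the API from a URL like https://maps.googleapis.com/maps/api/js?signed_in=true&callback=initMap and create its map exactly as before. A minimal TypeScript sketch; the callback name, element id and coordinates are placeholders, and the google global is assumed to come from that script tag:

// Provided by the Maps API script tag loaded with &signed_in=true above.
declare const google: any;

// Called by the Maps API once the script has loaded (via the callback parameter),
// so it must be visible in global scope.
function initMap(): void {
  const mapDiv = document.getElementById('map') as HTMLElement; // placeholder element id
  // Nothing else changes: for signed-in users, saved places and other
  // signed-in features are layered onto the map automatically.
  new google.maps.Map(mapDiv, {
    center: { lat: 40.7419, lng: -73.9893 }, // placeholder coordinates
    zoom: 13,
  });
}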

Attributed save

Once users are signed into the Google Maps in your app, we can together create an integrated experience between your map content and Google Maps. With attributed save, signed-in users can save places from your app to be accessed later, with attribution and linkbacks, on Google Maps for the web, Android and iOS.

What’s more, you can also enable deep links into your mobile applications. For instance, users can save a place from your desktop app (such as Zagat.com), open up the place on Google Maps on their Android device, and deep link directly into your Android app.

Enabling attributed save is easy — just specify your app name, a link and a place search string or place ID when creating a marker and info window. Or use our SaveWidget to enable attributed save in your own custom info window.

In addition, we’re launching attributed save across all embedded maps today. Attribution and linkback parameters will be inferred automatically from the domain and referrer of the host site, so if you’re using our embedded maps, you don’t need to do anything! If you’re using the Google Maps Embed API, you may customize the source and link back parameters yourself.

One final point: we’ve stated in the past that the JavaScript Maps API is cookieless if loaded from maps.googleapis.com. As of today, to enable the signed in maps experience on sites across the web, the signed-in version of the JavaScript Maps API now does rely on cookies to detect the end user’s signed-in state. Please review our documentation for further details.

That’s all for now. Go try it out. And remember, no map is an island, entire of itself...

Posted by Ken Hoetmer, Product Manager, Google Maps APIs
Categories: Programming

Quote of the Day

Herding Cats - Glen Alleman - Wed, 10/29/2014 - 16:56

Vision without Execution is Hallucination - Jeffrey E. Garten, The Mind Of The CEO

All the rhetoric around any idea needs actionable outcomes that can be tested in the market place, beyond the personal anecdotes of self-selected conversations.

 

Categories: Project Management

Quote of the Day

Herding Cats - Glen Alleman - Wed, 10/29/2014 - 15:56

The Sky's the limit when you don't know what you don't know.

Categories: Project Management

What’s Your Management 3.0 Story?

NOOP.NL - Jurgen Appelo - Wed, 10/29/2014 - 13:50

Some readers told me they have used Moving Motivators during job interviews.

Some readers told me they used Delegation Boards on management teams.

Some readers told me they adopted Merit Money to get rid of bonuses.

I just visited REA Group in Melbourne, Australia, where they use lots of Management 3.0 practices, and have great experiences to share.

The post What’s Your Management 3.0 Story? appeared first on NOOP.NL.

Categories: Project Management

How to Be a 10x Better Speaker with 20 Benefits

NOOP.NL - Jurgen Appelo - Wed, 10/29/2014 - 13:36

Last week, one conference attendee told me my presentation at Agile Tour Toulouse was perfect.

It was very kind, but I didn’t believe her.

In five years, I have spoken at almost 100 conferences and joined a similar number of community and company events. Thanks to the many discussions I had with event organizers, I think I now understand how to be a more valuable speaker.

The post How to Be a 10x Better Speaker with 20 Benefits appeared first on NOOP.NL.

Categories: Project Management