
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

You Are Not Agile . . . Hybrid Models Revisited

A hybrid

Being Agile is a lot easier than it was even a few years ago. However, there are still roadblocks, including lack of management buy-in, not changing the whole development life cycle, and only vaguely considering technical Agile practices. The discussions I have had with colleagues, readers of this blog and a professor at Penn State about these roadblocks have generated a lot of passion over the relative value and usefulness of hybrids.

Classic software development techniques are a cascade beginning with requirements definition and ending with implementation. The work moves through that cascade in a manner similar to an auto assembly line. The product is not functional until it rolls off the end of the line. The model uses specialized labor doing specific tasks over and over. Henry Ford changed the automotive market with this technique. However, applying this model to software development can cause all sorts of bad behaviors. Those bad behaviors are generally caused by a mismatch between linear work, like manufacturing, and work that is more dynamic and more oriented to research and development. One example of a bad behavior is the abuse of stage gates. Often teams continue working even when the method indicates they must wait for a stage gate decision. The problem is that if they wait for a decision they will never complete on time. Managers are ALWAYS aware of what is happening, but choose not to ask. It generally is not because anyone in the process is a bad player, but rather that the process is corrupt.

Agile is different. Agile stops the cascade by breaking work into smaller chunks. Those smaller pieces are then designed, developed, tested and delivered in short sprints (a regular cadence) in order to generate immediate feedback. The “new” process removes the reason for running stage-gate stop signs.

Hybrid models attempt to harvest the best pieces from the classic and Agile frameworks to fit the current organizational culture or structure. The process of making Agile (or classic methods, for that matter) fit into an organization optimized for classic methods is where issues generally creep in. The compromises required often stray from the assumptions built into the Agile values and principles. Examples of the assumptions built into Agile include stable teams, self-management and delivering functional software at the end of every sprint. It is easier to wander away from those values and principles than to tackle changing the organization’s culture or structure. One of the most common (and damaging) compromises can be seen in organizations that continue the practice of leveraging dynamically staffed teams (often called matrix organizations). Stable teams often require rearranging organizational boundaries and a closer assessment of capabilities.

Typical organizational problems that, if not addressed, will lead organizations to generate classic/Agile hybrids include:

  1. Silos: Boundaries between groups require higher levels of planning and coordination to ensure the right people are available at the right time even when there are delays. Historically, large development organizations have included the role of scheduler/expediter in their organization chart to deal with these types of issues.
  2. Overly Lean: Many development organizations have suffered through years of cost cutting and have far fewer people than needed to complete the work they have committed to accomplishing. Personnel actively work on several projects at once to give the appearance of progress across a wide range of enterprises. Switching between tasks is highly inefficient, which reduces the overall value delivered and often leads to more cost-cutting pressure.
  3. Lack of Focus: Leaders in development organizations often feel the need to accept and start all projects that are presented. Anthony Mersino calls this the “say yes to everything syndrome.” This typically occurs in organizations without strong portfolio management in place. Ultimately, this means that people and teams need to multitask, leading to inefficiency.
  4. Lack of Automation: While I have never met a development practice that couldn’t be done on a small scale using pencil and paper, automation makes scale possible. For example, consider running several thousand regression tests by hand. In order to run the tests you would need either a significant amount of time or lots of people. Lots of people generally means more teams, more team boundaries, more hierarchy and more overhead – leading to the possibility of just running fewer tests to meet a date or budget.

The values and principles that underpin Agile really matter. They guide behavior so that it is more focused on delivering value quickly. The four values in the Agile Manifesto are presented as couplets. For example, in the first value, “Individuals and interactions over processes and tools,” the items on the left side are valued more than those on the right (even though those on the right still have value). Hybrid models often generate compromises that shift focus from the attributes on the left toward the center, and perhaps back to the attributes on the right. Hybrids are not evil or bad, but they are generally cop-outs if they wander away from basic Agile values and principles rather than addressing tough organizational issues.

Agree or disagree, your thoughts are important to guiding the conversation about what is and isn’t Agile.


Categories: Process Management

Announcing the Android Auto Desktop Head Unit

Android Developers Blog - 6 hours 41 min ago

Posted by Josh Gordon, Developer Advocate

Today we’re releasing the Desktop Head Unit (DHU), a new testing tool for Android Auto developers. The DHU enables your workstation to act as an Android Auto head unit that emulates the in-car experience for testing purposes. Once you’ve installed the DHU, you can test your Android Auto apps by connecting your phone and workstation via USB. Your phone will behave as if it’s connected to a car. Your app is displayed on the workstation, the same as it’s displayed on a car.

The DHU runs on your workstation. Your phone runs the Android Auto companion app.

Now you can test pre-release versions of your app in a production-like environment, without having to work from your car. With the release of the DHU, the previous simulators are deprecated, but will be supported for a short period prior to being officially removed.

Getting started

You’ll need an Android phone running Lollipop or higher, with the Android Auto companion app installed. Compile your Auto app and install it on your phone.

Install the DHU

Install the DHU on your workstation by opening the SDK Manager and downloading it from Extras > Android Auto Desktop Head Unit emulator. The DHU will be installed in the <sdk>/extras/google/auto/ directory.

Running the DHU

Be sure your phone and workstation are connected via USB.

  1. Enable Android Auto developer mode by starting the Android Auto companion app and tapping on the header image 10 times. This is a one-time step.
  2. Start the head unit server in the companion app by opening the context menu and selecting “Start head unit server”. This option only appears after developer mode is enabled, and a notification appears to show the server is running. Do this before starting the DHU on your workstation.
  3. On your workstation, set up port forwarding using ADB to allow the DHU to connect to the head unit server running on your phone. Open a terminal and type adb forward tcp:5277 tcp:5277. Don’t forget this step!
  4. Start the DHU.
      cd <sdk>/extras/google/auto/
      On Linux or OS X: ./desktop-head-unit
      On Windows: desktop-head-unit.exe

At this point the DHU will launch on your workstation, and your phone will enter Android Auto mode. Check out the developer guide for more info. We hope you enjoy using the DHU!

Categories: Programming

Building better apps with Runtime Permissions

Android Developers Blog - 7 hours 30 min ago

Posted by Ian Lake, Developer Advocate

Android devices do a lot, whether it is taking pictures, getting directions or making phone calls. With all of this functionality comes a large amount of very sensitive user data including contacts, calendar appointments, current location, and more. This sensitive information is protected by permissions, which each app must have before being able to access the data. Android 6.0 Marshmallow introduces one of the largest changes to the permissions model with the addition of runtime permissions, a new permission model that replaces the existing install time permissions model when you target API 23 and the app is running on an Android 6.0+ device.

Runtime permissions give your app the ability to control when and with what context you’ll ask for permissions. This means that users installing your app from Google Play will not be required to accept a list of permissions before installing your app, making it easy for users to get directly into your app. It also means that if your app adds new permissions, app updates will not be blocked until the user accepts the new permissions. Instead, your app can ask for the newly added runtime permissions as needed.

Finding the right time to ask for runtime permissions has an important impact on your app’s user experience. We’ve gathered a number of design patterns in our new Permission design guidelines including best practices around when to request permissions, how to explain why permissions are needed, and how to handle permissions being denied.

Ask up front for permissions that are obvious

In many cases, you can avoid permissions altogether by using the existing intents system to utilize other existing specialized apps rather than building a full experience within your app. An example of this is using ACTION_IMAGE_CAPTURE to start an existing camera app the user is familiar with rather than building your own camera experience. Learn more about permissions versus intents.

However, if you do need a runtime permission, there are a number of tools to help you. Checking whether your app has a permission is possible with ContextCompat.checkSelfPermission() (available as part of revision 23 of the support-v4 library for backward compatibility), and requesting permissions can be done with requestPermissions(), which brings up the system-controlled permissions dialog to allow the user to grant you the requested permission(s) if you don’t already have them. Keep in mind that users can revoke permissions at any time through the system settings, so you should check permissions every time.

A special note should be made about shouldShowRequestPermissionRationale(). This method returns true if the user has denied your permission request at least once but has not selected the ‘Don’t ask again’ option (which appears the second or later time the permission dialog is shown). This gives you an opportunity to provide additional education about the feature and why you need the given permission. Learn more about explaining why the app needs permissions.
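To make the flow concrete, here is a minimal sketch of how these calls can fit together in an Activity, assuming the support-v4 library is available; the REQUEST_LOCATION request code and the showLocationRationale()/enableLocationFeature()/disableLocationFeature() helpers are hypothetical names invented for this example, not part of the Android API.

import android.Manifest;
import android.app.Activity;
import android.content.pm.PackageManager;
import android.support.v4.app.ActivityCompat;
import android.support.v4.content.ContextCompat;

public class LocationActivity extends Activity {

  // Hypothetical request code defined by the app.
  private static final int REQUEST_LOCATION = 42;

  private void startLocationFeature() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.ACCESS_FINE_LOCATION)
        != PackageManager.PERMISSION_GRANTED) {
      if (ActivityCompat.shouldShowRequestPermissionRationale(
          this, Manifest.permission.ACCESS_FINE_LOCATION)) {
        // The user denied the request before: explain why the feature needs
        // the permission before asking again.
        showLocationRationale();
      }
      ActivityCompat.requestPermissions(this,
          new String[]{Manifest.permission.ACCESS_FINE_LOCATION}, REQUEST_LOCATION);
    } else {
      enableLocationFeature();
    }
  }

  @Override
  public void onRequestPermissionsResult(int requestCode, String[] permissions,
      int[] grantResults) {
    if (requestCode == REQUEST_LOCATION && grantResults.length > 0
        && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
      enableLocationFeature();
    } else {
      // Denied: degrade gracefully rather than failing.
      disableLocationFeature();
    }
  }

  // Hypothetical helpers for this sketch.
  private void showLocationRationale() { /* show an in-app explanation */ }
  private void enableLocationFeature() { /* start using the location */ }
  private void disableLocationFeature() { /* hide or disable the feature */ }
}

On devices running Android 5.x or earlier, permissions are still granted at install time, so checkSelfPermission() simply reports them as granted.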

Read through the design guidelines and our developer guide for all of the details on getting your app ready for Android 6.0 and runtime permissions. Making it easy to install your app and providing context around accessing users’ sensitive data are key changes you can make to build better apps.

Categories: Programming

Announcing Great New SQL Database Capabilities in Azure

ScottGu's Blog - Scott Guthrie - 8 hours 52 min ago

Today we are making available several new SQL Database capabilities in Azure that enable you to build even better cloud applications.  In particular:

  • We are introducing two new pricing tiers for our  Elastic Database Pool capability.  Elastic Database Pools enable you to run multiple, isolated and independent databases on a private pool of resources dedicated to just you and your apps.  This provides a great way for software-as-a-service (SaaS) developers to better isolate their individual customers in an economical way.
  • We are also introducing new higher-end scale options for SQL Databases that enable you to run even larger databases with significantly more compute + storage + networking resources.

Both of these additions are available to start using immediately.

Elastic Database Pools

If you are a SaaS developer with tens, hundreds, or even thousands of databases, an elastic database pool dramatically simplifies the process of creating, maintaining, and managing performance across these databases within a budget that you control. 


A common SaaS application pattern (especially for B2B SaaS apps) is for the SaaS app to use a different database to store data for each customer.  This has the benefit of isolating the data for each customer separately (and enables each customer’s data to be encrypted separately, backed up separately, etc.).  While this pattern is great from an isolation and security perspective, each database can end up having varying and unpredictable resource consumption (CPU/IO/Memory patterns), and because the peaks and valleys for each customer can be difficult to predict, it is hard to know how much capacity to provision.  Developers were previously faced with two options: either over-provision database resources based on peak usage and overpay, or under-provision to save cost at the expense of performance and customer satisfaction during peaks.

Microsoft created elastic database pools specifically to help developers solve this problem.  With Elastic Database Pools you can allocate a shared pool of database resources (CPU/IO/Memory), and then create and run multiple isolated databases on top of this pool.  You can set minimum and maximum performance SLA limits of your choosing for each database you add into the pool (ensuring that none of the databases unfairly impacts other databases in your pool).  Our management APIs also make it much easier to script and manage these multiple databases together, as well as optionally execute queries that span across them (useful for a variety of operations).  And best of all, when you add multiple databases to an Elastic Database Pool, you are able to average out the typical utilization load (because your customers tend to have different peaks and valleys) and end up requiring far fewer database resources (and spending less money as a result) than you would if you ran each database separately.

The chart below shows a typical example of what we see when SaaS developers take advantage of the Elastic Pool capability.  Each individual database has different peaks and valleys in terms of utilization.  As you combine several of these databases into an Elastic Pool, the peaks and valleys tend to normalize out (since they often happen at different times), requiring far fewer overall resources than you would need if each database were resourced separately:

databases sharing eDTUs

Because Elastic Database Pools are built using our SQL Database service, you also get to take advantage of all of the underlying database as a service capabilities that are built into it: 99.99% SLA, multiple-high availability replica support built-in with no extra charges, no down-time during patching, geo-replication, point-in-time recovery, TDE encryption of data, row-level security, full-text search, and much more.  The end result is a really nice database platform that provides a lot of flexibility, as well as the ability to save money.

New Basic and Premium Tiers for Elastic Database Pools

Earlier this year at the //Build conference we announced our new Elastic Database Pool support in Azure and entered public preview with the Standard Tier edition of it.  The Standard Tier allows individual databases within the elastic pool to burst up to 100 eDTUs (a DTU represents a combination of Compute + IO + Storage performance).

Today we are adding additional Basic and Premium Elastic Database Pools to the preview to enable a wider range of performance and cost options.

  • Basic Elastic Database Pools are great for light-usage SaaS scenarios.  Basic Elastic Database Pools allow individual database performance bursts of up to 5 eDTUs.
  • Premium Elastic Database Pools are designed for databases that require the highest performance per database. Premium Elastic Database Pools allow individual database performance bursts of up to 1,000 eDTUs.

Collectively we think these three Elastic Database Pool pricing tier options provide a tremendous amount of flexibility and optionality for SaaS developers to take advantage of, and are designed to enable a wide variety of different scenarios.

Easily Migrate Databases Between Pricing Tiers

One of the cool capabilities we support is the ability to easily migrate an individual database between different Elastic Database Pools (including ones with different pricing tiers).  For example, if you were a SaaS developer you could start a customer out with a trial edition of your application – and choose to run the database that backs it within a Basic Elastic Database Pool to run it super cost effectively.  As the customer’s usage grows you could then auto-migrate them to a Standard database pool without customer downtime.  If the customer grows to require a tremendous amount of resources, you could then migrate them to a Premium Database Pool or run their database as a standalone SQL Database with a huge amount of resource capacity.

This provides a tremendous amount of flexibility and capability, and enables you to build even better applications.

Managing Elastic Database Pools

One of the other nice things about Elastic Database Pools is that the service provides the management capabilities to easily manage large collections of databases without you having to worry about the infrastructure that runs them.

You can create and manage Elastic Database Pools using our Azure Management Portal or via our command-line tools or REST Management APIs.  With today’s update we are also adding support so that you can use T-SQL to add/remove databases to/from an elastic pool.  Today’s update also adds T-SQL support for measuring resource utilization of databases within an elastic pool – making it even easier to monitor and track utilization by database.

Elastic Database Pool Tier Capabilities

During the preview we have been tuning, and will continue to tune, a number of parameters that control the density of Elastic Database Pools.

In particular, the current limits for the number of databases per pool and the number of pool eDTUs are something we plan to steadily increase as we march towards the general availability release.  Our plan is to provide the highest possible density per pool, the largest pool sizes, and the best Elastic Database Pool economics, while at the same time keeping our 99.99% availability SLA.

Below are the current performance parameters for each of the Elastic Database Pool Tier options in preview today:

 

Elastic Database Pool (per-pool limits)

  • eDTU range per pool (preview limits): Basic 100-1200 eDTUs; Standard 100-1200 eDTUs; Premium 125-1500 eDTUs
  • Storage range per pool: Basic 10-120 GB; Standard 100-1200 GB; Premium 63-750 GB
  • Maximum databases per pool (preview limits): Basic 200; Standard 200; Premium 50
  • Estimated monthly pool and add-on eDTU costs (preview prices):
      Basic: starting at $0.2/hr (~$149/pool/mo); each additional eDTU $0.002/hr (~$1.49/mo)
      Standard: starting at $0.3/hr (~$223/pool/mo); each additional eDTU $0.003/hr (~$2.23/mo)
      Premium: starting at $0.937/hr (~$697/pool/mo); each additional eDTU $0.0075/hr (~$5.58/mo)
  • Storage per eDTU: Basic 0.1 GB; Standard 1 GB; Premium 0.5 GB

Elastic Databases (per-database limits)

  • eDTU max per database (preview limits): Basic 0-5; Standard 0-100; Premium 0-1000
  • Storage max per database: Basic 2 GB; Standard 250 GB; Premium 500 GB
  • Per-database cost (preview prices): Basic $0.0003/hr (~$0.22/mo); Standard $0.0017/hr (~$1.26/mo); Premium $0.0084/hr (~$6.25/mo)

We’ll continue to iterate on the above parameters and increase the maximum number of databases per pool as we progress through the preview, and would love your feedback as we do so.

New Higher-Scale SQL Database Performance Tiers

In addition to the enhancements for Elastic Database Pools, we are also today releasing new SQL Database Premium performance tier options for standalone databases. 

Today we are adding new P4 (500 DTU) and P11 (1750 DTU) levels, which provide even higher performance options for SQL Databases that need to scale up. The new P11 edition also supports databases up to 1 TB in size.

Developers can now choose from 10 different SQL Database Performance levels.  You can easily scale-up/scale-down as needed at any point without database downtime or interruption.  Each database performance tier supports a 99.99% SLA, multiple-high availability replica support built-in with no extra charges (meaning you don’t need to buy multiple instances to get an SLA – this is built-into each database), no down-time during patching, point-in-time recovery options (restore without needing a backup), TDE encryption of data, row-level security, and full-text search.


Learn More

You can learn more about SQL Databases by visiting the http://azure.microsoft.com web site.  Check out the SQL Database product page to learn more about the capabilities SQL Databases provide, and read the technical documentation to learn how to build great applications using them.

Summary

Today’s database updates enable developers to build even better cloud applications, and to use data to make them even richer and more intelligent.  We are really looking forward to seeing the solutions you build.

Hope this helps,

Scott

Categories: Architecture, Programming

Software Development Linkopedia August 2015

From the Editor of Methods & Tools - 11 hours 20 min ago
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about commitment & estimation, better software testing, the dark side of metrics, scrum retrospectives, databases and Agile adoption. Blog: Against Estimate-Commitment Blog: Write Better Tests in 5 Steps Blog: Story Point […]

Learn app monetization best practices with Udacity and Google

Google Code Blog - Wed, 08/26/2015 - 18:17

Posted by Ido Green, Developer Advocate

There is no higher form of user validation than having customers support your product with their wallets. However, the path to a profitable business is not necessarily an easy one. There are many strategies to pick from and a lot of little things that impact the bottom line. If you are starting a new business (or thinking how to improve the financial situation of your current startup), we recommend this course we've been working on with Udacity!

This course blends instruction with real-life examples to help you effectively develop, implement, and measure your monetization strategy. By the end of this course you will be able to:

  • Choose & implement a monetization strategy relevant to your service or product.
  • Set performance metrics & monitor the success of a strategy.
  • Know when it might be time to change methods.

Go try it at: udacity.com/course/app-monetization--ud518

We hope you will enjoy and earn from it!

Categories: Programming

7 Strategies for 10x Transformative Change

Peter Thiel, VC, PayPal co-founder, early Facebook investor, and most importantly, the supposed inspiration for Silicon Valley's intriguing Peter Gregory character, argues in his book Zero to One that a successful business needs to make a product that is 10 times better than its closest competitor.

The title Zero to One refers to the idea of progress as either horizontal/extensive or vertical/intensive. For a more detailed explanation take a look at Peter Thiel's CS183: Startup - Class 1 Notes Essay.

Horizontal/extensive progress refers to copying things that work. Observe, imitate, and repeat.  The one word summary for the concept is  "globalization.” For more on this PAYPAL MAFIA: Reid Hoffman & Peter Thiel's Master Class in China is an interesting watch.

Vertical/intensive progress means doing something genuinely new, that is, going from zero to one, as opposed to going from one to N, which is merely globalization. This is the creative spark. The hero's journey of overcoming obstacles on the way to becoming the Master of the Universe you were always meant to be.

We see this pattern with Google a lot. Google often hits scaling challenges long before anyone else and because they have a systematizing culture they produce discrete replicatable technologies that then diffuse out to the rest of the world, often through open source efforts.

Google told us about the Google File System in 2003, MapReduce in 2004, Bigtable in 2006, The Datacenter as a Computer in 2009, Percolator (real-time updates) in 2010, Pregel (graph processing) in 2010, Dremel (interactive analysis) in 2010, Spanner (globally distributed database) in 2012,  Omega (cluster scheduling) in 2013, Borg (cluster manager) in 2015, and Jupiter Rising (advanced networking) in 2015.

Sometime later we've seen the development of open source parallels like HDFS, Hadoop, HBase, Giraph, YARN, Drill, and Mesos. 

So, how can you rise up and meet the 10x challenge?

Murat Demirbas, a computer science and engineering professor at SUNY Buffalo, and an awesome writer on all things distributed, came up with some good suggestions in How to go for 10X.

Categories: Architecture

The Product Roadmap is Not the Project Portfolio

I keep seeing talks and arguments about how the portfolio team should manage the epics for a program. That conflates the issue of project portfolio management and product management.

(Image: teams and value)

Several potential teams affect each project (or program).

Starting at the right side of this image, the project portfolio team decides which projects to do and when for the organization.

The product owner value team decides which features/feature sets to do when for a given product. That team may well split feature sets into releases, which provides the project portfolio team opportunities to change the project the cross-functional team works on.

The product development team (the agile/lean cross-functional team) decides how to design, implement, and test the current backlog of work.

When the portfolio team gets in the middle of the product roadmap planning, the product manager does not have the flexibility to manage the product backlog or the capabilities of the product over time.

When the product owner value team gets in the middle (or doesn’t plan enough releases), they prevent the project portfolio team from being able to change their minds over time.

When the product development team doesn’t release working product often, they prevent the product owner team from managing the product value. In addition, the product development team prevents the project portfolio team from implementing the organizational strategy when they don’t release often.

All of these teams have dependencies on each other.

The project portfolio team optimizes the organization’s output.

The product owner value team optimizes the product’s output.

The product development team determines how to optimize for features moving across the board. When the features are complete, the product owner team can replan for this product and the project portfolio team can replan for the organization. Everyone wins.

That’s why the product owner team is not the project portfolio team. (In small organizations, it’s possible people have multiple roles. If so, which hat are they wearing to make this decision?)

The product roadmap is not the project portfolio. Yes, you may well use the same ranking approaches. The product roadmap optimizes for this product. The project portfolio team optimizes for the overall organization. They fulfill different needs. Please do not confuse the two decisions.

Categories: Project Management

You Are Not Agile . . . If You Only Do Scrum

Being Agile is more than just Scrum or not being waterfall.

If you were a developer from the 1980s or early 1990s and traveled in time to attend most Agile conferences today, you would probably take away the idea that Agile was all about Scrum and kicking sand at project managers. Unfortunately, MANY organizations have the same stilted perception. The perception that Agile is just another form of project management is EXACTLY wrong. If an organization only embraces the parts of Agile that address team organization and project control, they are not truly Agile. In order to deliver the most value from adopting and embracing Agile, organizations and teams must understand and adopt Agile technical practices. Just doing Scrum is not going to cut it if your goal is improving the quantity and quality of functional software you deliver.

Tools, processes and techniques that facilitate the development, testing and implementation of functional software are technical practices. A very abridged list of technical practices includes:

  • User Stories
  • Story Mapping
  • Test Driven Development (including variants such as Behavior Driven Development)
  • Architecture
  • Technical Standards (including Coding Standards)
  • Refactoring
  • Automated Testing
  • Pair Programming
  • Continuous Integration

Implementing a combination of Scrum with Extreme Programming (XP) is one of the most common methods of addressing both the technical and management/control aspects of delivering functional software. XP is an Agile software development methodology originally conceived by Kent Beck and others in the late 1990s. There are many other techniques, frameworks and methodologies that support the technical side of Agile. These techniques directly impact how effectively and efficiently solutions are designed, coded and tested. Just as importantly, efficiency directly impacts the amount of value that can be delivered.

You Are Not Agile . . . If You Do Waterfall described some of the attributes of organizations that get stuck transitioning to Agile from a process point of view. Similarly, many organizations begin an Agile transition by adopting Scrum, perhaps with a few techniques such as User Stories; however, they never quite get around to addressing the technical components. The issues of the business analysts, developers and testers don’t get addressed.

Allan Kelly, in an interview on the Software Process and Measurement Cast 353, stated that if you are not “doing” the technical side of Agile, you are not Agile. This is probably somewhat of a harsh assessment and perhaps a bit hyperbolic, but only a bit. Developing, enhancing and maintaining software (or any product) requires more than short iterations, stand-ups, retrospectives and demonstrations. Far be it from me to say those techniques don’t help; even on their own they are certainly better than most waterfall practices, but they do not address the nuts and bolts of developing software. In order to deliver the maximum value we have to change how we develop designs, write code and then prove that it works, because that is the true heart and soul of software development.

In the long run, Agile has to address management buy-in, changing the whole development life cycle, and the technical practices, or you won’t get the full value that Agile can deliver.

I would be interested in your thoughts!


Categories: Process Management

Help, my diagram doesn't fit on one page!

Coding the Architecture - Simon Brown - Tue, 08/25/2015 - 17:16

This definitely goes into the category of a frequently asked question because it crops up time and time again, both during and after my software architecture sketching workshop.

I'm following your C4 approach but my software system is much bigger than the example you use in your workshop. How do you deal with the complexity of these larger systems? And what happens when my diagram doesn't fit on one page?

Even with a relatively small software system, it's tempting to try and include the entire story on a single diagram. For example, if you have a web application, it seems logical to create a single component diagram that shows all of the components that make up that web application. Unless your software system really is that small, you're likely to run out of room on the diagram canvas or find it difficult to discover a layout that isn't cluttered by a myriad of overlapping lines. Using a larger diagram canvas can sometimes help, but large diagrams are usually hard to interpret and comprehend because the cognitive load is too high. And if nobody understands the diagram, nobody is going to look at it.

Instead, don't be afraid to split that single complex diagram into a larger number of simpler diagrams, each with a specific focus around a business area, functional area, functional grouping, bounded context, use case, user interaction, feature set, etc. You can see an example of this in One view or many?, where I create one component diagram per web MVC controller rather than having a single diagram showing all components. The key is to ensure that each of the separate diagrams tells a different part of the same overall story, at the same level of abstraction.

Categories: Architecture

Breaking the SQL Barrier: Google BigQuery User-Defined Functions

Google Code Blog - Tue, 08/25/2015 - 16:55

Posted by Thomas Park, Senior Software Engineer, Google BigQuery

Many types of computations can be difficult or impossible to express in SQL. Loops, complex conditionals, and non-trivial string parsing or transformations are all common examples. What can you do when you need to perform these operations but your data lives in a SQL-based big data tool? Is it possible to retain the convenience and speed of keeping your data in a single system when portions of your logic are a poor fit for SQL?

Google BigQuery is a fully managed, petabyte-scale data analytics service that uses SQL as its query interface. As part of our latest BigQuery release, we are announcing support for executing user-defined functions (UDFs) over your BigQuery data. This gives you the ability to combine the convenience and accessibility of SQL with the option to use a familiar programming language, JavaScript, when SQL isn’t the right tool for the job.

How does it work?

BigQuery UDFs are similar to map functions in MapReduce. They take one row of input and produce zero or more rows of output, potentially with a different schema.

Below is a simple example that performs URL decoding. Although BigQuery provides a number of built-in functions, it does not have a built-in for decoding URL-encoded strings. However, this functionality is available in JavaScript, so we can extend BigQuery with a simple User-Defined Function to decode this type of data:



function decodeHelper(s) {
  try {
    return decodeURI(s);
  } catch (ex) {
    return s;
  }
}

// The UDF.
function urlDecode(r, emit) {
  emit({title: decodeHelper(r.title),
        requests: r.num_requests});
}

BigQuery UDFs are functions with two formal parameters. The first parameter is a variable to which each input row will be bound. The second parameter is an “emitter” function. Each time the emitter is invoked with a JavaScript object, that object will be returned as a row to the query.

In the above example, urlDecode is the UDF that will be invoked from BigQuery. It calls a helper function decodeHelper that uses JavaScript’s built-in decodeURI function to transform URL-encoded data into UTF-8.

Note the use of try / catch in decodeHelper. Data is sometimes dirty! If we encounter an error decoding a particular string for any reason, the helper returns the original, un-decoded string.

To make this function visible to BigQuery, it is necessary to include a registration call in your code that describes the function, including its input columns and output schema, and a name that you’ll use to reference the function in your SQL:



bigquery.defineFunction(
  'urlDecode',                  // Name used to call the function from SQL.

  ['title', 'num_requests'],    // Input column names.

  // JSON representation of output schema.
  [{name: 'title', type: 'string'},
   {name: 'requests', type: 'integer'}],

  urlDecode                     // The UDF reference.
);

The UDF can then be invoked by the name “urlDecode” in the SQL query, with a source table or subquery as an argument. The following query looks for the most-visited French Wikipedia articles from April 2015 that contain a cédille character (ç) in the title:



SELECT requests, title
FROM
  urlDecode(
    SELECT
      title, sum(requests) AS num_requests
    FROM
      [fh-bigquery:wikipedia.pagecounts_201504]
    WHERE language = 'fr'
    GROUP EACH BY title
  )
WHERE title LIKE '%ç%'
ORDER BY requests DESC
LIMIT 100

This query processes data from a 5.6 billion row / 380 GB dataset and generally runs in less than two minutes. The cost? About $1.37, at the time of this writing.

To use a UDF in a query, it must be described via UserDefinedFunctionResource elements in your JobConfiguration request. UserDefinedFunctionResource elements can either contain inline JavaScript code or pointers to code files stored in Google Cloud Storage.

Under the hood

JavaScript UDFs are executed on instances of Google V8 running on Google servers. Your code runs close to your data in order to minimize added latency.

You don’t have to worry about provisioning hardware or managing pipelines to deal with data import / export. BigQuery automatically scales with the size of the data being queried in order to provide good query performance.

In addition, you only pay for what you use - there is no need to forecast usage or pre-purchase resources.

Developing your function

Interested in developing your JavaScript UDF without running up your BigQuery bill? Here is a simple browser-based widget that allows you to test and debug UDFs.

Note that not all JavaScript functionality supported in the browser is available in BigQuery. For example, anything related to the browser DOM is unsupported, including Window and Document objects, and any functions that require them, such as atob() / btoa().

Tips and tricks

Pre-filter input

In our URL-decoding example, we are passing a subquery as the input to urlDecode rather than the full table. Why?

There are about 5.6 billion rows in [fh-bigquery:wikipedia.pagecounts_201504]. However, one of the query predicates will filter the input data down to the rows where language is “fr” (French) - this is about 262 million rows. If we ran the UDF over the entire table and did the language and cédille filtering in a single WHERE clause, that would cause the JavaScript framework to process over 21 times more rows than it would with the filtered subquery. This equates to a lot of CPU cycles wasted doing unnecessary data conversion and marshalling.

If your input can easily be filtered down before invoking a UDF by using native SQL predicates, doing so will usually lead to a faster (and potentially cheaper) query.

Avoid persistent mutable state

You must not store and access mutable state across UDF execution for different rows. The following contrived example illustrates this error:



// myCode.js
var numRows = 0;

function dontDoThis(r, emit) {
  emit({rowCount: ++numRows});
}

// The query.
SELECT max(rowCount) FROM dontDoThis(myTable);

This is a problem because BigQuery will shard your query across multiple nodes, each of which has independent V8 instances and will therefore accumulate separate values for numRows.

Expand select *

You cannot execute SELECT * FROM urlDecode(...) at this time; you must explicitly list the columns being selected from the UDF: select requests, title from urlDecode(...)

For more information about BigQuery User-Defined Functions, see the full feature documentation.

Categories: Programming

An Iterative Waterfall Isn’t Agile

Mike Cohn's Blog - Tue, 08/25/2015 - 16:43

I’ve noticed something disturbing over the past two years. And it’s occurred uniformly with teams I’ve worked with all across the world. It’s the tendency to create an iterative waterfall process and then to call it agile.

An iterative waterfall looks something like this: In one sprint, someone (perhaps a business analyst working with a product owner) figures out what is to be built. 

Because they’re trying to be agile, they do this with user stories. But rather than treating the user stories as short placeholders for future conversations, each user story becomes a mini-specification document, perhaps three to five pages long. And I’ve seen them longer than that. 

These mini-specs/user stories document nearly everything conceivable about a given user story.

Because this takes a full sprint to figure out and document, a second sprint is devoted to designing the user interface for the user story. Sometimes, the team tries to be a little more agile (in their minds) by starting the design work just a little before the mini-spec for a user story is fully written. 

Many on the team will consider this dangerous because the spec isn’t fully figured out yet. But, what the heck, they’ll reason, this is where the agility comes in.

Programmers are then handed a pair of documents. One shows exactly what the user story should look like when implemented, and the other provides all details about the story’s behavior. 

No programming can start until these two artifacts are ready. In some companies, it’s the programmers who force this way of working. They take an attitude of saying they will build whatever is asked for, but you better tell them exactly what is needed at the start of the sprint.

Some organizations then stretch things out even further by having the testers work an iteration behind the programmers. This seems to happen because a team’s user stories get larger when each user story needs to include a mini-spec and a full UI design before it can be coded.

Fortunately, most teams realize that programmers and testers need to work together in the same iteration, but they do not extend that to the whole team working together. This leads to the process shown in this figure.

This figure shows a first iteration devoted to analysis. A second iteration (possibly slightly overlapping with the first) is devoted to user experience design. And then a third iteration is devoted to coding and testing.

This is not agile. It might be your organization’s first step toward becoming agile. But it’s not agile.

What we see in this figure is an iterative waterfall.

In traditional, full waterfall development, a team does all of the analysis for the entire project first. Then they do all the design for the entire project. Then they do all the coding for the entire project. Then they do all the testing for the entire project.

In the iterative waterfall of the figure above, the team is doing the same thing but they are treating each story as a miniature project. They do all the analysis for one story, then all the design for one story, then all the coding and testing for one story. This is an iterative waterfall process, not an agile process.

Ideally, in an agile process, all types of work would finish at exactly the same time. The team would finish analyzing the problem at exactly the same time they finished designing the solution to the problem, which would also be the same time they finished coding and testing that solution. All four of those disciplines (and any others I’m not using in this example) would all finish at exactly the same time.

It’s a little naïve to assume a team can always perfectly achieve that. (It can be achieved sometimes.) But it can remain the goal a team can work towards.

A team should always work to overlap work as much as possible. And upfront thinking (analysis, design and other types of work) should be done as late as possible and in as little detail as possible while still allowing the work to be completed within the iteration.

If you are treating your user stories as miniature specification documents, stop. Start instead thinking about each as a promise to have a conversation. 

Feel free to add notes to some stories about things you want to make sure you bring up during that conversation. But adding these notes should be an optional step, not a mandatory step in a sequential process. 

Leaving them optional avoids turning the process into an iterative waterfall process and keeps your process agile.

Get the Do’s and Don’ts for Notifications from Game Developer Seriously

Android Developers Blog - Mon, 08/24/2015 - 17:41

Posted by Lily Sheringham, Developer Marketing at Google Play

Editor’s note: We’ve been talking to developers to find out how they’ve been achieving success on Google Play. We recently spoke to Reko Ukko at Finnish mobile game developer, Seriously, to find out how to successfully use Notifications.

Notifications on Android let you send timely, relevant, and actionable information to your users' devices. When used correctly, notifications can increase the value of your app or game and drive ongoing engagement.

Seriously is a Finnish mobile game developer focused on creating entertaining games with quality user experiences. They use push notifications to drive engagement with their players, such as helping players progress to the next level when they’ve left the app after getting stuck.

Reko Ukko, VP of Game Design at Seriously, shared his tips with us on how to use notifications to increase the value of your game and drive ongoing engagement.

Do’s and don’ts for successful game notifications

Do’s:

  • Do let the user get familiar with your service and its benefits before asking for permission to send notifications.
  • Do include actionable context. If it looks like a player is stuck on a level, send them a tip to encourage action.
  • Do consider re-activation. If the player thoroughly completes a game loop and could be interested in playing again, think about using a notification. Look at timing this shortly after the player exits the game.
  • Do deep link from the notification to where the user expects to go to based on the message. For example, if the notification is about "do action X in the game now to win", link to where that action can take place.
  • Do try to make an emotional connection with the player by reflecting the style, characters, and atmosphere of your game in the notification. If the player is emotionally connected to your game, they’ll appreciate your notifications and be more likely to engage.

Don’ts:

  • Don’t treat your users as if they’re all the same - identify and group them so you can push notifications that are relevant to their actions within your app.
  • Don’t spam push notifications or interrupt game play. Get an understanding of the right frequency for your audience to fit the game.
  • Don’t just target players at all hours of the day. Choose moments when players typically play games – early morning commutes, lunch breaks, the end of the work day, and in the evening before sleeping. Take time zones into account.
  • Don’t forget to expire the notifications if they’re time-limited or associated with an event. You can also recycle the same notification ID to avoid stacking notifications for the user.
  • Don’t leave notifications up to guesswork. Experiment with A/B testing and iterate to compare how different notifications affect engagement and user behavior in your app. Go beyond measuring app opening metrics – identify and respond to user behavior.

Experiment with notifications yourself to understand what’s best for your players and your game. You can power your own notifications with Google Cloud Messaging, which is free, cross platform, reliable, and thoughtful about battery usage. Find out more about developing Notifications on Android.
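As a rough illustration of two of those tips (deep linking from the notification and reusing a notification ID so messages don't stack), here is a minimal sketch using the support library's NotificationCompat. The "com.example.game.OPEN_LEVEL" action, the "level" extra, and the message text are hypothetical names invented for this example.

import android.app.Notification;
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;
import android.support.v4.app.NotificationCompat;
import android.support.v4.app.NotificationManagerCompat;

public class GameNotifications {

  // Reusing one ID means a newer tip replaces the older one instead of stacking.
  private static final int LEVEL_TIP_NOTIFICATION_ID = 1;

  public static void showLevelTip(Context context, int levelId, String tip) {
    // Deep link to the exact place the message talks about (hypothetical action and extra).
    Intent openLevel = new Intent("com.example.game.OPEN_LEVEL")
        .setPackage(context.getPackageName())
        .putExtra("level", levelId);
    PendingIntent contentIntent = PendingIntent.getActivity(
        context, 0, openLevel, PendingIntent.FLAG_UPDATE_CURRENT);

    Notification notification = new NotificationCompat.Builder(context)
        .setSmallIcon(android.R.drawable.ic_dialog_info) // placeholder icon for the sketch
        .setContentTitle("Stuck on level " + levelId + "?")
        .setContentText(tip)
        .setAutoCancel(true)             // dismiss the notification when tapped
        .setContentIntent(contentIntent)
        .build();

    NotificationManagerCompat.from(context)
        .notify(LEVEL_TIP_NOTIFICATION_ID, notification);
  }
}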

Categories: Programming

Decision Making Means Making Inferences

Herding Cats - Glen Alleman - Mon, 08/24/2015 - 17:03

In software development, we almost always encounter situations where a decision must be made while we are uncertain about what the outcome might be, or even about the data used to make that decision.

Decision making in the presence of uncertainty is standard management practice in all business and technical domains, from business investment decisions to technical choices for project work.

Making decisions in the presence of uncertainty means making probabilistic inferences from the information available to the decision maker.

There are many techniques for decision making. Decision trees are common: the probability of an outcome is attached to each branch of the tree. If I go left at the branch - the decision - what happens? If I go right, what happens? Each branch point becomes the decision, and each of the two or more branches becomes an outcome. Probabilities are applied to the branches, and the outcomes - which may be probabilistic as well - are assessed for their benefits to those making the decision.
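As a small worked example of that branch arithmetic (the probabilities and payoffs below are invented purely for illustration), the expected value of each branch is the probability-weighted sum of its outcomes:

public class DecisionTreeExample {
  public static void main(String[] args) {
    // "Go left": 60% chance of a $100,000 benefit, 40% chance of a $20,000 loss.
    double leftExpectedValue = 0.6 * 100_000 + 0.4 * -20_000;   // $52,000
    // "Go right": 90% chance of a $40,000 benefit, 10% chance of breaking even.
    double rightExpectedValue = 0.9 * 40_000 + 0.1 * 0;         // $36,000

    System.out.printf("Expected value of going left:  $%,.0f%n", leftExpectedValue);
    System.out.printf("Expected value of going right: $%,.0f%n", rightExpectedValue);
    System.out.println(leftExpectedValue > rightExpectedValue
        ? "The left branch has the higher expected value"
        : "The right branch has the higher expected value");
  }
}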

Another approach is Monte Carlo simulation of decision trees. Tools we use for many decisions in our domain include Palisade and Crystal Ball; there are others. They work like the manual decision tree process above, but let you tune the probabilistic branching and probabilistic outcomes to model complex decision making processes.

In the project management paradigm of the projects we work on, there are networks of activities. Each activity has some dependency on prior work, and each activity produces dependencies for follow-on work. These can be modeled with Monte Carlo simulation as well.


The Schedule Risk Analysis (SRA) of the network of work activities is mandated on a monthly basis in many of the programs we work on.

In Kanban and Scrum systems, Monte Carlo simulation is a powerful tool to reveal the expected performance of the development activity. Forecasting and Simulating Software Development Projects: Effective Modeling of Kanban & Scrum Projects Using Monte Carlo Simulation, by Troy Magennis, is a good place to start for this approach.
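Here is a toy sketch of that forecasting idea, in the spirit of the Magennis approach but not taken from his book: resample story durations from past observations, sum them for the remaining backlog, repeat many times, and read completion estimates off the resulting distribution. The cycle times, backlog size, and single serial work stream are invented assumptions for this example.

import java.util.Arrays;
import java.util.Random;

public class BacklogForecast {
  public static void main(String[] args) {
    // Past-observed cycle times per story, in days (invented sample data).
    double[] observedCycleTimes = {1, 2, 2, 3, 3, 3, 4, 5, 8};
    int remainingStories = 30;   // stories left in the backlog
    int trials = 10_000;         // Monte Carlo iterations
    double[] totals = new double[trials];
    Random rng = new Random(42);

    for (int t = 0; t < trials; t++) {
      double totalDays = 0;
      for (int s = 0; s < remainingStories; s++) {
        // Resample from history: this assumes the future behaves like the past
        // and that stories are worked one after another in a single stream.
        totalDays += observedCycleTimes[rng.nextInt(observedCycleTimes.length)];
      }
      totals[t] = totalDays;
    }

    Arrays.sort(totals);
    System.out.printf("P50 completion: %.0f days%n", totals[(int) (trials * 0.50)]);
    System.out.printf("P85 completion: %.0f days%n", totals[(int) (trials * 0.85)]);
    // The spread between P50 and P85 is one way to size schedule margin
    // against the naturally occurring (aleatory) variation.
  }
}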

Each of these approaches and others are designed to provide actionable information to the decision makers. This information requires a minimum understanding of what is happening to the system being managed:

  • What are the naturally occurring variances of the work activities that we have no control over - aleatory uncertainty?
  • What are the event based probabilities of some occurrence - epistemic uncertainty?
  • What are the consequences of each outcome - decision, probabilistic event, or naturally occurring variance - on the desired behavior of the system?
  • What choices can be made that will influence these outcomes?

In many cases, the information available to make these choices is in the future. Some is in the past. But that information in the past needs careful assessment.

Past data is only useful if you can be assured the future is like the past. If not, making decisions using past data without adjusting that data for possible changes in the future takes you straight into the ditch - see The Flaw of Averages.

In order to have any credible assessment of the impact of a decision using data in the future - where will the system be going in the future? - it is mandatory to ESTIMATE.

It is simply not possible to make decisions about future outcomes in the presence of uncertainty in that future without making estimates.

Anyone who says you can is incorrect. And if they insist it can be done, ask for testable evidence of their conjecture, based on the mathematics of probabilistic systems. No credible, testable data? Then it's pure speculation. Move on.

The False Conjecture of Deciding in Presence of Uncertainty without Estimates

  • Slicing the work into similar sized chunks, performing work on those chunks and using that information to produce information about the future makes the huge assumption the future is like the past.
  2. Recording past performance, making nice plots, and running static analysis for mean, mode, standard deviation, and variance is naive at best. The time series variances are rolled up, hiding the latent variances that will emerge in the future. Time series analysis (ARIMA) is required to reveal the possible values in the dataset from the past that will emerge in the future, assuming the system under observation remains the same.

Time series analysis is a fundamental tool for forecasting future outcomes from past data. Weather forecasting - plus complex compressible fluid flow models - is based on time series analysis. Stock market forecasting uses time series analysis. Cost and schedule modeling uses time series analysis. Adaptive process control algorithms, like the speed control and fuel management in your modern car, use time series analysis.

George E. P. Box, one of the originators of time series analysis and author of the seminal book Time Series Analysis, Forecasting and Control, is often seriously misquoted when he said All Models are Wrong, Some are Useful. Anyone misusing that quote to try to convince you that you can't model the future didn't (or can't) do the math in Box's book and likely got a D in the high school probability and statistics class.

So do the math, read the proper books, gather past data, model the future with dependency networks and Kanban and Scrum backlogs, measure current production, forecast future production based on Monte Carlo models - and don't believe for a moment that you can make decisions about future outcomes in the presence of uncertainties without estimating that future.

Related articles: Making Conjectures Without Testable Outcomes; Why Guessing is not Estimating and Estimating is not Guessing; IT Risk Management; Architecture-Centered ERP Systems in the Manufacturing Domain
Categories: Project Management

Ask HighScalability: Choose an Async App Server or Multiple Blocking Servers?

Jonathan Willis, software developer by day and superhero by night, asked an interesting question via Twitter on StackOverflow

tl;dr Many Rails apps or one Vertx/Play! app?


I've been having discussions with other members of my team on the pros and cons of using an async app server such as the Play! Framework (built on Netty) versus spinning up multiple instances of a Rails app server. I know that Netty is asynchronous/non-blocking, meaning during a database query, network request, or something similar an async call will allow the event loop thread to switch from the blocked request to another request ready to be processed/served. This will keep the CPUs busy instead of blocking and waiting.

I'm arguing in favor of using something such as the Play! Framework or Vertx.io, something that is non-blocking... Scalable. My team members, on the other hand, are saying that you can get the same benefit by using multiple instances of a Rails app, which out of the box only comes with one thread and doesn't have true concurrency as do apps on the JVM. They are saying just use enough app instances to match the performance of one Play! application (or however many Play! apps we use), and when a Rails app blocks the OS will switch processes to a different Rails app. In the end, they are saying that the CPUs will be doing the same amount of work and we will get the same performance.

What do you think? The marketplace has seemingly moved, in the form of node.js, Golang, Akka, and even Java, to the async server model. Does that mean it's the only right way?

Here's my attempt at a response:

Categories: Architecture

My Agile 2015 Roundup

Agile 2015 was the week of Aug 3-7 this year. It was a great week. Here are the links to my interviews and talks.

Interview with Dave Prior. We spoke about agile programs, continuous planning, and how you might use certifications. I made a little joke about measurement.

Interview with Paul DuPuy of SolutionsIQ. We also spoke about agile programs. Paul had some interesting questions, one of which I was not prepared for. That’s okay. I answered it anyway.

The slides from Scaling Agile Projects to Programs: Networks of Autonomy, Collaboration and Exploration. At some point, the Agile Alliance will post the video of this on their site.

The slides from my workshop Agile Hiring: It’s a Team Sport. Because it was a workshop, there are built-in activities. You can try these where you work.

My pecha kucha (it was part of the lightning talks) of Living an Agile Life.

I hope you enjoy these. I had a great time at the conference.

 

Categories: Project Management

Managing in the Presence of Uncertainty

Herding Cats - Glen Alleman - Mon, 08/24/2015 - 05:43

A Tweet caught my eye this weekend

(Tweet screenshot omitted.)

Before moving to risk, let's look at what Agile is:

Agile development is a phrase used in software development to describe methodologies for incremental software development. Agile development is an alternative to traditional project management where emphasis is placed on empowering people to collaborate and make team decisions in addition to continuous planning, continuous testing and continuous integration.

Next, the notion that Agile is actually risk management. This is widely misunderstood. Agile provides raw information for risk management, but risk management has little to do with which software development method is being used. The continuous nature of Agile provides more frequent feedback on the state of the project, and that is advantageous to risk management. Since Agile mandates this feedback on fine-grained boundaries - weeks, not months - the actions in the risk management paradigm below are also fine-grained.

Where Does Risk Come From?

All risk comes from uncertainty. Uncertainty comes in two types: (1) aleatory (naturally occurring variability in the underlying process, and therefore irreducible) and (2) epistemic (arising from incomplete knowledge, and therefore reducible).

Risk results from uncertainty. To deal with the risk from aleatory uncertainty we can only have margin, since the underlying variability is irreducible. This means schedule margin, cost margin, and product performance margin. This type of risk is just part of the world we live in. Natural variance in the work performed while developing products needs margin. Natural variance in a server's throughput performance needs margin.

We can deal directly with the risk from Epistemic uncertainty by buying down the uncertainty. This is done with experiments, trials, incremental development and other risk reduction  activities that lower the uncertainty in the processes.
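As a minimal sketch of the margin idea (the triangular duration model and all numbers are assumptions for illustration, not from the post), schedule margin can be sized as the gap between a protective percentile and the planned duration:

```java
import java.util.Arrays;
import java.util.Random;

// Sketch: sample a task's duration from a triangular(min, mostLikely, max)
// distribution representing irreducible (aleatory) variance, then size the
// schedule margin as the P80 duration minus the planned (most likely) value.
public class ScheduleMargin {
    static double triangular(Random rng, double min, double mode, double max) {
        double u = rng.nextDouble();
        double cut = (mode - min) / (max - min);
        return u < cut
                ? min + Math.sqrt(u * (max - min) * (mode - min))
                : max - Math.sqrt((1 - u) * (max - min) * (max - mode));
    }

    public static void main(String[] args) {
        Random rng = new Random(7);
        double plannedDays = 20.0;                 // most likely duration
        double[] samples = new double[10_000];
        for (int i = 0; i < samples.length; i++) {
            samples[i] = triangular(rng, 15.0, plannedDays, 35.0);  // hypothetical spread
        }
        Arrays.sort(samples);
        double p80 = samples[(int) (samples.length * 0.80)];
        System.out.printf("P80 duration: %.1f days, schedule margin: %.1f days%n",
                p80, p80 - plannedDays);
    }
}
```

Epistemic uncertainty is handled differently: spend on the experiment or prototype and the distribution itself narrows, so less margin is needed.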

By the way, many use the notion that risk can be both positive and negative. This is not true; it's a naive understanding of the mathematics of risk processes. PMI does this. It is not allowed in our domain(s).

Agile and Incremental Delivery

There is a popular myth in the agile community that it has a lock on the notion of incremental delivery. This is again not true. Many product development lifecycles use incremental and iterative processes to produce products: spiral development, the Integrated Master Plan/Integrated Master Schedule, and Incremental Commitment, all applicable to Software Intensive Systems (SIS) and System of Systems (SoS) domains, like enterprise ERP.

Managing in the Presence of Uncertainty and the Resulting Risk

Here's how we manage SIS and SoS in the presence of uncertainty.

The methods used to collect requirements, turn those requirements into products, and ship those products are of little concern to the risk management process. They are of great concern to those engineering the products, but the risk management process sits above that activity. You can start with the SEI Continuous Risk Management Guidebook for a framework for managing software development in the presence of risk. The management of risk in agile is very close to the management of risk in any other product or service development process. For any risk management process to work, it needs these process areas as a minimum. So when you hear that agile manages risk, confirm that it is actually taking place by having the person making that claim show clearly, in a process description, how each of the process areas below is implemented. Without that connection it just ain't true.

(Diagram of the risk management process areas omitted.)
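As one way to make that demand concrete, here is a minimal, hypothetical sketch (the field names and the identify/analyze/plan/track/control vocabulary follow common continuous risk management usage; nothing here is from the post) of what a risk register entry that actually implements those process areas might record:

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a risk register entry that forces each process
// area (identify, analyze, plan, track, control) to leave a trace.
// If a team claims "agile manages risk", every field should be fillable.
public class RiskEntry {
    enum Handling { MITIGATE, AVOID, TRANSFER, ACCEPT_WITH_MARGIN }

    final String statement;          // Identify: condition -> consequence
    double probability;              // Analyze: chance of occurrence (0..1)
    double costImpact;               // Analyze: impact if it occurs, in currency
    Handling plan;                   // Plan: chosen handling strategy
    final List<String> trackingLog = new ArrayList<>();  // Track: status each cadence

    RiskEntry(String statement, double probability, double costImpact, Handling plan) {
        this.statement = statement;
        this.probability = probability;
        this.costImpact = costImpact;
        this.plan = plan;
    }

    double exposure() {              // Analyze: expected value of the loss
        return probability * costImpact;
    }

    void trackAtSprintBoundary(LocalDate date, double newProbability) {
        probability = newProbability;  // Control: adjust as buy-down work lands
        trackingLog.add(date + " exposure=" + exposure());
    }
}
```

On the fine-grained boundaries mentioned above, the sprint-boundary update is where agile's frequent feedback actually feeds the risk process.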
And here's how to put these processes together to ensure risk is being managed to increase the probability of success of your project.

(Diagram of the assembled risk management process omitted.)

And here is how to put these process areas to work on a project.

(Project-level diagram omitted.)

For those interested in managing projects in the presence of uncertainty and the risk that uncertainty creates, independent of any development methodology or development framework, here's a collection from the office library, in no particular order:

And a short white paper on Risk Management in Enterprise IT

Information Technology Risk Management from Glen Alleman

Related articles:
IT Risk Management
Making Conjectures Without Testable Outcomes
Why Guessing is not Estimating and Estimating is not Guessing
Categories: Project Management

Every Employee is a Digital Employee

“The questions that we must ask ourselves, and that our historians and our children will ask of us, are these: How will what we create compare with what we inherited? Will we add to our tradition or will we subtract from it? Will we enrich it or will we deplete it?”
― Leon Wieseltier

Digital transformation is all around us.

And we are all digital employees according to Gartner.

In the article, Gartner Says Every Employee Is a Digital Employee, Gartner says that the IT function no longer holds a monopoly on IT.

A Greater Degree of Digital Dexterity

According to Gartner, employees are developing increasing digital dexterity, from the devices and apps they use to their participation in sharing economies.

Via Gartner Says Every Employee Is a Digital Employee:

"'Today's employees possess a greater degree of digital dexterity,' said Matt Cain, research vice president at Gartner. 'They operate their own wireless networks at home, attach and manage various devices, and use apps and Web services in almost every facet of their personal lives. They participate in sharing economies for transport, lodging and more.'"

Workers are Streamlining Their Work Life

More employees are using technology to simplify, streamline, and scale their work.

Via Gartner Says Every Employee Is a Digital Employee:

"This results in unprecedented numbers of workers who enjoy using technology and recognize the relevance of digitalization to a wide range of business models. They also routinely apply their own technology and technological knowledge to streamline their work life."

3 Ways to Exploit Digital Dexterity

According to Gartner, there are 3 Ways the IT organization should exploit employees' digital dexterity:

  1. Implement a digital workplace strategy
  2. Embrace shadow IT
  3. Use a bimodal approach
1. Implement a Digital Workplace Strategy

While it’s happening organically, IT can also help shape the digital workplace experience.  Implement a strategy that helps workers use computing resources in a more friction-free way and that plays better with their pains, needs, and desired outcomes.

Via Gartner Says Every Employee Is a Digital Employee:

“Making computing resources more accessible in ways that match employees' preferences will foster engagement by providing feelings of empowerment and ownership. The digital workplace strategy should therefore complement HR initiatives by addressing and improving factors such as workplace culture, autonomous decision making, work-life balance, recognition of contributions and personal growth opportunities.”

2. Embrace shadow IT

Treat shadow IT as a first-class citizen.  IT should partner with the business to help the business realize its potential, and to help workers make the most of the available IT resources.

Via Gartner Says Every Employee Is a Digital Employee:

“Rather than try to fight the tide, the IT organization should develop a framework that outlines when it is appropriate for business units and individuals to use their own technology solutions and when IT should take the lead. IT should position itself as a business partner and consultant that does not control all technology decisions in the business.”

3. Use a bimodal approach

Traditional IT is slow.   It’s heavy in governance, standards, and procedures.   It addresses risk by reducing flexibility.   Meanwhile, the world is changing fast.  Business needs to keep up.  Business needs fast IT. 

So what’s the solution?

Bimodal IT.  Bimodal IT separates the fast demands of digital business from the slow/risk-averse methods of traditional IT.

Via Gartner Says Every Employee Is a Digital Employee:

“Bimodal IT separates the risk-averse and ‘slow’ methods of traditional IT from the fast-paced demands of digital business, which is underpinned by the digital workplace. This dual mode of operation is essential to satisfy the ever-increasing demands of digitally savvy business units and employees, while ensuring that critical IT infrastructure and services remain stable and uncompromised.”

Everyone has technology at their fingertips.  Every worker has the chance to re-imagine their work in a Mobile-First, Cloud-First world. 

With infinite compute, infinite capacity, global reach, and real-time insights available to you, how could you evolve your job?

You can evolve your digital work life right under your feet.

You Might Also Like

Empower Every Person on the Planet to Achieve More

Satya Nadella on a Mobile-First, Cloud-First World

We Help Our Customers Transform

Categories: Architecture, Programming

What Life is Like with Agile Results

“Courage doesn't always roar. Sometimes courage is the little voice at the end of the day that says I'll try again tomorrow.” -- Mary Anne Radmacher

Imagine if you could wake up productive, where each day is a fresh start.  As you take in your morning breath, you notice your mind is calm and clear.

You feel strong and well rested.

Before you start your day, you picture in your mind three simple scenes of the day ahead:

In the morning, you see yourself complete a draft you’ve been working on.

In the afternoon, you see yourself land your idea and win over your peers in a key meeting.

In the evening, you see yourself enjoying some quiet time as you sit down and explore your latest adventures in learning.

With an exciting day ahead, and a chance to rise and shine, you feel the day gently pull you forward with anticipation. 

You know you’ll be tested, and you know some things won’t work out as planned.   But you also know that you will learn and improve from every setback.  You know that each challenge you face will be a leadership moment or a learning opportunity.  Your challenges make you stronger.

And you also know that you will be spending as much time in your strengths as you can, and that helps keep you strong, all day long. 

You motivate yourself from the inside out by focusing on your vision for today and your values.  You value achievement.  You value learning.  You value collaboration.  You value excellence.  You value empowerment.   And you know that throughout the day, you will have every chance to apply your skills to do more, to achieve more, and to be more. 

Each task, or each challenge, is also a chance to learn more.  From yourself, and from everyone all around you.  And this is how you never stop learning.

You may not like some of the tasks before you, but you like the chance to master your craft.  And you enjoy the learning.  And you love how you get better.  With each task on your To-Do list for today, you experiment and explore ways to do things better, faster, and easier.

Like a productive artist, you find ways to add unique value.   You add your personal twist to everything you do.  Your twist comes from your unique experience, seeing what others can’t see from your unique vantage point, and applying your unique strengths.

And that’s how you do more art.  Your art.  And as you do your art, you feel yourself come alive.  You feel your soul sing, as you operate at a higher level.  As you find your flow and realize your potential, your inner-wisdom winks in an approving way.  Like a garden in full bloom on a warm Summer’s day, you are living your arête.

As your work day comes to an end, you pause to reflect on your three achievements, your three wins, for the day.  You appreciate the way you leaned in on the tough stuff.  You surprised yourself in how you handled some of your most frustrating moments.  And you learned a new way to do your most challenging task.  You take note of the favorite parts of your day, and your attitude of gratitude fills you with a sense of accomplishment, and a sense of fulfillment.

Fresh and ready for anything, you head for home.

Try 30 Days of Getting Results.  It’s free. Surprise yourself with what you’re capable of.

Categories: Architecture, Programming

SPaMCAST 356 - Steve Turner, Time and Inbox Management

Software Process and Measurement Cast - Sun, 08/23/2015 - 22:00

The Software Process and Measurement Cast 356 features our interview with Steve Turner.  Steve and I talked about time management and email inbox tyranny! If you let your inbox and interruptions manage your day, you will deliver less value than you should and feel far less in control than you could! 

Steve’s Bio:

With a background in technology and nearly 30 years of business expertise, Steve has spent the last eight years sharing technology and time management tools, techniques and tips with thousands of professionals across the country.  His speaking, training and coaching have helped many organizations increase the productivity of their employees. Steve has significant experience working with independent sales and marketing agencies. His proven ability to leverage technology (including desktops, laptops and mobile devices) is of great value to anyone in need of greater sales and/or productivity results. TurnerTime℠ is a set of time management tools, techniques and tips for effectively managing e-mails, tasks and projects.

Contact Information:
Email: steve@getturnertime.com
Web: www.GetTurnerTime.com

Call to Action!

For the remainder of August and September, let’s try something a little different.  Forget about iTunes reviews and tell a friend or a coworker about the Software Process and Measurement Cast. Word of mouth will help grow the audience for the podcast.  After all, the SPaMCAST provides you with value, so why keep it to yourself?!

Re-Read Saturday News

Remember that the Re-Read Saturday of The Mythical Man-Month is in full swing.  This week we tackle the essay titled “Why Did the Tower of Babel Fail?”  Check out the new installment at the Software Process and Measurement Blog.

 

Upcoming Events

Software Quality and Test Management 
September 13 – 18, 2015
San Diego, California
http://qualitymanagementconference.com/

I will be speaking on the impact of cognitive biases on teams.  Let me know if you are attending! If you are still deciding on attending let me know because I have a discount code.

Agile Development Conference East
November 8-13, 2015
Orlando, Florida
http://adceast.techwell.com/

I will be speaking on November 12th on the topic of Agile Risk. Let me know if you are going and we will have a SPaMCAST Meetup.

Next SPaMCAST

The next Software Process and Measurement Cast will feature our essay on mind mapping.  To quote Tony Buzan, Mind Mapping is a technique for radiant thinking.  I view it as a technique to stimulate creative thinking and organize ideas; I think that is what Tony meant by radiant thinking. Mind Mapping is one of my favorite tools.

We will also feature Steve Tendon’s column discussing the TameFlow methodology and his great new book, Hyper-Productive Knowledge Work Performance.

Anchoring the cast will be Gene Hughson returning with an entry from his Form Follows Function blog.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you nor your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.

Categories: Process Management