
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Mindset: The New Psychology of Success: Re-Read Week 4, Chapter 5: Business: Mindset and Leadership

Mindset Book Cover

Today we are led into Chapter 5 of Carol Dweck’s Mindset: The New Psychology of Success (buy your copy and read along). In Chapter 5, we explore the impact of mindsets in the business environment. The impact of mindsets can be seen in positive and negative business outcomes. Dweck begins this section with a focus on the negative impact of the fixed mindset at the C level in business.

One of the primary examples used in Chapter 5 is Enron. Enron (watch the movie Enron: The Smartest Guys in the Room for background) represents one of the most egregious modern examples of excess, corruption and the impact of leadership. Some, if not a majority, of the poor behavior described can be ascribed to the fixed mindset, both individually and collectively. The fixed-mindset culture of an organization creates an environment in which employees are forced to look and act extraordinarily talented rather than focusing on outcomes. In organizations with a fixed mindset, people protect their egos and how they are perceived rather than learning from any potential bump in the road. A quote that illustrates Jeffrey Skilling’s fixed mindset drives the point home: “my genius not only defines and validates me. It defines and validates the company. It is what creates value. My genius is profit.”  In Skilling’s mind, he was more important than the business. Dweck also uses the example of Lee Iacocca as the big fish that would only tolerate helper fish around him. For example, Iacocca used saving Chrysler to restore his ego but then was not able to stop burnishing it, which nearly destroyed Chrysler when things got difficult a second time around. None of those around Iacocca were able to shift his focus back to the organization rather than his reputation, and the board finally had to force him out. Iacocca, like Skilling at Enron, is an example of someone who makes decisions based on their own good rather than the good of the organization.

Jim Collins, author of Good to Great, found that organizations that continuously tried to improve performance through learning and were self-effacing about their progress prospered. These attributes are a reflection of a growth mindset at the organizational level. One of the reasons organizations with a growth mindset prosper is that leaders with a growth mindset usually surround themselves with great teams rather than needing to be the big fish with helpers.

Another negative outcome of a fixed mindset is brutal managers. Brutal managers (I chose not to use the word leader) believe that the needs or feelings of others can be ignored. A fixed mindset allows brutal managers to dehumanize those they manage.

Another danger for organizations (or teams) affected by a fixed mindset is groupthink, an affliction in which no one will criticize or provide outside information to the leader. Organizations suffering from this form of fixed-mindset groupthink need to find some mechanism to bring in other information so that they don’t fall prey to thinking within a bubble.

On the other hand, growth-oriented leaders pursue a journey of learning. Dweck’s examples of growth-mindset-oriented leaders include:

  •    Jack Welch, GE, was devoted to the concepts of team and growth. Welch is not my favorite example because practices like cutting the bottom 10% yield negative behaviors; however, this does not reflect a mindset issue.
  •    Lou Gerstner, IBM, opened lines of communication, attacked elitism, accepted that everyone has a lot to offer and focused on customers.
  •    Anne Mulcahy, Xerox, reshaped the company to believe in growth even though it required cutting and suffering.  Mulcahy worried about the morale and development of her people. 

In each case, the growth mindset CEO helped to lead their companies from near failures to highly innovative growth phases.

The chapter concludes with a set of questions and activities that help grow your mind.  For instance: Reflect on whether your workplace promotes groupthink while eschewing information that might challenge the status quo.

Organizational Transformation:  The prevailing organizational mindset, generally set by senior leaders, will directly affect how and why change is introduced into an organization. When helping to introduce change into an organization led by someone with a fixed mindset, the change has to be perceived to burnish the leader’s ego and has to fit the leader’s world view. This does not sound like a huge amount of fun; however, knowledge of mindsets is an effective tool to help introduce change or to challenge groupthink.

Team Coaching:  The most valuable takeaway for a team-level coach is the added ability to recognize mindsets based on the impact they have on the team. In addition, helping teams understand the mindsets around them is also useful for helping the team understand the possible considerations and consequences of changes they attempt.

Previous Entries of the re-read of Mindset:

 


Categories: Process Management

Quote of the Day

Herding Cats - Glen Alleman - Fri, 02/24/2017 - 21:50

Simplicity is key, because it is tied up with being fundamental. - Harvey Friedman

Now the question really becomes: how simple is simple enough? When we hear a phrase like this and we don't hear the units of measure of how simple, or how to reach simple, there is only a platitude - no actionable outcomes. How can we measure simplicity? How can we measure the coupling and cohesion of all the parts that make up a simple design, process, or system - to confirm the result is the simplest? How can we learn to ignore the platitudes of those claiming that simple systems are the best when they provide no units of measure for the system, for simple, or for best?

So remember

Explanations exist; they have existed for all time; there is always a well-known solution to every human problem — neat, plausible, and wrong. - H.L. Mencken

Categories: Project Management

Keeping up to Date with the Support Library

Android Developers Blog - Fri, 02/24/2017 - 19:25
Posted by Agustin Fonts, Product Manager, Android Support Library

It's important to keep current when you're dealing with technology. That’s why we're constantly working to improve the quality of our software, particularly libraries that are linked into your apps, such as the Support Library.  The Support Library is a suite of libraries that provides backward compatibility along with additional features across many Android releases.

We have just released version 25.2 of the Support Library.  If you're making use of the android.support.v7.media.MediaRouter class in revision 25.1.1 or 25.1.0, we strongly recommend that you update due to a known issue.  If you haven't updated recently, you've missed out on some great bug fixes such as these:

25.2:
  • Corrected a severe mediarouter issue in which using an A2DP Bluetooth device and media routing APIs could cause the device to become unresponsive, requiring a reboot
  • Showing a slide presentation with screen mirroring no longer causes the device to disconnect from Wi-Fi
  • Media button now properly handles media apps that did not register themselves with setMediaButtonReceiver()
  • TextInputLayout correctly overlays hint and text if text is set by XML (AOSP issue 230171)
  • Corrected a memory leak in MediaControllerCompat (AOSP issue 231441)
  • RecyclerView no longer crashes when recycling view holders (AOSP issue 225762)
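Updating is just a dependency-version bump in your module's build.gradle. A minimal sketch (the module names are examples; bump whichever Support Library artifacts your app already uses, and keep them all on the same revision):

dependencies {
    // Keep all Support Library artifacts on one revision to avoid version-skew bugs
    compile 'com.android.support:appcompat-v7:25.2.0'
    compile 'com.android.support:mediarouter-v7:25.2.0'
}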
Reporting (and fixing) a Bug
The Support Library is developed by the Android Framework and Developer Relations teams, and, just like the Android platform, you can file bugs using the AOSP issue tracker, or submit fixes to our Git repository. Your feedback is critical in helping us to make the Support Library the most productive environment to use for developing Android applications.
Categories: Programming

Stuff The Internet Says On Scalability For February 24th, 2017

Hey, it's HighScalability time:

 

Great example of Latency As A Pseudo-Permanent Network Partition. A mudslide effectively cleaved Santa Cruz from the North Bay by slowing traffic to a crawl.
If you like this sort of Stuff then please support me on Patreon.
  • 40 TFLOPS: on Lambda; 7: new habitable planets with good beer; dozens: balloons needed in Loon network; 500 TB/sec: rate at which DNA is copied in human body; 1/2: web is encrypted; 34: regions in Azure; $8k: cost of Tesla self-driving hardware; 99.95%: DMCA takedowns are bot BS; 300 nanometers: new microscope; 7%: AMP traffic to publishers; 

  • Quotable Quotes:
    • @jasonlk: Elon Musk: Self-Driving Car Revolution Will Leave 15% of World Population Without Jobs
    • Near death Archimedes: Stand away, fellow, from my diagram!
    • rumpelstilskin21: Angular and React make for popular headlines on reddit but unless you are working for a major, large web site where such things might be deemed useful by management (and no one else) then quit trying to get educated by the amateurs on reddit.
    • StorageMojo: There is a new paradigm about to hit the industry, which will eviscerate large portions of the current storage ecosystem. Like other major shifts, it is powered by a class of users who are poorly served by existing products and technologies. But if our digital civilization is to survive and prosper, it has to happen. And it will, like it or not.
    • ThatMightBePaul: Worst case scenario: you try Go, don't like it, and you head back to Node more confident that it fits you better. That's still a pretty positive outcome, imo. So, invest the time in Go, and then see which feels right :)
    • Russ: it is the job of the application to properly figure out the network’s limits and try to live within them.
    • World's Second-Best Go Player: After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong. I would go as far as to say not a single human has touched the edge of the truth of Go.
    • @mjpt777: After fixing a few more false sharing issues we shaved another ~350ns of Aeron's RTT between machines.
    • @thomasfuchs: 1997: Let’s make a website! *fires up vi* 2007: Let’s make a website! *downloads jQuery* *fires up vi* 2017: Let’s make a website! [very long list of tech]
    • Basho: Do not follow the ancient masters, seek what they sought.
    • hellofunk: If many years ago, someone told me that a humongous company named Alphabet was thinking about deploying balloons all over the world, I'd have told you a thing or two about having a charming imagination. 
    • Russ: Sure, the Internet is broken. But anything we invent will, ultimately, be broken in some way or another. Sure the IETF is broken, and so is open source, and so is… whatever we might invent next. We don’t need a new Internet, we need a little less ego, a lot less mud slinging, and a lot more communication. 
    • @sAbakumoff: Analyzed the sentiment of 80000 Github Commit Comments, it seems that Ruby devs tend to be pretty positive, but c++ are angriest ones!
    • Michael Sawyer: The YouTubers' common enemy is YouTube
    • @jannis_r: "Good size for a microservice: if it fits into one engineers head" @adrianco #AWSTechBreakfast
    • packagecloud: setting [TZ] environment variable can save thousands (or in some cases, tens of thousands) of unnecessary system calls that can be generated by glibc over small periods of time. 
    • @istanboolean: "Hardware has stopped getting faster. Software has not stopped getting slower." @rob_pike
    • Greg Meddles: You're out of memory on some particular Amazon instance, so you bump up to the next biggest in size. That is always the naive solution. Whatever you're doing, you'll usually end up doing more of it. Eventually, you'll end up throwing good money after bad.
    • @viktorklang: Replace the use of sequential, concurrent, and parallel with dependent, coordinated, and independent? Thoughts?
    • Coast Guard Vice Adm. Marshall Lytle: Cyberwarfare is like a soccer game with all the fans on the field with you and no one is wearing uniforms
    • CockroachDB: If you’re serious about building a company around open source software, you must walk a narrow path: introduce paid features too soon, and risk curtailing adoption. Introduce paid features too late, and risk encouraging economic free riders. Stray too far in either direction, and your efforts will ultimately continue only as unpaid open source contribution
    • Veratyr: Deployment [of k8s] is just so much harder than it should be. Fundamentally (I discovered far later on in the process), Kubernetes is comprised of roughly the following services: kube-apiserver, kubelet, kube-proxy, kube-scheduler, kube-controller-manager. The other dependencies are: A CA infrastructure for certificate based authentication, etcd, a container runtime (rkt or Docker) and CNI.
    • @jbeda: I want to go on record: the amount of yaml required to do anything in k8s is a tragedy. Something we need to solve. 

  • What do you get for $5? Quite a lot. $5 Showdown: Linode vs. DigitalOcean vs. Amazon Lightsail vs. Vultr: Linode’s new plan is not only offering the consistently better performance...Linode is still a bit behind the curve when it comes to things like block storage volumes, default SSH keys and yeah, their UI.

  • Another wonderful engineering post from Riot Games. Under the hood of the League Client's Hextech UI: Any given build of the League client is expressed as a list of units called plugins... Back-end plugins that deal purely with data are written as C++ REST microservices...front-end plugins that deal with presentation are written as Javascript client applications and run inside Chromium Embedded Framework...The League client update really is a desktop deployment of an entire constellation of microservices...APIs are thoughtfully designed, any arbitrary combination of features can run cooperatively...In the League client, the common pattern is for dependencies to flow upwards...a WebSocket that allows the front-end plugins to observe back-end plugins for changes...To make implementation of complex video-based elements simpler, we created a state machine library based on Web Components...League client is patched out to players’ local drives, it doesn’t have the same immediate bandwidth constraints...we provide a number of purpose-specific audio channels - UI SFX, Notifications, Music, Voiceover, etc. - through a plugin dedicated to managing audio...We use straight-up native Custom Elements with heavy usage of Shadow DOM.

  • Does insurance cover this? The first SHA1 collision.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

The Pros for Big Bang Change Implementation

Vinegar Bottles

Not Sweet But Useful!

A big bang adoption of a process or system is an instant changeover, a “one-and-done” type of approach in which everyone associated with a new system or process switches over en masse at a specific point in time. There are positives and negatives to big bang approaches. We begin with the positives.

Patrick Holden, Project Portfolio Director – Software Development at SITA, struck a fairly common theme when asked about his preferences between the big bang and incremental approaches.  

While I favour incremental improvement, sometimes we really want to get on with the new, to change the mail system, phone, house, car or even your job you will need to prepare to different extents but you make the switch for one and all, it’s a Big Bang.  

The choice depends on divisibility, scope, and urgency.

The positives:

Big Bangs fit some types of changes.  Not all changes are easily divisible into increments, which leads to an all-or-nothing implementation. As I have noted before, most of the bank mergers I was involved in were big bangs. On a specific day, all of the branches of one bank would close, and overnight (more typically over a weekend) lots of people would change signs on buildings, customer files, and software systems. Perhaps it was a failure of imagination, but due to regulations and the need for notifications, it was easier for everyone to change at once. Organizational transformations rarely have the same external drivers, regulations, and notification rules; however, because of interactions between teams, it might be easier not to take an incremental approach. Adoptions of large-scale Agile methods and frameworks such as SAFe are often approached in a big bang manner. In SAFe, many teams and practitioners are indoctrinated, trained, and transformed together, which by definition is a big bang approach to implementing scaled Agile.

Big Bangs generate a too big to fail focus.  Large, expensive, and risky changes create their own atmosphere.  These types of implementations garner full management sponsorship and attention because they are too big to fail.  Christine Green, IFPUG Board Member, suggests, “it is harder to lose focus on big bang approaches when organizational leadership changes.” Big bangs can be used to address the risk of a loss of focus in some cases.  An example of an organization manufacturing a too big to fail scenario can be found in the often-told story of the early years of Fedex (Federal Express at the time).  It was said that the founder consciously borrowed money from smaller regional banks around Memphis so that he could use the impact of the risk of default to negotiate better service and rates.  Big bang changes are often too big to fail and therefore people ensure they don’t.

Management Expectations.   In many circumstances, management has little patience for the payback of continuous process improvement. As I was framing this theme, Christopher Hurney stated, “I’ve seen leadership expect Big Bang results in Agile adoptions, which is kind of ironic no? Considering that one of the precepts of Agility is an iterative approach.” The expectation is often generated by a sense of urgency.  In this situation, a specific issue needs to be addressed and even though an incremental (or even iterative) approach would deliver bits and pieces sooner, the organization perceives value only when all of the benefits are delivered.  The Healthcare.gov marketplace was delivered as a big bang.

Even if the big bang approach to process improvement feels wrong, however, there are reasons for leveraging the approach. Chris Hurney stated that decision makers “tend to feel as though they’ve reached a point where a process has become unsustainable” which makes the idea of implementing a change all at once worth the risk even though nearly everyone would prefer an incremental or continuous approach to change.

Previous Entries in the Big Bang, Incrementalism, or Somewhere In Between Theme

1. Big Bang, Incrementalism, or Somewhere In Between

Next:  Big Bang, The Cons!

 


Categories: Process Management


Dunning-Kruger and Modern Software Project Management

Herding Cats - Glen Alleman - Thu, 02/23/2017 - 20:33

In "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments," David Dunning and Justin Kruger state that the less skilled or competent you are, the more confident you are that you're actually very good at what you do. Their central finding is not only do such people reach the erroneous conclusion and make unfortunate choices, but their incompetence robs them of the ability to realize it.

This, of course, is true of everyone in some way. We think we have a great sense of humor when we don't. We rate ourselves higher than others in a variety of skills.

Lake Wobegon - where all the children are above average

Turns out though that less competent people overestimate themselves more than others. 

The reason is the absence of a quality called metacognition, the ability to step back and see our own cognitive process in perspective. Good singers know when they've hit a sour note, good directors know when a scene in a play isn't working, and intelligently self-aware people know when they're out of their depth. 

When I hear unsubstantiated claims, usually from sole proprietors working on de minimis projects, I think of Dunning-Kruger. I've vowed to ignore them and move on. If something works in their domain, then any advice they provide is usually limited to that domain and their experiences in it. But the continued chants that certain processes and methods fix dysfunction continue, getting louder when their proponents are asked to show the evidence that their idea has a basis in principle. This is the confirmation bias behind their misunderstanding of the principles on which they are making their claims.

 

 

Related articles:

  • Architecture-Centered ERP Systems in the Manufacturing Domain
  • IT Risk Management
  • Why Guessing is not Estimating and Estimating is not Guessing
  • Making Conjectures Without Testable Outcomes
  • Deadlines Always Matter
Categories: Project Management

Adding text and shapes with the Google Slides API

Google Code Blog - Thu, 02/23/2017 - 20:26
Originally shared on the G Suite Developers Blog

Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite
When the Google Slides team launched their very first API last November, it immediately opened up a whole new class of applications. These applications have the ability to interact with the Slides service, so you can perform operations on presentations programmatically. Since its launch, we've published several videos to help you realize some of those possibilities.
Today, we're releasing the latest Slides API tutorial in our video series. This one goes back to basics a bit: adding text to presentations. But we also discuss shapes—not only adding shapes to slides, but also adding text within shapes. Most importantly, we cover one best practice when using the API: create your own object IDs. By doing this, developers can execute more requests while minimizing API calls.



Developers use insertText requests to tell the API to add text to slides. This is true whether you're adding text to a textbox, a shape or table cell. Similar to the Google Sheets API, all requests are made as JSON payloads sent to the API's batchUpdate() method. Here's the JavaScript for inserting text in some object (objectID) on a slide:
{
  "insertText": {
    "objectId": objectID,
    "text": "Hello World!\n"
  }
}
Adding shapes is a bit more challenging, as you can see from its sample JSON structure:

{
  "createShape": {
    "shapeType": "SMILEY_FACE",
    "elementProperties": {
      "pageObjectId": slideID,
      "size": {
        "height": {
          "magnitude": 3000000,
          "unit": "EMU"
        },
        "width": {
          "magnitude": 3000000,
          "unit": "EMU"
        }
      },
      "transform": {
        "unit": "EMU",
        "scaleX": 1.3449,
        "scaleY": 1.3031,
        "translateX": 4671925,
        "translateY": 450150
      }
    }
  }
}
Placing or manipulating shapes or images on slides requires more information so the cloud service can properly render these objects. Be aware that it does involve some math, as you can see from the Page Elements page in the docs as well as the Transforms concept guide. In the video, I drop a few hints and good practices so you don't have to start from scratch.

Regardless of how complex your requests are, if you have at least one, say in an array named requests, you'd make an API call with the aforementioned batchUpdate() method, which in Python looks like this (assuming SLIDES is the service endpoint and deckID is the presentation ID):

SLIDES.presentations().batchUpdate(presentationId=deckID,
    body={'requests': requests}).execute()
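Putting the pieces together with the object-ID best practice mentioned earlier, here is a minimal sketch (assuming, as above, that SLIDES is the service endpoint, deckID is a presentation ID, and slideID is a page ID; the ID prefix is arbitrary) that creates a text box and fills it in a single batchUpdate() call:

import uuid

# Supplying our own objectId lets us reference the new shape immediately,
# so both requests fit in one API round trip.
shape_id = 'myTextBox_%s' % uuid.uuid4().hex

requests = [
    {'createShape': {
        'objectId': shape_id,
        'shapeType': 'TEXT_BOX',
        'elementProperties': {
            'pageObjectId': slideID,
            'size': {'height': {'magnitude': 3000000, 'unit': 'EMU'},
                     'width':  {'magnitude': 3000000, 'unit': 'EMU'}},
            'transform': {'scaleX': 1, 'scaleY': 1,
                          'translateX': 2000000, 'translateY': 2000000,
                          'unit': 'EMU'},
        },
    }},
    {'insertText': {'objectId': shape_id, 'text': 'Hello World!\n'}},
]

SLIDES.presentations().batchUpdate(presentationId=deckID,
    body={'requests': requests}).execute()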
For a detailed look at the complete code sample featured in the DevByte, check out the deep dive post. As you can see, adding text is fairly straightforward. If you want to learn how to format and style that text, check out the Formatting Text post and video as well as the text concepts guide.
To learn how to perform text search-and-replace, say to replace placeholders in a template deck, check out the Replacing Text & Images post and video as well as the merging data into slides guide. We hope these developer resources help you create that next great app that automates the task of producing presentations for your users!
Categories: Programming

Neo4j: How do null values even work?

Mark Needham - Thu, 02/23/2017 - 00:28

Every now and then I find myself wanting to import a CSV file into Neo4j and I always get confused with how to handle the various null values that can lurk within.

Let’s start with an example that doesn’t have a CSV file in sight. Consider the following list and my attempt to only return null values:

WITH [null, "null", "", "Mark"] AS values
UNWIND values AS value
WITH value WHERE value = null
RETURN value

(no changes, no records)

Hmm, that’s weird. I’d have expected that to at least keep the first value in the collection. What about if we do the inverse?

WITH [null, "null", "", "Mark"] AS values
UNWIND values AS value
WITH value WHERE value <> null
RETURN value

(no changes, no records)

Still nothing! Let’s try returning the output of our comparisons rather than filtering rows:

WITH [null, "null", "", "Mark"] AS values
UNWIND values AS value
RETURN value, value = null AS outcome

╒═══════╤═════════╕
│"value"│"outcome"│
╞═══════╪═════════╡
│null   │null     │
├───────┼─────────┤
│"null" │null     │
├───────┼─────────┤
│""     │null     │
├───────┼─────────┤
│"Mark" │null     │
└───────┴─────────┘

Ok so that isn’t what we expected. Everything has an ‘outcome’ of ‘null’! What about if we want to check whether the value is the string “Mark”?

WITH [null, "null", "", "Mark"] AS values
UNWIND values AS value
RETURN value = "Mark" AS outcome

╒═══════╤═════════╕
│"value"│"outcome"│
╞═══════╪═════════╡
│null   │null     │
├───────┼─────────┤
│"null" │false    │
├───────┼─────────┤
│""     │false    │
├───────┼─────────┤
│"Mark" │true     │
└───────┴─────────┘

From executing this query we learn that if one side of a comparison is null then the return value is always going to be null.

So how do we exclude a row if it’s null?

It turns out we have to use the ‘is’ keyword rather than using the equality operator. Let’s see what that looks like:

WITH [null, "null", "", "Mark"] AS values
UNWIND values AS value
WITH value WHERE value is null
RETURN value

╒═══════╕
│"value"│
╞═══════╡
│null   │
└───────┘

And the positive case:

WITH [null, "null", "", "Mark"] AS values
UNWIND values AS value
WITH value WHERE value is not null
RETURN value

╒═══════╕
│"value"│
╞═══════╡
│"null" │
├───────┤
│""     │
├───────┤
│"Mark" │
└───────┘

What if we want to get rid of empty strings?

WITH [null, "null", "", "Mark"] AS values
UNWIND values AS value
WITH value WHERE value <> ""
RETURN value

╒═══════╕
│"value"│
╞═══════╡
│"null" │
├───────┤
│"Mark" │
└───────┘

Interestingly that also gets rid of the null value which I hadn’t expected. But if we look for values matching the empty string:

WITH [null, "null", "", "Mark"] AS values
UNWIND values AS value
WITH value WHERE value = ""
RETURN value

╒═══════╕
│"value"│
╞═══════╡
│""     │
└───────┘

It’s not there either! Hmm what’s going on here:

WITH [null, "null", "", "Mark"] AS values
UNWIND values AS value
RETURN value, value = "" AS isEmpty, value <> "" AS isNotEmpty

╒═══════╤═════════╤════════════╕
│"value"│"isEmpty"│"isNotEmpty"│
╞═══════╪═════════╪════════════╡
│null   │null     │null        │
├───────┼─────────┼────────────┤
│"null" │false    │true        │
├───────┼─────────┼────────────┤
│""     │true     │false       │
├───────┼─────────┼────────────┤
│"Mark" │false    │true        │
└───────┴─────────┴────────────┘

null values seem to get filtered out for every type of equality match unless we explicitly check that a value ‘is null’.
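As an aside, one standard Cypher tool the examples above don’t use is coalesce(), which returns its first non-null argument and makes the null handling explicit:

WITH [null, "null", "", "Mark"] AS values
UNWIND values AS value
RETURN value, coalesce(value, "<missing>") AS cleaned

Only the true null is replaced; the string "null" and the empty string pass through untouched.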

So how do we use this knowledge when we’re parsing CSV files using Neo4j’s LOAD CSV tool?

Let’s say we have a CSV file that looks like this:

$ cat nulls.csv
name,company
"Mark",
"Michael",""
"Will",null
"Ryan","Neo4j"

So none of the first three rows have a value for ‘company’. I don’t have any value at all, Michael has an empty string, and Will has a null value. Let’s see how LOAD CSV interprets this:

load csv with headers from "file:///nulls.csv" AS row
RETURN row

╒═════════════════════════════════╕
│"row"                            │
╞═════════════════════════════════╡
│{"name":"Mark","company":null}   │
├─────────────────────────────────┤
│{"name":"Michael","company":""}  │
├─────────────────────────────────┤
│{"name":"Will","company":"null"} │
├─────────────────────────────────┤
│{"name":"Ryan","company":"Neo4j"}│
└─────────────────────────────────┘

We’ve got the full sweep of all the combinations from above. We’d like to create a Person node for each row but only create a Company node and associated ‘WORKS_FOR’ relationship if an actual company is defined – we don’t want to create a null company.

So we only want to create a company node and ‘WORKS_FOR’ relationship for the Ryan row.

The following query does the trick:

load csv with headers from "file:///nulls.csv" AS row
MERGE (p:Person {name: row.name})
WITH p, row
WHERE row.company <> "" AND row.company <> "null"
MERGE (c:Company {name: row.company})
MERGE (p)-[:WORKS_FOR]->(c)

Added 5 labels, created 5 nodes, set 5 properties, created 1 relationship, statement completed in 117 ms.
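It’s worth spelling out why the Will row is excluded even though we never test for null: row.company <> "" evaluates to null for that row, and WHERE only keeps rows whose predicate is true. If you’d rather state the intent explicitly, this variant (same file, same resulting graph) uses the ‘is not null’ check we explored above:

load csv with headers from "file:///nulls.csv" AS row
MERGE (p:Person {name: row.name})
WITH p, row
WHERE row.company is not null AND row.company <> "" AND row.company <> "null"
MERGE (c:Company {name: row.company})
MERGE (p)-[:WORKS_FOR]->(c)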

And if we visualise what’s been created:

Graph  15

Perfect. Perhaps this behaviour is obvious but it always trips me up so hopefully it’ll be useful to someone else as well!

There’s also a section on the Neo4j developer pages describing even more null scenarios that’s worth checking out.

The post Neo4j: How do null values even work? appeared first on Mark Needham.

Categories: Programming

14 extensions that can enrich your daily VSTS usage

Xebia Blog - Wed, 02/22/2017 - 22:54
Using VSTS on a daily basis, I find that I add a regular list of VSTS Marketplace extensions to my VSTS environment. I find them convenient, and they help me get the most out of VSTS. The list below is primarily focused on the Work and Code areas and not so much on the Build

Publish your app with confidence from the Google Play Developer Console

Android Developers Blog - Wed, 02/22/2017 - 19:20
Posted by Kobi Glick, Product Manager, Google Play

Publishing a new app, or app update, is an important and exciting milestone for every developer. In order to make the process smoother and more trackable, we're announcing the launch of a new way to publish apps on Google Play with some new features. The changes will give you the ability to manage your app releases with more confidence via a new manage releases page in the Google Play Developer Console.




Manage your app updates with clarity and control

The new manage releases page is where you upload alpha, beta, and production releases of your app. From here, you can see important information and the status of all your releases across tracks.

The new manage releases page.
Easier access to existing and new publishing features

Publishing an app or update is a big step, and one that every developer wants to have confidence in taking. To help, we've added two new features.
First, we've added a validation step that highlights potential issues before you publish. The new "review and rollout" page will appear before you confirm the roll out of a new app and flag if there are validation errors or warnings. This new flow will make the app release process easier, especially for apps using multi-APK. It also provides new information; for example, in cases where you added new permissions to your app, the system will highlight it.


Second, it's now simpler to perform and track staged roll-outs during the publishing flow. With staged rollouts, you can release your update to a growing % of users, giving you a chance to catch and address any issues before affecting your whole audience.

If you want to review the history of your releases, it is now possible to track them granularly and download previous APKs.

Finally we've added a new artifacts library under manage releases where you can find all the files that help you manage a release.
Start using the new manage releases page today
You can access the new manage releases page in the Developer Console. Visit the Google Play Developer Help Center for more information. With these changes, we're helping you to publish, track and manage your app with confidence on Google Play.


Categories: Programming


New features in Xcode 8.2 Simulator

Xebia Blog - Wed, 02/22/2017 - 14:01
In the release notes of Xcode 8.2, Apple introduced features for their new version of Xcode. In this blog I will explain how to use these new features. Read more


First Steps in gRPC Bindings for React Native

Xebia Blog - Wed, 02/22/2017 - 13:54
When you want to use gRPC in your React Native app there is no official support yet, but that shouldn’t stop you! In this post I’ll show you how we designed an implementation with type safety in mind and successfully called a service remotely from React Native on Android. Read more


Big Bang, Incrementalism, or Somewhere In Between

Line Segment Shutdown

When making any significant change to a team or organization, deciding whether to take a big bang or incremental approach is important. Both of these approaches–and hybrids in between–can work. Big Bang and incremental approaches mark the two ends of a continuum of how organizations make a change. The decision is almost never straightforward, and organizations often struggle with how they should approach change. The decision process begins by defining the Big Bang and incremental implementation approaches, and the options in between the two ends, so they can be compared.

Big Bang Implementations

A big bang adoption of a process or system is an instant changeover, a “one-and-done” type of approach in which everyone associated with a new system or process switches over en masse at a specific point in time. For example, most of the bank mergers I participated in were big bangs. The systems were all cut over on a specific date (lots of pizza and coffee were required for the cutover weekend), and the next business day all of the branches and ATMs began the day using a single system.

Big bangs are always the culmination of a lot of specific activities including planning, coordination, software changes, data conversions, and reviews.  All of these activities are focused on making the Big Bang successful.  Individually, the steps have little to no value if the final step fails.

Big Bang changes are sometimes equated to “bet the business” scenarios: if the change doesn’t work, everything needs to be backed out or significant business impact will ensue.

Incrementalism / Incremental Approach

An incremental approach focuses on defining, identifying, and implementing specific pieces of work. These are generally smaller, standalone pieces of work that move an organization toward an overall goal but are not generally all parts of one specific, cohesive project. For example, quality or process programs often use a continuous process improvement model in which practitioners identify changes or improvements, which are then captured as part of a backlog and prioritized for implementation. In this scenario, lots of individual pieces of work accumulate over time to deliver a big benefit. Incremental changes generate a fast feedback loop, which delivers enhanced learning. The small changes typically found in incremental approaches are useful for experimentation.

In Between or Phased Implementations

The term phased adoption can have alternate meanings. The first (and, in 2017, the most common meaning) is to break the implementation into smaller pieces so that the organization has use of functionality sooner. This is closer to an incremental approach than a big bang. Phased approaches break a bigger project into smaller projects so the adoption happens in several steps. After each step, the system is a little nearer to being fully adopted. Phased differs from incremental generally in scope and the types of work in the backlog. For example, in bank mergers, one phase might be to convert checking accounts, with trust accounts converted in another phase. In an Agile adoption, a phased approach might be to transform one team after another in a serial fashion.

The second possible use of the term is the famed waterfall approach, in which analysis is completed before design, and so on all the way to implementation. This approach is far less common than it was in the late 20th century before the Agile movement; however, make sure you check by asking how the word phased is being used.

Which implementation approach makes the most sense will always depend on context. The right choice requires understanding the goal of the change, the resources available to make the change, and above all else the organization’s culture. The choice is not as stark as Big Bang (everything at once) or incrementalism (lots of continuous little changes), although these are the choices most often considered.

 

In the next entry in this theme, we will explore the pros of the Big Bang approach.


Categories: Process Management

Friction in Software

Actively Lazy - Tue, 02/21/2017 - 21:39

Friction can be a very powerful force when building software. The things that are made easier or harder can dramatically influence how we work. I’d like to discuss three areas where I’ve seen friction at work: dependency injection, code reviews and technology selection.

DI Frameworks

A few years ago a colleague and I discussed this and came to the conclusion that the reason most DI frameworks suck (I’m looking in particular at you, Spring) is that they make adding new dependencies so damned easy! There’s absolutely no friction. Maybe a little XML (shudder) or just a tiny little attribute. It’s so easy!

So when we started a new, greenfield project, we decided to put our theory to the test and introduced just a little bit of friction to dependency injection. I’ve written before about the basic scheme we adopted and the AOP endpoint it reached. But the end result was, I believe, very successful. After a couple of years of development we still had on the order of only 10-20 dependencies. The friction we’d introduced was light (add a couple of lines to a single class), but it was sufficient to act as a constant reminder not to add a new dependency just because it was easy.
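The posts linked above describe the actual scheme; as a rough illustration only, a hand-rolled composition root with that kind of light friction might look like this (all names hypothetical):

import java.time.Clock;

// Hypothetical sketch: one composition root wires everything by hand, so a
// new dependency always costs a couple of visible lines in this one class.
public final class Wiring {
    interface Notifier { void send(String message); }

    static final class ConsoleNotifier implements Notifier {
        @Override public void send(String message) { System.out.println(message); }
    }

    static final class ReminderService {
        private final Notifier notifier;
        private final Clock clock;
        ReminderService(Notifier notifier, Clock clock) {
            this.notifier = notifier;
            this.clock = clock;
        }
        void remind() { notifier.send("It is now " + clock.instant()); }
    }

    // Adding a dependency to ReminderService means editing this method -- the
    // light friction that prompts the "do we really need it?" conversation.
    ReminderService reminderService() {
        return new ReminderService(new ConsoleNotifier(), Clock.systemUTC());
    }

    public static void main(String[] args) {
        new Wiring().reminderService().remind();
    }
}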

Code Reviews

I was reminded of this recently when discussing code reviews. I have mixed feelings about code reviews: I’ve seen them work well, and it is better to have code reviews than not to have them; but it’s better still to pair program. But not all teams, not all developers, like pair programming – so code reviews exist. The trouble with code reviews is they can provide a form of friction.

If you & I are pairing on a piece of work, we will discuss the various trade-offs as we go: do we spend time on this, do we refactor that, etc etc. The constant judgements about what warrants attention and what can be left for another day are verbalised and agreed. In general I find the code written while pairing is high in quality but also remains tightly focused on task. The long rambling refactors I’ve been guilty of in the past disappear and the lazy “quick hacks” we all try and explain away to ourselves, aren’t so easy to gloss over when pairing.

But code reviews exist outside of this dynamic. In the cold light of the following day, someone uninvolved reviews your work and passes judgement on whether they think it’s up to scratch. It’s easy to see why this becomes combative: rather than being collaborative it can be seen as a judgement being passed, on not only the code but the author, too.

When reviewing code it is easy to set a very high bar, higher than you might set for yourself and higher than you might have agreed when pairing. Now, does this mean the comments aren’t valid? Absolutely not! You’re right, there is a test case missing here; although my change is unrelated, I should have added the missing test case. And you’re right, this code is a mess; it was a mess before I got here and made a simple edit; but you’re right, I should have tidied it up. Everyone should practice code gardening.

These are all perfectly valid comments. But they create a form of friction. When I worked on a team that relied on these code reviews you knew you were going to get comments: so you kept the commit small, so as to minimize the diff. A small diff minimizes the amount of extra tests you could be asked to write. A small diff keeps most of the existing mess out of the review, so you won’t be asked to start refactoring.

Now, this seems dysfunctional: we’re deliberately trying to optimize a smooth passage through the review process, instead of optimizing for code quality. Worse than this though was what never happened: refactoring commits. Looking back I realise that the only code reviews I saw (as both reviewer and reviewee) were for feature changes. There were never any code reviews submitted for purely technical debt reduction. Sure, there’d be some individual commits in amongst the feature changes. But never any dedicated, multi-commit sessions, whose sole aim was to improve the code base. Which was a shame, because like any legacy code base, there was scope for improvement.

Compare this to teams that don’t do code reviews, where I’ve tended to see more effort spent on reducing technical debt. Without fearing an endless cycle of review comments, developers are free to embark on refactoring efforts (that may or may not even work out!) – but at least they can try. Instead, code reviews provide a form of friction that might actually hurt code quality in the long run.

Technology Selection

I was talking to another colleague recently who is convinced that Hibernate is still the best way to get data in and out of a relational database. I can’t really work out how to persuade people they’re wrong – surely using Hibernate is enough to persuade you? Especially in a large, legacy code base – the pain that Hibernate causes is obvious. Yet plenty of people still believe in Hibernate. There are even people that still believe in Spring. Whether or not they still believe in the tooth fairy is unclear.

But I think technology selection is another area where friction is important. When contemplating moving away from something well-known and well used in industry like Spring or Hibernate there is a lot of friction. There are new technologies to learn, new approaches to understand and new risks to manage. This all adds friction, so sometimes it’s easiest just to stick with what we know. Sometimes it really is the right choice – the technology you have expertise in is the one you’ll be most productive in immediately. But there are longer term questions too, which are much harder to answer: will the team eventually be more productive using technology X than technology Y?

Friction in software is a powerful force: we’re very lazy creatures, constantly trying to optimise. Anything that slows us down or gets in our way quickly gets side-stepped or worked around. We can use this knowledge as a tool to guide developer behaviour; but sometimes we need to be aware of how friction can change behaviours for the worse as well.


Categories: Programming, Testing & QA

Build flexible layouts with FlexboxLayout

Android Developers Blog - Tue, 02/21/2017 - 19:48
Posted by Takeshi Hagikura, Developer Programs Engineer

At Google I/O last year we announced ConstraintLayout, which enables you to build complex layouts while maintaining a flat view hierarchy. It is also fully supported in Android Studio's Visual Layout Editor.

At the same time, we open sourced FlexboxLayout to bring the same functionalities of the CSS Flexible Layout module to Android. Here are some cases where FlexboxLayout is particularly effective.

FlexboxLayout can be interpreted as an advanced LinearLayout because both layouts align their child views sequentially. The significant difference between LinearLayout and FlexboxLayout is that FlexboxLayout has a feature for wrapping.

That means if you add the flexWrap="wrap" attribute, FlexboxLayout puts a view on a new line if there is not enough space left in the current line, as shown in the picture below.


One layout for various screen sizes

With that characteristic in mind, let's take a case where you want to put views sequentially but have them move to new lines if the available space changes (due to a device factor, orientation changes, or the window resizing in multi-window mode).


Nexus5X portrait


Nexus5X landscape

Pixel C with multi window mode enabled, divider line on the left.

Pixel C with multi window mode enabled, divider line on the middle.

Pixel C with multi window mode enabled, divider line on the right.
You would need to define multiple DP-bucket layouts (such as layout-600dp, layout-720dp, layout-1020dp) to handle various screen sizes with traditional layouts such as LinearLayout or RelativeLayout. But the dialog above is built with a single FlexboxLayout.

The technique used in the example is setting the flexWrap="wrap" as explained above,

<com.google.android.flexbox.FlexboxLayout
     android:layout_width="match_parent"
     android:layout_height="wrap_content"
     app:flexWrap="wrap">
then you can get the following layout, where child views are aligned to a new line instead of overflowing their parent.




Another technique I'd like to highlight is setting the layout_flexGrow attribute on an individual child. This helps improve the look of the final layout when free space is left over. The layout_flexGrow attribute works similarly to the layout_weight attribute in LinearLayout. That means FlexboxLayout will distribute the remaining space according to the layout_flexGrow value set on each child in the same line.

The example below assumes each child has the layout_flexGrow attribute set to 1, so free space will be evenly distributed to each of them.
 <android.support.design.widget.TextInputLayout
     android:layout_width="100dp"
     android:layout_height="wrap_content"
     app:layout_flexGrow="1">



You can check out the complete layout xml file in the GitHub repository.
RecyclerView integration

Another advantage of FlexboxLayout is that it can be integrated with RecyclerView. With the latest alpha release, the new FlexboxLayoutManager extends RecyclerView.LayoutManager, so you can now make use of the Flexbox functionalities in a scrollable container in a much more memory-efficient way.

Note that you can still achieve a scrollable Flexbox container with a FlexboxLayout wrapped in a ScrollView. But you are likely to experience jankiness or even an OutOfMemoryError if the number of items contained in the layout is large, as FlexboxLayout doesn't recycle the views that go off screen as the user scrolls.

(If you would like to learn more about RecyclerView in detail, you can check out the videos from the Android UI toolkit team, such as 1 and 2.)
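Wiring the new manager up looks like any other LayoutManager. A minimal sketch (the view ID and adapter class are hypothetical, and the dependency line reflects the alpha artifact current at the time of writing, so the exact version may differ):

// build.gradle: compile 'com.google.android:flexbox:0.3.0-alpha3'

RecyclerView recyclerView = (RecyclerView) findViewById(R.id.recycler_view);
FlexboxLayoutManager layoutManager = new FlexboxLayoutManager();
layoutManager.setFlexDirection(FlexDirection.ROW); // lay children out in rows
layoutManager.setFlexWrap(FlexWrap.WRAP);          // wrap onto new lines as needed
recyclerView.setLayoutManager(layoutManager);      // off-screen views now get recycled
recyclerView.setAdapter(new PhotoAdapter());       // PhotoAdapter is a hypothetical adapter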

A real-world example where the RecyclerView integration is useful is for apps like Google Photos or news apps; both expect a large number of items while needing to handle items of various widths.

One example is found in the demo application in the FlexboxLayout repository. As you can see in the repository, each image shown in RecyclerView has a different width. But by setting the flexWrap setting to wrap,

FlexboxLayoutManager layoutManager = new FlexboxLayoutManager();
layoutManager.setFlexWrap(FlexWrap.WRAP);
and setting the flexGrow attribute to a positive value for each child (as you can see, you can configure container attributes through FlexboxLayoutManager and child attributes through FlexboxLayoutManager.LayoutParams instead of configuring them from XML),
void bindTo(Drawable drawable) {
  mImageView.setImageDrawable(drawable);
  ViewGroup.LayoutParams lp = mImageView.getLayoutParams();
  if (lp instanceof FlexboxLayoutManager.LayoutParams) {
    FlexboxLayoutManager.LayoutParams flexboxLp = 
        (FlexboxLayoutManager.LayoutParams) mImageView.getLayoutParams();
    flexboxLp.setFlexGrow(1.0f);
  }
}
you can see every image fits within the layout nicely regardless of the screen orientation.



If you would like to see a complete FlexboxLayout example, you can check the demo applications in the GitHub repository.

What's next?

Check out the full documentation for other attributes to build flexible layouts tailored to your needs. We're very open to hearing your feedback; if you find any issues or have feature requests, please file an issue on the GitHub repository.



Categories: Programming

Quote of the Day

Herding Cats - Glen Alleman - Tue, 02/21/2017 - 05:49

“I am not much given to regret, so I puzzled over this one a while. Should have taken much more statistics in college, I think.” — Max Levchin, Paypal Co-founder, Slide Founder

Perhaps anyone conjecturing that decisions can be made in the presence of uncertainty may want to call their High School Probability and Statistics teacher and find out what they missed.

Categories: Project Management