Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Feed aggregator

Introducing Python Fire, a library for automatically generating command line interfaces

Google Code Blog - Fri, 03/03/2017 - 18:47
By David Bieber, Software Engineer on Google Brain

Originally posted on the Google Open Source Blog

Today we are pleased to announce the open-sourcing of Python Fire. Python Fire generates command line interfaces (CLIs) from any Python code. Simply call the Fire function in any Python program to automatically turn that program into a CLI. The library is available from PyPI via `pip install fire`, and the source is available on GitHub.

Python Fire will automatically turn your code into a CLI without you needing to do any additional work. You don't have to define arguments, set up help information, or write a main function that defines how your code is run. Instead, you simply call the `Fire` function from your main module, and Python Fire takes care of the rest. It uses inspection to turn whatever Python object you give it -- whether it's a class, an object, a dictionary, a function, or even a whole module -- into a command line interface, complete with tab completion and documentation, and the CLI will stay up-to-date even as the code changes.
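For instance, Fire can expose a single function just as easily as a class (a minimal sketch along the lines of the library's basic usage):

```python
import fire

def double(number):
  """Doubles a number; Fire parses the CLI string '21' into the int 21."""
  return 2 * number

if __name__ == '__main__':
  fire.Fire(double)  # e.g. `python double.py 21` prints 42
```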

To illustrate this, let's look at a simple example:

```python
#!/usr/bin/env python
import fire

class Example(object):
  def hello(self, name='world'):
    """Says hello to the specified name."""
    return 'Hello {name}!'.format(name=name)

def main():
  fire.Fire(Example)

if __name__ == '__main__':
  main()
```

When the Fire function is run, our command will be executed. Just by calling Fire, we can now use the Example class as if it were a command line utility.

```
$ ./example.py hello
Hello world!
$ ./example.py hello David
Hello David!
$ ./example.py hello --name=Google
Hello Google!
```

Of course, you can continue to use this module like an ordinary Python library, enabling you to use the exact same code both from Bash and Python. If you're writing a Python library, then you no longer need to update your main method or client when experimenting with it; instead you can simply run the piece of your library that you're experimenting with from the command line. Even as the library changes, the command line tool stays up to date.
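For instance, the Example class above can be called directly from Python as well (a minimal sketch, assuming the code is saved as example.py):

```python
# Use the module as a plain Python library, bypassing the CLI entirely.
from example import Example

print(Example().hello('David'))  # prints: Hello David!
```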

At Google, engineers use Python Fire to generate command line tools from Python libraries. We have an image manipulation tool built by using Fire with the Python Imaging Library, PIL. In Google Brain, we use an experiment management tool built with Fire, allowing us to manage experiments equally well from Python or from Bash.

Every Fire CLI comes with an interactive mode. Run the CLI with the `--interactive` flag to launch an IPython REPL with the result of your command, as well as other useful variables already defined and ready to use. Be sure to check out Python Fire's documentation for more on this and the other useful features Fire provides.
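As a sketch of what that looks like with the example above (assuming Fire's convention of separating its own flags from your command's arguments with `--`):

```
# Launches an IPython REPL with the result of `hello`
# (and other useful variables) already defined.
$ ./example.py hello -- --interactive
```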

Between Python Fire's simplicity, generality, and power, we hope you find it a useful library for your own projects.
Categories: Programming

Stuff The Internet Says On Scalability For March 3rd, 2017

Hey, it's HighScalability time:

 

Only 235 trillion miles away. Engage. (NASA)
If you like this sort of Stuff then please support me on Patreon.
  • $5 billion: Netflix spend on new content; $1 billion: Netflix spend on tech; 10%: BBC users bounced for every additional second of page load; $3.5 billion: Priceline Group ad spend; 12.6 million: hours streamed by Pornhub per day; 1 billion: hours streamed by YouTube per day; 38,000 BC: auroch carving; 5%: decrease in US TV sets;

  • Quotable Quotes:
    • Fahim ul Haq: Rule 1: Reading High Scalability a night before your interview does not make you an expert in Distributed Systems.
    • @Pinboard: Root cause of outage: S3 is actually hosted on Google Cloud Storage, and today Google Cloud Storage migrated to AWS
    • Matthew Green: ransomware currently is using only a tiny fraction of the capabilities available to it. Secure execution technologies in particular represent a giant footgun just waiting to go off if manufacturers get things only a little bit wrong.
    • dsr_: This [S3 outage] is analogous to "we needed to fsck, and nobody realized how long that would take".
    • tptacek: Uber isn't the driver's employer. Uber is a vendor to the driver. The driver is complaining that its vendor made commitments, on which the driver depended, and then reneged. The driver might be right or might be wrong, but in no discussion with a vendor in the history of the Fortune 500 has it ever been OK for the vendor to accuse their customer of "not taking responsibility for their own shit".
    • @felixsalmon: Hours of video served per day: Facebook: 100 million Netflix: 116 million YouTube: 1 billion
    • @Geek_Manager: "Everybody wants to write reusable code. Nobody wants to reuse anyone else's code." @eryno #leaddev
    • @ellenhuet: a private South Bay high school 1) having a growth fund and 2) being early in Snap is the most silicon valley thing ever
    • @_ginger_kid: I speak from experience as a cash strapped startup CTO. Would love to be multi region, just cannot justify it. V hard.
    • @Objective_Neo: SpaceX, $12 billion valuation: Launches 70m rockets into space and lands them safely. Snapchat, $20 billion valuation: Rainbow Filters.
    • @neil_conway: (2/4): MTTR (repair time) is AT LEAST as important as MTBF in determining service uptime and often easier to improve.
    • John Hagel: we’re likely to see a new category of gig work emerge – let’s call it “creative opportunity targeting.”...we anticipate that more and more of the workforce will be pulled into this arena of creative gig workgroups
    • Seyi Fabode: The constraint is that the broker model, even with new technology, is not value additive. 
    • Robert Kolker: From his experience with the Gary police, Hargrove learned the first big lesson of data: If it’s bad news, not everyone wants to see the numbers
    • gamache: A piece of hard-earned advice: us-east-1 is the worst place to set up AWS services. You're signing up for the oldest hardware and the most frequent outages.
    • Dan Sperber: we each have a great many mental devices that contribute to our cognition. There are many subsystems. Not two, but dozens or hundreds or thousands of little mechanisms that are highly specialized and interact in our brain. Nobody doubts that something like this is the case with visual perception. I want to argue that it’s also the case for the so-called central systems, for reasoning, for inference in general.
    • Joaquin Quiñonero Candela: Facebook today cannot exist without AI. Every time you use Facebook or Instagram or Messenger, you may not realize it, but your experiences are being powered by AI.
    • alicebob: Sometimes keeping things simple is worth more than keeping things globally available.
    • Sveta Smirnova: Both MySQL and PostgreSQL on a machine with a large number of CPU cores hit CPU resources limits before disk speed can start affecting performance.
    • @jamesiry: Using many $100,000’s of compute, Google collided a known weak hash. Meanwhile one botched memcpy leaked the Internet’s passwords.
    • @david4096: teaching engineers to say no is cheaper than Haskell
    • @cgvendrell: #AI will be dictated by Google. They're 1 order of magnitude ahead, they understood key = chip level of stack (TPU) + training data @chamath
    • @antirez: There are tons of more tests to do, but the radix trees could replace most hash tables inside Redis in the future: faster & smaller.
    • DHH: So it remains mostly our fault. Our choice, our dollars. Every purchase a vote for an ever more dysfunctional future. We will spend our way into the abyss.
    • @jamesurquhart: This is why I write about data stream processing and serverless—lessons I learned at @SOASTAInc about the value of real time and BizOps.
    • twakefield: The brilliance of open sourcing Borg (aka Kubernetes) is evident in times like these. We[0] are seeing more and more SaaS companies abstract away their dependencies on AWS or any particular cloud provider with Kubernetes.
    • flak: password hashes aren’t broken by cryptanalysis. They’re rendered irrelevant by time (hardware advancements). What was once expensive is now cheap, what was once slow is now fast. The amount of work hasn’t been reduced, but the difficulty of performing it has.
    • @darkuncle: biz decisions again ... gotta weigh cost/frequency of AWS single-region downtime vs. cost/complexity of multi-region & GSLB.
    • @nantonius: Reducing network latencies are a key enabler for moving away from monolith towards serverless. @adrianco:
    • tbrowbdidnso: These companies that all run their own hardware exclusively are telling everyone that it's stupid to run your own hardware... Why are we listening?
    • jasonhoyt: "People make mistakes all the time...the problem was that our systems that were designed to recognize and correct human error failed us." 
    • @chuhnk: Bob: Service Discovery is a SPOF. You should build async services. Me: How do you receive messages? Bob: A message bus Me: ...
    • @JoeEmison: These articles on serverless remind me of articles on NoSQL from a few years ago. FaaS may have low adoption b/c of the req'd architectures.
    • @Jason: We have 30-60% open rates for http://inside.com  emails vs 1% for app downloads!
    • @adrianco: Split brain syndrome: half your brain thinks message buses are reliable. Other half is wondering how to recover from split brain syndrome.
    • @dbrady: The older I get, the less I care about making tech decisions right and the more I care about retaining the ability to change a wrong one.
    • @littleidea: "Automation code, like unit test code, dies when the maintaining team isn’t obsessive about keeping the code in sync with the codebase."
    • @adulau: I don't ask for bug bounties, fame, cash or even tshirt. I just want a good security point of contact to fix the issues.
    • StorageMojo: most of the SSD vendors don’t make AFAs [all flash arrays]. They have little to lose by pushing NVMe/PCIe SSDs for broad adoption.
    • cookiecaper: I mean, that's not really AWS's problem, is it? Outages happen. If you have a mission-critical service like health care, you really shouldn't write systems with single points of failure like this, especially not systems that depend on something consumer-grade like S3.
    • plgeek: To me his main point is there is a spectrum of what you might consider evidence/proof. However, in Software Engineering their have been low standards set, and it's really not acceptable to continue with low standards. He is not saying the only sort of acceptable evidence is a double blind study.
    • n00b101: I asked an Intel chip designer about this and his opinion was that asynchronous processors are a "fantasy." His reasoning was that an asynchronous chip would still need to synchronize data communication within the chip. Apparently global clock synchronization accounts for about 20% of the power usage of a synchronous chip. In the asynchronous case, if you had to synchronize every communication, then the cost of communication is doubled.

  • Anti-virus software uses fingerprinting as a detection technique. Surprise, nature got there first. Update: CRISPR. Bacteria grab pieces of DNA from viruses and store them. This lets them recognize a virus later. When a virus enters a bacterium, the bacterium will send out enzymes to combat the invader. Usually the bacterium dies. Sometimes it wins. The bacterium sends out enzymes to find stray viruses and cut the enemy DNA into small pieces. Those enzymes take the little bits of DNA and splice them into the bacterium's own DNA. DNA is used as a memory device. Next time the virus shows up, the bacterium creates molecular assassins that contain a copy of the virus DNA. If there's a pattern match, kill it. The protein looks something like a clam shell. It has a copy of the virus DNA. Whenever it bumps into some virus DNA it pulls apart the DNA, unzips it, and reads it; if it's not the right one, it moves on. If the RNA has the same sequence, molecular blades come out and chop. Like smart scissors. This is CRISPR.

  • Videos from microXchg 2017 are now available

  • A natural disaster occurred. S3 went down. Were you happy with how your infrastructure responded? @spire was. Mitigating an AWS Instance Failure with the Magic of Kubernetes: "Kubernetes immediately detected what was happening. It created replacement pods on other instances we had running in different availability zones, bringing them back into service as they became available. All of this happened automatically and without any service disruption, there was zero downtime. If we hadn’t been paying attention, we likely wouldn’t have noticed everything Kubernetes did behind the scenes to keep our systems up and running." How do you make this happen?: Distribute nodes across multiple AZs; Nodes should have capacity to handle at least one node failure; Use at least 2 pods per deployment; Use readiness and liveness probes.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Change Implementation Thoughts

As part of the research and writing process for the series on change implementation approaches, I have sought out ideas and advice from many people. Some of those I talked to are quoted directly (with permission), with others still to come as we explore continuous and hybrid models. Today I am including a longer set of important ideas and thoughts from Christopher Hurney in the form of a guest blog. Mr. Hurney is an active part of the SPaMCAST community. Christopher and I have talked and corresponded for several years, and I have learned much from our relationship. Christopher can be found on LinkedIn. Thank you, Chris!

Change Implementation Thoughts
Christopher Hurney

It’s likely the case that strategic decision makers in companies that bring in consultants tend to feel as though they’ve reached a point where a process has become unsustainable, and they lack confidence that the process deficiencies can be remedied in-house. People in leadership roles have a knack for abstracting themselves from what they feel is the minutiae of any given process. Leadership tends to have a “black box” view of process. So it would stand to reason that Leadership also has limited patience to wait for incremental process improvements, let alone get “in the weeds” of improving processes. As a consultant, I have enough anecdotal evidence to perceive a general truth, more or less, about Leadership’s desire and expectations of Big Bang process improvement.

This, unfortunately, creates a definite disconnect between expectations and Change Management best-practices. Change Management speaks of “incremental vs. radical change” at the individual level – essentially, a change that may be small and negligible to one audience may be incredibly disruptive to another. The chasm between negligible and disruptive can be exponentially greater, driving up risk, when invoking “Big Bang” changes. Potential risk for change needs to be evaluated very carefully even when invoking small change. Risk can quickly grow beyond an org’s tolerance threshold as change size grows.

Risk aside, the other glaring problem with Big Bang change is there are too many parameters involved to know what went well and what didn’t. My father used to manage a Wafer Fabrication process for Rockwell Intl. in the 1980s. He used to tell stories of “technicians” who were charged with repairing failing equipment used in the wafer fabrication process – they would perform wholesale replacements of the “guts” of the equipment, rather than identify the specific failing part/root cause. Well, that’s an approach, I suppose. More likely than not, the state of the initial problem will change, perhaps even for the better. But with all of the other superfluous change, the possibility of implementing/replacing some other piece incorrectly increased, and more often than not, new problems surfaced.

In the world of Agile Adoptions (specifically as it pertains to software development), we are taught to make change incrementally, inspect, and adapt. This seems to me a no-brainer. If we implement small change, and observe, we can easily tell whether or not that specific change was effective. Then we can adapt accordingly. The adoption of Agile “processes” works that way, and the actual use of those processes to create software works that way: customers see small increments of working software in frequent iterations, and they say THUMBS UP or THUMBS DOWN. In the event of a THUMBS DOWN, we react appropriately.

I actually think you, Tom, said it best. “If Process Improvement is a never-ending journey (the consultee view), then why do so many people ask, “When will we arrive?” (the consulted’s point of view … unfortunately).”


Categories: Process Management

Android Studio 2.3

Android Developers Blog - Thu, 03/02/2017 - 23:22
By Jamal Eason, Product Manager, Android

Android Studio 2.3 is available to download today. The focus for this release is quality improvements across the IDE. We are grateful for all your feedback so far. We are committed to continuing to invest in making Android Studio fast & seamless for the millions of Android app developers across the world.

We are most excited about the quality improvements in Android Studio 2.3 but you will find a small set of new features in this release that integrate into each phase of your development flow. When designing your app, take advantage of the updated WebP support for your app images plus check out the updated ConstraintLayout library support and widget palette in the Layout Editor. As you are developing, Android Studio has a new App Link Assistant which helps you build and have a consolidated view of your URIs in your app. While building and deploying your app, use the updated run buttons for a more intuitive and reliable Instant Run experience. Lastly, while testing your app with the Android Emulator, you now have proper copy & paste text support.

What's new in Android Studio 2.3
For more detail about the features we added on top of the quality improvements in Android Studio 2.3, check out the list of the new features below:
Build
  • Instant Run Improvements and UI Changes: As a part of our focus on quality, we have made some significant changes to Instant Run in Android Studio 2.3 to make the feature more reliable. The Run action will now always cause an application restart to reflect changes in your code that may require a restart, and the new Apply Changes action will attempt to swap the code while your app keeps running. The underlying implementation has changed significantly to improve on reliability, and we have also eliminated the startup lag for Instant Run enabled apps. Learn more.
New Instant Run Button Actions
  • Build Cache: Introduced but disabled by default in Android Studio 2.2, Build Cache is an underlying build optimization for faster builds in Android Studio. By caching exploded AARs and pre-dexed external libraries, the new build cache leads to faster clean builds. This is a user-wide build cache that is now turned on by default with Android Studio 2.3. Learn more.
Design
  • Chains and Ratios support in Constraint Layout: Android Studio 2.3 includes the stable release of ConstraintLayout. With this release of ConstraintLayout, you can now chain two or more Android views bi-directionally together to form a group along one dimension. This is helpful when you want to place two views close together but spread them across empty space. Learn more.
Constraint Layout Chains
ConstraintLayout also supports ratios, which is helpful when you want to maintain the aspect ratio of a widget as the containing layout expands and contracts. Learn more about ratios. Additionally, both Chains and Ratios in ConstraintLayout support programmatic creation with the ConstraintSet APIs.
Constraint Layout Ratios

  • Layout Editor Palette: The updated widget palette in the Layout Editor allows you to search, sort and filter to find widgets for your layouts, plus gives you a preview of the widget before dragging on to the design surface. Learn more.

Layout Editor Widget Palette
  • Layout Favorites: You can now save your favorite attributes per widget in the updated Layout Editor properties panel. Simply star an attribute in the advanced panel and it will appear under the Favorites section. Learn more.

Favorites Attributes on Layout Editor Properties Panel
  • WebP Support: To help you save space in your APK, Android Studio can now generate WebP images from PNG assets in your project. The WebP lossless format is up to 25% smaller than a PNG. With Android Studio 2.3, you have a new wizard that converts PNG to lossless WebP and also allows you to inspect lossy WebP encoding as well. Right-click on any non-launcher PNG file to convert to WebP. And if you need to edit the image, you can also right-click on any WebP file in your project to convert back to PNG. Learn more.
WebP Image Conversion Wizard

  • Material Icon Wizard Update: The updated vector asset wizard supports search and filtering, plus it includes labels for each icon asset. Learn more.
Vector Asset Wizard

Develop
  • Lint Baseline: With Android Studio 2.3, you can set unresolved lint warnings as a baseline in your project. From that point forward, Lint will report only new issues. This is helpful if you have many legacy lint issues in your app, but just want to focus on fixing new issues. Learn more about Lint baseline and the new Lint checks & annotations added in this release.
Lint Baseline Support
  • App Links Assistant: Supporting Android App Links in your app is now easier with Android Studio. The new App Links Assistant allows you to easily create new intent filters for your URLs, declare your app's website association through a Digital Asset Links file, and test your Android App Links support. To access the App Link Assistant, go to the following menu location: Tools > App Link Assistant. Learn more.
App Links Assistant
  • Template Updates: By default, all templates in Android Studio 2.3 that used to contain RelativeLayout now use ConstraintLayout. Learn more about templates and Constraint Layout. We have also added a new Bottom Navigation Activity template, which implements the Bottom Navigation Material Design guideline.

New Project Wizard Templates
  • IntelliJ Platform Update: Android Studio 2.3 includes the IntelliJ 2016.2 release, which has enhancements such as an updated inspection window and a notifications system. Learn more.
Test
  • Android Emulator Copy & Paste: Back by popular demand, we added the Copy & Paste feature back to the latest Emulator (v25.3.1). There is a shared clipboard between the Android Emulator and the host operating system, which allows you to copy text between both environments. Copy & Paste works with x86 Google API Emulator system images API Level 19 (Android 4.4 - KitKat) and higher.

Copy & Paste support in Android Emulator
  • Android Emulator Command Line Tools: Starting with Android SDK Tools 25.3, we have moved the emulator from the SDK Tools folder into a separate emulator directory, and also deprecated and replaced the "android avd" command with a standalone avdmanager command. The previous command line parameters for emulator and "android avd" will work with the updated tools. We have also added location redirects for the emulator command. However, if you create Android Virtual Devices (AVDs) directly through the command line, you should update any corresponding scripts. If you are using the Android Emulator through Android Studio 2.3, these changes will not impact your workflow. Learn more.

To recap, Android Studio 2.3 includes these new features and more:

Develop
Build
Design
Test

Learn more about Android Studio 2.3 by reviewing the release notes.

Getting Started

Download
If you are using a previous version of Android Studio, you can check for updates on the Stable channel from the navigation menu (Help → Check for Update [Windows/Linux], Android Studio → Check for Updates [OS X]). You can also download Android Studio 2.3 from the official download page. To take advantage of all the new features and improvements in Android Studio, you should also update the Android Gradle plugin to version 2.3.0 in your current app project.
We appreciate any feedback on things you like, issues or features you would like to see. Connect with us -- the Android Studio development team -- on our Google+ page or on Twitter.
Categories: Programming

Automate incident investigation to save money and become proactive

Xebia Blog - Thu, 03/02/2017 - 13:41

How many hours did your best engineers spend investigating incidents and problems last month? Do those engineers get a big applause when they solve the issue? Most likely the answers are “a lot” and “yes”… The reason that problem and incident investigation is hard is that you usually have to search through multiple tools, correlate […]

The post Automate incident investigation to save money and become proactive appeared first on Xebia Blog.

Getting Santa Tracker Into Shape

Android Developers Blog - Wed, 03/01/2017 - 23:22

Posted by Sam Stern, Developer Programs Engineer

Santa Tracker is a holiday tradition at Google.  In addition to bringing seasonal joy to millions of users around the world, it's a yearly testing ground for the latest APIs and techniques in app development.  That's why the full source of the app is released on Github every year.

In 2016, the Santa team challenged itself to introduce new content to the app while also making it smaller and more efficient than ever before.  In this post, you can read about the road to a slimmer, faster Santa Tracker.

APK Bloat

Santa Tracker has grown over the years to include the visual and audio assets for over a dozen games and interactive scenes.  In 2015, the Santa Tracker APK size was 66.1 MB.

The Android Studio APK analyzer is a great tool to investigate what made the 2015 app so large.

[Screenshot: APK Analyzer breakdown of the 2015 Santa Tracker APK]

First, while the APK size is 66.1 MB, we see that the download size is 59.5MB! The majority of that size is in the resources folder, but assets and native libraries contribute a sizable piece.

The 2016 app contains everything that was in the 2015 app while adding four completely new games.  At first, we assumed that making the app smaller while adding all of that would be impossible, but (spoiler alert!) here are the final results for 2016:

[Screenshot: APK Analyzer breakdown of the 2016 Santa Tracker APK]

The download size for the app is now nearly 10MB smaller despite the addition of four new games and a visual refresh. The rest of this section will explore how we got there.

Multiple APK Support on Google Play with APK Splits

The 2015 app added the "Snowdown" game by Google's Fun Propulsion Labs team.  This game is written in C++, so it's included in Santa Tracker as a native library.  The team gave us compiled libraries for the armv5, armv7, and x86 architectures.  Each version was about 3.5MB, which adds up to the 10.5MB you see in the lib entry for the 2015 APK.

Since each device is only using one of these architectures, two thirds of the native libraries could be removed to save space - the tradeoff here is that we’ll publish multiple APKs.  The Android gradle build system has native support for building an APK for each architecture (ABI) with only a few lines of configuration in the app's build.gradle file:

```groovy
ext.abiList = ['armeabi', 'armeabi-v7a', 'x86']

android {
    // ...
    splits {
        abi {
            // Enable ABI splits
            enable true

            // Include the three architectures that we support for snowdown
            reset()
            include(*abiList)

            // Also build a "universal" APK that will run on any device
            universalApk true
        }
    }
}
```

Once splits are enabled, each split needs to be given a unique version code so that they can co-exist in the Play Store:

```groovy
// Generate unique versionCodes for each APK variant: ZXYYSSSSS
//   Z is the Major version number
//   X is the Minor version number
//   YY is the Patch version number
//   SSSS is information about the split (default to 0000)
// Any new variations get added to the front
import com.android.build.OutputFile;

android.applicationVariants.all { variant ->
    variant.outputs.each { output ->
        // Shift abi over by 8 digits
        def abiFilter = output.getFilter(OutputFile.ABI)
        int abiVersionCode = (abiList.indexOf(abiFilter) + 1)

        // Merge all version codes
        output.versionCodeOverride = variant.mergedFlavor.versionCode + abiVersionCode
    }
}
```

In the most recent version of Santa Tracker, we published separate APKs for the armv5, armv7, and x86 architectures.  With this change in place, 10.5MB of native libraries was reduced to about 4MB per variant without losing any functionality.

Optimize Images

The majority of the Santa Tracker APK is image resources. Each game has hundreds of images, and each image comes in multiple sizes for different screen densities. Almost all of these images are PNGs, so in past years we ran PNGCrush on all of the files and figured our job was done.  We learned in 2016 that there have been advancements in lossless PNG compression, and Google's zopfli tool is currently the state of the art.

By running zopflipng on all PNG assets we losslessly reduced the size of most images by 10% and some by as much as 30%. This resulted in almost a 5MB size reduction across the app without sacrificing any quality. For instance this image of Santa was losslessly reduced from 10KB to only 7KB.  Don't bother trying to spot the differences, there are none!


[Images: Santa asset before (10.2KB) and after (7.4KB) zopflipng compression]
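If you want to try the same pass on your own project, a minimal batch script might look like this (a sketch, not the Santa Tracker tooling; it assumes zopflipng is on your PATH, that its `-y` flag may overwrite the input file in place, and a hypothetical project layout):

```python
#!/usr/bin/env python
# Sketch: losslessly recompress every PNG under an Android res/ tree.
import os
import subprocess

RES_DIR = 'app/src/main/res'  # hypothetical project layout

for root, _, files in os.walk(RES_DIR):
    for name in files:
        # Skip nine-patch images; their control pixels and chunks
        # should not be touched by a generic optimizer.
        if name.endswith('.png') and not name.endswith('.9.png'):
            path = os.path.join(root, name)
            subprocess.check_call(['zopflipng', '-y', path, path])
```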

Unused Resources

When working on Santa Tracker, engineers are constantly refactoring the app, adding and removing pieces of logic and UI from previous years. While code review and linting help to find unused code, unused resources are much more likely to slip by unnoticed.  Plus, there is no ProGuard for resources, so we can't be saved by our toolchain, and unused images and other resources often sneak into the app.

Android Studio can help to find resources that are not being used and are therefore bloating the APK.  By clicking Analyze > Run Inspection by Name > Unused Resources, Android Studio will identify resources that are not used by any known codepaths.  It's important to first eliminate all unused code, as resources that are "used" by dead code will not be detected as unused.

After a few cycles of analysis with Android Studio's helpful tools, we were able to find dozens of unused files and eliminate a few more MB of resources from the app.

Memory Usage

Santa Tracker is popular all around the world and has users on thousands of unique Android devices.  Many of these devices are a few years old and have 512MB RAM or less, so we have historically run into OutOfMemoryErrors in our games.

While the optimizations above made our PNGs smaller on disk, when loaded into a Bitmap their memory footprint is unchanged.  Since each game in Santa Tracker loads dozens of images, we quickly get into dangerous memory territory.

In 2015, six of our top ten crashes were memory related. Due to the optimizations below (and others), we moved memory crashes out of the top ten altogether.

Image Loading Backoff

When initializing a game in Santa Tracker, we often load all of the Bitmaps needed for the first scene into memory so that the game can run smoothly.  The naive approach looks like this:

```java
private LruCache<Integer, Drawable> mMemoryCache;
private BitmapFactory.Options mOptions;

public void init() {
    // Initialize the cache
    mMemoryCache = new LruCache<Integer, Drawable>(240);

    // Start with no Bitmap sampling
    mOptions = new BitmapFactory.Options();
    mOptions.inSampleSize = 1;
}

public void loadBitmap(@DrawableRes int id) {
    // Load bitmap
    Bitmap bmp = BitmapFactory.decodeResource(getResources(), id, mOptions);
    BitmapDrawable bitmapDrawable = new BitmapDrawable(getResources(), bmp);

    // Add to cache
    mMemoryCache.put(id, bitmapDrawable);
}
```

However, the decodeResource function will throw an OutOfMemoryError if we don't have enough RAM to load the Bitmap into memory.  To combat this, we catch these errors and then try to reload all of the images with a higher sampling ratio (scaling by a factor of 2 each time):

```java
private static final int MAX_DOWNSAMPLING_ATTEMPTS = 3;
private int mDownsamplingAttempts = 0;

private Bitmap tryLoadBitmap(@DrawableRes int id) throws Exception {
    try {
        return BitmapFactory.decodeResource(getResources(), id, mOptions);
    } catch (OutOfMemoryError oom) {
        if (mDownsamplingAttempts < MAX_DOWNSAMPLING_ATTEMPTS) {
            // Increase our sampling by a factor of 2 and retry the load
            mOptions.inSampleSize *= 2;
            mDownsamplingAttempts++;
            return tryLoadBitmap(id);
        }
    }

    throw new Exception("Failed to load resource ID: " + id);
}
```

With this technique, low-memory devices will now see more pixelated graphics, but by making this tradeoff we almost completely eliminated memory errors from Bitmap loading.

Transparent Pixels

As mentioned above, an image's size on disk is not a good indicator of how much memory it will use. One glaring example is images with large transparent regions.  PNG can compress these regions to near-zero disk size, but each transparent pixel still demands the same RAM.

For example, in the "Dasher Dancer" game, animations were represented by a series of 1280x720 PNG frames.  Many of these frames were dominated by transparency as the animated object left the screen.  We wrote a script to trim all of the transparent space away and record an "offset" for displaying each frame so that it would still appear to be 1280x720 overall (a sketch of such a pass follows below). In one test this reduced runtime RAM usage of the game by 60MB! And now that we were not wasting memory on transparent pixels, we needed less downscaling and could use higher-resolution images on low-memory devices.
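A trimming pass along those lines can be sketched with Pillow (this is illustrative, not the actual Santa Tracker script; the directory names and offset file format are assumptions):

```python
# Sketch: trim transparent borders from each animation frame and record
# the crop offset so the renderer can still place the frame as if it
# were the full 1280x720 image.
import json
import os
from PIL import Image

SRC, DST = 'frames', 'trimmed'  # hypothetical directories
os.makedirs(DST, exist_ok=True)

offsets = {}
for name in sorted(os.listdir(SRC)):
    img = Image.open(os.path.join(SRC, name)).convert('RGBA')
    # Bounding box of the non-transparent pixels in the alpha channel
    bbox = img.split()[-1].getbbox()
    if bbox:
        img.crop(bbox).save(os.path.join(DST, name))
        offsets[name] = {'x': bbox[0], 'y': bbox[1]}

with open(os.path.join(DST, 'offsets.json'), 'w') as f:
    json.dump(offsets, f)
```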

Additional Explorations

In addition to the major optimizations described above, we explored a few other avenues for making the app smaller and faster, with varying degrees of success.

Splash Screens

The 2015 app moved to a Material Design aesthetic where games were launched from a central list of 'cards'. We noticed that half of the games would cause the card 'ripple' effect to be janky on launch, but we couldn't find the root cause and were unable to fix the issue.

When working on the 2016 version of the app, we were determined to fix the janky game launches. After hours of investigation, we realized it was only the games fixed to the landscape orientation that caused jank when launched.  The dropped frames were due to the forced orientation change. To create a smooth user experience, we introduced splash screens in between the launcher Activity and game Activities.  The splash screen would detect the current device orientation and the orientation needed to play the game being loaded, and rotate itself at runtime to match.  This immediately removed any noticeable jank from game launches and made the whole app feel smoother.

SVG

When we originally took on the task of reducing the size of our resources, using SVG images seemed like an obvious optimization.  Vector images are dramatically smaller and only need to be included once to support multiple densities.  Due to the 'flat' aesthetic in Santa Tracker, we were even able to convert many of our PNGs to tiny SVGs without much quality loss. However, loading these SVGs was completely impractical on slower devices, where they would be tens or hundreds of times slower than a PNG, depending on the path complexity.

In the end, we decided to follow the recommendation to limit vector image sizes to 200x200dp, and only used SVG for small icons in the app rather than large graphics or game assets.

Conclusions

When we started building Santa Tracker 2016 we were faced with a daunting problem: how can we make the app smaller and faster while adding exciting new content? The optimizations above were discovered by constantly challenging each other to do more with less and considering resource constraints with every change we made. In the end we were able to incrementally make the Santa Tracker app as healthy as it has ever been ... our next job will be helping Mr. Claus work off all that extra cookie weight.
Categories: Programming

Apply now to Launchpad Accelerator---now including Africa and Europe!

Google Code Blog - Wed, 03/01/2017 - 20:09
Posted By: Roy Glasberg, Global Lead, Launchpad Program & Accelerator

After recently hosting another amazing group of startups for Launchpad Accelerator, we're ready to kick things off again with the next class! Apply here by 9am PST on April 24, 2017.

Starting today, we'll be accepting applications from growth-stage innovative tech startups from these countries:
  • Asia: India, Indonesia, Thailand, Vietnam, Malaysia and the Philippines
  • Latin America: Argentina, Brazil, Chile, Colombia and Mexico
And we're delighted to expand the program to countries in Africa and Europe for the first time!
  • Africa: Kenya, Nigeria and South Africa
  • Europe: Czech Republic, Hungary and Poland
The equity-free program will begin on July 17th, 2017 at the Google Developers Launchpad Space in San Francisco and will include 2 weeks of all-expense-paid training.

What are the benefits?

The training at Google HQ includes intensive mentoring from 20+ Google teams, and expert mentors from top technology companies and VCs in Silicon Valley. Participants receive equity-free support, credits for Google products, PR support and continue to work closely with Google back in their home country during the 6-month program.

What do we look for when selecting startups?

Each startup that applies to the Launchpad Accelerator is considered holistically and with great care. Below are general guidelines behind our process to help you understand what we look for in our candidates.

All startups in the program must:
  • Be a technological startup.
  • Be targeting their local markets.
  • Have proven product-market fit (beyond ideation stage).
  • Be based in the countries listed above.
Additionally, we are interested in what kind of startup you are. We also consider:
  • The problem you are trying to solve. How does it create value for users? How are you addressing a real challenge for your home city, country or region?
  • Does your management team have a leadership mindset and the drive to become an influencer?
  • Will you share what you learn in Silicon Valley for the benefit of other startups in your local ecosystem?
If you're based outside of these countries, stay tuned, as we expect to add more countries to the program in the future.

We can't wait to learn more about your startup and work together to solve your challenges and help grow your business.
Categories: Programming

Getting Started with Lyft Envoy for Microservices Resilience

This is a guest repost by Flynn at datawireio on Envoy, a Layer 7 communications bus, used throughout Lyft's service-oriented architecture.

Using microservices to solve real-world problems always involves more than simply writing the code. You need to test your services. You need to figure out how to do continuous deployment. You need to work out clean, elegant, resilient ways for them to talk to each other.

A really interesting tool that can help with the “talk to each other” bit is Lyft’s Envoy: “an open source edge and service proxy, from the developers at Lyft.” (If you’re interested in more details about Envoy, Matt Klein gave a great talk at the 2017 Microservices Practitioner Summit.)

Envoy Overview

It might feel odd to see us call out something that identifies itself as a proxy – after all, there are a ton of proxies out there, and the 800-pound gorillas are NGINX and HAProxy, right? Here’s some of what’s interesting about Envoy:

  • It can proxy any TCP protocol.
  • It can do SSL. Either direction.
  • It makes HTTP/2 a first class citizen, and can translate between HTTP/2 and HTTP/1.1 (either direction).
  • It has good flexibility around discovery and load balancing.
  • It’s meant to increase visibility into your system.
    • In particular, Envoy can generate a lot of traffic statistics and such that can otherwise be hard to get.
    • In some cases (like MongoDB and Amazon RDS) Envoy actually knows how to look into the wire protocol and do transparent monitoring.
  • It’s less of a nightmare to set up than some others.
  • It’s a sidecar process, so it’s completely agnostic to your services’ implementation language(s).

(Envoy is also extensible in some fairly sophisticated — and complex — ways, but we’ll dig into that later — possibly much later. For now we’re going to keep it simple.)

Being able to proxy any TCP protocol, including using SSL, is a pretty big deal. Want to proxy Websockets? Postgres? Raw TCP? Go for it. Also note that Envoy can both accept and originate SSL connections, which can be handy at times: you can let Envoy do client certificate validation, but still have an SSL connection to your service from Envoy.

Of course, HAProxy can do arbitrary TCP and SSL too — but all it can do with HTTP/2 is forward the whole stream to a single backend server that supports it. NGINX can’t do arbitrary protocols (although to be fair, Envoy can’t do e.g. FastCGI, because Envoy isn’t a web server). Neither open-source NGINX nor HAProxy handles service discovery very well (though NGINX Plus has some options here). And neither has quite the same stats support that a properly-configured Envoy does.

Overall, what we’re finding is that Envoy is looking promising for being able to support many of our needs with just a single piece of software, rather than needing to mix and match things.

Envoy Architecture
Categories: Architecture

“Level up” your gaming business with new innovations for apps

Google Code Blog - Wed, 03/01/2017 - 15:09
Originally shared on the Inside AdMob Blog
Posted by Sissie Hsiao, Product Director, Mobile Advertising, Google. Last played Fire Emblem Heroes for Android

Mobile games mean more than just fun. They mean business. Big business. According to App Annie, game developers should capture almost half of the $189B global market for in-app purchases and advertising by 2020¹.

Later today, at the Game Developers Conference (GDC) in San Francisco, I look forward to sharing a series of new innovations across ad formats, monetization tools and measurement insights for apps.

  • New playable and video ad formats to get more people into your game
  • Integrations to help you create better monetization experiences 
  • Measurement tools that provide insights about how players are interacting with your game
Let more users try your game with a playable ad format

There’s no better way for a new user to experience your game than to actually play it. So today, we introduced playables, an interactive ad format in Universal App Campaigns that allows users to play a lightweight version of your game, right when they see it in any of the 1M+ apps in the Google Display Network.

Jam City’s playable ad for Cookie Jam

Playables help you get more qualified installs from users who tried your game in the ad and made the choice to download it for more play time. By attracting already-engaged users into your app, playables help you drive the long-term outcomes you care about — rounds played, levels beat, trophies won, purchases made and more.

"Jam City wants to put our games in the hands of more potential players as quickly as possible. Playables get new users into the game right from the ad, which we've found drives more engagement and long-term customer value." Josh Yguado, President & COO Jam City, maker of Panda Pop and Cookie Jam.

Playables will be available for developers through Universal App Campaigns in the coming months, and will be compatible with HTML5 creatives built through Google Web Designer or third-party agencies.

Improve the video experience with ads designed for mobile viewing

Most mobile video ad views on the Google Display Network are watched on devices held vertically [2]. This can create a poor experience when users encounter video ad creatives built for horizontal viewing.

Developers using Universal App Campaigns will soon be able to use an auto-flip feature that automatically orients your video ads to match the way users are holding their phones. If you upload a horizontal video creative in AdWords, we will automatically create a second, vertical version for you.

Cookie Jam horizontal video and vertical-optimized video created through auto-flip technology

The auto-flip feature uses Google's machine learning technology to identify the most important objects in every frame of your horizontal video creative. It then produces an optimized, vertical version of your video ad that highlights those important components of your original asset. Early tests show that click-through rates are about 20% higher on these dynamically-generated vertical videos than on horizontal video ads watched vertically [3].

Unlock new business with rewarded video formats, and free, unlimited reporting

Developers have embraced AdMob's platform to mediate rewarded video ads as a way to let users watch ads in exchange for an in-app reward. Today, we are delighted to announce that we are bringing Google’s video app install advertising demand from AdWords to AdMob, significantly increasing rewarded demand available to developers. Advertisers that use Universal App Campaigns can seamlessly reach this engaged, game-playing audience using their existing video creatives.

We are also investing in better measurement tools for developers by bringing the power of Firebase Analytics to more game developers with a generally available C++ SDK and an SDK for Unity, a leading gaming engine.

C++ and Unity developers can now access Firebase Analytics for real-time player insights

With Firebase Analytics, C++ and Unity developers can now capture billions of daily events — like level completes and play time — to get more nuanced player insights and gain a deeper understanding of metrics like daily active users, average revenue per user and player lifetime value.

This is an exciting time to be a game developer. It’s been a privilege to meet so many of you at GDC 2017 and learn about the amazing games that you’re all building. We hope the innovations we announced today help you grow long-term gaming businesses and we look forward to continuing on this journey with you.

Until next year, GDC!

1 - App Monetization Report, November 2016, App Annie
2 - More than 80% of video ad views in mobile apps on the Google Display Network are from devices held vertically, Google Internal Data
3 - Google Internal Data
Categories: Programming

Big Bang Change Initiatives: The Cons

 

Big Bang

A big bang adoption is an instant changeover, a “one-and-done” approach in which everyone associated with a new system or process switches over en masse at a specific point in time.  Big bang process improvements can be useful; however, nearly every person involved in planning and executing change avoids them like the plague.  Practitioners avoid big bangs for a number of very specific reasons, and at the root of all of them is risk.  Big bangs are risky because they have:

  1.    Slower feedback. Big bang programs gather production feedback only once, when the project is implemented. Because there is only one implementation, defects are discovered late in the project rather than being contained early, when their impact is smaller.  Slower feedback increases risk; you have only one shot to get it right. Kim Pries, the Software Sensei, summed up this reason for avoiding big bangs: “Big bang is risky. Incremental allows some containment of error, although, like most things in life, it is not foolproof.”
  2.    Too many moving parts.  Because of their size, big bang projects are hard to control and therefore require more administrative overhead, which is itself a tool for trying to control risk.  A direct corollary is that project size is directly related to risk, which is compounded by late production feedback that makes risks hard to recognize. Jeremy Berriault, of the QA Corner, summed up this issue: “Big bang leads to too many variables and leads to change requests. Focus on one set (of changes) and build over time. Solely focus on one change, interface or file set. Then see where it goes from there.”
  3.    Late recognition of value.  Big bang projects deliver most of their value late in the project, which makes big bang process improvements more apt to be canceled if management’s focus wavers (Deming called this constancy of purpose in his famous 14 Points). Arguably, late recognition of value is a specialized version of delayed feedback that increases cancellation risk. Dominique Bourget, also known as the Process Philosopher, humorously summed it up: “It’s like losing weight… is it better to do a bit each day rather than trying to lose it all on the last day?”

The strength of big bang implementations, that everything happens at once, directly increases risk. Increased risk makes big bang approaches to change unpopular. Patrick Holden summarizes the case against big bang implementation by making the case for incremental and continuous change:

“With complex systems or processes with multiple layers, components, services and especially people, then these improvements should be incremental. This incremental approach should take the opportunities, manage risk, overcome inertia or resistance to change, fail and fix, test and improve, create a momentum and demonstrate the benefits.”

In the end, big bang change implementation can be useful, but it is the riskier approach. Use it with your eyes wide open.

 


Categories: Process Management

SE-Radio Episode 283: Alexander Tarlinder on Developer Testing

Felienne talks with Alexander Tarlinder on Developer Testing. Topics include Developer Testing, Agile Testing, Programming by Contract, Specification Based Testing, Venue: KTH, Stockholm Related Links Alexander on Twitter https://twitter.com/alexander_tar Agile Testing: A Practical Guide for Testers and Agile Teams by Lisa Crispin and Janet Gregory https://www.amazon.com/Agile-Testing-Practical-Guide-Testers/dp/0321534468 Clean Code https://www.amazon.com/Clean-Code-Handbook-Software-Craftsmanship-ebook/dp/B001GSTOAM Alexander’s book review site http://www.techbookreader.com/ Developer […]
Categories: Programming


Software Development Conferences Forecast February 2017

From the Editor of Methods & Tools - Tue, 02/28/2017 - 19:53
Here is a list of software development related conferences and events on Agile project management ( Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP), DevOps and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods […]

Sponsored Post: Aerospike, Loupe, Clubhouse, GoCardless, Auth0, InnoGames, Contentful, Stream, Scalyr, VividCortex, MemSQL, InMemory.Net, Zohocorp

Who's Hiring?
  • GoCardless is building the payments network for the internet. We’re looking for DevOps Engineers to help scale our infrastructure so that the thousands of businesses using our service across Europe can take payments. You will be part of a small team that sets the direction of the GoCardless core stack. You will think through all the moving pieces and issues that can arise, and collaborate with every other team to drive engineering efforts in the company. Please apply here.

  • InnoGames is looking for Site Reliability Engineers. Do you not only want to play games, but help build them? Join InnoGames in Hamburg, one of the worldwide leading developers and publishers of online games. You are the kind of person who leaves systems in a better state than they were before. You want to hack on our internal tools based on django/python, as well as improving the stability of our 5000+ Debian VMs. Orchestration with Puppet is your passion and you would rather automate stuff than touch it twice. Relational Database Management Systems aren't a black hole for you? Then apply here!

  • Contentful is looking for a JavaScript BackEnd Engineer to join our team in their mission of getting new users - professional developers - started on our platform within the shortest time possible. We are a fun and diverse family of over 100 people from 35 nations with offices in Berlin and San Francisco, backed by top VCs (Benchmark, Trinity, Balderton, Point Nine), growing at an amazing pace. We are working on a content management developer platform that enables web and mobile developers to manage, integrate, and deliver digital content to any kind of device or service that can connect to an API. See job description.
Fun and Informative Events
  • DBTA Roundtable Webinar: Fast Data: The Key Ingredients to Real-Time Success. Thursday February 23, 2017 | 11:00 AM Pacific Time. Join Stephen Faig, Research Director Unisphere Research and DBTA, as he hosts a roundtable discussion covering new technologies that are coming to the forefront to facilitate real-time analytics, including in-memory platforms, self-service BI tools and all-flash storage arrays. Brian Bulkowski, CTO and Co-Founder of Aerospike, will be speaking along with presenters from Attunity and Hazelcast. Learn more and register.

  • Your event here!
Cool Products and Services
  • Working on a software product? Clubhouse is a project management tool that helps software teams plan, build, and deploy their products with ease. Try it free today or learn why thousands of teams use Clubhouse as a Trello alternative or JIRA alternative.

  • A note for .NET developers: You know the pain of troubleshooting errors with limited time, limited information, and limited tools. Log management, exception tracking, and monitoring solutions can help, but many of them treat the .NET platform as an afterthought. You should learn about Loupe...Loupe is a .NET logging and monitoring solution made for the .NET platform from day one. It helps you find and fix problems fast by tracking performance metrics, capturing errors in your .NET software, identifying which errors are causing the greatest impact, and pinpointing root causes. Learn more and try it free today.

  • Auth0 is the easiest way to add secure authentication to any app/website. With 40+ SDKs for most languages and frameworks (PHP, Java, .NET, Angular, Node, etc), you can integrate social, 2FA, SSO, and passwordless login in minutes. Sign up for a free 22 day trial. No credit card required. Get Started Now.

  • Build, scale and personalize your news feeds and activity streams with getstream.io. Try the API now in this 5 minute interactive tutorial. Stream is free up to 3 million feed updates so it's easy to get started. Client libraries are available for Node, Ruby, Python, PHP, Go, Java and .NET. Stream is currently also hiring Devops and Python/Go developers in Amsterdam. More than 400 companies rely on Stream for their production feed infrastructure, this includes apps with 30 million users. With your help we'd like to add a few zeros to that number. Check out the job opening on AngelList.

  • Scalyr is a lightning-fast log management and operational data platform.  It's a tool (actually, multiple tools) that your entire team will love.  Get visibility into your production issues without juggling multiple tabs and different services -- all of your logs, server metrics and alerts are in your browser and at your fingertips.  Loved and used by teams at Codecademy, ReturnPath, Grab, and InsideSales. Learn more today or see why Scalyr is a great alternative to Splunk.

  • InMemory.Net provides a Dot Net native in memory database for analysing large amounts of data. It runs natively on .Net, and provides native .Net, COM & ODBC APIs for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex is a SaaS database monitoring product that provides the best way for organizations to improve their database performance, efficiency, and uptime. Currently supporting MySQL, PostgreSQL, Redis, MongoDB, and Amazon Aurora database types, it's a secure, cloud-hosted platform that eliminates businesses' most critical visibility gap. VividCortex uses patented algorithms to analyze and surface relevant insights, so users can proactively fix future performance problems before they impact customers.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network. 

If any of these items interest you there's a full description of each sponsor below...

Categories: Architecture

Neo4j: Graphing the ‘My name is…I work’ Twitter meme

Mark Needham - Tue, 02/28/2017 - 16:50

Over the last few days I’ve been watching with interest the chain of ‘My name is…’ tweets kicked off by DHH. As I understand it, the idea is to show that coding interview riddles/hard tasks on a whiteboard are ridiculous.

Hello, my name is David. I would fail to write bubble sort on a whiteboard. I look code up on the internet all the time. I don't do riddles.

— DHH (@dhh) February 21, 2017

Other people quoted that tweet and added their own piece and yesterday Eduardo Hernacki suggested that traversing this chain of tweets seemed tailor made for Neo4j.

@eduardohki is someone traversing all this stuff? #Neo4j

— Eduardo Hernacki (@eduardohki) February 28, 2017

Michael was quickly on the scene and created a Cypher query which calls the Twitter API and creates a Neo4j graph from the resulting JSON response. The only tricky bit is creating a ‘bearer token’, but Jason Kotchoff has a helpful gist showing how to generate one from your Twitter consumer key and consumer secret.

Now that we’ve got our bearer token, let’s create a parameter to store it. Type the following in the Neo4j browser:

:param bearer: ''

Now we’re ready to query the Twitter API. We’ll start with the search API and find all tweets which contain the text ‘”my name” “I work”‘. That will return a JSON response containing lots of tweets. We’ll then create a node for each tweet it returns, a node for the user who posted the tweet, a node for the tweet it quotes, and relationships to glue them all together.

We’re going to use the apoc.load.jsonParams procedure from the APOC library to help us import the data. If you want to follow along you can use a Neo4j sandbox instance which comes with APOC installed. For your local Neo4j installation, grab the APOC jar and put it into your plugins folder before restarting Neo4j.

This is the query in full:

WITH 'https://api.twitter.com/1.1/search/tweets.json?count=100&result_type=recent&lang=en&q=' as url, {bearer} as bearer

CALL apoc.load.jsonParams(url + "%22my%20name%22%20%22I%20work%22",{Authorization:"Bearer "+bearer},null) yield value

UNWIND value.statuses as status
WITH status, status.user as u, status.entities as e
WHERE status.quoted_status_id is not null

// create a node for the original tweet
MERGE (t:Tweet {id:status.id}) 
ON CREATE SET t.text=status.text,t.created_at=status.created_at,t.retweet_count=status.retweet_count, t.favorite_count=status.favorite_count

// create a node for the author + a POSTED relationship from the author to the tweet
MERGE (p:User {name:u.screen_name})
MERGE (p)-[:POSTED]->(t)

// create a MENTIONED relationship from the tweet to any users mentioned in the tweet
FOREACH (m IN e.user_mentions | MERGE (mu:User {name:m.screen_name}) MERGE (t)-[:MENTIONED]->(mu))

// create a node for the quoted tweet and create a QUOTED relationship from the original tweet to the quoted one
MERGE (q:Tweet {id:status.quoted_status_id})
MERGE (t)-[:QUOTED]->(q)

// repeat the above steps for the quoted tweet
WITH t as t0, status.quoted_status as status WHERE status is not null
WITH t0, status, status.user as u, status.entities as e

MERGE (t:Tweet {id:status.id}) 
ON CREATE SET t.text=status.text,t.created_at=status.created_at,t.retweet_count=status.retweet_count, t.favorite_count=status.favorite_count

MERGE (t0)-[:QUOTED]->(t)

MERGE (p:User {name:u.screen_name})
MERGE (p)-[:POSTED]->(t)

FOREACH (m IN e.user_mentions | MERGE (mu:User {name:m.screen_name}) MERGE (t)-[:MENTIONED]->(mu))

MERGE (q:Tweet {id:status.quoted_status_id})
MERGE (t)-[:QUOTED]->(q);
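
Before going further it’s worth sanity-checking what the import created. These two counts are our addition rather than part of the original post (run each on its own in the Neo4j browser):

// how many tweets and users did the import create?
MATCH (t:Tweet) RETURN count(t) AS tweets;
MATCH (u:User) RETURN count(u) AS users;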

The resulting graph looks like this:

MATCH p=()-[r:QUOTED]->() RETURN p LIMIT 25

[graph visualization]

A more interesting query is to find the path from DHH to Eduardo, which we can do with the following:

MATCH path = (dhh:Tweet {id: 834146806594433025})<-[:QUOTED*]-(eduardo:Tweet {id: 836400531983724545})
UNWIND NODES(path) AS tweet
MATCH (tweet)<-[:POSTED]-(user)
RETURN tweet, user

This query:

  • starts from DHH’s tweet
  • traverses all QUOTED relationships until it finds Eduardo’s tweet
  • collects all those tweets and then finds the author
  • returns the tweet and the author

And this is the output:

[graph visualization]

I ran a couple of other queries against the Twitter API to hydrate some nodes that we hadn’t set all the properties on – you can see all the queries on this gist.

For the next couple of days I also have a sandbox running https://10-0-1-157-32898.neo4jsandbox.com/browser/. You can login using the credentials readonly/twitter.

If you have any questions/suggestions let me know in the comments, @markhneedham on twitter, or email the Neo4j DevRel team – devrel@neo4j.com.

The post Neo4j: Graphing the ‘My name is…I work’ Twitter meme appeared first on Mark Needham.

Categories: Programming

Cross Functional Doesn’t Mean Everyone Can Do Everything

Mike Cohn's Blog - Tue, 02/28/2017 - 16:00

Perhaps the most prevalent and persistent myth in agile is that a cross-functional team is one on which each person possesses every skill necessary to complete the work.

This is simply not true.

A cross-functional team has members with a variety of skills, but that does not mean each member has all of the skills.

Specialists Are Acceptable on Agile Teams

It is perfectly acceptable to have specialists on an agile team. And I suspect a lot of productivity has been lost by teams pursuing some false holy grail of having each team member able to do everything.

If my team includes the world’s greatest database developer, I want that person doing amazing things with our database. I don’t need the world’s greatest database developer to learn JavaScript.

Specialists Make It Hard to Balance Work

However, specialists can cause problems on any team using an iterative and incremental approach such as agile. Specialists make it hard to balance the types of work done by a team. If your team does have the world’s greatest database developer, how do you ensure your team always brings into an iteration the right amount of work for that person without bringing in too much for the programmers, the testers, or others?

To better see the impact of specialists, let’s look at a few examples. In Figure 1, we see a four-person team where each person is a specialist. Persons 1 and 2 are programmers and can only program. This is indicated by the red squares and the coding prompt icons within them. Persons 3 and 4 are testers who do nothing but test. They are indicated by the green squares and the pencil and ruler icons within them. You can imagine any skills you’d like, but for these examples I’ll use programmers (red) and testers (green).

The four-person team in Figure 1 is capable of completing four red tasks in an iteration and four green tasks in an iteration. They cannot do five red tasks or five green tasks.

But if their work is distributed across two product backlog items as shown in Figure 2, this team will be able to finish that work in an iteration.

But any allocation of work that is not evenly split between red and green work will be impossible for this team to complete. This means the specialist team of Figure 1 could not complete the work in any of the allocations shown in Figure 3.

The Impact of Multi-Skilled Team Members

Next, let’s consider how the situation is changed if two of the specialist team members of Figure 1 are now each able to do both red and green work. I refer to such team members as multi-skilled individuals. Such team members are sometimes called generalists, but I find that misleading. We don’t need someone to be able to do everything. It is often enough to have a team member or two who has a couple of the skills a team needs rather than all of the skills.

Figure 4 shows this team. Persons 1 and 2 remain specialists, only able to do one type of work each. But now, Persons 3 and 4 are multi-skilled and each can do either red or green work.

This team can complete many more allocations of work than could the specialist team of Figure 1. Figure 5 shows all the possible allocations that become possible when two multi-skilled members are added to the team.

By replacing just a couple of specialists with multi-skilled members, the team is able to complete any allocation of work except work that would require 0 or 1 unit of either skill. In most cases, a team can avoid planning an iteration that is so heavily skewed simply through careful selection of the product backlog items to be worked on. In this example, if the first product backlog item selected was heavily green, the team would not select a second item that was also heavily green.
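
To make the arithmetic concrete, here is a small Python sketch of the feasibility check implied by Figures 4 and 5. It is our illustration rather than anything from the figures themselves, and it assumes Person 1 does only red work, Person 2 only green work, Persons 3 and 4 either, with everyone finishing two tasks per iteration.

from itertools import product

# Each member: the task types they can do, and tasks finished per iteration.
TEAM = [
    ({"red"}, 2),            # Person 1: red specialist
    ({"green"}, 2),          # Person 2: green specialist
    ({"red", "green"}, 2),   # Person 3: multi-skilled
    ({"red", "green"}, 2),   # Person 4: multi-skilled
]

def feasible(team, demand):
    """Can the team finish demand, e.g. {"red": 5, "green": 3}?"""
    remaining = dict(demand)
    flexible = 0
    for skills, capacity in team:
        if len(skills) == 1:
            # Specialists cover their own task type first.
            (skill,) = skills
            remaining[skill] -= min(capacity, remaining[skill])
        else:
            flexible += capacity
    # Multi-skilled members mop up whatever is left, of either type.
    return sum(remaining.values()) <= flexible

for red, green in product(range(9), repeat=2):
    if red + green == 8 and feasible(TEAM, {"red": red, "green": green}):
        print(f"{red} red / {green} green is doable")

With these assumptions the loop prints every split from 2 red / 6 green through 6 red / 2 green: exactly the “anything except 0 or 1 unit of either skill” result described above.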

The Role of Specialists on an Agile Team

From this, we can see that specialists can exist on high-performing agile teams. But, it is the multi-skilled team members who allow that to be possible. There is nothing wrong with having a very talented specialist on a team--and there are actually many good reasons to value such experts.

But a good agile team will also include multi-skilled individuals. These individuals can smooth out the workload when a team needs to do more or less of a particular type of work in an iteration. Such individuals may also benefit a team in bringing more balanced perspectives to design discussions.

Evidence from My Local Grocery Store

As evidence that specialists are acceptable as long as they are balanced by multi-skilled team members, consider your local grocery store. A typical store will have cashiers who scan items and accept payment. The store will also have people who bag the groceries for you. If the bagger gets behind, the cashier shifts and helps bag items. The multi-skilled cashier/bagger allows the store to use fewer specialist baggers per shift.

What Role Do Specialists Play on Your Team?

What role do specialists play on your team? What techniques do you use to allow specialists to specialize? Please share your thoughts in the comments below.

Project Success Means Knowing ...

Herding Cats - Glen Alleman - Tue, 02/28/2017 - 05:37

To increase the Probability of Project Success, we have to know something about the attributes and measures of the Five Immutable Principles that enable this success, while managing in the presence of uncertainty.

All engineering projects, including software projects, are constrained optimization problems. How do we take the resources we have and deliver the best outcomes requested by those paying?

The answer is to apply the microeconomics of decision-making to the problem. Unlike models of mechanical engineering or classical physics, the models of microeconomics are never precise. They are probabilistic and statistical models, driven by the underlying processes of the two primary actors - suppliers and consumers. The suppliers provide solutions. The consumers define what is needed and pay for the solution. In the software development world, these are the developers and the customers. Both these actors live in the presence of uncertainty: uncertainty in knowledge (epistemic uncertainty) and uncertainty from the natural variability in the project (aleatory uncertainty). Both these uncertainties create risks to the success of the project. Epistemic uncertainty can be reduced. Aleatory uncertainty can only be addressed with margin.

Making decisions in the presence of these uncertainties on software development projects means calculating or assessing the probability that some future event may impact our goal. As successful managers and developers, we should be interested in making decisions about our future actions, in the presence of this uncertainty, so that we satisfy goals along the way to Done.

This means Knowing 5 Immutable things about the project...

  • Knowing what Done looks like in units of measure meaningful to the decision makers.
  • Knowing what plans and schedules are needed to reach Done at the needed time, for the needed cost, with the needed technical and operational capabilities.
  • Knowing what resources are needed to produce these capabilities.
  • Knowing what impediments will be encountered along the way, and how those impediments will be handled, reduced, or avoided.
  • Knowing how to measure progress to plan, schedule, reduce impediments, and deliver the needed capabilities.

This knowing means knowing, in the presence of uncertainty, to the degree of precision and accuracy needed to make decisions.

In order to know with the precision and accuracy needed to make decisions in the presence of uncertainty, we must start with estimating what is possible: estimating the attributes and measures of the needed outcomes of the work activities that produce the capabilities for the needed cost and schedule.

  • Estimating what Done looks like in units of measure meaningful to the decision makers.
  • Estimating what Plans and Schedules will be needed to reach Done.
  • Estimating what Resources we will need to reach Done.
  • Estimating what Impediments will be encountered.
  • Estimating the Progress to plan that will be needed to arrive at Done.

All of these knowables, and the estimates of the work and the outcomes of that work, operate in the presence of uncertainty. When we operate in the presence of this uncertainty, estimating is needed to inform the decision makers along the path to success. Without this estimating process, we have no ability to assess past data in the presence of this uncertainty in order to Know, to the needed degree of precision and accuracy, to make informed decisions about the future.
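
As a toy illustration of what such estimating can look like, here is a short Monte Carlo sketch in Python. The three-point task estimates, the triangular distribution, and the deadline are all made-up assumptions for the example.

import random

# Three work items, each with (optimistic, most likely, pessimistic) days.
TASKS = [(3, 5, 9), (2, 4, 7), (5, 8, 14)]
DEADLINE = 20      # days available
TRIALS = 100_000

hits = 0
for _ in range(TRIALS):
    # Sample a duration for each task from a triangular distribution.
    total = sum(random.triangular(low, high, mode)
                for low, mode, high in TASKS)
    if total <= DEADLINE:
        hits += 1

print(f"P(done within {DEADLINE} days) is roughly {hits / TRIALS:.2f}")

The output is not a point guess but a probability, which is what the decision makers need in order to act in the presence of uncertainty.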

Without estimates, we'll be driving in the dark with the lights off.

Categories: Project Management