Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!
Software Development Blogs: Programming, Software Testing, Agile Project Management
At the beginning of most races the starter will say something like, "ready, set, go." At the team level, the concept of ready to develop acts as a filter to determine whether a user story is ready to be passed to the team and taken into a sprint. Ready to develop is often considered a bookend to the definition of done. As a comparison, the definition of done is applicable both at the team level and at scale, when multiple teams are pursuing a common goal. In practice, ready does not scale as easily as done. Ready often requires two sets of unique criteria for scaled Agile projects: Ready to Develop and Ready to Go.
The most common set of ready criteria is encapsulated in Ready to Develop, which is used at the team level. A simple set of five criteria is:
These five criteria are a great filter for a team to use to determine if the user story they are considering for a sprint can be worked on immediately, or if more grooming is needed. Teams will gravitate toward work they can address now rather than work that is ill defined. The same predilection is true when viewing work being considered by a team of teams (an Agile Release Train in SAFe is an example of a team of teams). However, a team of teams needs a higher-level set of criteria to define whether it is ready to begin work. The Ready to Go criteria I use most often include:
Implementing the concept of Ready to Go for starting a scaled Agile project, release, or program increment (a SAFe construct) has a different goal, and therefore different criteria, than Ready to Develop. When the starter calls out ready, mentally run through the criteria for whether you are Ready to Go in order to begin work at scale. Once you clear that hurdle, you can apply the second set of unique criteria, Ready to Develop. Using both, you will be more apt to sprint from the starting line when it is time to go.
Posted by Jamal Eason, Product Manager, Android
Previewed earlier this summer at Google I/O, Android Studio 1.3 is now available on the stable release channel. We appreciate the early feedback from developers on our canary and beta channels, which helped us ship a great product.
Android Studio 1.3 is our biggest feature release for the year so far, which includes a new memory profiler, improved testing support, and full editing and debugging support for C++. Let's take a closer look.
New Features in Android Studio 1.3
Performance & Testing Tools
Android Studio now allows you to capture and analyze memory snapshots in the native Android HPROF format.
In addition to displaying a table of memory allocations that your app uses, the updated allocation tracker now includes a visual way to view your app's allocations.
For more flexibility in app testing, you now have the option to place your code tests in a separate module and use the new test plugin (com.android.test) instead of keeping your tests right next to your app code. This feature requires your app project to use Gradle plugin 1.3.
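A minimal sketch of what such a separate test module's build file might look like (the module path, variant, and versions here are illustrative, not from the original post):

```groovy
// build.gradle of a separate test module (paths and versions are illustrative)
apply plugin: 'com.android.test'

android {
    compileSdkVersion 23
    buildToolsVersion '23.0.0'

    // The test plugin needs to know which module and variant it is testing
    targetProjectPath ':app'
    targetVariant 'debug'
}
```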
Code and SDK Management
Android Studio now has inline code annotation support to help you manage the new app permissions model in the M release of Android. Learn more about code annotations.
New data binding features allow you to create declarative layouts in order to minimize boilerplate code by binding your application logic into your layouts. Learn more about data binding.
Managing Android SDK updates is now part of Android Studio. By default, Android Studio will now prompt you about new SDK and tool updates. You can still adjust your preferences in the new, integrated Android SDK Manager.
As part of the Android Studio 1.3 stable release, we included an Early Access Preview of the C++ editor and debugger support, paired with an experimental build plugin. See the Android C++ Preview page for information on how to get started. Support for more complex projects and build configurations is in development, but let us know your feedback.
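As a rough sketch of what a data-binding layout looks like (the user variable and the com.example.User class are hypothetical):

```xml
<!-- res/layout/activity_main.xml; "User" is a hypothetical model class -->
<layout xmlns:android="http://schemas.android.com/apk/res/android">
    <data>
        <variable name="user" type="com.example.User"/>
    </data>
    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@{user.firstName}"/>
</layout>
```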
Time to Update
An important thing to remember is that updating Android Studio does not require you to change your Android app projects. After updating, you get the latest features while keeping control of which build tools and app dependency versions you want to use for your Android app.
For current developers on Android Studio, you can check for updates from the navigation menu. For new users, you can learn more about Android Studio on the product overview page or download the stable version from the Android Studio download site.
We are excited to launch this set of features in Android Studio, and we are hard at work developing the next set of tools to make Android development easier on Android Studio. As always, we welcome feedback on how we can help you. Connect with the Android developer tools team on Google+.
Posted by Ellie Powers, Product Manager, Google Play
Today, Google Play is making it easier for you to manage beta tests and get your users to join them. Since we launched beta testing two years ago, developers have told us that it's become a critical part of their workflow in testing ideas, gathering rapid feedback, and improving their apps. In fact, we've found that 80 percent of developers with popular apps routinely run beta tests as part of their workflow.

Improvements to managing a beta test in the Developer Console

Currently, the Google Play Developer Console lets developers release early versions of their app to selected users as an alpha or beta test before pushing updates to full production. The selected user group downloads the app on Google Play as normal, but can't review or rate it on the store. This gives you time to address bugs and other issues without negatively impacting your app listing.
Based on your feedback, we're launching new features to more effectively manage your beta tests and enable users to join with one click.
Beta testing is one of the fast iteration features of Google Play and Android that help drive success for developers like Wooga, the creators of hit games Diamond Dash, Jelly Splash, and Agent Alice. Find out more about how Wooga iterates on Android first from Sebastian Kriese, Head of Partnerships, and Pal Tamas Feher, Head of Engineering.
Kabam is a global leader in AAA quality mobile games developed in partnership with Hollywood studios for franchises such as Fast & Furious, Marvel, Star Wars and The Hobbit. Beta testing helps Kabam engineers perfect the gameplay for Android devices before launch. "The ability to receive pointed feedback and rapidly reiterate via alpha/beta testing on Google Play has been extremely beneficial to our worldwide launches," said Kabam VP Rob Oshima.
Matt Small, Co-Founder of Vector Unit, recently told us how they've been using beta testing extensively to improve Beach Buggy Racing and uncover issues they may not have found otherwise. You can read Matt's blog post about beta testing on Google Play on Gamasutra to hear about their experience. We've picked a few of Matt's tips and shared them below:
We hope this update to beta testing makes it easier for you to test your app and gather valuable feedback, and that these tips help you conduct successful tests. Visit the Developer Console Help Center to find out more about setting up beta testing for your app.
Posted by Wojtek Kaliciński, Developer Advocate, Android
Auto Backup for Apps makes seamless app data backup and restore possible with zero lines of application code. This feature will be available on Android devices running the upcoming M release. All you need to do to enable it for your app is update the targetSdkVersion to 23. You can test it now on the M Developer Preview, where we've enabled Auto Backup for all apps regardless of targetSdkVersion.
Auto Backup for Apps is provided by Google to both users and developers at no charge. Even better, the backup data stored in Google Drive does not count against the user's quota. Please note that data transferred may still incur charges from the user's cellular / internet provider.
By default, for users that have opted in to backup, all of the data files of an app are automatically copied out to a user's Drive. That includes databases, shared preferences and other content in the application's private directory, up to a limit of 25 megabytes per app. Any data residing in the locations denoted by Context.getCacheDir(), Context.getCodeCacheDir() and Context.getNoBackupFilesDir() is excluded from backup. As for files on external storage, only those in Context.getExternalFilesDir() are backed up.

How to control what is backed up
You can customize what app data is available for backup by creating a backup configuration file in the res/xml folder and referencing it in your app's manifest:
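For example, a hypothetical rules file named my_backup_rules would be referenced like this:

```xml
<!-- AndroidManifest.xml; "my_backup_rules" is an illustrative resource name -->
<application
    android:fullBackupContent="@xml/my_backup_rules">
</application>
```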
In the configuration file, specify the <include/> or <exclude/> rules you need to fine-tune the behavior of the default backup agent. Please refer to the detailed explanation of the rules syntax available in the documentation.

What to exclude from backup
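A sketch of such a rules file (the domain names are the real ones the backup agent understands, but the paths here are made up for illustration):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- res/xml/my_backup_rules.xml; file paths are illustrative -->
<full-backup-content>
    <!-- Back up one specific shared-preferences file -->
    <include domain="sharedpref" path="settings.xml"/>
    <!-- Keep a regenerable cache database out of the backup -->
    <exclude domain="database" path="cached_search.db"/>
</full-backup-content>
```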
You may not want to have certain app data eligible for backup. For such data, please use one of the mechanisms above. For example:
With such a diverse landscape of apps, it's important that developers consider how to maximise the benefits to the user of automatic backups. The goal is to reduce the friction of setting up a new device, which in most cases means transferring over user preferences and locally saved content.
For example, if you have the user's account stored in shared preferences such that it can be restored on install, they won't even have to think about which account they used to sign in with previously - they can submit their password and get going!
If you support a variety of log-ins (Google Sign-In and other providers, username/password), it's simple to keep track of which log-in method was used previously so the user doesn't have to.

Transitioning from key/value backups
If you have previously implemented the legacy key/value backup by subclassing BackupAgent and setting it in your Manifest (android:backupAgent), you're just one step away from transitioning to full-data backups. Simply add the android:fullBackupOnly="true" attribute on <application/>. This is ignored on pre-M versions of Android, meaning onBackup/onRestore will still be called, while on M+ devices it lets the system know you wish to use full-data backups while still providing your own BackupAgent.
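In a manifest, that combination might look like this (the agent class name .MyBackupAgent is hypothetical):

```xml
<!-- AndroidManifest.xml: keep the existing key/value agent for pre-M devices,
     but opt in to full-data backups on M and later -->
<application
    android:backupAgent=".MyBackupAgent"
    android:fullBackupOnly="true">
</application>
```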
You can use the same approach even if you're not using key/value backups, but want to do any custom processing in onCreate(), onFullBackup() or be notified when a restore operation happens in onRestoreFinished(). Just remember to call super.onFullBackup() if you want to retain the system implementation of XML include/exclude rules handling.

What is the backup/restore lifecycle?

The data restore happens as part of the package installation, before the user has a chance to launch your app. Backup runs at most once a day, when your device is charging and connected to Wi-Fi. If your app exceeds the data limit (currently set at 25 MB), no more backups will take place and the last saved snapshot will be used for subsequent restores. Your app's process is killed after a full backup happens, and before a restore if you invoke one manually through the bmgr command (more about that below).

Test your apps now

Before you begin testing Auto Backup, make sure you have the latest M Developer Preview on your device or emulator. After you've installed your APK, use the adb shell command to access the bmgr tool.
Bmgr is a tool you can use to interact with the Backup Manager:
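A typical session on an M Preview device might look like this (com.example.app is a placeholder package name; these are standard bmgr subcommands, sketched from memory rather than the original post):

```shell
# Check that the backup manager is enabled
adb shell bmgr enabled

# Schedule and run any pending backup operations
adb shell bmgr run

# Trigger a full-data backup of a single app
adb shell bmgr fullbackup com.example.app

# Restore the app's data from the last saved snapshot
adb shell bmgr restore com.example.app
```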
If you forget to invoke bmgr run, you might see errors in Logcat when trying the fullbackup and restore commands. If you are still having problems, make sure you have Backup enabled and a Google account set up in system Settings -> Backup & reset.
Originally posted on the AdMob Blog.
What's the secret to rapid growth for your app?
Play Store or App Store optimization? A sophisticated paid advertising strategy? A viral social media campaign?
While all of these strategies could help you grow your user base, the foundation for rapid growth is much more basic and fundamental: you need an engaging app.
This handbook will walk you through practical ways to increase your app's user engagement to help you eventually transition to growth. You'll learn how to:
Download a free copy here.
Posted by Raj Ajrawat, Product Specialist, AdMob
Originally posted on the Google Developers blog.
Posted by Chandu Thota, Engineering Director and Matthew Kulick, Product Manager
Just like lighthouses have helped sailors navigate the world for thousands of years, electronic beacons can be used to provide precise location and contextual cues within apps to help you navigate the world. For instance, a beacon can label a bus stop so your phone knows to have your ticket ready, or a museum app can provide background on the exhibit you're standing in front of. Today, we're beginning to roll out a new set of features to help developers build apps using this technology. This includes a new open format for Bluetooth low energy (BLE) beacons to communicate with people's devices, a way for you to add this meaningful data to your apps and to Google services, as well as a way to manage your fleet of beacons efficiently.

Eddystone: an open BLE beacon format
Working closely with partners in the BLE beacon industry, we've learned a lot about the needs and the limitations of existing beacon technology. So we set out to build a new class of beacons that addresses real-life use-cases, cross-platform support, and security.
At the core of what it means to be a BLE beacon is the frame format (i.e., a language) that a beacon sends out into the world. Today, we're expanding the range of use cases for beacon technology by publishing a new and open format for BLE beacons that anyone can use: Eddystone. Eddystone is robust and extensible: It supports multiple frame types for different use cases, and it supports versioning to make introducing new functionality easier. It's cross-platform, capable of supporting Android, iOS or any platform that supports BLE beacons. And it's available on GitHub under the open-source Apache v2.0 license, for everyone to use and help improve.
By design, a beacon is meant to be discoverable by any nearby Bluetooth Smart device, via its identifier, which is a public signal. At the same time, privacy and security are really important, so we built in a feature called Ephemeral Identifiers (EIDs), which change frequently and allow only authorized clients to decode them. EIDs will enable you to securely do things like find your luggage once you get off the plane or find your lost keys. We'll publish the technical specs of this design soon.
Eddystone offers two key developer benefits: better semantic context and precise location. To support these, we're launching two new APIs. The Nearby API for Android and iOS makes it easier for apps to find and communicate with nearby devices and beacons, such as a specific bus stop or a particular art exhibit in a museum, providing better context. And the Proximity Beacon API lets developers associate semantic location (i.e., a place associated with a lat/long) and related data with beacons, stored in the cloud. This API will also be used in existing location APIs, such as the next version of the Places API.

Eddystone for beacon manufacturers: Single hardware for multiple platforms
Eddystone's extensible frame formats allow hardware manufacturers to support multiple mobile platforms and application scenarios with a single piece of hardware. An existing BLE beacon can be made Eddystone compliant with a simple firmware update. At the core, we built Eddystone as an open and extensible protocol that's also interoperable, so we'll also introduce an Eddystone certification process in the near future by closely working with hardware manufacturing partners. We already have a number of partners that have built Eddystone-compliant beacons.

Eddystone for businesses: Secure and manage your beacon fleet with ease
As businesses move from validating their beacon-assisted apps to deploying beacons at scale in places like stadiums and transit stations, hardware installation and maintenance can be challenging: which beacons are working, broken, missing or displaced? So starting today, beacons that implement Eddystone's telemetry frame (Eddystone-TLM) in combination with the Proximity Beacon API's diagnostic endpoint can help deployers monitor their beacons' battery health and displacement, common logistical challenges with low-cost beacon hardware.

Eddystone for Google products: New, improved user experiences
We're also starting to improve Google's own products and services with beacons. Google Maps launched beacon-based transit notifications in Portland earlier this year, to help people get faster access to real-time transit schedules for specific stations. And soon, Google Now will also be able to use this contextual information to help prioritize the most relevant cards, like showing you menu items when you're inside a restaurant.
We want to make beacons useful even when a mobile app is not available; to that end, the Physical Web project will be using Eddystone beacons that broadcast URLs to help people interact with their surroundings.
Beacons are an important way to deliver better experiences for users of your apps, whether you choose to use Eddystone with your own products and services or as part of a broader Google solution like the Places API or Nearby API. The ecosystem of app developers and beacon manufacturers is important in pushing these technologies forward and the best ideas won't come from just one company, so we encourage you to get some Eddystone-supported beacons today from our partners and begin building!
Originally posted on the Google Developers blog.
Posted by Akshay Kannan, Product Manager
Mobile phones have made it easy to communicate with anyone, whether they're right next to you or on the other side of the world. The great irony, however, is that those interactions can often feel really awkward when you're sitting right next to someone.
Today, it takes several steps -- whether it's exchanging contact information, scanning a QR code, or pairing via bluetooth -- to get a simple piece of information to someone right next to you. Ideally, you should be able to just turn to them and do so, the same way you do in the real world.
This is why we built Nearby. Nearby provides a proximity API, Nearby Messages, for iOS and Android devices to discover and communicate with each other, as well as with beacons.
Nearby uses a combination of Bluetooth, Wi-Fi, and inaudible sound (using the device's speaker and microphone) to establish proximity. We've incorporated Nearby technology into several products, including Chromecast Guest Mode, Nearby Players in Google Play Games, and Google Tone.
With the latest release of Google Play services 7.8, the Nearby Messages API becomes available to all developers across iOS and Android devices (Gingerbread and higher). Nearby doesn't use or require a Google Account. The first time an app calls Nearby, users get a permission dialog to grant that app access.
A few of our partners have built creative experiences to show what's possible with Nearby.
Edjing Pro uses Nearby to let DJs publish their tracklist to people around them. The audience can vote on tracks that they like, and their votes are updated in realtime.
Trello uses Nearby to simplify sharing. Share a Trello board to the people around you with a tap of a button.
Pocket Casts uses Nearby to let you find and compare podcasts with people around you. Open the Nearby tab in Pocket Casts to view a list of podcasts that people around you have, as well as podcasts that you have in common with others.
Trulia uses Nearby to simplify the house hunting process. Create a board and use Nearby to make it easy for the people around you to join it.
To learn more, visit developers.google.com/nearby.
I've posted a YouTube video that gives my perspective on #NoEstimates.
This is in the new Construx Brain Casts video series.
The architecture of COTS products comes fixed from the vendor. As standalone systems, this is not a problem; when integration starts, it is.
Here's a white paper from the past that addresses this critical enterprise IT issue
Inversion of Control from Glen Alleman
Warning: A quote I use in this article is quite graphic. That's the power of the writing, but if you are at all squirmy you may want to turn back now.
Debugging requires a particular sympathy for the machine. You must be able to run the machine and networks of machines in your mind while simulating what-ifs based on mere wisps of insight.
There's another process that is surprisingly similar to debugging: hunting down serial killers.
I ran across this parallel while reading Mindhunter: Inside the FBI's Elite Serial Crime Unit by John E. Douglas, an FBI profiler whose specialty is the dark debugging of twisted human minds.
Here's how John describes profiling:You have to be able to re-create the crime scene in your head. You need to know as much as you can about the victim so that you can imagine how she might have reacted. You have to be able to put yourself in her place as the attacker threatens her with a gun or a knife, a rock, his fists, or whatever. You have to be able to feel her fear as he approaches her. You have to be able to feel her pain as he rapes her or beats her or cuts her. You have to try to imagine what she was going through when he tortured her for his sexual gratification. You have to understand what it’s like to scream in terror and agony, realizing that it won’t help, that it won’t get him to stop. You have to know what it was like. And that is a heavy burden to have to carry.
Serial killers are like bugs in the societal machine. They hide. They blend in. They can pass for "normal" which makes them tough to find. They attack weakness causing untold damage until caught. And they will keep causing damage until caught. They are always hunting for opportunity.
After reading the book I'm quite grateful that the only bugs I've had to deal with are of the computer variety. The human bugs are very very scary.
Here are some other quotes from the book you may also appreciate:
In this episode, I'm finally going to answer a question about side projects. Full transcript:

John: Hey, John Sonmez from simpleprogrammer.com. I got a question. I've been getting asked this a lot, so I'm finally going to answer this. I'll probably do a blogpost at some point on this as well, but I always mention […]
The post How Do I Come Up With A Profitable Side Project Idea? appeared first on Simple Programmer.
I found another paper, presented at the Newspaper Systems Journal, around architecture in manufacturing and ERP.
One of the 12 Principles of Agile says: The best architectures, requirements, and designs emerge from self-organizing teams. This is a developer's point of view of architecture. The architect's point of view looks like this:
Architectured Centered Design from Glen Alleman
The lovely folks at Thoughtworks interviewed me for a blog post, Embracing the Zen of Program Management. I hope you like the information there.
If you want to know about agile and lean program management, see Agile and Lean Program Management: Scaling Collaboration Across the Organization. In beta now.
In 1992, I thought I was the best programmer in the world. In my defense, I had just graduated from college, this was pre-Internet, and I lived in Boulder, Colorado working in small business jobs where I was lucky to even hear about other programmers much less meet them.
I eventually fell in with a guy named Bill O'Neil, who hired me to do contract programming. He formed a company with the regrettably generic name of Computer Research & Technologies, and we proceeded to work on various gigs together, building line of business CRUD apps in Visual Basic or FoxPro running on Windows 3.1 (and sometimes DOS, though we had a sense by then that this new-fangled GUI thing was here to stay).
Bill was the first professional programmer I had ever worked with. Heck, for that matter, he was the first programmer I ever worked with. He'd spec out some work with me, I'd build it in Visual Basic, and then I'd hand it over to him for review. He'd then calmly proceed to utterly demolish my code:
One thing that surprised me was that the code itself was rarely the problem. He occasionally had some comments about the way I wrote or structured the code, but what I clearly had no idea about was testing my code.
I dreaded handing my work over to him for inspection. I slowly, painfully learned that the truly difficult part of coding is dealing with the thousands of ways things can go wrong with your application at any given time – most of them user related.
That was my first experience with the buddy system, and thanks to Bill, I came out of that relationship with a deep respect for software craftsmanship. I have no idea what Bill is up to these days, but I tip my hat to him, wherever he is. I didn't always enjoy it, but learning to develop discipline around testing (and breaking) my own stuff unquestionably made me a better programmer.
It's tempting to lay all this responsibility at the feet of the mythical QA engineer.
QA Engineer walks into a bar. Orders a beer. Orders 0 beers. Orders 999999999 beers. Orders a lizard. Orders -1 beers. Orders a sfdeljknesv.— Bill Sempf (@sempf) September 23, 2014
If you are ever lucky enough to work with one, you should have a very, very healthy fear of professional testers. They are terrifying. Just scan this "Did I remember to test" list and you'll be having the worst kind of flashbacks in no time. Did I mention that's the abbreviated version of his list?
I believe a key turning point in every professional programmer's working life is when you realize you are your own worst enemy, and the only way to mitigate that threat is to embrace it. Act like your own worst enemy. Break your UI. Break your code. Do terrible things to your software.
This means programmers need a good working knowledge of at least the common mistakes, the frequent cases that average programmers tend to miss, to work against. You are tester zero. This is your responsibility.
Let's start with Patrick McKenzie's classic Falsehoods Programmers Believe about Names:
I think you can see where this is going. This is programming. We do this stuff for fun, remember?
But in true made-for-TV fashion, wait, there's more! Seriously, guys, where are you going? Get back here. We have more awesome failure states to learn about:
At this point I wouldn't blame you if you decided to quit programming altogether. But I think it's better if we learn to do for each other what Bill did for me, twenty years ago — teach less experienced developers that a good programmer knows they have to do terrible things to their code. Do it because if you don't, I guarantee you other people will, and when they do, they will either walk away or create a support ticket. I'm not sure which is worse.
When writing Cypher queries I sometimes find myself wanting to remove consecutive duplicates in collections that I've joined together.
e.g. we might start with the following query where 1 and 7 appear consecutively:
RETURN [1,1,2,3,4,5,6,7,7,8] AS values

==> +-----------------------+
==> | values                |
==> +-----------------------+
==> | [1,1,2,3,4,5,6,7,7,8] |
==> +-----------------------+
==> 1 row
We want to end up with [1,2,3,4,5,6,7,8]. We can start by exploding our array and putting consecutive elements next to each other:
WITH [1,1,2,3,4,5,6,7,7,8] AS values
UNWIND RANGE(0, LENGTH(values) - 2) AS idx
RETURN idx, idx+1, values[idx], values[idx+1]

==> +-------------------------------------------+
==> | idx | idx+1 | values[idx] | values[idx+1] |
==> +-------------------------------------------+
==> | 0   | 1     | 1           | 1             |
==> | 1   | 2     | 1           | 2             |
==> | 2   | 3     | 2           | 3             |
==> | 3   | 4     | 3           | 4             |
==> | 4   | 5     | 4           | 5             |
==> | 5   | 6     | 5           | 6             |
==> | 6   | 7     | 6           | 7             |
==> | 7   | 8     | 7           | 7             |
==> | 8   | 9     | 7           | 8             |
==> +-------------------------------------------+
==> 9 rows
Next we can filter out rows which have the same values since that means they have consecutive duplicates:
WITH [1,1,2,3,4,5,6,7,7,8] AS values
UNWIND RANGE(0, LENGTH(values) - 2) AS idx
WITH values[idx] AS a, values[idx+1] AS b
WHERE a <> b
RETURN a, b

==> +-------+
==> | a | b |
==> +-------+
==> | 1 | 2 |
==> | 2 | 3 |
==> | 3 | 4 |
==> | 4 | 5 |
==> | 5 | 6 |
==> | 6 | 7 |
==> | 7 | 8 |
==> +-------+
==> 7 rows
Now we need to join the collection back together again. Most of the values we want are in field 'b' but we also need to grab the first value from field 'a':
WITH [1,1,2,3,4,5,6,7,7,8] AS values
UNWIND RANGE(0, LENGTH(values) - 2) AS idx
WITH values[idx] AS a, values[idx+1] AS b
WHERE a <> b
RETURN COLLECT(a)[0..1] + COLLECT(b) AS noDuplicates

==> +-------------------+
==> | noDuplicates      |
==> +-------------------+
==> | [1,2,3,4,5,6,7,8] |
==> +-------------------+
==> 1 row
What about if we have more than 2 duplicates in a row?
WITH [1,1,1,2,3,4,5,5,6,7,7,8] AS values
UNWIND RANGE(0, LENGTH(values) - 2) AS idx
WITH values[idx] AS a, values[idx+1] AS b
WHERE a <> b
RETURN COLLECT(a)[0..1] + COLLECT(b) AS noDuplicates

==> +-------------------+
==> | noDuplicates      |
==> +-------------------+
==> | [1,2,3,4,5,6,7,8] |
==> +-------------------+
==> 1 row
Still happy, good times! Of course if we have a non-consecutive duplicate that wouldn't be removed:
WITH [1,1,1,2,3,4,5,5,6,7,7,8,1] AS values
UNWIND RANGE(0, LENGTH(values) - 2) AS idx
WITH values[idx] AS a, values[idx+1] AS b
WHERE a <> b
RETURN COLLECT(a)[0..1] + COLLECT(b) AS noDuplicates

==> +---------------------+
==> | noDuplicates        |
==> +---------------------+
==> | [1,2,3,4,5,6,7,8,1] |
==> +---------------------+
==> 1 row
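As an aside not taken from the original post: the same consecutive-duplicate removal can probably be done in a single pass with REDUCE, appending a value only when it differs from the last element accumulated so far. An untested sketch:

```cypher
WITH [1,1,1,2,3,4,5,5,6,7,7,8,1] AS values
RETURN REDUCE(acc = [HEAD(values)], v IN TAIL(values) |
       CASE WHEN v = LAST(acc) THEN acc ELSE acc + v END) AS noDuplicates
```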
"A moment's insight is sometimes worth a life's experience." -- Oliver Wendell Holmes, Sr.
Some say we're in the Age of Insight. Others say insight is the new currency in the Digital Economy.
And still others say that insight is the backbone of innovation.
Either way, we use "insight" an awful lot without talking about what insight actually is.
So, what is insight?
I thought it was time to finally do a deeper dive on what insight actually is. Here is my elaboration of "insight" on Sources of Insight:
You can think of it as "insight explained."
The simple way that I think of insight, or those "ah ha" moments, is by remembering a question Ward Cunningham uses a lot:
"What did you learn that you didn't expect?" or "What surprised you?"
Ward uses these questions to reveal insights, rather than have somebody tell him a bunch of obvious or uneventful things he already knows. For example, if you ask somebody what they learned at their presentation training, they'll tell you that they learned how to present more effectively, speak more confidently, and communicate their ideas better.
But if you instead ask them, "What did you learn that you didn't expect?" they might actually reveal some insight and say something more like this:
"Even though we say don't shoot the messenger all the time, you ARE the message."
"If you win the heart, the mind follows."
It's the non-obvious stuff that surprises you (at least at first). Or sometimes, insight strikes us as something that should have been obvious all along and becomes the new obvious, or the new normal.
Ward used this insights gathering technique to more effectively share software patterns. He wanted stories and insights from people, rather than descriptions of the obvious.
Iâve used it myself over the years and it really helps get to deeper truths. If you are a truth seeker or a lover of insights, youâll enjoy how you can tease out more insights, just by changing your questions. For example, if you have kids, donât ask, âHow was your day?â Ask them, âWhat was the favorite part of your day?â or âWhat did you learn that surprised you?â
Wow, I know this is a short post, but I almost left without defining insight.
According to the dictionary, insight is "the capacity to gain an accurate and deep intuitive understanding of a person or thing." Or you may see insight explained as inner sight, mental vision, or wisdom.
I like Edward de Bono's simple description of insight as "Eureka moments."
Some people count steps in their day. I count my "ah-ha" moments. After all, the most important ingredient of effective ideation and innovation is... yep, you guessed it: insight!
For a deeper dive on the power of insight, read my page on Insight explained, on Sources Of Insight.com.
Most problems are quite straightforward to solve: when something is slow, you can either optimize it or parallelize it. When you hit a throughput barrier, you partition the workload across more workers. But when you face problems that involve garbage-collection pauses, or simply hit the limits of the virtual machine you're working with, they get much harder to fix.
When you're working on top of a VM, you may face things that are simply out of your control, namely time drifts and latency. Thankfully, there are enough battle-tested solutions; they just require a bit of understanding of how the JVM works.
If you can serve 10K requests per second while conforming to certain performance constraints (memory and CPU), it doesn't automatically mean that you'll be able to scale linearly up to 20K. If you allocate too many objects on the heap, or waste CPU cycles on work that can be avoided, you'll eventually hit a wall.
The simplest (yet underrated) way of saving on memory allocations is object pooling. Even though the concept sounds similar to pooling connections and socket descriptors, there's a slight difference.
When we're talking about socket descriptors, we have a limited, rather small (tens, hundreds, or at most thousands) number of descriptors to go through. These resources are pooled because of their high initialization cost (establishing a connection, performing a handshake over the network, memory-mapping a file, and so on). In this article we'll talk about pooling larger numbers of short-lived objects which are not so expensive to initialize, in order to save allocation and deallocation costs and avoid memory fragmentation.
Ryan Ripley "highly recommends" Predicting the Unpredictable: Pragmatic Approaches to Estimating Cost or Schedule. See his post: Pragmatic Agile Estimation: Predicting the Unpredictable.
He says this:
This is a practical book about the work of creating software and providing estimates when needed. Her estimation troubleshooting guide highlights many of the hidden issues with estimating, such as multitasking, student syndrome, using the wrong units to estimate, and trying to estimate things that are too big. -- Ryan Ripley
Thank you, Ryan!
See Predicting the Unpredictable: Pragmatic Approaches to Estimating Cost or Schedule for more information.
I was sorting through a desk drawer and came across a collection of papers from book chapters and journals written in the early 2000s, when I was the architect of an early newspaper editorial system.
Here's one on Risk Management
Information Technology Risk Management from Glen Alleman. This work was done early in the risk management development process. Tim Lister's quote came later: "Risk management is how adults manage projects."
The Definition of Done (DoD) is an important technique for increasing the operational effectiveness of team-level Agile. The DoD provides a team with a set of criteria that they can use to plan and bound their work. As Agile is scaled up to deliver larger, more integrated solutions, the question that is often asked is whether the concept of the DoD can still be applied. And if it is applied, does that application require another layer of done (more complexity)?
The answer to the first question is simple and straightforward. If the question is whether the Definition of Done technique can be used as Agile projects are scaled, then the answer is an unequivocal "yes". In preparation for this essay I surveyed a few dozen practitioners and coaches on the topic to ensure that my use of the technique at scale wasn't extraordinary. To a person, they all used the technique in some form. Mario Lucero, an Agile coach in Chile (interviewed on SPaMCAST 334), said it succinctly: "No, the use of Definition of Done doesn't depend on how large the project is."
While everyone agreed that the DoD makes sense in a scaled Agile environment, there is far less consensus on how to apply the technique. The divergence of opinion and practice centered on whether or not the teams working together continually integrated their code as part of their build-management process. There are two different camps. The first camp typically finds itself in organizations that integrate functions as a final step in a sprint, perform integration as a separate function outside of development, or use a separate hardening sprint. This camp generally feels that applying the Definition of Done requires a separate DoD specifically for integration. This DoD would include requirements for integrating functions, testing integration, and architectural requirements that span teams. The second camp of respondents finds itself in environments where continuous integration is performed. In this scenario each respondent either added integration criteria to the team DoD or did nothing at all. The primary difference boiled down to whether the team members were responsible for making sure their code integrated with the overall system or whether someone else (real or perceived) was responsible.
In practice the way that the DoD is applied includes a bit of the infamous "it depends" magic. During our discussion on the topic, Luc Bourgault from Wolters Kluwer stated, "in a perfect world the definition should be the same, but I think we should accept differences when it makes sense." Pradeep Chennavajhula, Senior Global VP at QAI, made three points:
The Definition of Done is useful for all Agile work, whether for a single team or a large scaled effort. However, how you have organized your Agile effort will have a significant impact on your approach.