Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!
Software Development Blogs: Programming, Software Testing, Agile Project Management
Originally posted on Google Apps Developers Blog
Posted by Vartika Agarwal, Technical Program Manager, Identity & Authentication, and Wesley Chun, Developer Advocate, Google
As we indicated several years ago, we are moving away from the OAuth 1.0 protocol in order to focus our support on the current OAuth standard, OAuth 2.0, which increases security and reduces complexity for developers. OAuth 1.0 (3LO)1 was shut down on April 20, 2015. During this final phase, we will be shutting down OAuth 1.0 (2LO) on October 20, 2016. The easiest way to migrate to the new standard is to use OAuth 2.0 service accounts with domain-wide delegation.
If the migration for applications using these deprecated protocols is not completed before the deadline, those applications will experience an outage in their ability to connect with Google, possibly including the ability to sign in, until the migration to a supported protocol occurs. To avoid any interruptions in service for your end users, it is critical that you work to migrate your application(s) prior to the shutdown date.
With this step, we continue to move away from legacy authentication/authorization protocols, focusing our support on modern open standards that enhance the security of Google accounts and that are generally easier for developers to integrate with. If you have any technical questions about migrating your application, please post them to Stack Overflow under the tag google-oauth.
1 3LO stands for 3-legged OAuth: there's an end-user that provides consent. In contrast, 2-legged OAuth (2LO) doesn't involve an end-user and corresponds to enterprise authorization scenarios, such as enforcing organization-wide access policies.
While productivity is a simple calculation, there are a few mistakes organizations tend to make. The five most common mistakes reduce the usefulness of measuring productivity or, worse, can cause organizations to make poor decisions based on bad numbers. The five most common usage and calculation mistakes are:
1. Calculating productivity using inputs as outputs. Productivity is the ratio of output created per unit of input. For example, if a factory created widgets, then labor productivity would be calculated as widgets per hour. A common software productivity metric is function points per person. Inverting the equation would yield a metric of people per function point, which makes very little sense. Solution: Repeat after me, productivity is output divided by input (a bit of snark). Other metrics use an output as a driver to predict usage of resources. For example, we calculate delivery rate (which answers how fast the process delivers) by dividing calendar time by output.
2. Using aggregate or average productivity for concrete planning. Productivity averages are often a useful tool for portfolio planning, which occurs when organizations know few of the details about a piece of work. Solution: In software development and maintenance, never use an organization-level productivity number to set specific goals for a project, sprint or feature.
3. Labor productivity is a loose proxy measure for some types of change. Not all process improvements have a direct impact on labor productivity. For example, Total Factor Productivity (TFP) would be a better measure to assess the impact of the adoption of Scrum. While we may see the echoes of the changes in labor or capital productivity, we would be ascribing the impact to the wrong factors, which could lead us to try other changes that may not have positive impacts. Solution: Most software development and maintenance organizations will not spend the time and effort needed to measure TFP. Continue using labor or capital productivity to evaluate changes, but in addition evaluate how directly the productivity measures the change. Understanding how closely the proxy tracks the changes will help the analyst judge the change or alert him or her to search for other impacts.
4. Productivity may only be loosely tied to delivered value. Productivity is a measure of raw, non-defective output, not of whether that output is useful or sellable. If a software team delivers a product that does not hit the mark or is not adopted in the marketplace, they may have been highly efficient even though their output is less valuable than anticipated. Solution: Measure both value and productivity to provide a complete view of performance.
5. Comparing productivity across teams undervalues technical complexity. One piece of software is often significantly more or less complex than another. Technical complexity often varies not only between applications but within sections of code inside applications. The more complex the code or business problem, the lower the productivity (complexity increases the amount of effort needed to create an output). Solution: Each application should evaluate and determine its own productivity based on measuring delivered results. When teams use productivity as a planning tool, they should tune the anticipated performance based on the predicted level of technical and business complexity.
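As a quick illustration of the ratio in mistake 1, here is a minimal Python sketch; the function point and effort figures are invented for illustration:

```python
# Productivity is output divided by input -- never the other way around.
function_points_delivered = 120   # output
person_months_of_effort = 30      # input
calendar_months_elapsed = 6

# Labor productivity: output per unit of input.
productivity = function_points_delivered / person_months_of_effort
print(productivity)  # function points per person-month

# Delivery rate, as described above, predicts resource usage from output:
# calendar time divided by output.
delivery_rate = calendar_months_elapsed / function_points_delivered
print(delivery_rate)  # calendar months per function point
```

Inverting either ratio silently changes what question the number answers, which is exactly the mistake described above.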
Faster, better, cheaper has been the mantra of many a CFO and CIO. Understanding and improving productivity is one of the tools available to improve performance. In order to effectively use productivity as a tool, we need to make sure we calculate it correctly and understand that the metric provides a single point of view. Productivity is a measure of how effectively an organization transforms inputs into outputs; no more, no less. Productivity metrics provide the most value when coupled with other metrics such as value, speed, and complexity to generate a holistic view of the value delivery chain.
Posted by Mike Pegg, Google Developers Team
What are the best ways to optimize battery and memory usage of your apps? How do you create a great app experience that is accessible to everyone, including users with disabilities? How do you build an offline-ready, service-working, app-manifesting, production-ready Progressive Web App using Firebase Hosting? And what are some of the best desserts that start with N? Tune in to Google I/O to get the answers to all of these questions (well, most of them...), along with a whole lot more. You can start planning your schedule now, as the first wave of 100 technical talks just went live at google.com/io!
Last year, you told us you wanted more: more technical content, more time, more space, more everything! We heard your feedback loud and clear and have added a full third day onto Google I/O to accommodate more comprehensive talks in larger spaces than in previous years. These talks will be spread over 14 suggested tracks, including Android, the Mobile Web, Play and more, to help you easily navigate your I/O experience. Of course, we're also bringing back Codelabs, our self-paced workshops with Googlers nearby to give you a hand.
There are already over 200 I/O Extended events happening around the world. Join one of these events to participate in I/O from your local neighborhood alongside local developers who share the same passion for Google technology. You can also follow the festival from home; we'll have four different live stream channels to choose from, broadcasting many of the sessions in real time from Shoreline. All of the sessions will be available to watch on YouTube after I/O concludes, in case you miss one.
See you soon!
This is just the first wave of talks. We'll be adding more talks and events as we get closer to I/O, including a number of talks directly after the keynote (shhhh!! We've got some new things to share). We look forward to seeing you in a few weeks -- whether it be in person at Shoreline, at an I/O Extended event, or on I/O Live!
Today I received a very interesting question from a reader: why does programming suck? While it may seem a little controversial to have a programmer talking about why programming sucks, well… it does suck sometimes. One of the reasons why programming sucks is the technology. Technology changes at a […]
Posted by Wayne Piekarski, Developer Advocate, Android Wear
If you're an Android Wear developer, there is a change you might need to make to your app to improve the performance of your users' devices. If your app is using BIND_LISTENER intent filters in your manifest, it is important that you are aware that this API has been deprecated on all Android versions. The new replacement API introduced in Google Play Services 8.2 is more efficient for Android devices, so developers are encouraged to migrate to it as soon as possible to ensure the best user experience.

Limitations of BIND_LISTENER API
When Android Wear introduced the WearableListenerService, it allowed you to listen to changes via the BIND_LISTENER intent filter in the AndroidManifest.xml. These changes included data item changes, message arrivals, capability changes, and peer connects/disconnects.
The WearableListenerService starts whenever any of these events occur, even if the app is only interested in one type. When a phone has a large number of apps using WearableListenerService and BIND_LISTENER, a watch appearing or disappearing can cause many services to start up. This applies memory pressure to the device, causing other activities and services to be shut down, and generates unnecessary work.

Fine-grained intent filter API
In Google Play Services 8.2, we introduced a new fine-grained intent filter mechanism that allows developers to specify exactly what events they are interested in. For example, if you have multiple listener services, use a path prefix to filter only those data items and messages meant for the service, with syntax like this:
<service android:name=".FirstExampleService">
    <intent-filter>
        <action android:name="com.google.android.gms.wearable.DATA_CHANGED" />
        <action android:name="com.google.android.gms.wearable.MESSAGE_RECEIVED" />
        <data android:scheme="wear" android:host="*" android:pathPrefix="/FirstExample" />
    </intent-filter>
</service>
There are intent filters for DATA_CHANGED, MESSAGE_RECEIVED, CHANNEL_EVENT, and CAPABILITY_CHANGED. You can specify multiple <action> elements, and if any of them match, your service will be called and everything else will be filtered out. If you do not include an <action> element, all events will be filtered out and your service will never be called, so make sure to include at least one. You should be aware that registering in an AndroidManifest.xml for CAPABILITY_CHANGED will cause your service to be called any time a device advertising this capability appears or disappears, so you should use this only if there is a compelling reason.

Live listeners
If you only need these events when an Activity or Service is running, then there is no need to register a listener in AndroidManifest.xml at all. Instead, you can use addListener() live listeners, which will only be active when the Activity or Service is running, and will not impact the device otherwise. This is particularly useful if you want to do live status updates for capabilities being available in an Activity, but with no further background impact. In general, you should try to use addListener(), and only use AndroidManifest.xml when you need to receive events all the time.

Best practices
In general, you should only use a listener in AndroidManifest.xml for events that must launch your service -- for example, if your watch app needs to send an interactive message or data to the phone.
You should try to limit the number of wake-ups of your service by using filters. If most of the events do not need to launch your app, then use a path and a filter that only matches the event you need. This is critical to limit the number of launches of your service.
If you have other cases where you do not need to launch a service, such as listening for status updates in an Activity, then register a live listener only for the duration it is needed.
There is more information available about Data Layer events, the use of WearableListenerService, and the associated tags in the manifest. Android Studio has a guide with a summary of how to convert to the new API. The Android Wear samples also show best practices in the use of WearableListenerService, such as DataLayer and XYZTouristAttractions. The changes needed are very small, and can be seen in a git diff from one of the samples.

Removal of BIND_LISTENER
With the release of Android Studio 2.1, lint rules have been added that flag the use of BIND_LISTENER as a fatal error, and developers will need to make a small change to the AndroidManifest.xml to declare accurate intent filters. If you are still using BIND_LISTENER, you will receive the following error:
AndroidManifest.xml:11: Error: The com.google.android.gms.wearable.BIND_LISTENER action is deprecated. [WearableBindListener]
    <action android:name="com.google.android.gms.wearable.BIND_LISTENER" />
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This will only impact developers who are recompiling their apps with Android Studio 2.1 and will not affect existing apps on users' devices.
For developers who are using Google Play Services earlier than 8.2, the lint rules will not generate an error, but you should update to a newer version and implement more accurate intent filters as soon as possible.
In order to give users the best experience, we plan to disable BIND_LISTENER in the near future. It is therefore important that developers take action now, to avoid any future disruption experienced by users of their apps.
Today Twitter is creating and persisting 3,000 images (200 GB) per second. Even better, in 2015 Twitter was able to save $6 million due to improved media storage policies.
It was not always so. Twitter in 2012 was primarily text based. A Hogwarts without all the cool moving pictures hanging on the wall. It's now 2016 and Twitter has moved into a media-rich future. Twitter has made the transition through the development of a new Media Platform capable of supporting photos with previews, multi-photos, GIFs, Vines, and inline video.
Henna Kermani, a Software Development Engineer at Twitter, tells the story of the Media Platform in an interesting talk she gave at Mobile @Scale London: 3,000 images per second. The talk focuses primarily on the image pipeline, but she says most of the details also apply to the other forms of media as well.
Some of the most interesting lessons from the talk:
Doing the simplest thing that can possibly work can really screw you. The simple method of uploading a tweet with an image as an all or nothing operation was a form of lock-in. It didn’t scale well, especially on poor networks, which made it difficult for Twitter to add new features.
Decouple. By decoupling media upload from tweeting, Twitter was able to independently optimize each pathway and gain a lot of operational flexibility.
Move handles not blobs. Don’t move big chunks of data through your system. It eats bandwidth and causes performance problems for every service that has to touch the data. Instead, store the data and refer to it with a handle.
Moving to segmented resumable uploads resulted in big decreases in media upload failure rates.
Experiment and research. Twitter found through research that a 20 day TTL (time to live) on image variants (thumbnails, small, large, etc) was a sweet spot, a good balance between storage and computation. Images had a low probability of being accessed after 20 days so they could be deleted, which saves nearly 4TB of data storage per day, almost halves the number of compute servers needed, and saves millions of dollars a year.
On demand. Old image variants could be deleted because they could be recreated on the fly rather than precomputed. Performing services on demand increases flexibility, lets you be a lot smarter about how tasks are performed, and gives a central point of control.
Progressive JPEG is a real winner as a standard image format. It has great frontend and backend support and performs very well on slower networks.
Lots of good things happened on Twitter's journey to a media-rich future; let's learn how they did it...

The Old Way - Twitter in 2012
"Most developers I know are actually pretty bad testers." This was the feedback from one tester in a recent short survey. The survey also verified that more developers are taking part in testing tasks, as reported in 37% of the organizations. The survey included testers from organizations worldwide and was focused on testing efforts in […]
The post Why Developers Are Poor Testers and What Can Be Done About It appeared first on Simple Programmer.
Productivity is used to evaluate how efficiently an organization converts inputs into outputs. However, productivity measures can be, and often are, misapplied for a variety of reasons, ranging from simple misunderstanding to gaming the system. Many misapplications of productivity measurement cause organizational behavior problems, both from leaders and employees. Five of the most common productivity-related behavioral problems are:
The last two behavioral issues are often the most common and can occur even when organizations don't explicitly measure productivity. Every organization, whether it explicitly measures productivity or not, wants software development to deliver more functionality and cost less. Organizations that don't take a systems-thinking view can actually increase cost and reduce real productivity, hurting the long-term efficiency of the organization, when they are trying to have the opposite impact.
Posted by Jeff Nusz, Data Arts Team, Pixel Painter
Two weeks ago, we introduced Tilt Brush, a new app that enables artists to use virtual reality to paint the 3D space around them. Part virtual reality, part physical reality, it can be difficult to describe how it feels without trying it firsthand. Today, we bring you a little closer to the experience of painting with Tilt Brush using the powers of the web in a new Chrome Experiment titled Virtual Art Sessions.
Virtual Art Sessions lets you observe six world-renowned artists as they develop blank canvases into beautiful works of art using Tilt Brush. Each session can be explored from start to finish from any angle, including the artist's perspective -- all viewable right from the browser.
Participating artists include illustrator Christoph Niemann, fashion illustrator Katie Rodgers, sculptor Andrea Blasich, installation artist Seung Yul Oh, automotive concept designer Harald Belker, and street artist duo Sheryo & Yok. The artists' unique approaches to this new medium become apparent when seeing them work inside their Tilt Brush creations. Watch this behind-the-scenes video to hear what the artists had to say about their experience:
We hope this experiment provides a window into the world of painting in virtual reality using Tilt Brush. We are excited by this new medium and hope the experience leaves you feeling the same. Visit g.co/VirtualArtSessions to start exploring.
Many teams have at least a moderate ability to plan and control their time. They're able to say, "We will work on these things over the coming sprint," and have a somewhat reasonable expectation of that being the case.
And that's the type of team we encounter in much of the Scrum literature--the literature that says to plan a sprint and keep change out.
But what should teams do when change cannot be kept out of a sprint?
In this post, I want to address this topic for two different types of teams:
Many teams will benefit from including a moderate amount of safety into each sprint. Basically, these teams should not assume they can keep all changes out of the sprint. For example, a team might want to leave room when planning a sprint for things like:
There could be many other similar examples. Consider your own environment. You want to try to set a high threshold for what constitutes a worthy interruption to a sprint. Teams really do best when they have large blocks of dedicated time that will not be interrupted.
To accommodate work like this, all a team needs to do is leave a bit of buffer when planning the sprint. Let's see how that works.

The Three Things That Must Go into Each Sprint
I think of a sprint as containing three things: corporate overhead, and plannable and unplanned time. I think of this graphically as shown in Figure 1.
Corporate overhead is that time that goes towards things like all-company meetings, answering emails from your past project, attending the HR sensitivity training and so on. Some of these activities may be necessary, but in many organizations, they consume a lot of time.
I put Scrum meetings (planning, daily scrum, etc.) in the corporate overhead category as well.
Plannable time is the second thing that goes into a sprint. This is the time that belongs collectively to the team.
But the team does not want to fill the entire remainder of the sprint with plannable time. The team needs to acknowledge the need to leave room for some amount of unplanned time.
Unplanned time goes towards three things:
I’m frequently asked what percentages teams should use for each of the three categories. I can’t answer that. But I can tell you how to figure it out:
After each sprint, consider how well the unplanned time the team allocated matched the unplanned time the team needed for the sprint. And then adjust up or down a bit for the next sprint. This is not something a team can ever get perfect.
Instead, it’s a game of averages. The team needs to save the right amount of time for unplanned tasks on average. Then some sprints will have more unplanned tasks occur and some sprints will have fewer.
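One way to run that game of averages is to treat the buffer as a running estimate and nudge it toward each sprint's actual unplanned hours. The 0.5 smoothing weight and the hour figures below are invented for illustration; any gentle smoothing toward the recent average works:

```python
# Adjust the unplanned-time buffer a bit after each sprint, moving it
# toward the unplanned time the team actually needed.
def adjust_buffer(planned_buffer, actual_unplanned, weight=0.5):
    return planned_buffer + weight * (actual_unplanned - planned_buffer)

buffer_hours = 10.0
for needed in [16, 14, 6, 12]:   # unplanned hours observed each sprint
    buffer_hours = adjust_buffer(buffer_hours, needed)
print(round(buffer_hours, 1))    # settles near the running average
```

Individual sprints will still miss high or low, which is exactly the point: the buffer only needs to be right on average.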
When fewer occur, the team should get ahead on their work, so they're prepared for when more unplanned tasks occur.

What to Do When a Team Is Highly Interrupt-Driven
The preceding advice works well for the majority of agile teams--those that are only interrupted a moderate amount. Some teams, however, are highly interrupt-driven.
Again, I want to resist putting actual percentages on the regions in Figure 1, but I’m describing a situation in which the area of “unplanned time” becomes much larger than shown.
I actually want to talk about the cases in which unplanned time becomes the dominant of the three areas. Such teams are highly interrupt-driven.
These teams still want to include space in their sprints for unplanned time. But there are usually a few other things you may want to consider if you are on a highly interrupt-driven team.
First, you may want to adjust your sprint length. One option is to go with a long sprint length. Increasing sprint length has the benefit of making the rate of interruption more predictable because the variance will not be so great from sprint to sprint.
To see how that works, imagine you chose a one-year sprint. (Don’t do that!) It’s easy to imagine with such long sprints, the fluctuations a team faces with short sprints will wash out. Sure, this year (this sprint) might have more interruptions than last year (last sprint), but it’s such a long period that the team has time to recover from any excessive fluctuations.
The other option is to go with short, one-week sprints and just live with the unpredictability. The team will be less able to assure bosses “we will be done with this” by a given period, but I find that to be a worthwhile tradeoff.
Second, a highly interrupt-driven team should make sprint planning a very lightweight activity.
Sprint planning should be a quick effort to grab a few things the team thinks it can do in the coming week, and that’s that. It should be a very minimal effort--15 or 30 minutes for many teams.
To illustrate this, think about planning a party, and imagine a spectrum with planning a wedding reception on one end. That’s some serious party planning. At the other end of the spectrum is inviting some friends over tonight to watch the big game on TV. To plan that, I’m going to check the fridge for beer and order a pizza. That’s a different level of party planning.
Sprint planning for a highly interrupt-driven team should be much more like the latter--quick, easy and just enough to be successful.

What Do You Do?
How do you handle interruptions on your agile team? Please share your thoughts in the comments below.
I recently found myself writing an R script to extract parts of a string based on a beginning and end index which is reasonably easy using the substr function:
> substr("mark loves graphs", 0, 4)
[1] "mark"
But what if we have a vector of start and end positions?
> substr("mark loves graphs", c(0, 6), c(4, 10))
[1] "mark"
Hmmm, that didn't work as I expected! It turns out we actually need to use the substring function instead, which wasn't initially obvious to me on reading the documentation:
> substring("mark loves graphs", c(0, 6, 12), c(4, 10, 17))
[1] "mark"   "loves"  "graphs"
Easy when you know how!
Validated learning: learning by executing an initial idea and then measuring the results. This way of experimenting is the primary philosophy behind Lean Startup and much of the Agile mindset as it is applied today.

In agile organizations you have to experiment in order to meet changing market needs. A good experiment can be incredibly valuable, provided it is executed well. And right here sits a very common problem: the experiment is not properly completed. A trial is run, but often without a sound hypothesis behind it, and the lessons are hardly, if at all, carried forward. I have noticed that, to build a greater capacity to learn in the organization, it helps to stick to a fixed structure for experiments.

There are many structures that work well. Personally I am very fond of the Toyota (or Kanban) Kata, but the "ordinary" Plan-Do-Check-Act also works very well. That structure is explained below with a simple example:

Which problem are you going to solve? And how do you want to do that?

If the whole team dials in from home for the stand-up, we will be no less effective than when everyone is present, and we will be better able to handle work-from-home days.

What outcome do you expect? What will you see?

No drop in productivity, and a higher score on team happiness because people can work from home.

How will you test whether you can solve the problem? Is this experiment also safe to fail?

For the next six weeks everyone dials in from home for the stand-up. We score productivity and happiness in the retrospective. After that, we do the stand-up together at the office for six weeks.

Collect as much data as possible during your experiment. What do you see happening?

Setting up the call takes a long time (10-15 minutes). It is hard to let everyone have their say. When dialing in, we cannot use the regular board because nobody can move the post-its.

A well designed experiment is as likely to fail as it is to succeed - freely after Don Reinertsen

This is probably not the best experiment that could be formulated. But that is not the point. What matters is that learning arises from the differences between the prediction and the observations. It is therefore important to carry out both of these steps and to consciously reflect on the learning process that follows. Based on your observations, you can formulate a new experiment for further improvements.

How do you run your experiments? I am curious to hear what works well in your organization.
How can we transfer Salesforce data to Hadoop? It is a big challenge for everyday users. What are the different features of data transfer tools?
From Estimating Software Costs: Bringing Realism To Estimating, 2nd Edition.
Agile addresses 1, 2, 3, 4, 5, and 6 well.
So if these are the causes of project difficulties - and there may be others since this publication - what are the fixes?
There's a stigma in our society about quitting that causes us to cling to projects long after they should be let go. Quitters never win. Quitting lasts forever. Champions never quit. You're never a loser till you quit trying. No one wants to lose. No one wants to be a loser. Well, at least […]
The post 6 Red Flags That You Need To Start Cutting Your Losses appeared first on Simple Programmer.