Software Development Blogs: Programming, Software Testing, Agile, Project Management

Methods & Tools

Subscribe to Methods & Tools if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Saying goodbye to OAuth 1.0 (2LO)

Google Code Blog - Fri, 04/22/2016 - 17:00

Originally posted on Google Apps Developers Blog

Posted by Vartika Agarwal, Technical Program Manager, Identity & Authentication, and Wesley Chun, Developer Advocate, Google

As we indicated several years ago, we are moving away from the OAuth 1.0 protocol in order to focus our support on the current OAuth standard, OAuth 2.0, which increases security and reduces complexity for developers. OAuth 1.0 (3LO)1 was shut down on April 20, 2015. During this final phase, we will be shutting down OAuth 1.0 (2LO) on October 20, 2016. The easiest way to migrate to the new standard is to use OAuth 2.0 service accounts with domain-wide delegation.
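For teams planning that migration, here is a minimal, hypothetical sketch of the recommended target: a server-to-server call authorized with an OAuth 2.0 service account using domain-wide delegation. It uses recent versions of the Node.js google-auth-library client, written in TypeScript; the key file name, scope, impersonated user, and API URL are placeholders, not values from this announcement.

 import { JWT } from 'google-auth-library';
 import { readFileSync } from 'fs';

 // Hypothetical service-account key file downloaded from the Google API Console.
 const key = JSON.parse(readFileSync('service-account-key.json', 'utf8'));

 // A JWT client authorizes as the service account and, via domain-wide
 // delegation, acts on behalf of a user in the domain (the "subject").
 const client = new JWT({
   email: key.client_email,
   key: key.private_key,
   scopes: ['https://www.googleapis.com/auth/admin.directory.user.readonly'], // placeholder scope
   subject: 'admin@example.com',                                              // placeholder user
 });

 async function listUsers(): Promise<void> {
   // The library fetches an OAuth 2.0 access token and attaches it to the request.
   const res = await client.request({
     url: 'https://www.googleapis.com/admin/directory/v1/users?domain=example.com',
   });
   console.log(res.data);
 }

 listUsers().catch(console.error);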

If the migration for applications using these deprecated protocols is not completed before the deadline, those applications will experience an outage in their ability to connect with Google, possibly including the ability to sign-in, until the migration to a supported protocol occurs. To avoid any interruptions in service for your end-users, it is critical that you work to migrate your application(s) prior to the shutdown date.

With this step, we continue to move away from legacy authentication/authorization protocols, focusing our support on modern open standards that enhance the security of Google accounts and that are generally easier for developers to integrate with. If you have any technical questions about migrating your application, please post them to Stack Overflow under the tag google-oauth.

1 3LO stands for 3-legged OAuth: there's an end-user that provides consent. In contrast, 2-legged (2LO) doesn’t involve an end-user and corresponds to enterprise authorization scenarios such as enforcing organization-wide policy control access.

Categories: Programming

Stuff The Internet Says On Scalability For April 22nd, 2016

Hey, it's HighScalability time:


A perfect 10. Really stuck that landing. Nadia Comaneci approves.

 

If you like this sort of Stuff then please consider offering your support on Patreon.
  • $1B: Supercell’s Clash Royale projected annual haul; 3x: Messenger and WhatsApp send more messages than SMS; 20%: of big companies pay zero corporate taxes; Tens of TBs of RAM: Netflix's Container Runtime; 1 Million: People use Facebook over Tor; $10.0 billion: Microsoft raining money in the cloud.

  • Quotable Quotes:
    • @nehanarkhede: @LinkedIn's use of @apachekafka: 1.4 trillion msg/day, 1400 brokers. Powers database replication, change capture etc
    • @kenkeiter: Full-duplex on a *single antenna* -- this is huge. (single chip, too -- that's the other huge part, obviously)
    • John Langford: In the next few years, I expect machine learning to solve no important world issues.
    • Dan Rayburn: By My Estimate, Apple’s Internal CDN Now Delivers 75% Of Their Own Content
    • @BenedictEvans: If Google sees the device as dumb glass, Apple sees the cloud as dumb pipes & dumb storage. Both views could lead to weakness
    • @JordanRinke: We need less hackathons, more apprenticeships. Less bootcamps, more classes. Less rockstars, more mentors. Develop people instead of product
    • @alicegoldfuss: Nagios screaming / Data center ablaze? No / Cable was unplugged
    • Mark Bates: As I was working on the software part time, I was keen to minimise the [cognitive] scope required when making changes. A RoR monolith was the best choice in this case.
    • Google: Our tests have shown that AMP documents load an average of four times faster and use 10 times less data than the equivalent non-amp’ed result.
    • @stevesi: In earning's call @sundarpichai says going “from mobile-first to AI-first world" emphasizing AI and machine learning across services.
    • Rex Sorgatz: Unfortunately, the entire thesis of my story is that having the history of recorded music in your pocket dictates that you will develop tastes outside “the usual.”
    • Newzoo: Clash Royale has rocketed to such quick success because of its strong core gameplay elements combined with some serious pressure to spend real money to keep up with your friends
    • vgt:  I'm going to plug Google Cloud's Preemptible VMs as a simpler alternative to Spot Instances: - Preemptible VMs are sold at a fixed 70% off discount, removing pricing volatility entirely
    • @mfdii: "Cloud Native" is code words for "rewrite the entire f*cking app"
    • There are so many great Quotable Quotes this week they wouldn't all fit in the summary. Please see the full post to read them all.

  • Imperfection as a strategy. Why a Chip That’s Bad at Math Can Help Computers Tackle Harder Problems: In a simulated test using software that tracks objects such as cars in video, Singular’s approach [computer chips are hardwired to be incapable of performing mathematical calculations correctly] was  capable of processing frames almost 100 times faster than a conventional processor restricted to doing correct math—while using less than 2 percent as much power.

  • You have to fight magic with magic, super-villains with super-heroes, and algorithms with algorithms. How I Investigated Uber Surge Pricing in D.C. Also, Investigating the algorithms that govern our lives.

  • Mitchell Hashimoto in The Cloudcast #246  on some cloud trends. Seeing a lot of interest in non-Amazon clouds right now. A lot of interest in Azure is coming from more boring successful companies, not hot Silicon Valley startups.  This is not a clean market segmentation, but there are three flavors of cloud: Google Compute for the green field startup crowd, Amazon for enterprise, and Azure for super-enterprise. One enterprise attractor for Azure is Azure Stack, an on-premises solution. Mitchell is seeing a broad adoption of the cloud across industries you may not expect to be using the cloud. Also seeing a transition to a multi-cloud strategy to create pricing leverage. The idea seems to be to rehearse and plan to move to another cloud, though they may not actually do it, but when pricing negotiations come up there's a lot of leverage saying you can move to a completely different platform. The cloud is not so much a pay as you go model for this use case, it's more about trying to lock-in long term cost savings. International companies are interested in price, but also what features are available in different regions and when they become available.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Metrics:  5 Productivity Usage and Calculation Mistakes

The more complex the door, the lower the ‘door’ productivity – but not always.

While productivity is a simple calculation, there are a few mistakes organizations tend to make. These mistakes reduce the usefulness of measuring productivity or, worse, can cause organizations to make poor decisions based on bad numbers. The five most common usage and calculation mistakes are:

1.      Calculating productivity using inputs as outputs.  Productivity is the ratio of output created per unit of input.  For example, if a factory creates widgets, then labor productivity would be calculated as widgets per hour. A common software productivity metric is function points per person.  Inverting the equation would yield a metric of people per function point, which makes very little sense.  Solution: Repeat after me, productivity is output divided by input (a bit of snark).  Other metrics use an output as a driver to predict the usage of resources.  For example, we calculate delivery rate (which answers how fast the process delivers) by dividing calendar time by output.  A short worked example follows this list.

2.      Using aggregate or average productivity for concrete planning. Productivity averages are often a useful tool for portfolio planning. Portfolio planning occurs when organizations know few of the details about a piece of work.  Solution: In software development and maintenance, never use an organization-level productivity number to set specific goals for a project, sprint or feature.    

3.      Labor productivity is a loose proxy measure for some types of change.  Not all process improvements have a direct impact on labor productivity.  For example, Total Factor Productivity (TFP) would be a better measure to assess the impact of the adoption of Scrum. While we may see the echoes of these changes in labor or capital productivity, we would be ascribing the impact to the wrong factors, which could lead us to try other changes that may not have positive impacts. Solution: Most software development and maintenance organizations will not spend the time and effort needed to measure TFP. Continue using labor or capital productivity to evaluate changes, but in addition evaluate how directly the productivity measure reflects the change.  Understanding how closely the proxy tracks the changes will help the analyst judge the change or alert him or her to search for other impacts.

4.      Productivity may only be loosely tied to delivered value. Productivity is a measure of raw, non-defective output, not whether that output is useful or sellable.  If a software team delivers a product that does not hit the mark or is not adopted in the marketplace, they may have been highly efficient even though their output is less valuable than anticipated.  Solution: Measure both value and productivity to provide a complete view of performance.

5.      Comparing productivity across teams undervalues technical complexity. One piece of software is often significantly more or less complex than another.  Technical complexity often varies not only between applications but within sections of code inside applications.  The more complex the code or business problem, the lower the productivity (complexity increases the amount of effort needed to create an output).  Solution: Each application should evaluate and determine its own productivity based on measuring delivered results. When teams use productivity as a planning tool they should tune the anticipated performance based on the predicted level of technical and business complexity.
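As a quick illustration of the ratios in item 1, here is a toy calculation in TypeScript; the figures are invented for illustration, not benchmarks.

 // Hypothetical sprint figures, for illustration only.
 const functionPointsDelivered = 120;  // output
 const personMonthsOfEffort = 10;      // input
 const calendarMonths = 4;

 // Productivity is output divided by input.
 const productivity = functionPointsDelivered / personMonthsOfEffort;  // 12 FP per person-month

 // Delivery rate (how fast the process delivers) is calendar time divided by output.
 const deliveryRate = calendarMonths / functionPointsDelivered;        // calendar months per FP

 console.log(`Productivity: ${productivity} function points per person-month`);
 console.log(`Delivery rate: ${deliveryRate.toFixed(3)} calendar months per function point`);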

Faster, better, cheaper has been the mantra of many a CFO and CIO.  Understanding and improving productivity is one of the tools available to improve performance.  In order to effectively use productivity as a tool, we need to make sure we calculate it correctly and understand that the metric provides a single point of view.  Productivity is a measure  of how effectively an organization transforms inputs into outputs; no more, no less. Productivity metrics provide the most value when coupled with other metrics such as value, speed, and complexity to generate a holistic view of the value delivery chain. 


Categories: Process Management

Start planning your Google I/O 2016 schedule

Google Code Blog - Thu, 04/21/2016 - 19:00

Posted by Mike Pegg, Google Developers Team

What are the best ways to optimize battery and memory usage of your apps? How do you create a great app experience that is accessible to everyone, including users with disabilities? How do you build an offline-ready, service-working, app-manifesting, production-ready Progressive Web App using Firebase Hosting? And what are some of the best desserts that start with N? Tune in to Google I/O to get the answers to all of these questions (well, most of them...), along with a whole lot more. You can start planning your schedule now, as the first wave of 100 technical talks just went live at google.com/io!

Last year, you told us you wanted more: more technical content, more time, more space, more everything! We heard your feedback loud and clear and have added a full third day onto Google I/O to accommodate more comprehensive talks in larger spaces than in previous years. These talks will be spread over 14 suggested tracks, including Android, the Mobile Web, Play and more, to help you easily navigate your I/O experience. Of course, we’re also bringing back Codelabs, our self-paced workshops with Googlers nearby to give you a hand.

Attending Remotely?

There are already over 200 I/O Extended events happening around the world. Join one of these events to participate in I/O from your local neighborhood alongside local developers who share the same passion for Google technology. You can also follow the festival from home; we’ll have four different live stream channels to choose from, broadcasting many of the sessions in real time from Shoreline. All of the sessions will be available to watch on YouTube after I/O concludes, in case you miss one.

See you soon!

This is just the first wave of talks. We’ll be adding more talks and events as we get closer to I/O, including a number of talks directly after the keynote (shhhh!! We’ve got some new things to share). We look forward to seeing you in a few weeks -- whether it be in person at Shoreline, at an I/O Extended event, or on I/O Live!

Categories: Programming

Why Does Programming Suck?

Making the Complex Simple - John Sonmez - Thu, 04/21/2016 - 13:00

Today I received a very interesting question from a reader: why does programming suck? It may seem a little controversial, and having a programmer talk about why programming sucks may seem kind of odd, but, well… it does suck sometimes. One of the reasons why programming sucks is the technology. Technology changes at a […]

The post Why Does Programming Suck? appeared first on Simple Programmer.

Categories: Programming

Deprecation of BIND_LISTENER with Android Wear APIs

Android Developers Blog - Wed, 04/20/2016 - 22:26

Posted by Wayne Piekarski, Developer Advocate, Android Wear

If you’re an Android Wear developer, we wanted to let you know of a change you might need to make to your app to improve the performance of your users’ devices. If your app is using BIND_LISTENER intent filters in your manifest, it is important that you are aware that this API has been deprecated on all Android versions. The new replacement API introduced in Google Play Services 8.2 is more efficient for Android devices, so developers are encouraged to migrate to it as soon as possible to ensure the best user experience. It is important that all Android Wear developers are aware of this change and update their apps promptly.

Limitations of BIND_LISTENER API

When Android Wear introduced the WearableListenerService, it allowed you to listen to changes via the BIND_LISTENER intent filter in the AndroidManifest.xml. These changes included data item changes, message arrivals, capability changes, and peer connects/disconnects.

The WearableListenerService starts whenever any of these events occur, even if the app is only interested in one type. When a phone has a large number of apps using WearableListenerService and BIND_LISTENER, a watch appearing or disappearing can cause many services to start up. This applies memory pressure to the device, causing other activities and services to be shut down, and generates unnecessary work.

Fine-grained intent filter API

In Google Play Services 8.2, we introduced a new fine-grained intent filter mechanism that allows developers to specify exactly what events they are interested in. For example, if you have multiple listener services, use a path prefix to filter only those data items and messages meant for the service, with syntax like this:


 <service android:name=".FirstExampleService">  
   <intent-filter>  
       <action android:name="com.google.android.gms.wearable.DATA_CHANGED" />  
       <action android:name="com.google.android.gms.wearable.MESSAGE_RECEIVED" />  
       <data android:scheme="wear" android:host="*" android:pathPrefix="/FirstExample" />  
   </intent-filter>  
 </service>  

There are intent filter actions for DATA_CHANGED, MESSAGE_RECEIVED, CHANNEL_EVENT, and CAPABILITY_CHANGED. You can specify multiple filter elements, and if any of them match, your service will be called and everything else will be filtered out. If you do not include any, all events will be filtered out and your service will never be called, so make sure to include at least one. Be aware that registering in AndroidManifest.xml for CAPABILITY_CHANGED will cause your service to be called any time a device advertising this capability appears or disappears, so use it only if there is a compelling reason.

Live listeners

If you only need these events when an Activity or Service is running, then there is no need to register a listener in AndroidManifest.xml at all. Instead, you can use addListener() live listeners, which will only be active when the Activity or Service is running, and will not impact the device otherwise. This is particularly useful if you want to do live status updates for capabilities being available in an Activity, but with no further background impact. In general, you should try to use addListener(), and only use AndroidManifest.xml when you need to receive events all the time.

Best practices

In general, you should only use a listener in AndroidManifest.xml for events that must launch your service -- for example, when your watch app needs to send an interactive message or data to the phone.

You should try to limit the number of wake-ups of your service by using filters. If most of the events do not need to launch your app, then use a path and a filter that only matches the event you need. This is critical to limit the number of launches of your service.

If you have other cases where you do not need to launch a service, such as listening for status updates in an Activity, then register a live listener only for the duration it is needed.

Documentation

There is more information available about Data Layer events, the use of WearableListenerService, and the corresponding intent filter tags in the manifest. Android Studio has a guide with a summary of how to convert to the new API. The Android Wear samples also show best practices in the use of WearableListenerService, such as DataLayer and XYZTouristAttractions. The changes needed are very small, and can be seen in this git diff from one of the samples here.

Removal of BIND_LISTENER

With the release of Android Studio 2.1, lint rules have been added that flag the use of BIND_LISTENER as a fatal error, and developers will need to make a small change to the AndroidManifest.xml to declare accurate intent filters. If you are still using BIND_LISTENER, you will receive the following error:

 AndroidManifest.xml:11: Error: The com.google.android.gms.wearable.BIND_LISTENER action is deprecated. [WearableBindListener]  
          <action android:name="com.google.android.gms.wearable.BIND_LISTENER" />  
              ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~  

This will only impact developers who are recompiling their apps with Android Studio 2.1 and will not affect existing apps on users’ devices.

For developers who are using Google Play Services earlier than 8.2, the lint rules will not generate an error, but you should update to a newer version and implement more accurate intent filters as soon as possible.

In order to give users the best experience, we plan to disable BIND_LISTENER in the near future. It is therefore important that developers take action now, to avoid any future disruption experienced by users of their apps.

Categories: Programming

How Twitter Handles 3,000 Images Per Second

Today Twitter is creating and persisting 3,000 (200 GB) images per second. Even better, in 2015 Twitter was able to save $6 million due to improved media storage policies.

It was not always so. Twitter in 2012 was primarily text based. A Hogwarts without all the cool moving pictures hanging on the wall. It’s now 2016 and Twitter has moved into a media-rich future. Twitter has made the transition through the development of a new Media Platform capable of supporting photos with previews, multi-photos, gifs, vines, and inline video.

Henna Kermani, a Software Development Engineer at Twitter, tells the story of the Media Platform in an interesting talk she gave at Mobile @Scale London: 3,000 images per second. The talk focuses primarily on the image pipeline, but she says most of the details apply to the other forms of media as well.

Some of the most interesting lessons from the talk:

  • Doing the simplest thing that can possibly work can really screw you. The simple method of uploading a tweet with an image as an all or nothing operation was a form of lock-in. It didn’t scale well, especially on poor networks, which made it difficult for Twitter to add new features.

  • Decouple. By decoupling media upload from tweeting, Twitter was able to independently optimize each pathway and gain a lot of operational flexibility.

  • Move handles not blobs. Don’t move big chunks of data through your system. It eats bandwidth and causes performance problems for every service that has to touch the data. Instead, store the data and refer to it with a handle (see the sketch after this list).

  • Moving to segmented resumable uploads resulted in big decreases in media upload failure rates.

  • Experiment and research. Twitter found through research that a 20 day TTL (time to live) on image variants (thumbnails, small, large, etc) was a sweet spot, a good balance between storage and computation. Images had a low probability of being accessed after 20 days so they could be deleted, which saves nearly 4TB of data storage per day, almost halves the number of compute servers needed, and saves millions of dollars a year.

  • On demand. Old image variants could be deleted because they could be recreated on the fly rather than precomputed. Performing services on demand increases flexibility, it lets you be lot smarter about how tasks are performed, and gives a central point of control.

  • Progressive JPEG is a real winner as a standard image format. It has great frontend and backend support and performs very well on slower networks.
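To make the decouple and handles-not-blobs lessons concrete, here is a rough TypeScript sketch of the pattern (not Twitter's actual services or API): the upload path stores the bytes once and returns an identifier, and the tweet only ever carries that identifier.

 // Illustrative sketch only: store the blob once, pass a handle everywhere else.
 interface MediaStore {
   put(bytes: Buffer): Promise<string>;   // returns a mediaId (the "handle")
   get(mediaId: string): Promise<Buffer>;
 }

 class InMemoryMediaStore implements MediaStore {
   private blobs = new Map<string, Buffer>();

   async put(bytes: Buffer): Promise<string> {
     const mediaId = `media-${this.blobs.size + 1}`;
     this.blobs.set(mediaId, bytes);
     return mediaId;
   }

   async get(mediaId: string): Promise<Buffer> {
     const blob = this.blobs.get(mediaId);
     if (!blob) throw new Error(`unknown media id: ${mediaId}`);
     return blob;
   }
 }

 interface Tweet {
   text: string;
   mediaIds: string[];  // only handles travel with the tweet, never the bytes
 }

 async function postTweetWithImage(store: MediaStore, text: string, image: Buffer): Promise<Tweet> {
   // Step 1: upload the media on its own path, which can be retried or resumed independently.
   const mediaId = await store.put(image);
   // Step 2: create the tweet referencing the handle; downstream services never touch the blob.
   return { text, mediaIds: [mediaId] };
 }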

Lots of good things happened on Twitter’s journey to a media-rich future; let’s learn how they did it...

The Old Way - Twitter in 2012
Categories: Architecture

Why Developers Are Poor Testers and What Can Be Done About It

Making the Complex Simple - John Sonmez - Wed, 04/20/2016 - 13:15

“Most developers I know are actually pretty bad testers.” This was the feedback from one tester in a recent short survey. The survey also verified that more developers are taking part in testing tasks, as reported in 37% of the organizations. The survey included testers from organizations worldwide and was focused on testing efforts in […]

The post Why Developers Are Poor Testers and What Can Be Done About It appeared first on Simple Programmer.

Categories: Programming

Software Development Linkopedia April 2016

From the Editor of Methods & Tools - Wed, 04/20/2016 - 09:07
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about scaling Agile, Test-Driven Development, recruiting developers, managing technical debt, distributed teams, breaking the rules, test automation, Agile metrics and machine learning. Web site: Agile Scaling Knowledgebase Blog: What do you […]

Metrics: 5 Behaviors Caused By Improper Productivity Measurement

Kafka Statue

Are you measuring a team effort?

Productivity is used to evaluate how efficiently an organization converts inputs into outputs.  However, productivity measures can be, and often are, misapplied for a variety of reasons ranging from simple misunderstanding to gaming the system. Many misapplications of productivity measurement cause organizational behavior problems from both leaders and employees.  Five of the most common productivity-related behavioral problems are:

  1. Attributing team efforts to individuals.  Most software development and maintenance activities are team efforts. Productivity measures usually reflect the output of the teams doing the work, and can’t be ascribed to an individual.  However, there is often a chronic need to ascribe team output, whether good or bad, to an individual.  For example, productivity data is often used in an individual’s review and evaluation. This can lead to everyone trying to game the metric.  The solution for this issue is just not to fall prey to using a team metric to measure an individual.
  2. Measuring productivity can reduce productivity.  Measuring productivity requires time and effort to count the inputs and outputs of a system. If an organization did not measure productivity, it could use that effort either to create more output or to reduce payroll, in either case increasing productivity.  If the measurement of productivity is not directly tied to process improvement or cost avoidance, it will have no return on investment. Avoid overhead that does not deliver significant ROI. The solution is to use productivity measures as part of a process improvement or cost reduction program.
  3. Gold plating an output to increase productivity. One of the most famous adages in management circles is that “you get what you measure.” Holding teams or individuals accountable for productivity above all else will lead to pressure to add features even if not asked for, in order to increase the development output. This is often referred to as gold plating. Gold plating leads to delivering more functionality than needed, often at the expense of needs that are higher value. One fix I recommend is measuring productivity, value and customer satisfaction to ensure teams have more than a one-dimensional view of their performance.
  4. Optimizing local processes without regard to the overall system. The focus of any system is to deliver the most value possible and to improve over time.  Maximizing the efficiency of a part of a system may not translate to the whole system becoming more efficient. For example, I recently observed a large organization with multiple software teams that efficiently developed software.  After developing and testing the software, the operations/implementation group performed a review and security testing prior to putting the implementation into production.  The operations team had a six-month backlog.  While waiting for review, two changes in the last month had to be withdrawn because the market need had changed before they ever made it to production. At the same time, a process improvement team was looking at ways to increase development productivity. Making the software development process any more efficient would only exacerbate the implementation group bottleneck.  In the end, the organization reallocated three development teams to support the implementation function in working through the backlog and streamlining the process.  As we identified in our Re-read Saturday of The Goal, systems only increase output if you improve the throughput of the bottlenecks in the system.
  5. What doesn’t get measured gets overused. Some organizations that focus on measuring productivity get very good at finding and utilizing resources that don’t influence the input side of the equation.  A classic issue is under-reporting of effort in time accounting.  For example, many organizations cap salaried employees’ time entry at 40 hours per week, even though they are often working 60 hours or more. The overtime is effectively hidden because it is considered free; therefore, there is often little pressure to measure it (until people start quitting).  Any resource that is considered free will be overused.

The last two behavioral issues are often the most common and can occur even when organizations don’t explicitly measure productivity. Every organization, whether they explicitly measure productivity or not, wants software development to deliver more functionality and cost less.  Organizations that don’t take a systems thinking view can actually increase cost and reduce real productivity, hurting the long-term efficiency of the organization when they are trying to have the opposite impact.


Categories: Process Management

Testing Promises with Mocha

Xebia Blog - Tue, 04/19/2016 - 21:03
If you test Javascript promises with Mocha, there are several styles you can use to write your tests. If you follow the Mocha docs on testing asynchronous code you risk writing 'evergreen' tests. Evergreen tests never fail, even if your code is broken. That is something you clearly do not want to happen. So what
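The excerpt cuts off here, but the evergreen pitfall it describes is easy to illustrate. Below is a minimal, hypothetical sketch in TypeScript using Mocha and Chai: the first test never waits for the promise, so its failing assertion is swallowed and the test stays green; the second returns the promise so Mocha reports the failure. The function under test, fetchGreeting, is invented for the example.

 import { expect } from 'chai';

 // Hypothetical promise-returning function under test.
 function fetchGreeting(): Promise<string> {
   return Promise.resolve('hello');
 }

 describe('fetchGreeting', () => {
   // Evergreen test: nothing waits for the promise, so the failing
   // assertion inside .then() never reaches Mocha and the test passes.
   it('looks green even though the assertion is wrong', () => {
     fetchGreeting().then(greeting => {
       expect(greeting).to.equal('goodbye'); // wrong on purpose, yet the test passes
     });
   });

   // Correct: return the promise so Mocha waits for it and reports
   // a failed assertion as a rejected promise, i.e. a failing test.
   it('actually verifies the result', () => {
     return fetchGreeting().then(greeting => {
       expect(greeting).to.equal('hello');
     });
   });
 });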

Experience virtual reality art in your browser

Google Code Blog - Tue, 04/19/2016 - 19:55

Posted by Jeff Nusz, Data Arts Team, Pixel Painter

Two weeks ago, we introduced Tilt Brush, a new app that enables artists to use virtual reality to paint the 3D space around them. Part virtual reality, part physical reality, it can be difficult to describe how it feels without trying it firsthand. Today, we bring you a little closer to the experience of painting with Tilt Brush using the powers of the web in a new Chrome Experiment titled Virtual Art Sessions.

Virtual Art Sessions lets you observe six world-renowned artists as they develop blank canvases into beautiful works of art using Tilt Brush. Each session can be explored from start to finish from any angle, including the artist’s perspective – all viewable right from the browser.

Participating artists include illustrator Christoph Niemann, fashion illustrator Katie Rodgers, sculptor Andrea Blasich, installation artist Seung Yul Oh, automotive concept designer Harald Belker, and street artist duo Sheryo & Yok. The artists’ unique approaches to this new medium become apparent when seeing them work inside their Tilt Brush creations. Watch this behind-the-scenes video to hear what the artists had to say about their experience:


Virtual Art Sessions makes use of Google Chrome’s V8 JavaScript engine for high-performance processing power to render large volumes of data in real time. This includes point cloud data of the artist’s physical form, 3D geometry data of the artwork, and position data of the VR controllers. It also relies on Chrome’s support of WebM video and WebGL to produce the 360° representations of the artists and artwork – the artist portrayals alone require the browser to draw over 200,000 points at 30 times a second. For a deeper look, read the technical case study or browse the project code that is available open source from the site’s tech page.

We hope this experiment provides a window into the world of painting in virtual reality using Tilt Brush. We are excited by this new medium and hope the experience leaves you feeling the same. Visit g.co/VirtualArtSessions to start exploring.

Categories: Programming

Sprint Planning for Agile Teams That Have Lots of Interruptions

Mike Cohn's Blog - Tue, 04/19/2016 - 15:00

Many teams have at least a moderate ability to plan and control their time. They're able to say, "We will work on these things over the coming sprint," and have a somewhat reasonable expectation of that being the case.

And that's the type of team we encounter in much of the Scrum literature--the literature that says to plan a sprint and keep change out.

But what should teams do when change cannot be kept out of a sprint?

In this post, I want to address this topic for two different types of teams:

  • A team that has occasional, but not excessive, interruptions
  • A team that is highly interrupt-driven
Planning with a Moderate Margin of Safety

Many teams will benefit from including a moderate amount of safety into each sprint. Basically, these teams should not assume they can keep all changes out of the sprint. For example, a team might want to leave room when planning a sprint for things like:

  • Fixing critical operational issues, such as a server going down
  • Fixing high-severity bugs
  • Doing first- or second-level tech support

There could be many other similar examples. Consider your own environment. You want to try to set a high threshold for what constitutes a worthy interruption to a sprint. Teams really do best when they have large blocks of dedicated time that will not be interrupted.

To accommodate work like this, all a team needs to do is leave a bit of buffer when planning the sprint. Let’s see how that works.

The Three Things That Must Go into Each Sprint

I think of a sprint as containing three things: corporate overhead, and plannable and unplanned time. I think of this graphically as shown in Figure 1.

Figure 1: A sprint contains corporate overhead, plannable time, and unplanned time.

Corporate overhead is that time that goes towards things like all-company meetings, answering emails from your past project, attending the HR sensitivity training and so on. Some of these activities may be necessary, but in many organizations, they consume a lot of time.

I put Scrum meetings (planning, daily scrum, etc.) in the corporate overhead category as well.

Plannable time is the second thing that goes into a sprint. This is the time that belongs collectively to the team.

But the team does not want to fill the entire remainder of the sprint with plannable time. The team needs to acknowledge the need to leave room for some amount of unplanned time.

Unplanned time goes towards three things:

  • Emergencies
  • Tasks that will get bigger than the team thinks
  • Tasks that no one thinks of during the sprint planning meeting
The Appropriate Percentages

I’m frequently asked what percentages teams should use for each of the three categories. I can’t answer that. But I can tell you how to figure it out:

After each sprint, consider how well the unplanned time the team allocated matched the unplanned time the team needed for the sprint. And then adjust up or down a bit for the next sprint. This is not something a team can ever get perfect.

Instead, it’s a game of averages. The team needs to save the right amount of time for unplanned tasks on average. Then some sprints will have more unplanned tasks occur and some sprints will have fewer.

When fewer occur, the team should get ahead on their work, so they’re prepared for when more unplanned tasks occur.
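One way to read this game of averages is as a simple feedback loop: track how much unplanned time each sprint actually consumed and nudge the next buffer toward the recent average. A toy TypeScript sketch, with invented numbers rather than recommended percentages:

 // Toy sketch: adjust the unplanned-time buffer toward the recent average of actual interruptions.
 const actualUnplannedHours = [12, 20, 9, 16];  // what interruptions really consumed in recent sprints
 let plannedBufferHours = 10;                   // buffer reserved in the last sprint plan

 const recentAverage =
   actualUnplannedHours.reduce((sum, h) => sum + h, 0) / actualUnplannedHours.length;

 // Adjust "up or down a bit" rather than jumping straight to the average.
 plannedBufferHours += Math.round((recentAverage - plannedBufferHours) / 2);

 console.log(`Reserve about ${plannedBufferHours} hours of unplanned time next sprint`);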

What to Do When a Team Is Highly Interrupt-Driven

The preceding advice works well for the majority of agile teams--those that are only interrupted a moderate amount. Some teams, however, are highly interrupt-driven.

Again, I want to resist putting actual percentages on the regions in Figure 1, but I’m describing a situation in which the area of “unplanned time” becomes much larger than shown.

I actually want to talk about the cases in which unplanned time becomes the dominant of the three areas. Such teams are highly interrupt-driven.

These teams still want to include space in their sprints for unplanned time. But there are usually a few other things you may want to consider if you are on a highly interrupt-driven team.

First, you may want to adjust your sprint length. One option is to go with a long sprint length. Increasing sprint length has the benefit of making the rate of interruption more predictable because the variance will not be so great from sprint to sprint.

To see how that works, imagine you chose a one-year sprint. (Don’t do that!) It’s easy to imagine with such long sprints, the fluctuations a team faces with short sprints will wash out. Sure, this year (this sprint) might have more interruptions than last year (last sprint), but it’s such a long period that the team has time to recover from any excessive fluctuations.

The other option is to go with short, one-week sprints and just live with the unpredictability. The team will be less able to assure bosses “we will be done with this” by a given period, but I find that to be a worthwhile tradeoff.

Second, a highly interrupt-driven team should make sprint planning a very lightweight activity.

Sprint planning should be a quick effort to grab a few things the team thinks it can do in the coming week, and that’s that. It should be a very minimal effort--15 or 30 minutes for many teams.

To illustrate this, think about planning a party, and imagine a spectrum with planning a wedding reception on one end. That’s some serious party planning. At the other end of the spectrum is inviting some friends over tonight to watch the big game on TV. To plan that, I’m going to check the fridge for beer and order a pizza. That’s a different level of party planning.

Sprint planning for a highly interrupt-driven team should be much more like the latter--quick, easy and just enough to be successful.

What Do You Do?

How do you handle interruptions on your agile team? Please share your thoughts in the comments below.

R: substr – Getting a vector of positions

Mark Needham - Mon, 04/18/2016 - 20:49

I recently found myself writing an R script to extract parts of a string based on a beginning and end index which is reasonably easy using the substr function:

> substr("mark loves graphs", 0, 4)
[1] "mark"

But what if we have a vector of start and end positions?

> substr("mark loves graphs", c(0, 6), c(4, 10))
[1] "mark"

Hmmm that didn’t work as I expected! It turns out we actually need to use the substring function instead which wasn’t initially obvious to me on reading the documentation:

> substring("mark loves graphs", c(0, 6, 12), c(4, 10, 17))
[1] "mark"   "loves"  "graphs"

Easy when you know how!

Categories: Programming

Experimenteren kun je leren (You Can Learn to Experiment)

Xebia Blog - Mon, 04/18/2016 - 17:04

Validated learning: learning by executing an initial idea and then measuring the results. This way of experimenting is the primary philosophy behind Lean Startup and much of the Agile thinking as it is applied today.

In agile organizations you have to experiment in order to keep up with changing market needs. A good experiment can be incredibly valuable, provided it is executed well. And this is where a common problem shows up: the experiment is never properly completed. A trial is run, but there is often no good hypothesis behind it and the lessons are barely, if at all, carried forward. I have noticed that, to build a higher learning capacity in an organization, it helps to stick to a fixed structure for experiments.

There are many structures that work well. Personally I am very fond of Toyota (or Kanban) Kata, but the “ordinary” Plan-Do-Check-Act works very well too. That structure is explained below with a simple example:

  1. Hypothesis

Which problem are you going to solve? And how do you intend to do that?

If the whole team dials in from home for the stand-up, we will be no less effective than when everyone is present, and we will be able to handle work-from-home days better.

  2. Prediction of the outcomes

What do you expect the outcomes to be? What will you see?

No lower productivity, and a higher score on team happiness because people can work from home.

  3. Experiment

How are you going to test whether you can solve the problem? Is the experiment also safe to fail?

For the next six weeks everyone dials in from home for the stand-up. In the retrospective we score productivity and happiness. After that we do the stand-up together at the office for six weeks.

  4. Observations

Collect as much data as possible during your experiment. What do you see happening?

Setting up the call takes quite long (10-15 minutes). It is hard to let everyone have their say. When dialing in we cannot use the regular board because nobody can move the post-its.

A well designed experiment is as likely to fail as it is to succeed – paraphrasing Don Reinertsen

This is certainly not the best experiment that could ever be formulated. But that is not the point. The point is that the learning happens in the differences between the prediction and the observations. It is therefore important to do both of these steps and to consciously reflect on the learning that follows from them. Based on your observations you can formulate a new experiment for further improvements.

How do you run your experiments? I am curious to hear what works well in your organization.

 

Hadoop and Salesforce Integration: the Ultimate Successful Database Merger

How can we transfer Salesforce data to Hadoop? It is a big challenge for everyday users. What are the different features of data transfer tools?

Categories: Architecture

30 Problems That Affect Software Projects

Herding Cats - Glen Alleman - Mon, 04/18/2016 - 15:10

From Estimating Software Costs: Bringing Realism To Estimating, 2nd Edition.

  1. Initial requirements are seldom more than 50 percent complete
  2. Requirements grow at about 2 percent per calendar month during development
  3. About 20 percent of initial requirements are delayed until a second release
  4. Finding and fixing bugs is the most expensive software activity
  5. Creating paper documents is the second most expensive software activity
  6. Coding is the third most expensive software activity
  7. Meetings and discussion are the fourth most expensive activity
  8. Most forms of testing are less than 30 percent efficient in finding bugs
  9. Most forms of testing touch less than 50 percent of the code being tested
  10. There are more defects in requirements and design than in source code
  11. There are more defects in test cases than in the software itself
  12. Defects in requirements, design and code average 5.0 per function point
  13. Total defect-removal efficiency before release averages only about 80 percent
  14. About 15 percent of software defects are delivered to customers
  15. Delivered defects are expensive and cause customer dissatisfaction
  16. About 5 percent of modules in applications will contain 50 percent of all defects
  17. About 7 percent of all defect repairs will accidentally inject new defects
  18. Software reuse is only effective for materials that approach zero defects
  19. About 5 percent of software outsource contracts end up in litigation
  20. About 35 percent of projects greater than 10,000 function points will be canceled
  21. About 50 percent of projects greater than 110,000 function points will be one year late
  22. The failure mode for most cost estimates is to be excessively optimistic.
  23. Productivity rates in the U.S. are about 10 function points per staff month
  24. Assignment scopes for development are about 150 function points
  25. Assignment scopes for maintenance are about 750 function points
  26. Development costs about $1,200 per function point in the U.S.
  27. Maintenance costs about $150 per function point per calendar year
  28. After delivery applications grow at about 7 percent per calendar year during use
  29. Average defect repair rates are about ten bugs or defects per month
  30. Programmers need about ten days of annual training to stay current

Agile addresses 1, 2, 3, 4, 5, and 6 well. 

So if these are the causes of project difficulties - and there may be others that have emerged since this publication - what are the fixes?

 

Categories: Project Management

6 Red Flags That You Need To Start Cutting Your Losses

Making the Complex Simple - John Sonmez - Mon, 04/18/2016 - 15:00

There’s a stigma in our society about quitting that causes us to cling on to projects long after they should be let go. Quitters never win. Quitting lasts forever. Champions never quit. You’re never a loser till you quit trying. No one wants to lose. No one wants to be a loser. Well, at least […]

The post 6 Red Flags That You Need To Start Cutting Your Losses appeared first on Simple Programmer.

Categories: Programming