Software Development Blogs: Programming, Software Testing, Agile, Project Management


Feed aggregator

The Learning Continues! New lessons for Advanced Android course

Android Developers Blog - Tue, 01/05/2016 - 20:33

Posted by Joanna Smith, Developer Advocate

Magic moments happen when your app does something very useful with minimal effort from your users -- like figuring out their location for them automatically. The new Places lesson in the Advanced Android App Development course teaches you how to add a Place Picker to your app so that users can pick a nearby location without having to type anything.

The Advanced Android App Development course, built by Udacity in conjunction with Google, is a follow-up to Developing Android Apps. The advanced course is for Android developers who are ready to learn how to polish, productionize and publish their apps, and even distribute them through Google Play.



Updates to the course also include an explanation of the new GCM Receiver, as well as an entirely new lesson on publishing your app, which explains how to build and sign an APK so you can distribute your app on Google Play.

After all, why build an app if you can’t get it to your users?
Get started now, because it's going to be awesome!

Categories: Programming

DoubleClick’s ‘Mobile Bootcamp’ for app success

Google Code Blog - Tue, 01/05/2016 - 19:39

Originally Posted on DoubleClick Publisher Blog

Mobile app usage has grown 90% in the past two years and contributes to 77% of the total increase in time spent in digital media. People intuitively turn to mobile devices for information and entertainment, typically spending 37 hours per month in apps. This shift in the consumer mobile experience presents a significant opportunity for increased engagement and monetization for publishers with apps.

To help you effectively capture this mobile opportunity, we’ve gathered our most actionable research and best practices for reaching, engaging, and monetizing app audiences.


Learn something new and want to know more? For more details on our four strategies for app success, review the DoubleClick for Publishers ‘Mobile Bootcamp’ blog series on growing app audiences, engaging users, ensuring high app quality, and effectively monetizing.

Categories: Programming

Sponsored Post: Netflix, StatusPage.io, Redis Labs, Jut.io, SignalFx, InMemory.Net, VividCortex, MemSQL, Scalyr, AiScaler, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Manager - Site Reliability Engineering: Lead and grow the front door SRE team in charge of keeping Netflix up and running. You are an expert in operational best practices and can work with stakeholders to positively move the needle on availability. Find details on the position here: https://jobs.netflix.com/jobs/398

  • Senior Service Reliability Engineer (SRE): Drive improvements to help reduce both time-to-detect and time-to-resolve while concurrently improving availability through service team engagement.  Ability to analyze and triage production issues on a web-scale system a plus. Find details on the position here: https://jobs.netflix.com/jobs/434

  • Manager - Performance Engineering: Lead the world-class performance team in charge of both optimizing the Netflix cloud stack and developing the performance observability capabilities which 3rd party vendors fail to provide.  Expert on both systems and web-scale application stack performance optimization. Find details on the position here https://jobs.netflix.com/jobs/860482

  • Senior Devops Engineer - StatusPage.io is looking for a senior devops engineer to help us in making the internet more transparent around downtime. Your mission: help us create a fast, scalable infrastructure that can be deployed to quickly and reliably.

  • UI Engineer - AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop its user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data - AppDynamics, a leader in next-generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend components that manage application architectures. Apply here.
Fun and Informative Events
  • Your event could be here. How cool is that?
Cool Products and Services
  • Turn chaotic logs and metrics into actionable data. Scalyr is a tool your entire team will love. Get visibility into your production issues without juggling multiple tools and tabs. Loved and used by teams at Codecademy, ReturnPath, and InsideSales. Learn more today or see why Scalyr is a great alternative to Splunk.

  • Real-time correlation across your logs, metrics and events.  Jut.io just released its operations data hub into beta and we are already streaming in billions of log, metric and event data points each day. Using our streaming analytics platform, you can get real-time monitoring of your application performance, deep troubleshooting, and even product analytics. We allow you to easily aggregate logs and metrics by micro-service, calculate percentiles and moving window averages, forecast anomalies, and create interactive views for your whole organization. Try it for free, at any scale.

  • Turn chaotic logs and metrics into actionable data. Scalyr replaces all your tools for monitoring and analyzing logs and system metrics. Imagine being able to pinpoint and resolve operations issues without juggling multiple tools and tabs. Get visibility into your production systems: log aggregation, server metrics, monitoring, intelligent alerting, dashboards, and more. Trusted by companies like Codecademy and InsideSales. Learn more and get started with an easy 2-minute setup. Or see how Scalyr is different if you're looking for a Splunk alternative or Sumo Logic alternative.

  • SignalFx: just launched an advanced monitoring platform for modern applications that's already processing 10s of billions of data points per day. SignalFx lets you create custom analytics pipelines on metrics data collected from thousands or more sources to create meaningful aggregations--such as percentiles, moving averages and growth rates--within seconds of receiving data. Start a free 30-day trial!

  • InMemory.Net provides a native .NET in-memory database for analysing large amounts of data. It runs natively on .NET and provides native .NET, COM & ODBC APIs for integration. It also has an easy-to-use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex goes beyond monitoring and measures the system's work on your servers, providing unparalleled insight and query-level analysis. This unique approach ultimately enables your team to work more effectively, ship more often, and delight more customers.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required. http://aiscaler.com

  • ManageEngine Applications Manager: Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

Can a Traditional SRS Be Converted into User Stories?

Mike Cohn's Blog - Tue, 01/05/2016 - 16:00

A lot of traditionally managed projects begin with a Software Requirements Specification (SRS). Then sometime during the project, a decision is made to adopt an agile approach. And so a natural question is whether the SRS can serve as the newly agile project's product backlog. Some teams contemplate going so far as rewriting the SRS into a product backlog with user stories. Let's consider whether that's necessary.

But before taking up this question, I want to clarify what I mean by a Software Requirements Specification or SRS. I find this type of document to vary tremendously from company to company. In general, though, what I'm referring to is the typical document full of "The system shall ..." type statements.

But you can imagine any sort of traditional requirements document and my advice should still apply. This is especially the case for any document with numbered and nested requirements statements, regardless of whether each statement is really written as "the system shall ..."

Some Drawbacks to Using the SRS as a Product Backlog

On an agile project, the product backlog serves two purposes:

  • It serves as a repository for the work to be done
  • It facilitates prioritization of work

That is, the product backlog tells a team what needs to be done and can be used as a planning tool for sequencing the work. In contrast, a traditional SRS addresses only the issue of what is to be done on the project.

There is no attempt with an SRS to write requirements that can be implemented within a sprint or that are prioritized. Writing independent requirements is a minor goal at best, as shown by the hierarchical organization of most SRS documents, with their enumerated requirements such as 1.0, 1.1, 1.1.1, and so on.

These are not problems when an SRS is evaluated purely as a requirements document. But when the items within an SRS will also be used as product backlog items and prioritized, it creates problems.

With an SRS, it is often impossible for a team to develop requirement 1.1.1 without concurrently developing 1.1.2 and 1.1.5. These dependencies make it not as easy as picking a story at a time from a well-crafted product backlog.

Prioritizing sub-items on an SRS is also difficult, as is identifying a subset of features that creates a minimum viable product.

A Software Requirements Specification is good at being a requirements specification. It’s good at saying what a system or product is to do. (Of course, it misses out on all the agile aspects of emergent requirements, collaboration, discovery, and so on. I’m assuming these will still happen.)

But an SRS is not good for planning, prioritizing and scheduling work. A product backlog is used for both of these purposes on an agile project.

My Advice

In general, I do not recommend taking the time to rewrite a perfectly fine SRS. Rewriting the SRS into user stories and a nice product backlog could address the problems I’ve outlined. But, it is not usually worth the time required to rewrite an SRS into user stories.

Someone would have to spend time doing this, and usually that person could spend their time more profitably. I would be especially reluctant to rewrite an SRS if other teammates would be stalled in starting their own work while waiting for the rewritten product backlog.

A ScrumMaster or someone on the team will have to find ways of tracking progress against the SRS and making sure requirements within it don’t fall through the cracks. Usually enlisting help from QA in validating that everything in an SRS is done or listed in the product backlog is a smart move.

One additional important strategy would be educating those involved in creating SRS documents to consider doing so in a more agile-friendly manner for future projects. If you can do this, you’ll help your next project avoid the challenges created by a mismatch between agile and starting with an SRS document.

What Do You Think?

Please join the discussion and share your thoughts and experiences below. Have you worked on an agile project that started with a traditional SRS? How did it go? Would it have been different if the SRS had been rewritten as a more agile product backlog?

Managing for Happiness FAQ

NOOP.NL - Jurgen Appelo - Mon, 01/04/2016 - 18:36

In June 2016, John Wiley & Sons will release my “new” book Managing for Happiness, which will be a re-release of last year’s #Workout book. Some people asked me questions about that.

The post Managing for Happiness FAQ appeared first on NOOP.NL.

Categories: Project Management

Server-Side Architecture. Front-End Servers and Client-Side Random Load Balancing

Chapter by chapter Sergey Ignatchenko is putting together a wonderful book on the Development and Deployment of Massively Multiplayer Games, though it has much broader applicability than games. Here's a recent chapter from his book.

Enter Front-End Servers

[Enter Juliet]
Hamlet:
Thou art as sweet as the sum of the sum of Romeo and his horse and his black cat! Speak thy mind!
[Exit Juliet]

— a sample program in Shakespeare Programming Language

 

 

Front-End Servers as an Offensive Line

 

Our Classical Deployment Architecture (especially if you do use FSMs) is not bad, and it will work, but there is still quite a bit of room for improvement for most of the games out there. More specifically, we can add another row of servers in front of the Game Servers, as shown in Fig VI.8:

Categories: Architecture

Architects as Servant Leaders

As more teams and organizations transition to agile, they discover something important about leadership. Leadership is part of everything we do in an agile project. It doesn’t matter if it’s development or testing, management or architecture. We need people with high initiative and leadership capabilities.

That leads me to these questions:

  • We need project management. Do we need project managers?
  • We need management. Do we need managers?
  • We need architecture. Do we need architects?

As with all interesting questions, often the answers are, “It depends.” What do those people do? How do they do it?

In December, I gave a talk, “Agile Architect as Servant Leader” for IASA. That talk outlines some of the ways agile architects might work as servant leaders. See the slides: Agile Architect as Servant Leader.

There is more about servant leadership in Agile and Lean Program Management: Scaling Collaboration Across the Organization, for program managers, program product owners, and architects.

Here is the link to the recording: Agile Architect as Servant Leader.

Categories: Project Management

Sometimes, Children Really Get It Right

Making the Complex Simple - John Sonmez - Mon, 01/04/2016 - 14:00

Maybe it’s because I’m a radically-biased father of three amazing princesses. Maybe it’s because I still have a big kid living inside of me who gets excited to play with Legos and K’NEX. Or maybe it’s because the adults in this world over-complicate matters by trying too hard to be politically correct and polite every […]

The post Sometimes, Children Really Get It Right appeared first on Simple Programmer.

Categories: Programming

SPaMCAST 375 – Quality Essay, Estimating Testing, Discovery Driven Planning

Software Process and Measurement Cast - Sun, 01/03/2016 - 23:00

This week’s Software Process and Measurement Cast opens with our essay on quality and measuring quality. Software quality is a simple phrase that is sometimes difficult to define. In SPaMCAST 374, Jerry Weinberg defined software quality as value. In our essay, we see how others have tackled the subject and add our perspective.

Jeremy Berriault brings the QA Corner to the first SPaMCAST of 2016, discussing the sticky topic of estimating testing. Estimating has always been a hot button issue that only gets hotter when you add in testing. Jeremy provides a number of pragmatic observations that can help reduce the heat the topic generates.

Wrapping up the cast, Steve Tendon discusses the topic of discovery driven planning from his book, Tame The Flow. Discovery driven planning is a set of ideas that recognizes that most decisions are made in situations that are full of uncertainty and complexity. We need new tools and mechanisms to avoid disaster.

Help grow the podcast by reviewing the SPaMCAST on iTunes, Stitcher or your favorite podcatcher/player and then share the review! Help your friends find the Software Process and Measurement Cast. After all, friends help friends find great podcasts!

 

Re-Read Saturday News

We continue the re-read of How to Measure Anything: Finding the Value of "Intangibles" in Business, Third Edition, by Douglas W. Hubbard on the Software Process and Measurement Blog. In Chapter Three, Hubbard explores three misconceptions of measurement that lead people to believe they can't measure something, three reasons why people think something shouldn't be measured, and four useful measurement assumptions.

Upcoming Events

I am facilitating the CMMI Capability Challenge.  This new competition showcases thought leaders who are building organizational capability and improving performance. The next CMMI Capability Challenge will be held on January 12 at 1 PM EST. 

http://cmmiinstitute.com/conferences#thecapabilitychallenge

The Challenge will continue on February 17th at 11 AM.

In other events, I will give a webinar, titled: Discover The Quality of Your Testing Process on January 19, 2016, at  11:00 am EST


Organizations that seek to understand and improve their current testing capabilities can use the Test Maturity Model integration (TMMi) as a guide for best practices. The TMMi is the industry standard model of testing capabilities. Comparing your testing organization's performance to the model provides a gap analysis and outlines a path towards greater capabilities and efficiency. This webinar will walk attendees through a testing assessment that delivers a baseline of performance and a set of prioritized process improvements.   

Next week even more!  

Next SPaMCAST

The next Software Process and Measurement Cast is a panel discussion featuring all of the regulars from the Software Process and Measurement Cast, including Jeremy Berriault, Steve Tendon, Kim Pries, Gene Hughson and myself.  We prognosticated a bit on the topics that will motivate software development and process improvement in 2016.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.

Categories: Process Management


Quote of the Day

Herding Cats - Glen Alleman - Sun, 01/03/2016 - 17:43

When you think about Agile self-organizing teams and their benefits to software development, remember the characters in Lord of the Flies were a self-organizing team - @agileklzkittens

Related articles: Architecture-Centered ERP Systems in the Manufacturing Domain; Self-Organization; The Art of Systems Architecting
Categories: Project Management

Agile Hong Kong Meetup - 15th Jan 2016

Coding the Architecture - Simon Brown - Sun, 01/03/2016 - 11:43

Happy new year and I wish you all the best for 2016. My first trip of the year starts next week and I'll be doing some work in Shenzhen, China. As a result, I'll also be in Hong Kong on January 15th, presenting "The Art of Visualising Software Architecture" at a meetup organised by Agile Hong Kong. You can register on the Meetup page. See you there!

p.s. If anybody would like a private, in-house 1-day software architecture sketching workshop on the 15th, please drop me a note.

Categories: Architecture

How To Measure Anything, Third Edition, Chapter 3, The Illusion of Intangibles: Why Immeasurables Aren’t

How to Measure Anything: Finding the Value of "Intangibles" in Business, Third Edition

Chapter 3 of How to Measure Anything: Finding the Value of "Intangibles" in Business, Third Edition, is titled The Illusion of Intangibles: Why Immeasurables Aren't. In this chapter Hubbard explores three misconceptions of measurement that lead people to believe they can't measure something, three reasons why people think something should not be measured, and four useful measurement assumptions.

Hubbard begins the chapter with a discussion of the reasons people most commonly give for why something can't be measured. The misconceptions fall into three categories:

  1. The concept of measurement. – The first misconception stems from not understanding the definition of measurement. Hubbard defines measurement as an observation that quantifiably reduces uncertainty. Most business decisions are based on imperfect information and are therefore made under uncertainty. Measurement is an activity that adds information and improves on prior knowledge. Like the bias that causes people to avoid tackling risks that can't be reduced to zero, some people will avoid measurement if they can't reduce uncertainty to zero. Rarely, if ever, does measurement eliminate all uncertainty; rather, it reduces uncertainty, and a measurement that reduces enough uncertainty is often valuable. The need for measurement data that reduces uncertainty can be seen in portfolio management, where decisions about which projects will be funded are made even before all of the requirements are known.

    All types of attributes can be measured, regardless of whether they are qualitative (for example, team capabilities) or quantitative (for example, project cost). Another consideration when understanding measurement is that there are numerous measurement scales, including nominal, ordinal, interval and ratio scales. Each scale allows different statistical operations and can present different conceptual challenges. It is imperative to understand how each scale can be used and the mathematical operations that can be leveraged for each (we will explore these on the blog in the near future).

    Hubbard concludes this section with a discussion of two of his basic assumptions. The first is that there is a prior state of uncertainty that can be quantified or estimated. The second is that uncertainty is a feature of the observer, not necessarily of the thing being observed. This is a basic argument of Bayesian statistics, in which both the initial uncertainty and the change in uncertainty are quantified.

    Bottom line: it is imperative to understand the definition of measurement, measurement scales and Bayesian statistics so that you can apply the concepts of measurement to reducing uncertainty.

  2. The object of measurement. – The second misconception stems from the use of sloppy language or a failure to define what is being measured. In order to measure something, we must unambiguously state what the object of measurement means. For example, many organizations wish to understand the productivity of development and maintenance teams but never construct a precise definition of the concept, or of why we care and why the measure is valuable.

    Bottom Line: Decomposing what is going to be measured from vague to explicit should always be the first step in the measurement process.

  3. Methods of measurement. – The third misconception is a reflection of not understanding what constitutes measurement. The processes and procedures people imagine are often constrained to direct measurement, such as counting the daily receipts at a retail store. Most of the difficult measurements in business (or a variety of other) environments must be done using indirect measurement. For example, in Chapter 2 Hubbard used the example of Eratosthenes' measurement of the circumference of the earth. Eratosthenes used observations of shadows and the angle of the sun to indirectly determine the circumference. A direct measure would have required a really long tape measure (pretty close to impossible).

    A second topic related to this misconception is the thought that valuable measurement requires either measuring the whole population or a large sample. Studying whole populations is often impractical. Hubbard shares the rule of five (a proper random sample of five observations) and the single sample majority rule, which can dramatically narrow the range of uncertainty.

    Bottom line: Don't rely on your intuition about sample size. The natural tendency is to believe a large sample is needed to reduce uncertainty, and therefore many managers will decide that measurement is not possible because they are uncomfortable with sampling. (A small simulation of the rule of five appears just after this list.)

Even when it is possible to measure, the argument often turns to why you shouldn't. Hubbard summarizes the "shouldn'ts" into three categories.

  1. Costs too much / economic objections. – Hubbard suggests that most variables that are measured yield little or no informational value; they do not reduce uncertainty in decisions. The value delivered by measurement must outweigh the cost of collection and analysis (this is one of the reasons you should ask why you want any measure).

    Bottom line: Calculate the value of information based on the uncertainty it reduces. Variables with enough informational value justify deliberate measurement. Hubbard suggests (and I concur) that when someone says something is too expensive or too hard to measure, the question in return should be "compared to what?" (A small worked example of this value-of-information idea appears just after this list.)

  2. Measures lack usefulness or meaningfulness. – "You can prove anything with statistics" is often offered as a reason why measurement is not very meaningful. Hubbard points out that the statement "you can prove anything" is itself patently unprovable. What is really meant is that numbers can be used to confuse people, especially those without skills in statistics.

    Bottom Line: Investing in statistical knowledge is important for anyone that needs to make decisions and wants to outperform expert judgment.

  3. Ethical objections. – Measurement can be construed as dehumanizing. Reducing everything to numbers can be thought of as taking all human factors out of a decision; however, measurement does not suggest there are only black-and-white answers. Measurement increases information while reducing uncertainty. Hubbard provides a great quote: "the preference for ignorance over even marginal reductions in ignorance is never moral."

    Bottom line: Information and the reduction of uncertainty are neither moral nor immoral.
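As a companion to the economic objection above, here is a small sketch of the expected value of perfect information (EVPI) for a go/no-go decision. All figures are invented for illustration; the structure, not the numbers, is the point:

```python
# Hypothetical go/no-go decision under uncertainty.
p_success = 0.6            # current belief that the project pays off
gain_if_success = 500_000  # net gain if we proceed and it succeeds
loss_if_failure = -200_000 # net loss if we proceed and it fails

# Best action with current information: proceed only if expected value is positive.
ev_proceed = p_success * gain_if_success + (1 - p_success) * loss_if_failure
ev_current_best = max(ev_proceed, 0)  # 0 = do nothing

# With perfect information we would proceed only when it will succeed.
ev_perfect = p_success * gain_if_success

evpi = ev_perfect - ev_current_best
print(f"EV with current info: {ev_current_best:,.0f}")
print(f"EVPI (upper bound on what any measurement is worth): {evpi:,.0f}")
```

EVPI is an upper bound: no measurement of this variable can be worth more than it, so a measurement that costs more than the EVPI is not worth doing.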

The chapter is capped by four useful measurement assumptions that provide a summary for the chapter.

  1. Everything has been measured before. Do your research before you start!
  2. You have more data than you think. – Consider both direct and indirect data.
  3. You need far fewer data points than you think. – Remember the rule of five.
  4. New observations are more accessible than you think. – Sample and use simple measures.

More than occasionally I have been told that measuring is meaningless because the project or piece of work being measured is unique and the past does not predict the future. Interestingly, these same people yell the loudest when I suggest that if the past does not count, then team members can be considered fungible assets and traded in and out of a project. Measurement and knowledge of the past almost always reduce uncertainty.

Previous installments in Re-Read Saturday: How to Measure Anything: Finding the Value of "Intangibles" in Business, Third Edition

How To Measure Anything, Third Edition, Introduction

HTMA, Chapter 1

HTMA, Chapter 2

 

 


Categories: Process Management

Prime Your Mind for 2016

“Chance favors the prepared mind.” -- Louis Pasteur

The future is either created or destroyed by the decisions we make and the actions we take.

It's 2016 and change is in the air.

For some people, this time of year is their favorite. It's a time of year filled with hope, possibility, and dreams. 

For others, this is a horrible time of year, filled with despair, shattered dreams, and bitter disappointment.

Either way, let's get a fresh start, as we turn the page for a new year.

Let's give ourselves permission to dream big, and re-imagine what this next year could be all about.

Prime Your Mind to Empower Yourself and Your Business for an Amazing 2016

If you don't know what priming is, it's a psychology concept that basically means we embody the concepts and stereotypes we're exposed to.  For example, if we see the color yellow, we find the word banana faster.

You can use priming in a very pragmatic way to inspire your way forward.  Rather than hold on to old beliefs, mental models, and references, you can fill your mind with examples and ideas for new possibilities.

I've written a fairly exhaustive approach to how you can prime your mind for 2016:

Prime Your Mind for 2016

But I'll summarize some key ideas in this post so you can get started stirring up your big bold ambitions for the new year.

3 Key Ideas to Prime Your Mind with for 2016

The big ideas really come down to this:

  1. People examples of transformation. Fill your head with examples of how people have created amazing personal transformation.  TED Talks are a great source of inspiration and examples of how people have transformed themselves, and in many cases, how they are helping transform the world around them.
  2. Technology examples of transformation.   Fill your head with examples of how the mega-trends are shaping the world through Cloud, Mobile, Social, and Big Data.  Fill your mind with examples of how the mega-trends are coming together in a “Nexus of Forces” as Gartner would say, to change the world.  Fill your mind with examples of the mega-trend of mega-trends – the Internet of Things – is re-shaping the world, in extraordinary ways.  Read Future Visions, a free download by Microsoft, to get a glimpse into how science fiction could shape the science around us.
  3. Business examples of transformation. Fill your head with amazing examples of how businesses are driving digital business transformation. Read NEXT at Microsoft to see some of the crazy things Microsoft is up to. Read customer stories of transformation to see what Microsoft customers are up to. Explore what sorts of things customers are up to on the Industry Solutions page. For some truly phenomenal stories of digital transformation, check out what Microsoft UK is up to in education, business, and society.
Your Personal Preparation for 2016

Here is a quick way you can use books to help you prepare for the world around you:

  • Read a book like Leading Digital to get the overview of how digital transformation works.  You can see how companies like Starbucks and Burberry drove their digital transformation and you can learn the success patterns of business leaders who are leading and learning how to create customers and create new value in a mobile-first, cloud-first world.
  • Read books like Consumption Economics to fully grasp how value creation is throttled by value absorption – the ability of users and consumers to use the value that businesses can now create in a digital economy. 
  • Read books like B4b to see how companies are shifting to business outcomes for customers and helping customers achieve new levels of value from their technology investments. 
  • Read books like the Challenger Sale to learn how to go from somebody who pushes solutions to somebody who becomes a trusted advisor for their client and learns how to 1) teach, 2) tailor, and 3) take control.   Teaching is all about knowing your stuff and being able to help people see the art of the possible and sharing new ideas.  Tailoring is all about making ideas relevant.  It means you need to really understand a client’s pains, needs, and desired outcomes so that whatever comes out of your mouth, speaks to that.  Taking control means asking the right questions that drive conversations, strategies, and execution forward in an empowering way.
  • Read books like The Lean Startup to learn how to create and launch products, while making better, faster business decisions.   Learn how to innovate using principles from lean manufacturing and agile development to ship better, and win more raving fans.
  • Read books like Scaling Up to master the four key decision areas: people, strategy, execution, and cash, to create a company where the team is engaged, customers are doing your marketing, and everyone is making impact.  It includes one-page tools including a One-Page Strategic Plan and the Rockefeller Habits Checklist.
  • Read books like The Business Model Navigator to learn how businesses are re-imagining their business models for a mobile-first, cloud-first world.
  • Read books like Anticipate to put it all together and become a more visionary leader and build some mad skills to survive and thrive in the digital economy.
  • Read a book like Getting Results the Agile Way to help you master productivity, time management, and work-life balance.

Best wishes for a 2016 where you create and live the change you want to see.

You Might Also Like

10 Productivity Tools from Agile Results

Get Your Goals On

Habits, Dreams, and Goals

How To Defeat Procrastination

Microsoft Explained

Personal Effectiveness Toolbox

The Microsoft Story

Categories: Architecture, Programming

The Notion of Value in Agile Projects

Herding Cats - Glen Alleman - Sat, 01/02/2016 - 18:53

The Holy Grail of all Agile discussions goes like this ...

We focus on value over cost

This is a mantra repeated by agilest and vendors of agile tools as well. The big question is ...

What are the units of measure of Value?

The units of measure of Cost are dollars. When we hear We Focus on Value over Cost, in what units of measure are these two variables being compared to one another? A better question is: where is this Value being defined? And another: how is this Value being defined?

If we are going to move beyond the platitude of value over cost - which, by the way, is simply bad economics, since the ...

Value of something can't be determined unless you know the cost to acquire that value

but let's ignore this naïve concept for the moment. How can Value be defined and measured? 

In our Software Intensive System of Systems world, Systems Engineering is the dominant paradigm for increasing the probability of program success. This is not the systems engineering of IT server administration. This is INCOSE Systems Engineering, defined as:

Systems Engineering is an interdisciplinary approach and means to enable the realization of successful systems.

Systems Engineering focuses on defining customer needs and required functionality early in the development cycle, documenting requirements, then proceeding with design synthesis and system validation while considering the problems encountered for:

  • Operations
  • Performance
  • Testing
  • Development
  • Cost and Schedule
  • Training and Support
  • Retirement

Systems Engineering integrates all the disciplines and specialty groups into a team effort forming a development process that proceeds from concept to production to operation. Systems Engineering considers both the business and the technical needs of all customers with the goal of providing a quality product that meets the user needs.

In this paradigm there are two primary measures that the product or service being produced satisfies the needs of those paying for the work:

  • Measures of Effectiveness are operational measures of success closely related to the achievement of mission or operational objectives. They provide insight into the accomplishment of mission needs independent of the chosen solution.
  • Measures of Performance characterize the physical or functional attributes relating to the system operation. They provide insight into the performance of the specific system.

So Now What?

We've got some high level definitions, but are no closer to the units of measure needed to compare Value with Cost.

The Measures of Effectiveness (MOE) are defined by the customer or user point of view. These are the customer's key indicators that the mission has been achieved in terms of performance, suitability, and affordability across the lifecycle of the product or service.

MOE's focus on the systems capabilities to achieve mission success, within the total operational environment. MOEs represent the customer's most important evaluation and acceptance criteria.

If the customer doesn't know the Measures of Effectiveness in some form, to some level of confidence, the software project is on a Death March and no software development method is going to fix this problem. 

The Measures of Performance state the attributes considered important to ensure that the system has the capability to achieve the operational objectives. MOPs are used to assess whether the system meets design or performance requirements that are necessary to satisfy the Measures of Effectiveness. MOPs are derived from or provide insight to the MOEs or other user needs.

If the customer doesn't know the Measures of Performance in some form, to some level of confidence, the software project is on a Death March and no software development method is going to fix that problem. 

In the End

When we hear value over cost and don't have a unit of measure for Value, it's just a platitude. When we hear value over cost and don't know the cost to achieve that Value, it's just a platitude.

So don't fall for the platitude approach to spending other peoples money in the presence of uncertainty. Define the MOEs, MOPs and the cost to achieve them.

Follow On

With the MOEs and MOPs there is a 3rd measure for the products or services that must also be connected with the cost for achieving them. Technical Performance Measures.

With all three (MOE, MOP, TPM), those paying for the work can monetize these measures to establish a common basis of comparison with the Cost to produce the value. A small illustrative sketch of that monetization follows.
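As an illustration only (the measures, probabilities and dollar figures below are invented, not drawn from any actual program), monetizing MOEs, MOPs and TPMs so they can sit in the same units as cost might look like this:

```python
# Hypothetical monetized measures: (annual benefit if met, probability of meeting it).
measures = {
    "MOE: orders fulfilled within 24h": (1_200_000, 0.80),
    "MOP: p95 checkout latency < 500 ms": (300_000, 0.90),
    "TPM: peak throughput 2,000 tx/s": (150_000, 0.75),
}

# Expected monetized value across all measures.
expected_value = sum(benefit * p for benefit, p in measures.values())
cost_to_achieve = 900_000  # estimated cost to deliver the capability

print(f"Expected monetized value: ${expected_value:,.0f}")
print(f"Cost to achieve:          ${cost_to_achieve:,.0f}")
print(f"Value minus cost:         ${expected_value - cost_to_achieve:,.0f}")
```

Once both sides are in dollars, "value over cost" stops being a platitude and becomes an arithmetic comparison.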

Until those conjecturing you should focus on Value over Cost can produce units of measure for that comparison, consider their statements as just platitudes with no actionable outcomes.

Categories: Project Management

Estimating the Future Based on What We Know Today

Herding Cats - Glen Alleman - Sat, 01/02/2016 - 18:32

In the business of estimating in the presence of uncertainty, a useful tool is Bayesian analysis of what we know today to make forecasts or estimates of the future. The Bayesian approach to inference, as well as decision-making and forecasting, involves conditioning on what is known to make statements about what is not known. 

Bayesian estimating treats the probability of some future outcome as a belief or an opinion. This is different from the frequentist approach to estimating, where it is assumed there is a long-run frequency of events. These events could be a cost, an expected completion date, or some performance parameter: a probability that some value will occur. The frequentist approach is useful when there are long-run frequencies of occurrence. When that is not the case - for example, in project work that may be a unique undertaking - a Bayesian approach to estimating is called for.

Conditioning our decisions on what is known means making use of prior knowledge. In the project domain, this knowledge comes from past performance of the parameters of the project. These include cost, schedule, work capacity, technical performance and other variables involved in the planning and execution of the work of the project.

This information is a distinguishing feature of the Bayesian approach to estimating the future. To do this we first need to fully specify what is known and what is unknown about the past and the future. Then we use what is known to make probabilistic statements about what is unknown.

The Bayesian approach to estimating differs from the traditional - frequentist - approach in that it interprets probability as a measure of believability in an event. That is, how confident are we that the event will occur?

For project work, Bayesian estimating asks: what is the believability that this project will cost some amount or less? Or what is the believability that this project will complete on some date or before? This belief is based on prior information about the question. The assessment of the question is then a probability based on this prior condition.

This is stated as Bayes Theorem

P(A|B) = P(B|A) × P(A) / P(B)

Where P(A) and P(B) are the probabilities of A and B without regard to each other, P(A|B) is the conditional probability of observing event A given that B is true, and P(B|A) is the conditional probability of observing event B given that A is true.

For project work this can be very useful, given we have prior knowledge of some parameter's behaviors and would like to know some probability of that parameters behavior in the future.
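As a minimal sketch of that idea, here is the Bayes update applied to a project question; the events and probabilities are hypothetical:

```python
# A = "project finishes on or before the target date"
# B = "the first integration milestone was hit on schedule"

p_a = 0.50              # prior belief the project finishes on time
p_b_given_a = 0.80      # on-time projects usually hit the first milestone
p_b_given_not_a = 0.30  # late projects sometimes hit it too

# Total probability of observing B.
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes theorem: P(A|B) = P(B|A) * P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b
print(f"Updated belief the project finishes on time: {p_a_given_b:.2f}")  # ~0.73
```

Hitting the milestone does not prove the project will finish on time; it simply moves the belief from 0.50 to roughly 0.73, which is exactly the kind of uncertainty reduction an estimate is for.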

This is distinctly different from averaging past behavior and projecting it into the future. It is also distinctly different from assuming that future behavior will be just like past behavior. These two assumptions are, of course, seriously flawed, yet they are often used in naive estimating or forecasting.

This Bayesian approach to forecasting or estimating future outcomes is also the basis of machine learning using Markov Chain Monte Carlo Simulation. 

When faced with questions like when will we be done or how much will it cost when we are done - and these are normal everyday questions asked by any business that expects to stay in business - then Bayesian modeling can be  useful. Along with frequentist modeling and standard Monte Carlo Simulation of the processes that drive the project.
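For the "when will we be done" question, here is a small Monte Carlo sketch that resamples a hypothetical throughput history to produce a distribution of completion times. It is an illustration of the general idea, not the method from the referenced deck:

```python
import random

past_weekly_throughput = [4, 6, 3, 5, 7, 4, 5, 6]  # hypothetical stories done per week
backlog_size = 60                                   # hypothetical remaining stories
trials = 10_000

weeks_needed = []
for _ in range(trials):
    remaining, weeks = backlog_size, 0
    while remaining > 0:
        remaining -= random.choice(past_weekly_throughput)  # resample history
        weeks += 1
    weeks_needed.append(weeks)

weeks_needed.sort()
for pct in (0.50, 0.85, 0.95):
    print(f"P{int(pct * 100)}: done within {weeks_needed[int(pct * trials)]} weeks")
```

The output is not a single date but a distribution, so the answer can be stated with a confidence level, for example "85% likely to finish within N weeks."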

A good starting place for the whole topic of estimating software development is ...

Introduction to Monte Carlo Analysis for Software Development - Troy Magennis (Focused Objective)

So don't let anyone tell you that estimates - good estimates - can't be made for software development projects. They may not know how, because they haven't done their homework, but it is certainly possible, and for any non-trivial project it is not only possible but important for the success of the project and the business that is funding it.
Categories: Project Management

Zopfli Optimization: Literally Free Bandwidth

Coding Horror - Jeff Atwood - Sat, 01/02/2016 - 10:38

In 2007 I wrote about using PNGout to produce amazingly small PNG images. I still refer to this topic frequently, as seven years later, the average PNG I encounter on the Internet is very unlikely to be optimized.

For example, consider this recent Perry Bible Fellowship cartoon.

Saved directly from the PBF website, this comic is a 800 × 1412, 32-bit color PNG image of 671,012 bytes. Let's save it in a few different formats to get an idea of how much space this image could take up:

BMP, 24-bit: 3,388,854
BMP, 8-bit: 1,130,678
GIF, 8-bit, no dither: 147,290
GIF, 8-bit, max dither: 283,162
PNG, 32-bit: 671,012

PNG is a win because like GIF, it has built-in compression, but unlike GIF, you aren't limited to cruddy 8-bit, 256 color images. Now what happens when we apply PNGout to this image?

Default PNG: 671,012
PNGout: 623,859 (7% smaller)

Take any random PNG of unknown provenance, apply PNGout, and you're likely to see around a 10% file size savings, possibly a lot more. Remember, this is lossless compression. The output is identical. It's a smaller file to send over the wire, and the smaller the file, the faster the decompression. This is free bandwidth, people! It doesn't get much better than this!

Except when it does.

In 2013 Google introduced a new, fully backwards compatible method of compression they call Zopfli.

The output generated by Zopfli is typically 3–8% smaller compared to zlib at maximum compression, and we believe that Zopfli represents the state of the art in Deflate-compatible compression. Zopfli is written in C for portability. It is a compression-only library; existing software can decompress the data. Zopfli is bit-stream compatible with compression used in gzip, Zip, PNG, HTTP requests, and others.

I apologize for being super late to this party, but let's test this bold claim. What happens to our PBF comic?

Default PNG: 671,012
PNGout: 623,859 (7% smaller)
ZopfliPNG: 585,117 (13% smaller)

Looking good. But that's just one image. We're big fans of Emoji at Discourse, let's try it on the original first release of the Emoji One emoji set – that's a complete set of 842 64×64 PNG files in 32-bit color:

Default PNG: 2,328,243
PNGout: 1,969,973 (15% smaller)
ZopfliPNG: 1,698,322 (27% smaller)

Wow. Sign me up for some of that.

In my testing, Zopfli reliably produces 3 to 8 percent smaller PNG images than even the mighty PNGout, which is an incredible feat. Furthermore, any standard gzip compressed resource can benefit from Zopfli's improved deflate, such as jQuery:

Or the standard compression corpus tests:

Alexa 10k: gzip -9 128 MB, kzip 125 MB, Zopfli 124 MB
Calgary: gzip -9 1,017 KB, kzip 979 KB, Zopfli 975 KB
Canterbury: gzip -9 731 KB, kzip 674 KB, Zopfli 670 KB
enwik8: gzip -9 36 MB, kzip 35 MB, Zopfli 35 MB

(Oddly enough, I had not heard of kzip – turns out that's our old friend Ken Silverman popping up again, probably using the same compression bag of tricks from his PNGout utility.)

But there is a catch, because there's always a catch – it's also 80 times slower. No, that's not a typo. Yes, you read that right.

gzip -9: 5.6s
7-zip -mm=Deflate -mx=9: 128s
kzip: 336s
Zopfli: 454s

Gzip compression is faster than it looks in the above comparison, because level 9 is a bit slow for what it does:

gzip level: time, size (as % of original)
gzip -1: 11.5s, 40.6%
gzip -2: 12.0s, 39.9%
gzip -3: 13.7s, 39.3%
gzip -4: 15.1s, 38.2%
gzip -5: 18.4s, 37.5%
gzip -6: 24.5s, 37.2%
gzip -7: 29.4s, 37.1%
gzip -8: 45.5s, 37.1%
gzip -9: 66.9s, 37.0%

You decide if that whopping 0.1% compression ratio difference between gzip -7 and gzip -9 is worth the doubling in CPU time. In related news, this is why pretty much every compression tool's so-called "Ultra" compression level or mode is generally a bad idea. You fall off an algorithmic cliff pretty fast, so stick with the middle or the optimal part of the curve, which tends to be the default compression level. They do pick those defaults for a reason.
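If you want to reproduce the shape of that table yourself, here's a quick sketch using Python's zlib, which implements the same Deflate algorithm gzip uses. Point it at any reasonably large file; the exact ratios and times depend entirely on the input:

```python
import sys
import time
import zlib

# Default path is just a convenient large-ish file on many systems; pass your own.
path = sys.argv[1] if len(sys.argv) > 1 else "/usr/share/dict/words"
payload = open(path, "rb").read()

for level in range(1, 10):
    start = time.perf_counter()
    compressed = zlib.compress(payload, level)
    elapsed = time.perf_counter() - start
    ratio = len(compressed) / len(payload)
    print(f"level {level}: {elapsed * 1000:7.1f} ms  {ratio:6.1%} of original")
```

The flat tail at the high levels is the "algorithmic cliff": you keep paying more CPU for vanishing size improvements.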

PNGout was not exactly fast to begin with, so imagining something that's 80 times slower (at best!) to compress an image or a file is definite cause for concern. You may not notice on small images, but try running either on a larger PNG and it's basically time to go get a sandwich. Or if you have a multi-core CPU, 4 to 16 sandwiches. This is why applying Zopfli to user-uploaded images might not be the greatest idea, because the first server to try Zopfli-ing a 10k × 10k PNG image is in for a hell of a surprise.

However, remember that decompression is still the same speed, and totally safe. This means you probably only want to use Zopfli on pre-compiled resources, which are designed to be compressed once and downloaded millions of times – rather than a bunch of PNG images your users uploaded which may only be viewed a few hundred or thousand times at best, regardless of how optimized the images happen to be.
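In practice that means something like a build-time pass over your pre-compiled assets. The sketch below assumes the zopflipng command-line tool from Google's Zopfli project is installed and on the PATH, and that an assets directory like the one named here exists; adjust to taste:

```python
import pathlib
import subprocess

assets_dir = pathlib.Path("public/assets")  # hypothetical pre-compiled asset directory

for png in assets_dir.rglob("*.png"):
    optimized = png.with_suffix(".zopfli.png")
    # zopflipng takes an input file and an output file; this is slow, so run it
    # once at build time, never per-request or per-upload.
    subprocess.run(["zopflipng", str(png), str(optimized)], check=True)
    optimized.replace(png)  # swap the optimized file into place
    print(f"re-compressed {png}")
```

Because the output is still a perfectly ordinary PNG, nothing downstream has to change; you just serve smaller files forever.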

For example, at Discourse we have a default avatar renderer which produces nice looking PNG avatars for users based on the first letter of their username, plus a color scheme selected via the hash of their username. Oh yes, and the very nice Roboto open source font from Google.
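The pattern is simple enough to sketch. The code below is illustrative only; it is not Discourse's implementation, and the palette is a made-up stand-in for the roughly 250 real color schemes:

```python
import hashlib

# Hypothetical palette of (background, foreground) pairs.
COLOR_SCHEMES = [
    ("#AB47BC", "#FFFFFF"),
    ("#EF5350", "#FFFFFF"),
    ("#42A5F5", "#FFFFFF"),
    ("#66BB6A", "#FFFFFF"),
]

def avatar_style(username: str):
    """Return the letter and color scheme for a generated letter avatar."""
    letter = username[0].upper()
    digest = hashlib.md5(username.lower().encode("utf-8")).hexdigest()
    scheme = COLOR_SCHEMES[int(digest, 16) % len(COLOR_SCHEMES)]
    return letter, scheme

print(avatar_style("codinghorror"))  # same username always yields the same scheme
```

Because the scheme is derived from a hash of the username, the same user gets the same avatar everywhere without storing anything.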

We spent a lot of time optimizing the output avatar images, because these avatars can be served millions of times, and pre-rendering the whole lot of them, given the constraints of …

  • 10 numbers
  • 26 letters
  • ~250 color schemes
  • ~5 sizes

… isn't unreasonable at around 45,000 unique files. We also have a centralized https CDN we set up to serve avatars (if desired) across all Discourse instances, to further reduce load and increase cache hits.
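If you want to sanity-check that 45,000 figure, the enumeration is simple enough to sketch. The pixel sizes and file naming below are made up for illustration, not Discourse's actual values; only the counts come from the list above:

    /* Back-of-the-envelope enumeration of the avatar pre-render space:
       10 digits + 26 letters, ~250 color schemes, ~5 sizes. */
    #include <stdio.h>

    int main(void) {
      const char chars[] = "abcdefghijklmnopqrstuvwxyz0123456789";
      const int color_schemes = 250;
      const int sizes[] = {25, 45, 90, 120, 240};   /* hypothetical pixel sizes */
      const int num_sizes = (int)(sizeof sizes / sizeof sizes[0]);

      long count = 0;
      for (int c = 0; chars[c] != '\0'; c++)
        for (int scheme = 0; scheme < color_schemes; scheme++)
          for (int s = 0; s < num_sizes; s++)
            count++;   /* one file, e.g. "a_042_240.png" (made-up name) */

      printf("%ld avatar variants to pre-render\n", count);  /* 36 * 250 * 5 = 45,000 */
      return 0;
    }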

Because these images stick to shades of one color, I reduced the color palette to 8-bit (actually 128 colors) to save space, and of course we run PNGout on the resulting files. They're about as tiny as you can get. When I ran Zopfli on a few of these avatars, I was super excited to see my expected 3 to 8 percent free file size reduction, but after the console commands ran, I saw that it saved … 1 byte, 5 bytes, and 2 bytes respectively. Cue sad trombone.

(Yes, it is technically possible to produce strange "lossy" PNG images, but I think that's counter to the spirit of PNG which is designed for lossless images. If you want lossy images, go with JPG or another lossy format.)

The great thing about Zopfli is that, assuming you are OK with the extreme up front CPU demands, it is a "set it and forget it" optimization step that can apply anywhere and will never hurt you. Well, other than possibly burning a lot of spare CPU cycles.

If you work on a project that serves compressed assets, take a close look at Zopfli. It's not a silver bullet – as with all advice, run the tests on your files and see – but it's about as close as it gets to literally free bandwidth in our line of work.

Categories: Programming

Distributing a beta version of an iOS app

Agile Testing - Grig Gheorghiu - Fri, 01/01/2016 - 20:41

I am not an iOS expert by any means, but recently I’ve had to maintain an iOS app and distribute it to beta testers. I had to jump through a few hoops, so I am documenting here the steps I had to take.

First of all, I am using Xcode 6.4 with the Fabric 2.1.1 plugin. I assume you are already signed up for the Fabric/Crashlytics service and that you also have an Apple developer account.
  1. Ask each beta tester to send you the UUID of the devices they want to run your app on.
  2. Go to developer.apple.com -> “Certificates, Identifiers and Profiles” -> “Devices” and add each device with its associated UUID. Let’s say you add a device called “Tom’s iPhone 6s” with its UUID.
  3. Go to Xcode -> Preferences -> Accounts. If you already have an account set up, remove it by selecting it and clicking the minus icon on the lower left side. Add an account: click the plus icon, choose “Add Apple ID” and enter your Apple ID and password. This will import your Apple developer provisioning profile into Xcode, with the newly added device UUIDs (note: there may be a better way of adding/modifying the provisioning profile within Xcode, but this worked for me).
  4. Make sure the Fabric plugin is running on your Mac.
  5. Go to Xcode and choose the iOS application you want to distribute. Choose iOS Device as the target for the build.
  6. Go to Xcode -> Product -> Archive. This will build the app, then the Fabric plugin will pop up a message box asking you if you want to distribute the archive build. Click Distribute.
  7. The Fabric plugin will pop up a dialog box asking you for the email of the tester(s) you want to invite. Enter one or more email addresses. Enter release notes. At this point the Fabric plugin will upload your app build to the Fabric site and notify the tester(s) that they are invited to test the app.

Stuff The Internet Says On Scalability For January 1st, 2016

Hey, Happy New Year, it's HighScalability time:


River system? Vascular system? Nope. It's a map showing how all roads really lead to Rome.

 

If you like Stuff The Internet Says On Scalability then please consider supporting me on Patreon.
  • 71: mentions of innovation by the Chinese Communist Party; 60.5%: of all burglaries involve forcible entry; 280,000-square-foot: Amazon's fulfillment center in India, capable of shipping 2 million items; 11 billion: habitable Earth-like planets in the Goldilocks zone in just our galaxy; 800: people working on the iPhone's camera (how about the app store?); 3.3 million: who knew there were so many Hello Kitty fans?; 26 petabytes: size of League of Legends' data warehouse.

  • Quotable Quotes:
    • George Torwell: Tor is Peace / Prism is Slavery / Internet is Strength
    • @SciencePorn: Mr Claus will eat 150 BILLION calories and visit 5,556 houses per second this Christmas Eve.
    • @SciencePorn: Blue Whale's heart is so big, a small child can swim through the veins.
    • @BenedictEvans: There are close to 4bn people on earth with a phone (depending on your assumptions). Will go to at least 5bn. So these issues will grow.
    • @JoeSondow: "In real life you won't always have a calculator with you." — math teachers in the 80s
    • James Hamilton: This is all possible due to the latencies we see with EC2 Enhanced networking. Within an availability zone, round-trip times are now tens of microseconds, which make it feasible to propose and commit transactions to multiple resilient nodes in less than a millisecond.
    • Benedict Evans: The mobile ecosystem, now, is heading towards perhaps 10x the scale of the PC industry, and mobile is not just a new thing or a big thing, but that new generation, whose scale makes it the new centre of gravity of the tech industry. Almost everything else will orbit around it. 
    • Ruth Williams: Bacteria growing in an unchanging environment continue to adapt indefinitely.
    • @Raju: Not one venture-backed news aggregator has yet shown a Sustainable Business Model
    • @joeerl: + choose accurate names + favor beauty over performance + design minimal essential API's + document the unobvious
    • @shibuyashadows: There is no such thing as a full-node anymore. Now there are two types: Mining Nodes and Economic Nodes. Both sets are now semi-centralized on the network, are heavily inter-dependent and represent the majority of the active Bitcoin users.
    • @TheEconomist: In 1972 a man with a degree aged 25-34 earned 22% more than a man without. Today, it's 70%
    • Dr. David Miller~ We are in the age of Howard Hughes. People make their fortune elsewhere and spend it on space. 
    • Credit for CRISPR: Part of that oversimplification is rooted in the fact that most modern life-science researchers aren’t working to uncover broad biological truths. These days the major discoveries lie waiting in the details
    • @BenedictEvans: Idle observation: Facebook will almost certainly book more revenue in 2015 than the entire internet ad industry made up until 2000
    • Eric Clemmons: Ultimately, the problem is that by choosing React (and inherently JSX), you’ve unwittingly opted into a confusing nest of build tools, boilerplate, linters, & time-sinks to deal with before you ever get to create anything.
    • Kyle Russell: Why do I need such a powerful PC for VR? Immersive VR experiences are 7x more demanding than PC gaming.
    • @josevalim: The system that manages rate limits for Pinterest written in Elixir with a 90% response time of 800 microseconds.
    • catnaroek: The normal distribution is important because it arises naturally when the preconditions of the central limit theorem hold. But you still have to use your brain - you can't unquestioningly assume that any random variable (or sample or whatever) you will stumble upon will be approximately normally distributed.
    • Dominic Chambers: Now, if you consider the server-side immutable state atom to be a materialized view of the historic events received by a server, you can see that we've already got something very close to a Samza style database, but without the event persistence.
    • Joscha Bach: In my view, the 20th century’s most important addition to understanding the world is not positivist science, computer technology, spaceflight, or the foundational theories of physics. It is the notion of computation. Computation, at its core, and as informally described as possible, is very simple: every observation yields a set of discernible differences.

  • The New Yorker is picking up on the Winner Takes All theme that's been developing; I guess that makes it an official meme. What's missing from their analysis is that users are attracted to the eventual winners because they provide a superior customer experience. Magical algorithms are in support of experience. As long as a product doesn't fail at providing that experience, there's little reason to switch once you've been networked into a choice. You might think many products could find purchase along the long tail, but in low-friction markets that doesn't seem to be the case. Other choices become invisible, and what's invisible starves to death.

  • I wonder how long it took to get to the 1 billionth horse ride? Uber Hits One Billionth Ride in 5.5 years.

  • Let's say you are a frog that has been in a warming pot for the last 15 years: what would you have missed? Robert Scoble has put together quite a list. 15 years ago there was no: Facebook, YouTube, Twitter, Google+, Quora, Uber, Lyft, iPhone, iPads, iPod, Android, HDTV, self-driving cars, Waze, Google Maps, Spotify, Soundcloud, WordPress, Wechat, Flipkart, AirBnb, Flipboard, LinkedIn, AngelList, Techcrunch, Google Glass, Y Combinator, Techstars, Geekdom, AWS, OpenStack, Azure, Kindle, Tesla, and a lot more.

  • He who controls the algorithm reaps the rewards. Kansas is now the 5th state where lottery prizes may have been fixed.

  • What Is The Power Grid? A stunning 60% of generated energy is lost before it can be consumed, which is why I like my power grids like my databases: distributed and shared nothing.

Don't miss all that the Internet has to say on Scalability: click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read, so please keep on reading)...

Categories: Architecture