
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Influential Agile Leader, May 9-10, 2017

Is your agile transition proceeding well? Or is it stuck in places? Maybe the teams aren't improving. Maybe no one knows "how to get it all done." Maybe you're tired and don't know how you'll find the energy to continue. Or perhaps you can't see how to engage your management, or their management, in creating an agile culture?

You are the kind of person who would benefit from the Influential Agile Leader event in Toronto, May 9-10, 2017.

Gil Broza and I co-facilitate. It’s experiential, so you learn by doing. You practice your coaching and influence in the mornings. You’ll have a chance to map your organizational dynamics to see where to put your energy. You’ll split into smaller sessions in the afternoon, focusing on your specific business challenges.

If you would like to bring your agile transition to the next level, or, at the very least, unstick it, please join us.

We sold out of super early bird registration. Our early bird registration is still a steal.

If you have questions, please post a comment or email me. Hope to work with you at The Influential Agile Leader.

(See the servant leadership tag for the Pragmatic Manager and the leadership tag on this blog to see relevant articles I've written before.)

Categories: Project Management

Tester in an agile team: a necessity or dispensable?

Xebia Blog - Tue, 02/07/2017 - 17:13
Let’s imagine it’s the year 2025 and we peek inside an average IT company to take a look at the software development teams working there: what are the chances that there will still be a person who is a tester in each of these teams? Some of you will say: “of course they’ll be gone,

What’s in an AMP URL?

Google Code Blog - Mon, 02/06/2017 - 20:00
Posted by Alex Fischer, Software Engineer, Google Search.

TL;DR: Today, we're adding a feature to the AMP integration in Google Search that allows users to access, copy, and share the canonical URL of an AMP document. But before diving deeper into the news, let's take a step back to elaborate more on URLs in the AMP world and how they relate to the speed benefits of AMP.

What's in a URL? On the web, a lot - URLs and origins represent, to some extent, trust and ownership of content. When you're reading a New York Times article, a quick glimpse at the URL gives you a level of trust that what you're reading represents the voice of the New York Times. Attribution, brand, and ownership are clear.

Recent product launches in different mobile apps and the recent launch of AMP in Google Search have blurred this line a little. In this post, I'll first try to explain the reasoning behind some of the technical decisions we made and make sense of the different kinds of AMP URLs that exist. I'll then outline changes we are making to address the concerns around URLs.

To start with, AMP documents have three different kinds of URLs:
  • Original URL: The publisher's document written in the AMP format. http://www.example.com/amp/doc.html
  • AMP Cache URL: The document served through an AMP Cache (e.g., all AMPs served by Google are served through the Google AMP Cache). Most users will never see this URL. https://www-example-com.cdn.ampproject.org/c/www.example.com/amp/doc.html
  • Google AMP Viewer URL: The document displayed in an AMP viewer (e.g., when rendered on the search result page). https://www.google.com/amp/www.example.com/amp.doc.html



Although having three different URLs with different origins for essentially the same content can be confusing, there are two main reasons why these different URLs exist: caching and pre-rendering. Both are large contributors to AMP's speed, but they require new URLs, and I will elaborate on why that is.
AMP Cache URLs

Let's start with AMP Cache URLs. Paul Bakaus, a Google Developer Advocate for AMP, has an excellent post describing why AMP Caches exist. Paul's post goes into great detail describing the benefits of AMP Caches, but it doesn't quite answer the question of why they require new URLs. The answer to this question comes down to one of the design principles of AMP: build for easy adoption. AMP tries to solve some of the problems of the mobile web at scale, so its components must be easy to use for everyone.

There are a variety of options to get validation, proximity to users, and the other benefits provided by AMP Caches. For a small site that doesn't manage its own DNS entries, doesn't have engineering resources to push content through complicated APIs, or can't pay for content delivery networks, however, a lot of these technologies are inaccessible.

For this reason, the Google AMP Cache works by means of a simple URL "transformation." A webmaster only has to make their content available at some URL, and the Google AMP Cache can then cache and serve the content through Google's worldwide infrastructure via a new URL that mirrors and transforms the original. It's as simple as that. Leveraging an AMP Cache through the original URL, on the other hand, would require the webmaster to modify their DNS records or reconfigure their name servers. While some sites do just that, the URL-based approach is easier for the vast majority of sites.
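To make the "transformation" concrete, here is a minimal TypeScript sketch (my illustration, not Google's actual implementation) of how an original URL maps onto its Google AMP Cache URL, following the pattern in the examples above:

  // Derive a Google AMP Cache URL from the publisher's original URL,
  // following the pattern shown above: dots in the host become dashes
  // in the cache subdomain, and "/c/" prefixes cached content.
  function toAmpCacheUrl(originalUrl: string): string {
    const url = new URL(originalUrl);
    const cacheHost = url.hostname.replace(/\./g, "-") + ".cdn.ampproject.org";
    return `https://${cacheHost}/c/${url.hostname}${url.pathname}`;
  }

  console.log(toAmpCacheUrl("http://www.example.com/amp/doc.html"));
  // -> https://www-example-com.cdn.ampproject.org/c/www.example.com/amp/doc.html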
AMP Viewer URLs

In the previous section, we learned about Google AMP Cache URLs -- URLs that point to the cached version of an AMP document. But what about www.google.com/amp URLs? Why are they needed? These are "AMP Viewer" URLs, and they exist because of pre-rendering.
AMP's built-in support for privacy- and resource-conscious pre-rendering is rarely talked about and often misunderstood. AMP documents can be pre-rendered without setting off a cascade of resource fetches, without hogging users' CPU and memory, and without running any privacy-sensitive analytics code. This works regardless of whether the embedding application is a mobile web page or a native application. The need for new URLs, however, comes mostly from mobile web implementations, so I am using Google's mobile search result page (SERP) as an illustrative example.
How does pre-rendering work?

When a user performs a Google search that returns AMP-enabled results, some of these results are pre-rendered behind the scenes. When the user clicks on a pre-rendered result, the AMP page loads instantly.

Pre-rendering works by loading a hidden iframe on the embedding page (the search result page) with the content of the AMP page and an additional parameter that indicates that the AMP document is only being pre-rendered. The JavaScript component that handles the lifecycle of these iframes is called "AMP Viewer".

The AMP Viewer pre-renders an AMP document in a hidden iframe.

The user's browser loads the document and the AMP runtime and starts rendering the AMP page. Since all other resources, such as images and embeds, are managed by the AMP runtime, nothing else is loaded at this point. The AMP runtime may decide to fetch some resources, but it will do so in a way that is sensible with respect to resources and privacy.

When a user clicks on the result, all the AMP Viewer has to do is show the iframe that the browser has already rendered and let the AMP runtime know that the AMP document is now visible.
As you can see, this operation is incredibly cheap - there is no network activity or hard navigation to a new page involved. This leads to a near-instant loading experience of the result.
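To make the mechanics concrete, here is a minimal TypeScript sketch of the pre-render/reveal flow described above. The fragment parameter and message shape are simplified illustrations, not the real AMP viewer protocol:

  // Pre-render an AMP document in a hidden iframe, telling the AMP
  // runtime it is only being pre-rendered so it lays out the document
  // but defers resource fetches and analytics.
  function prerenderResult(ampCacheUrl: string): HTMLIFrameElement {
    const frame = document.createElement("iframe");
    frame.src = `${ampCacheUrl}#visibilityState=prerender`;
    frame.style.display = "none"; // hidden until the user taps the result
    document.body.appendChild(frame);
    return frame;
  }

  function showResult(frame: HTMLIFrameElement): void {
    // Reveal the already-rendered iframe and notify the runtime that the
    // document is now visible - no network activity or hard navigation.
    frame.style.display = "block";
    frame.contentWindow?.postMessage({ visibilityState: "visible" }, "*");
  }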
Where do google.com/amp URLs come from?

All of the above happens while the user is still on the original page (in our example, that's the search results page). In other words, the user hasn't gone to a different page; they have just viewed an iframe on the same page, and so the browser doesn't change the URL.

We still want the URL in the browser to reflect the page that is displayed on the screen and make it easy for users to link to. When users hit refresh in their browser, they expect the same document to show up and not the underlying search result page. So the AMP viewer has to manually update this URL. This happens using the History API. This API allows the AMP Viewer to update the browser's URL bar without doing a hard navigation.

The question is what URL the browser should be updated to. Ideally, this would be the URL of the result itself (e.g., www.example.com/amp/doc.html) or the AMP Cache URL (e.g., www-example-com.cdn.ampproject.org/www.example.com/amp/doc.html). Unfortunately, it can't be either of those. One of the main restrictions of the History API is that the new URL must be on the same origin as the original URL. This is enforced by browsers for security reasons, but it means that in Google Search, this URL has to be on the www.google.com origin.
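A small sketch of that constraint, assuming a viewer running on www.google.com:

  // Sketch of the URL update step. history.replaceState only accepts a
  // URL on the same origin as the current page, which is why the viewer
  // URL must live under www.google.com.
  function updateViewerUrl(originalUrl: string): void {
    // e.g. /amp/www.example.com/amp/doc.html - same-origin, allowed.
    const viewerPath = "/amp/" + originalUrl.replace(/^https?:\/\//, "");
    history.replaceState(null, "", viewerPath);

    // By contrast, the following would throw a SecurityError, because
    // the URL is on a different origin:
    // history.replaceState(null, "", "https://www.example.com/amp/doc.html");
  }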
Why do we show a header bar?

The previous section explained the restrictions on URLs that an AMP Viewer has to handle. These URLs, however, can be confusing and misleading, and they can open the door to phishing attacks. If an AMP page showed a login page that looks like Google's and the URL bar says www.google.com, how would a user know that this page isn't actually Google's? That's where the need for additional attribution comes in.

To provide appropriate attribution of content, every AMP Viewer must make it clear to users where the content that they're looking at is coming from. And one way of accomplishing this is by adding a header bar that displays the "true" origin of a page.

What's next?

I hope the previous sections made it clear why these different URLs exist and why there needs to be a header in every AMP viewer. We have heard how you feel about this approach and the importance of URLs. So what's next? As you know, we want to be thoughtful in what we do and ensure that we don't break the speed and performance users expect from AMP pages.

Since the launch of AMP in Google Search in Feb 2016, we have taken the following steps:
  • All Google URLs (i.e., the Google AMP cache URL and the Google AMP viewer URL) reflect the original source of the content as best as possible: www.google.com/amp/www.example.com/amp/doc.html.
  • When users scroll down the page to read a document, the AMP viewer header bar hides, freeing up precious screen real-estate.
  • When users visit a Google AMP viewer URL on a platform where the viewer is not available, we redirect them to the canonical page for the document.
In addition to the above, many users have requested a way to access, copy, and share the canonical URL of a document. Today, we're adding support for this functionality in the form of an anchor button in the AMP Viewer header on Google Search. This feature allows users to use their browser's native share functionality by long-tapping on the link that is displayed.


In the coming weeks, the Android Google app will share the original URL of a document when users tap on the app's share button. This functionality is already available on the iOS Google app.

Lastly, we're working on leveraging upcoming web platform APIs that allow us to improve this functionality even further. One such API is the Web Share API that would allow AMP viewers to invoke the platform's native sharing flow with the original URL rather than the AMP viewer URL.
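As a rough sketch of what that could look like (navigator.share is the standard Web Share API entry point; how a viewer wires it up is an assumption here):

  // Hypothetical sketch: share the original URL instead of the viewer URL.
  async function shareOriginal(originalUrl: string, title: string): Promise<void> {
    if (navigator.share) {
      // Invokes the platform's native sharing flow.
      await navigator.share({ title, url: originalUrl });
    } else {
      // Fallback: copy the canonical link for the user to paste.
      await navigator.clipboard.writeText(originalUrl);
    }
  }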

We at Google fully intend to make the AMP experience as good as we can for both users and publishers. A thriving ecosystem is very important to us, and attribution, user trust, and ownership are important pieces of this ecosystem. I hope this blog post helps clear up the origin of the three URLs of AMP documents, their role in making AMP fast, and our efforts to further improve the AMP experience in Google Search. Lastly, an ecosystem can only flourish with your participation: give us feedback and get involved with AMP.

Categories: Programming

Get ready for Google Developer Day at GDC 2017

Android Developers Blog - Mon, 02/06/2017 - 18:50
Posted by Noah Falstein, Chief Game Designer at Google

The Game Developers Conference (GDC) kicks off on Monday, February 27th with our annual Google Developer Day. Join us as we demonstrate how new devices, platforms, and tools are helping developers build successful businesses and push the limits of mobile gaming on Android.

Expect exciting announcements, best practices, and tips covering a variety of topics including Google Play, Daydream VR, Firebase, Cloud Platform, machine learning, monetization, and more. In the afternoon, we'll host panels to hear from developers first-hand about their experiences launching mobile games, building thriving communities, and navigating the successes and challenges of being an indie developer.
Visit our site for more info and the Google Developer Day schedule. These events are part of the official Game Developers Conference, so you will need a pass to attend. For those who can't make it in person, watch the live stream on YouTube starting at 10am PST on Monday, February 27th.


Categories: Programming

Part 2 of Thinking Serverless - Platform Level Issues

This is a guest repost by Ken Fromm, a 3x tech co-founder — Vivid Studios, Loomia, and Iron.io. Here's Part 1.

Job processing at scale and at high concurrency across a distributed infrastructure is a complicated feat. There are many components involved - servers and controllers to process and monitor jobs, controllers to autoscale and manage servers, controllers to distribute jobs across the set of servers, queues to buffer jobs, and a whole host of other components that ensure jobs complete and/or are retried, along with other critical tasks that help maintain high service levels. This section peels back the layers a bit to provide insight into important aspects of the workings of a serverless platform.
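As a minimal sketch of the kind of worker loop such a platform runs (all names and interfaces here are illustrative assumptions, not any platform's actual API):

  // Illustrative job shape and queue interface - assumptions for the sketch.
  interface Job {
    id: string;
    payload: unknown;
    retriesLeft: number;
  }

  interface JobQueue {
    pop(): Promise<Job | null>;
    push(job: Job): Promise<void>;
  }

  // Pull a job, run it, and re-queue it with a reduced retry budget on
  // failure - the "ensure jobs complete and/or are retried" role above.
  async function workerLoop(queue: JobQueue,
                            handler: (payload: unknown) => Promise<void>): Promise<void> {
    while (true) {
      const job = await queue.pop();
      if (!job) continue; // empty queue; a real worker would back off here
      try {
        await handler(job.payload);
      } catch (err) {
        if (job.retriesLeft > 0) {
          await queue.push({ ...job, retriesLeft: job.retriesLeft - 1 });
        } else {
          console.error(`job ${job.id} failed permanently`, err);
        }
      }
    }
  }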

Throughput

Throughput has always been the coin of the realm in computer processing - how quickly can events, requests, and workloads be processed? In the context of a serverless architecture, I'll break throughput down further when discussing both latency and concurrency. At the base level, however, a serverless architecture provides a more beneficial architecture than legacy applications and large web apps when it comes to throughput because it provides for far better resource utilization.

In his post What is Serverless Computing and Why is it Important, Travis Reeder addresses this topic:

Cost and optimal use of resources is a huge reason to do serverless. If you are a big company with a bunch of apps/APIs/microservices, you are currently running those things 24/7 and they are using resources 100% of the time, no matter if they are in use or not. With a FaaS infrastructure, instead of running apps 24/7, you can execute functions for any number of apps on demand and share all the same resources. Theoretically, you could reduce waste (idle time) to almost nothing while still providing fast response time. For a FaaS provider, this cost savings is passed up to the end user, the developer. For an enterprise, this can reduce capex and opex big time.

Another way of looking at it is that by moving to more discrete tasks that can run on a universal platform with self-contained dependencies, tasks can run anytime, anywhere across a serverless architecture. This is in contrast to a set of standalone monolithic applications, whereby operations teams have to spend significant cycles arbitrating which applications to scale, when, and how. (A serverless architecture can also increase the throughput of application and feature development, but much has been said in this regard as it relates to microservices and functions as a service.)

A Graph of Tasks and Projects

The graph below shows a set of tasks over time for a single account on a serverless platform. The overarching yellow line indicates all tasks for the account and the other lines represent projects within the account. Each project line should be viewed as a microservice or a specific set of application functions. A few years ago, the total set would have been built as a traditional web application and hosted as a long-running application. As you can see, however, each service or set of functions has a different workload characteristic. Managing the aggregated set at an application level is far more complex than managing at the task level within a serverless platform, not to mention the resource savings from scaling commodity task servers as opposed to much more complex application servers.

All Tasks (Application View) vs Specific Tasks (Serverless View)

Latency
Categories: Architecture

Fashion gets a digital upgrade with the Google Awareness API

Android Developers Blog - Mon, 02/06/2017 - 17:03
Posted by Jeremy Brook, Group Creative Business Partner, the ZOO

Last summer, we made the Awareness API available to all developers through Google Play services for the first time, providing a powerful and unified sensing platform that enables apps to be aware of all aspects of a user's environment. By using a combination of context signals, such as location, physical activity, weather and nearby beacons, developers can better understand their users individually and provide more engaging and customized mobile app experiences.
We have already seen some great implementations of the API in obvious scenarios, such as shopping for a new home in the neighborhood or recommending a music playlist while starting a jog. For New York Fashion Week, we explored other creative integrations of the Awareness API and collaborated with H&M Group's digital fashion house Ivyrevel and its Fashion Tech Lab to bring couture into the digital age with the 'Data Dress,' a personalized dress designed entirely based on a user's context signals.

Currently under development, the Android app specifically uses the Snapshot API within the platform to passively monitor each user's daily activity and lifestyle with their permission. Where do you regularly eat out for dinner or hang out with friends? Are those meetups more casual or formal? What's the usual weather when you're outside? Over the course of a week, the user's context signals are passed through an algorithm that creates a digitally tailored dress design for the user to purchase.

The Android app is launching in closed alpha stage, and is currently being tested by selected global style influencers including Ivyrevel's co-founder Kenza Zouiten. If you want a truly 'tailored' digital experience, sign up here to participate in a future trial of the app before the public release.


Categories: Programming

Top 5 Ingredients for developing Cloud Native Applications

Xebia Blog - Mon, 02/06/2017 - 07:50
Introduction Cloud Native Applications is a trend in IT that promises to develop and deploy applications at scale fast and cost-efficient by leveraging cloud services to get run-time platform capabilities such as performance, scalability and security out of the box. Teams are able to focus on delivering functionality to increase the pace of innovation.  Everything

SPaMCAST 429 - Ryan Ripley, Agile Certifications Good and Bad Influences

SPaMCAST Logo

http://www.spamcast.net

Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

The Software Process and Measurement Cast 429 is a special event. Ryan Ripley (who appeared on SPaMCAST 404 and is the host of the Agile for Humans Podcast) and I recently connected virtually to discuss the role and impact of certifications on the Agile movement. Certifications are an important gating tool in the job market and may provide evidence that people are keeping up to date with changes in the industry. Or certifications could represent the calcifying of boundaries that make the adage 'inspect and adapt' a thing of the past. We discuss! We are going to release the audio on both our podcasts serially, the SPaMCAST today and then Agile for Humans on the 13th!

Make sure both Agile for Humans and the Software Process and Measurement Cast are part of your weekly rituals!  

Mr. Ryan Ripley has worked on agile teams for the past 10 years in development, scrum master and management roles. He’s worked at various Fortune 1000 companies in the medical device, wholesale, and financial services industries.

Ryan is great at taking tests and holds the PMI-ACP, PSM I, PSM II, PSE, PSPO I, PSD I, CSM, CSPO, and CSP agile certifications. He lives in Indiana with his wife Kristin and three children. Ryan blogs at ryanripley.com and hosts the Agile for Humans Podcast. You can also follow Ryan on Twitter: @ryanripley

Re-Read Saturday News

This week we tackle Chapter 2 in Carol Dweck's Mindset: The New Psychology of Success (buy your copy and read along). In Chapter 2, Dweck provides a deeper dive into fixed and growth mindsets. The chapter begins with Dweck relating how the discovery that there were two meanings to the word 'ability' shaped the work. The first definition of ability is a fixed capability that needs to be (continually) proven; the second is a capability that can be developed through learning. The distinction between the two definitions is at the heart of the behavioral differences between the growth and fixed mindsets. Those who believe that abilities can be developed will seek stretch goals and view failures as learning opportunities, while those with a fixed mindset will have a very different point of view.

Every week we discuss the chapter then consider the implications of what we have "read" from the point of view of someone pursuing an organizational transformation and also how to use the material when coaching teams.

Remember to buy a copy of Carol Dweck’s Mindset and read along!

Visit the Software Process and Measurement Cast blog to participate in this and previous re-reads.

Next SPaMCAST

The Software Process and Measurement Cast 430 will shift back to the magazine format with an essay on product owners.  The product owner role  is nuanced and sometimes hard.  The essay will help you sort things out.  

We will also have columns from Steve Tendon with another chapter in his Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban, published by J Ross (buy a copy here) and an installment of Gene Hughson’s Form Follows Function Blog (the same Gene, that Ryan called out on this week’s cast).

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: "This book will prove that software projects should not be a tedious process, for you or your team." Support SPaMCAST by buying the book here. Available in English and Chinese.


Categories: Process Management


Mindset: The New Psychology of Success: Re-Read Week 3, Chapter 2 - Inside the Mindsets

Mindset Book Cover

Today we tackle Chapter 2 in Carol Dweck's Mindset: The New Psychology of Success (buy your copy and read along). In Chapter 2, Dweck provides a deeper dive into fixed and growth mindsets. The chapter begins with Dweck relating how the discovery that there were two meanings to the word 'ability' shaped the work. The first definition of ability is a fixed capability that needs to be (continually) proven; the second is a capability that can be developed through learning. The distinction between the two definitions is at the heart of the behavioral differences between the growth and fixed mindsets. Those who believe that abilities can be developed will seek stretch goals and view failures as learning opportunities, while those with a fixed mindset will have a very different point of view.

As we dive into examples of the distinctions, keep in mind that mindsets are beliefs, and while beliefs are sticky, they can change based on many different stimuli. I would add that until beliefs are challenged and changed they have an impact on how we behave even if they are at odds with evidence or the norms of the team or organization.

In Chapter 2, Dweck illustrates the difference between a fixed and growth mindset using several attributes common in the team and business environment. For example, for those with a fixed mindset, the operational definition of personal success is proving they are smart and capable. Alternatively, those with growth mindsets define success in terms of stretching and growth.

Another illustration of a behavior demonstrated by someone with a fixed mindset is the avoidance of risks that could expose their deficiencies. Someone who avoids exposing their deficiencies will tend to avoid anything they are not currently good at, so they do not have to confront those deficiencies. This avoidance of challenges not only affects the individual, potentially constraining their future because they will accept or even get fewer opportunities to advance, but, just as importantly, it can impact the ability of the team to deliver outside-the-box solutions.

Another reason that mindsets matter is that our mindset influences who we have around us. The manager who collects "yes-men" as subordinates reflects a fixed mindset seeking positive feedback that props up their ego. The opposite is the growth-mindset manager who fosters an environment in which critical feedback is viewed as a tool to improve performance.

Other differences highlighted in the chapter include the idea that those with a fixed mindset will tend to repeat concepts and strategies that have been successful in the past with little regard to context. It is an example of 'the answer seeking the proper question' syndrome. Those with a growth mindset, by contrast, begin by evaluating the context and then try new ideas and concepts. Tests provide another illustration of how the different mindsets approach information and data. For those with a fixed mindset, tests define who you are, whereas those with a growth mindset will approach the results of a test as a single data point. The perceived power of an individual test is considerable: many organizations I am aware of use psychological and capabilities tests to screen applicants. However, rarely do these tests attempt to judge whether they are capturing more than just a current state, or whether the applicant is committed to continuous growth.

Entitlement is also a marker of a fixed mindset. Because they perceive that traits are fixed, when people with this mindset succeed they feel a sense of superiority. That sense of superiority reinforces their perception that their traits are better than other people's, and therefore that they are entitled to succeed based on their given traits.

Dweck provides further elaboration in Chapter 2, discussing how failure is interpreted in each mindset. In a fixed mindset, failure defines you, while in a growth mindset failure represents a challenge and a learning experience. Listen to people as they rationalize failure; those who have the propensity to blame others generally fall into a fixed mindset.

One of the final areas examined in the chapter is how the two mindsets perceive effort. Dweck suggests that those with a fixed mindset perceive the need to work hard at being good as a sign of a lack of talent; a true genius should be great nearly instantaneously. Someone with a growth mindset sees effort as having a transformative power. The suggestion that you can become an expert by spending 10,000 hours of study on a topic is a reflection of a growth mindset.

Near the end of the chapter, in a Q&A section, Dweck makes a couple of very important points. The first is that people are often a mixture of mindsets; people fall on a continuum, and in some areas of their life they may be more fixed and in others more growth oriented. Secondly, effort is not the only attribute that leads to success. While effort within a set of constraints will always make a person improve, the constraints will also influence success. For example, a person can study programming for 10,000 hours, but if they don't have a computer to program they will never get any feedback and never successfully execute a program.

Chapter 2 from a coach’s perspective

At the level of transforming an organization, it is important to begin by assessing the overall organizational culture. While Dweck is describing mindsets at the personal level, we can observe the same basic traits when observing organizational culture. For example, I have observed many organizations that view failures as career-limiting events (or worse). I remember once during my career when a project team was summarily terminated by a CEO after the first attempt to deploy a relational database application had performance issues. In the end, the test environment and the production environment had been tuned differently. 33 people lost their jobs in 15 minutes. No one ever tried to innovate on a large scale again, and in less than five years the firm failed. A transformation agent needs to be able to profile the organization and build a path for change that reflects how the organization perceives it grows its capabilities. In organizations that are innovation averse (rarely will you hear an organization say it is innovation averse; you need to observe and ask for stories), it is often better to pursue incremental change rather than large-scale transformations. Note that changing an organization's overall culture will require addressing the mindset issue; we will address organizational culture change later in the year.

At a team level, a coach can find nearly limitless applications of mindset, both tactically and in the longer term. In the short term, a coach should understand each team member's mindset. The coach can use that knowledge to help leaders and team members better understand the mixture of work a team should accept and who will be better suited for different roles. In the longer term, a coach can use his or her understanding of the mindsets on the team to construct exercises and scenarios that help shift the composition of mindsets on the team. In the end, every team will have members that fall somewhere on the mindset continuum, and that is probably par for the course.

Previous Entries of the re-read of Mindset:


Categories: Process Management

Quote of the Day

Herding Cats - Glen Alleman - Sat, 02/04/2017 - 23:14

Every systematic development of any subject ought to begin with a definition, so that everyone may understand what the discussion is about.
Marcus Tullius Cicero (106 BC - 43 BC), De Officiis, Book 1, Moral Goodness

Related articles
  • Architecture-Centered ERP Systems in the Manufacturing Domain
  • IT Risk Management
  • Why Guessing is not Estimating and Estimating is not Guessing
  • Making Conjectures Without Testable Outcomes
  • Deadlines Always Matter
Categories: Project Management

Being an Agile Security Officer: pwn the process

Xebia Blog - Sat, 02/04/2017 - 20:15
This is the third part of my 'Being an Agile Security Officer series'. As mentioned in my previous blog, in the Agile world the Product Owner is the person who translates business and customer desires into work items for the teams. To do this, product owners have several techniques and means at their disposal. In

Cone of Uncertainty - Part Cinq (Updated)

Herding Cats - Glen Alleman - Sat, 02/04/2017 - 16:41

The notion of the Cone of Uncertainty has been around for a while, going back at least to Barry Boehm's "Software Engineering Economics" (Prentice-Hall, 1981). The poster below is from Steve McConnell's site and makes several things clear.

  • The Cone is a project management framework describing the uncertainty aspects of estimates (cost and schedule) and other project attributes (cost, schedule, and technical performance parameters). Estimates of cost, schedule, and technical performance on the left side of the cone have a lower probability of being precise and accurate than estimates on the right side of the cone. This is due to many reasons. One is the level of uncertainty early in the project - the aleatory and epistemic uncertainties that create risk to the success of the project. Other uncertainties that create risk include:
    • Unrealistic performance expectations with missing Measures of Effectiveness and Measures of Performance
    • Inadequate assessment of risks and unmitigated exposure to these risks without proper handling plans
    • Unanticipated technical issues without alternative plans and solutions to maintain effectiveness
  • Since all project work contains uncertainty, reducing this uncertainty - which reduces risk - is the role of the project team and their management: either the team itself, the Project or Program Manager, or, on larger programs, the Risk Management owner.

Here's a simple definition of the Cone of Uncertainty: 

The Cone of Uncertainty describes the evolution of the measure of uncertainty during a project. For project success, uncertainty not only must decrease over time, but must also diminish in its impact on the project's outcome. This is done by active risk management, through probabilistic decision-making. At the beginning of a project, there is usually little known about the product or work results. Estimates are needed but are subject to a large level of uncertainty. As more research and development is done, more information is learned about the project, and the uncertainty decreases, reaching 0% when all risk has been mitigated or transferred. This usually happens by the end of the project.

So the question is: how much variance reduction needs to take place in the project attributes (risk, effectiveness, performance, cost, schedule - shown below), and at what points in time, to increase the probability of project success? This is the basis of Closed Loop Project Control. Estimates of the needed reduction of uncertainty, estimates of the possible reduction of uncertainty, and estimates of the effectiveness of these reduction efforts are the basis of the Closed Loop Project Control System.

This is the paradigm of the Cone of Uncertainty - it's a planned development compliance engineering tool, not an after-the-fact data collection tool.

The Cone is NOT the result of the project's past performance. The Cone IS the planned boundaries (upper and lower limits) of the needed reduction in uncertainty (or other performance metrics) as the project proceeds. When actual measures of cost, schedule, and technical performance are outside the planned cone of uncertainty, corrective actions must be taken to move those uncertainties back inside the cone if the project is going to meet its cost, schedule, and technical performance goals.

If your project's uncertainties are outside the planned boundaries at the time when they should be inside those boundaries, then you are reducing the probability of project success.

The measures that are modeled in the Cone of Uncertainty are the quantitative basis of a control process that establishes the goal for the performance measures. Capturing the actual performance, comparing it to the planned performance, and checking compliance with the upper and lower control limits provides guidance for making adjustments that keep the variables performing inside their acceptable limits.
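As a minimal sketch of that comparison step (the data shapes and numbers are illustrative assumptions, not from any actual program):

  // One point on the planned cone: the planned value of a measure and
  // its upper/lower control limits at a given point in time.
  interface PlannedPoint {
    week: number;
    planned: number; // e.g., planned vehicle weight in kg
    ucl: number;     // upper control limit
    lcl: number;     // lower control limit
  }

  // Compare an actual measurement against the planned cone and report
  // whether corrective action is needed.
  function assessMeasure(plan: PlannedPoint, actual: number): string {
    if (actual > plan.ucl) return `week ${plan.week}: above UCL - corrective action needed`;
    if (actual < plan.lcl) return `week ${plan.week}: below LCL - corrective action needed`;
    return `week ${plan.week}: inside limits (variance ${(actual - plan.planned).toFixed(2)})`;
  }

  // Example: planned weight 24.0 kg with a +/-1.0 kg tolerance band.
  console.log(assessMeasure({ week: 12, planned: 24.0, ucl: 25.0, lcl: 23.0 }, 25.4));
  // -> "week 12: above UCL - corrective action needed"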

The Benefits of the Use of the Cone of Uncertainty 

The planned value, the upper and lower control limits, and the measures of actual values form a Closed Loop Control System - a measurement-based feedback process to improve the effectiveness and efficiency of the project management processes by [1]:

  • Analyzing trends that help focus on problem areas at the earliest point in time - when the variable under control starts misbehaving, intervention can be taken. No need to wait till the end to find out you're not going to make it.
  • Providing early insight into error-prone products that can then be corrected earlier and thereby at lower cost - when the trends are headed toward the UCL and LCL, intervention can take place.
  • Avoiding or minimizing cost overruns and schedule slips by detecting them early enough in the project to implement corrective actions - by observing trends toward breaches of the UCL and LCL.
  • Performing better technical planning, and making adjustments to resources based on discrepancies between planned and actual progress.


A critical success factor for all project work is Risk Management. Risk management includes the management of all kinds of risks - risks from all kinds of sources of uncertainty, including technical risk, cost risk, schedule risk, and management risk. Each of these uncertainties, and the risks they produce, can take on a range of values described by probability and statistical distribution functions. Knowing what ranges are possible and knowing what ranges are acceptable is a critical project success factor.

We need to know the Upper Control Limits (UCL) and Lower Control Limits (LCL) of the ranges of all the variables that will impact the success of our project, and we need to know these ranges as a function of time. With this paradigm, we have logically connected project management processes with Control System processes: if the variances created by uncertainty go outside the UCL and LCL, corrective action is needed. Here's a work-in-progress paper, "Is there an underlying Theory of Project Management," that addresses some of the issues with control of project activities.

Here are some examples of Planned variances and managing of the actual variances to make sure the project stays on plan.

A product weight as a function of the program's increasing maturity. In this case, the projected base weight is planned, and the planned weights of each of the major subsystems are laid out as a function of time. Tolerance bands for the projected base weight provide management with actionable information about the progression of the program. If the vehicle gets overweight, money and time are needed to correct the undesirable variance. This is a closed loop control system for managing the program with a Technical Performance Measure (TPM). There can be cost and schedule performance measures as well.


Below is another example of a Weight reduction attribute that has error bands. In this example (an actual vehicle like the example above) the weight must be reduced as the program proceeds left to right. We have a target weight at Test Readiness Review of 23KG. A 25KG vehicle was sold in the proposal, and we need a target weight that has a safety margin, so 23KG is our target.

As the program proceeds, there are UCL and LCL bands that follow the planned weight. The orange dots are the actual weights from a variety of sources (a design model in the 3D Catia CAD system, a detailed design model, a bench-scale model that can be measured, a non-flying prototype, and then the 1st Flight Article). As the program progresses, the weight measurement for each of these models, through to the final article, is compared to the planned weight. We need to keep these values inside the error bands of NEEDED weight reduction if we are to stay on plan.

This is the critical concept in successful project management

We must have a Plan for the critical attributes - Mission Effectiveness, Technical Performance, Key Performance Parameters - of the end items. If these are not compliant, the project becomes subject to one of the Root Causes of program performance shortfall. We must have a burndown or burnup plan for producing the end item deliverables that match those parameters over the course of the program. Of course, we have a wide range of possible outcomes for each item in the beginning, and as the program proceeds the measured variances on those items move toward compliance with the target number - in this case, weight.


Here's another example of the Cone of Uncertainty, in this case, the uncertainty is the temperature of an oven being designed by an engineering team. The UCL and LCL are defined BEFORE the project starts. These are used to inform the designer of the progress of the project as it proceeds. Staying inside the control limits is the Planned progress path to the final goal - in this case, temperature.

The Cone of Uncertainty is the signaling boundary of the Closed Loop Control system used to manage the project to success.


It turns out the cone can also be a flat range with Upper and Lower Control Limits on the variable that is being developed - a design-to variable, in this example a Measure of Performance. In this case, the Measure of Performance needs to stay within the upper and lower limits as the project progresses through its gates. If this variable is out of bounds, the project will have to pay in some way to get it back to Green.

A Measure of Performance characterizes physical or functional attributes relating to the system operation, measured or estimated under specific conditions. Measures of Performance are (1) attributes that assure the system has the capability and capacity to perform, (2) assessments of the system to assure it meets design requirements to satisfy the Measures of Effectiveness, and (3) corrective actions to return the actual performance to the planned performance when actual performance goes outside the upper and lower control limits. Again, this is simple statistical process control, using feedback to take corrective actions that control future outcomes - feedforward. In the probabilistic and statistical program management paradigm, feedforward control uses past performance, together with models of future behavior (e.g., a Monte Carlo model), to determine what corrective actions are needed to Keep The Program Green.


Another cone style is the cone of confidence in a delivery date. In this actual case, it's a Low Earth Orbit vehicle launch date. As the program moves from left to right, we need to assure that the launch date moves from a low-confidence date to a date that has a chance of being correct. The BLUE bars are the probabilistic ranges of the current estimated date. As the program moves forward, those ranges must be reduced if we're going to show up as needed. The planned date and a date with margin are the build-to dates. As the program moves, the confidence in the date must increase and move toward the need date (a small simulation sketch of this narrowing follows the list below).

  • The probabilistic completion times change as the program matures.
  • The efforts that produce these improvements must be defined and managed.
  • The error bands of the assessment points must include the risk mitigation activities as well.
  • The planned activities show how the error band narrows over time:
    • This is the basis of a risk tolerant plan.
    • The probabilistic intervals become more reliable as the risk mitigation and the maturity assessment add confidence to the planned launch date.
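Here is a small Monte Carlo sketch of that narrowing. The task model and numbers are illustrative assumptions, not data from this program:

  // Simulate a completion time as a mean plus symmetric random spread,
  // and return the 10th/90th percentile band of the sampled dates.
  function completionBand(meanWeeks: number, spreadWeeks: number,
                          runs: number = 10000): [number, number] {
    const samples: number[] = [];
    for (let i = 0; i < runs; i++) {
      // Triangular noise on [-spread, +spread].
      const noise = (Math.random() + Math.random() - 1) * spreadWeeks;
      samples.push(meanWeeks + noise);
    }
    samples.sort((a, b) => a - b);
    return [samples[Math.floor(runs * 0.1)], samples[Math.floor(runs * 0.9)]];
  }

  // Early in the program the spread is large; risk mitigation shrinks it,
  // and the 80% confidence band around the launch date narrows.
  console.log(completionBand(52, 12)); // wide band early on
  console.log(completionBand(52, 3));  // narrower band after mitigation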

Just a reminder again - the Cone of Uncertainty is a DESIRED path, NOT the result of an unmanaged project outcome.

Risk Management, as shown below, is how Adults Manage Projects


Wrap Up On the Misunderstanding of the Purpose and Value of the Cone of Uncertainty

When you hear... 

I have data that shows that uncertainty (or any other needed attribute) doesn't reduce and therefore the COU is a FAKE ... OR ... I see data on my projects where the variance is getting worse as we move forward, instead of narrowing as the Planned COU tells us it should be to meet our goals ...

...then that project is out of control, starting with a missing steering target. That means it's Open Loop Control, and the project will be late, over budget, and likely not perform to the needed effectiveness and performance parameters. And when you see these out-of-control situations, go find the Root Cause and generate the Corrective Action.

This data is an observation of a project not being managed as Tim Lister suggests - Risk Management is How Adults Manage Projects. 

And if these observations are taking place without corrective actions on the Root Causes of the performance shortfall, the management is behaving badly. They're just observers of the train wreck that is going to happen real soon.

The Engineering Reason for the Cone of Uncertainty Model and the Value it Provides to Decision Makers

  • The Cone of Uncertainty is NOT an output from the project's behaviour; by then it's too late.
  • The Cone of Uncertainty IS a Steering Target Input to the Management Framework for increasing the probability of the project's success.
  • This is the Programmatic Management of the project in support of the Technical Management of the project. The process is an engineering discipline: Systems Engineering, Risk Engineering, and Safety and Mission Assurance Engineering are typical roles where we work.
  • To suggest otherwise is to invert the paradigm and remove any value from the post-facto observations of the project's performance. At that point it's Too Late; the Horse has left and there's no getting him back.
  • Defining the planned and needed variance levels at planned points in the project is the basis of the closed loop control system needed to increase the probability of success.
  • When variances outside the planned variance appear, the Root Cause must be found and corrective action taken.

Here's an example from a Galorath presentation, using the framework of the Cone of Uncertainty and actual project cones, of how to put this all together. Repeating again: the Cone of Uncertainty is the framework for the planned reduction of the uncertainty in critical performance measures of the project.

If your project is not reducing the uncertainty as planned for these critical performance measures - cost, schedule, and technical performance - then it's headed for trouble and you may not even know it.


Resources

[1] Systems Engineering Measurement Primer, INCOSE

[2] System Analysis, Design, and Development Concepts, Principles, and Practices, Charles Wasson, John Wiley & Sons

[3] SMC Systems Engineering Primer & Handbook: Concept, Processes, and Techniques, Space & Missile Systems Center, U.S. Air Force

[4] Defense Acquisition Guide, Chapter 4, Systems Engineering, 15 May 2013.

[5] Program Managers Tool Kit, 16th Edition, Defense Acquisition University.

[6] "Open Loop / Close Loop Project Controls"

[7] "Reducing Estimation Uncertainty with Continuous Assessment: Tracking the 'Cone of Uncertainty',"¬†Pongtip Aroonvatanaporn, Chatchai Sinthop, Barry Boehm.¬†ASE‚Äô10, September 20‚Äď24, 2010, Antwerp, Belgium.¬†

[8]¬†Boehm, B. ‚ÄúSoftware Engineering Economics‚ÄĚ. Prentice-Hall, 1981.

[9] Boehm, B., Abts, C., Brown, A. W., Chulani, S., Clark, B. K., Horowitz, E., Madachy, R., Reifer, D. J., and Steece, B. Software Cost Estimation with COCOMO II, Prentice-Hall,
2000.
[10] Boehm, B., Egyed, A., Port, D., Shah, A., Kwan, J., and Madachy, R. "Using the WinWin Spiral Model: A Case Study," IEEE Computer, Volume 31, Number 7, July 1998, pp.  33-44 

[11] Cohn, M. Agile Estimating and Planning, Prentice-Hall, 2005

[12] DeMarco, T. Controlling Software Projects: Management, Measurement, and Estimation, Yourdon Press, 1982.

[13] Fleming, Q. W. and Koppelman, J. M. Earned Value Project Management, 2nd edition, Project Management Institute, 2000

[14] Galorath, D. and Evans, M. Software Sizing, Estimation, and Risk Management, Auer-bach, 2006

[15] Jorgensen, M. and Boehm, B. "Software Development Effort Estimation: Formal Models or Expert Judgment?" IEEE Software, March-April 2009, pp. 14-19.

[16] Jorgensen, M. and Shepperd, M. "A Systematic Review of Software Development Cost Estimation Studies," IEEE Trans. Software Eng., vol. 33, no. 1, 2007, pp. 33-53.

[17] Krebs, W., Kroll, P., and Richard, E. "Un-assessments - reflections by the team, for the team," Agile 2008 Conference.

[18] McConnell, S. Software Project Survival Guide, Microsoft Press, 1998

[19] Nguyen, V., Deeds-Rubin, S., Tan, T., and Boehm, B. "A SLOC Counting Standard," COCOMO II Forum 2007

[20] Putnam, L. and Fitzsimmons, A. "Estimating Software Costs, Parts 1, 2 and 3," Datamation, September through December 1979.

[21] Stutzke, R. D. Estimating Software-Intensive Systems, Pearson Education, Inc, 2005. 

Related articles

  • Complex, Complexity, Complicated
  • Economics of Software Development
  • Herding Cats: Economics of Software Development
  • Estimating Probabilistic Outcomes? Of Course We Can!
  • I Think You'll Find It's a Bit More Complicated Than That
  • Risk Management is How Adults Manage Projects

 

Categories: Project Management

Get a sneak peek at Android Nougat 7.1.2

Android Developers Blog - Sat, 02/04/2017 - 08:04
Posted by Dave Burke, VP of Engineering

The next maintenance release for Android Nougat -- 7.1.2 -- is just around the corner! To get the recipe just right, starting today, we're rolling out a public beta to eligible devices that are enrolled in the Android Beta Program, including Pixel and Pixel XL, Nexus 5X, Nexus Player, and Pixel C devices. We're also preparing an update for Nexus 6P that we expect to release soon.

Android 7.1.2 is an incremental maintenance release focused on refinements, so it includes a number of bugfixes and optimizations, along with a small number of enhancements for carriers and users.

If you'd like to try the public beta for Android 7.1.2, the easiest way is through the Android Beta Program. If you have an eligible device that's already enrolled, you're all set -- your device will get the public beta update in the next few days and no action is needed on your part. If your device isn't enrolled, it only takes a moment to visit android.com/beta and opt-in your eligible Android phone or tablet -- you'll soon receive the public beta update over-the-air. As always, you can also download and flash this update manually.

We're expecting to launch the final release of Android 7.1.2 in just a couple of months. Like the beta, it will be available for Pixel, Pixel XL, Nexus 5X, Nexus 6P, Nexus Player, and Pixel C devices. Meanwhile, we welcome your feedback or requests in the Android Beta community as we work towards the final over-the-air update. Thanks for being part of the public beta!
Categories: Programming

Stuff The Internet Says On Scalability For February 3rd, 2017

Hey, it's HighScalability time:


 

We live in interesting times: F/A-18 Super Hornets launch a drone swarm.
If you like this sort of Stuff then please support me on Patreon.
  • 100 billion: words needed to train large networks; 73,653: hard drives at Backblaze; 300 GB hour: raw 4k footage; 1993: server running without rebooting; 64%: of money bet is on the Patriots; 950,000: insect species; 374,000: people employed by solar energy; 10: SpaceX launched Iridium Next satellites; $1 billion: Pokémon Go revenue; 1.2 Billion: daily active Facebook users; $7.17 billion: Apple service revenue; 45%: invest in private cloud this year; 

  • Quotable Quotes:
    • @kevinmarks: #msvsummit @varungyan: Google's scale is about 10^10 RPCs per second in our microservices
    • language: "Order and chaos are not a properties of things, but relations of an observer to something observed - the ability for an observer to distinguish or specify pattern."
    • general_ai: Doing anything large on a machine without CUDA is a fool's errand these days. Get a GTX1080 or if you're not budget constrained, get a Pascal-based Titan. I work in this field, and I would not be able to do my job without GPUs -- as simple as that. You get 5-10x speedup right off the bat, sometimes more. A very good return on $600, if you ask me.
    • Al-Khwarizmi: Maybe I'm just not good at it and I'm a bit bitter, but my feeling is that this DL [deep learning] revolution is turning research in my area from a battle of brain power and ingenuity to a battle of GPU power and economic means
    • Space Rogue: pcaps or it didn't happen
    • LtAramaki: Everyone thinks they understand SOLID, and when they discuss it with other people who say they understand SOLID, they think the other party doesn't understand SOLID. Take it as you will. I call this the REST phenomenon.
    • evaryont: I don’t see this as them [Google] trying to “seize” a corner of the web, but rather Google taking it’s paranoia to the next level. If they can’t ever trust anyone in the system [Certificate Authority], why not create your own copy of the system that no one else can use? Being able to have perfect security from top to bottom, similar to their recently announced custom chips they put in every one of their servers.
    • David Press: The benefits of SDN are less about latency and uptime and more about flexibility and programmability.
    • Benedict Evans: Web 2.0 was followed not by anything one could call 3.0 but rather a basic platform shift...one can see the rise of machine learning as a fundamental new enabling technology...one can see quite a lot of hardware building blocks for augmented reality glasses...so the things that are emerging at the end of the mobile S-Curve might also be the beginning of the next curve. 
    • @kevinmarks: 20% people have 0 microservices in production - the rest are already running microservices
    • @joeerl: You've got to be joking - should be 1M clients/server at least
    • SikhGamer: We considered using RabbitMQ at work but ultimately opted for SNS and SQS instead. Main reason being that we cared about delivering value and functionality over the cost of managing yet another resource. And the problems of reliability become Amazon's problem, not ours.
    • DataStax: A firewall is the simplest, most effective means to secure a database. Sounds complicated, but it’s so easy a government agent could do it.
    • @danielbryantuk: "If you think good architecture is expensive, try bad architecture" @KevlinHenney #OOP2017
    • Peter Dizikes: The new method [wisdom from crowds] is simple. For a given question, people are asked two things: what they think the right answer is, and what they think popular opinion will be. The answer that proves more popular than people predicted tends to be the correct one (see the sketch after this list).
    • Philip Ball: Looked at this way, life can be considered as a computation that aims to optimize the storage and use of meaningful information. So living organisms can be regarded as entities that attune to their environment by using information to harvest energy and evade equilibrium.
    • Ed Sutton: The study shows the effectiveness of personality targeting by showing that marketers can attract up to 63% more clicks and up to 1400% more conversions in real-life advertising campaigns on Facebook when matching products and marketing messages to consumers’ personality characteristics.
    • Pete Trbovitch: Today’s mobile app ecosystem most closely resembles the PC shareware era. Apps that are offered free to download can carry an ad-supported income model, paid extended content, or simply bonus features to make the game easier to beat. The bar to entry is as low as it’s ever been 
    • @BenedictEvans: Global mainframe capacity went up 4-5x from 2000-2010. ‘Dead’ technology can have a very long half-life
    • @searls: I keep seeing teams spend months building custom infrastructure that could be done in 20 minutes with Heroku, Github, Travis. Please stop.
    • @mdudas: Starbucks says popularity of its mobile app has created long lines at pickup counters & led to drop in transactions.
    • @cdixon: Software eats networking: Nicira (NSX) will generate $1B revenue for VMWare this year
    • raubitsj: With respect to vibration: we [Google] found vibration caused by adjacent drives in some of our earlier drive chassis could cause off-track writes. This will cause future reads to the data to return uncorrectable read errors. Based on Backblaze's methodology they will likely call out these drives as failed based on SMART or RAID/ReedSolomon sync errors.
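
The selection rule in the Peter Dizikes quote above (the “surprisingly popular” method) is easier to see in code than in prose: pick the answer whose actual vote share most exceeds the share respondents predicted it would get. Here is a small, self-contained sketch; the vote shares are invented numbers for illustration.

    import java.util.HashMap;
    import java.util.Map;

    public class SurprisinglyPopular {

        // Pick the answer whose actual vote share most exceeds the
        // share respondents predicted it would get.
        static String pick(Map<String, Double> actualShare,
                           Map<String, Double> predictedShare) {
            String best = null;
            double bestGap = Double.NEGATIVE_INFINITY;
            for (Map.Entry<String, Double> e : actualShare.entrySet()) {
                double predicted = predictedShare.getOrDefault(e.getKey(), 0.0);
                double gap = e.getValue() - predicted; // actual minus predicted
                if (gap > bestGap) {
                    bestGap = gap;
                    best = e.getKey();
                }
            }
            return best;
        }

        public static void main(String[] args) {
            // Classic example: "Is Philadelphia the capital of Pennsylvania?"
            // Most people wrongly vote yes, but nearly everyone predicts that
            // "yes" will dominate, so the "no" votes are surprisingly popular.
            Map<String, Double> actual = new HashMap<>();
            actual.put("yes", 0.60);
            actual.put("no", 0.40);

            Map<String, Double> predicted = new HashMap<>();
            predicted.put("yes", 0.75);
            predicted.put("no", 0.25);

            System.out.println(pick(actual, predicted)); // prints "no"
        }
    }

The whole trick is that one subtraction: answers that beat their own predicted popularity carry information that simple majority voting throws away.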

  • Well this is different. GitLab live streamed the handling of their GitLab.com Database Incident - 2017/01/31. It wasn't what you would call riveting, but that's an A+++ for transparency. They even took audience questions during the process. What went wrong? The snippets function was DDoSed, which generated a large spike of writes to the database, so the slaves were not able to keep up with replication. WAL transaction files that were no longer on the production primary were being requested, so transaction logs were missed. They were starting the copy again from a known good state when things went sideways. They were lucky to have a six-hour-old backup, and that's what they were restoring to. Sh*te happens; how the team handled it and their knowledge of the system should give users confidence going forward.

  • OK, this turned out to be false, but nobody doubted it could be true, or that it shows where things are going. Hotel ransomed by hackers as guests locked out of rooms.

  • Interesting use of Lambda by Airbnb. StreamAlert: Real-time Data Analysis and Alerting. There's an evolution here: from compiling software against libraries that must be in the source tree; to running software that requires downloading lots of packages from a repository; to now using services that require a lot of other services to be available in the environment for a complex pipeline to run. StreamAlert doesn't just use Lambda, it also uses Kinesis, SNS, S3, CloudWatch, KMS, and IAM. Each step is both a deeper level of lock-in and an enabler of richer functionality. What does StreamAlert do?: a real-time data analysis framework with point-in-time alerting. StreamAlert is unique in that it's serverless, scalable to TB's/hour, infrastructure deployment is automated and it's secure by default. A sketch of the core pattern is below.
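
StreamAlert itself is written in Python, but the shape of the pipeline (Kinesis in, rule evaluation in Lambda, SNS out) fits in a screenful. Here is a minimal sketch using the aws-lambda-java-events and SNS classes from the AWS SDK for Java; the topic ARN and the rule are placeholders invented for illustration, not StreamAlert's actual configuration.

    import java.nio.charset.StandardCharsets;

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;
    import com.amazonaws.services.lambda.runtime.events.KinesisEvent;
    import com.amazonaws.services.sns.AmazonSNS;
    import com.amazonaws.services.sns.AmazonSNSClientBuilder;

    // Toy alerting handler: Lambda reads log records off a Kinesis stream,
    // applies a rule, and publishes matches to an SNS topic.
    public class AlertHandler implements RequestHandler<KinesisEvent, Void> {

        // Hypothetical topic ARN, for illustration only.
        private static final String ALERT_TOPIC_ARN =
                "arn:aws:sns:us-east-1:123456789012:security-alerts";

        private final AmazonSNS sns = AmazonSNSClientBuilder.defaultClient();

        @Override
        public Void handleRequest(KinesisEvent event, Context context) {
            for (KinesisEvent.KinesisEventRecord record : event.getRecords()) {
                String payload = StandardCharsets.UTF_8
                        .decode(record.getKinesis().getData()).toString();
                if (matchesRule(payload)) {
                    sns.publish(ALERT_TOPIC_ARN, "Alert: " + payload);
                }
            }
            return null;
        }

        // Stand-in for StreamAlert's declarative rules: flag root activity.
        private boolean matchesRule(String logLine) {
            return logLine.contains("\"user\":\"root\"");
        }
    }

Everything operational (scaling, retries, permissions) becomes AWS's problem, which is exactly the lock-in versus leverage trade described above.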

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Product Owner Role Compared to the Sponsors Role

Super Sponsor

The product owner is the voice of the customer and performs a significant number of activities, including acting as a conduit/facilitator for communication between the team and the outside world.  Other activities include: defining and elaborating stories, prioritizing the backlog and accepting work when the team completes a story.  The product owner’s role is related to and influenced by the sponsor’s role (sometimes known as executive or project sponsor).  The sponsor is the person who has overall responsibility and accountability for a piece of work.  The sponsor champions the project based on whether the work fits the business’s needs and overall strategy, finds and secures the budget, and has the overall responsibility for the work to deliver the promised value. This is true even though he or she will probably never write a line of code or a test.  Sponsors empower the product owner to act for them on a more tactical basis.  A comparison of how the sponsor’s typical roles translate to the product owner is shown below:

[Table: sponsor roles compared to product owner roles]
The role of the sponsor and the role of the product owner are different and require separate points of view.  The sponsor owns the purse and faces the outside world, while the product owner is more focused on the team.  While the roles are separate, in a small organization both can be played by the same person; in that case, a split personality will be highly useful.


Categories: Process Management

Performance, Scalability, and High Availability: 3 Key Infrastructure Adaptability Requirements

This is a guest post by Tony Branson

Performance, scalability, and high availability (HA) are often used interchangeably, and any confusion among them can result in unrealistic metrics and deployment delays. It is important to invest your time in understanding the differences among these three approaches before you invest your money in resilient systems.

Performance

Categories: Architecture

How to create your own Lint rule

Xebia Blog - Thu, 02/02/2017 - 08:12
When you are part of a multi-team project in Android, it becomes relatively hard to have a common understanding of how components should be used. This is where Android Lint can help you! In this blog we will show you how you can write your own Lint rules and test them. As an example, we
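
For a flavor of what such a rule looks like, here is a minimal sketch against the PSI-based lint-api of that vintage (Detector.JavaPsiScanner). The issue id, the "don't call Toast directly" rule, and all class names are invented for illustration; they are not taken from the Xebia post.

    import com.android.tools.lint.client.api.IssueRegistry;
    import com.android.tools.lint.detector.api.Category;
    import com.android.tools.lint.detector.api.Detector;
    import com.android.tools.lint.detector.api.Implementation;
    import com.android.tools.lint.detector.api.Issue;
    import com.android.tools.lint.detector.api.JavaContext;
    import com.android.tools.lint.detector.api.Scope;
    import com.android.tools.lint.detector.api.Severity;
    import com.intellij.psi.JavaElementVisitor;
    import com.intellij.psi.PsiMethod;
    import com.intellij.psi.PsiMethodCallExpression;

    import java.util.Collections;
    import java.util.List;

    // Flags direct Toast.makeText() calls so teams go through a shared
    // wrapper instead. The issue id and rule are invented for this example.
    public class ToastDetector extends Detector implements Detector.JavaPsiScanner {

        static final Issue ISSUE = Issue.create(
                "DirectToastUsage",
                "Toast called directly",
                "Use the project's shared notification wrapper so styling "
                        + "and logging stay consistent across teams.",
                Category.CORRECTNESS,
                6,                       // priority, 1..10
                Severity.WARNING,
                new Implementation(ToastDetector.class, Scope.JAVA_FILE_SCOPE));

        @Override
        public List<String> getApplicableMethodNames() {
            return Collections.singletonList("makeText");
        }

        @Override
        public void visitMethod(JavaContext context, JavaElementVisitor visitor,
                PsiMethodCallExpression call, PsiMethod method) {
            // Only flag makeText() when it is Toast's, not some other class's.
            if (context.getEvaluator().isMemberInClass(method, "android.widget.Toast")) {
                context.report(ISSUE, call, context.getLocation(call),
                        "Use the shared notification wrapper instead of Toast");
            }
        }
    }

    // Lint discovers custom issues through a registry class.
    class ToastIssueRegistry extends IssueRegistry {
        @Override
        public List<Issue> getIssues() {
            return Collections.singletonList(ToastDetector.ISSUE);
        }
    }

The rule jar then names the registry class in its manifest via the Lint-Registry attribute, and the detector runs as part of a normal lint check.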

Introducing Associate Android Developer Certification by Google

Google Code Blog - Wed, 02/01/2017 - 21:27
Originally posted on Android Developer Blog

The Associate Android Developer Certification program was announced at Google I/O 2016, and launched a few months later. Since then, over 322 Android developers spanning 61 countries have proven their competency and earned the title of Google Certified Associate Android Developer.
To establish a professional standard for what it means to be an Associate Android developer in the current job market, Google created this certification, which allows us to recognize developers who have proven themselves to uphold that standard.

We conducted a job task analysis to determine the required competencies and content of the certification exam. Through field research and interviews with experts, we identified the knowledge, work practices, and essential skills expected of an Associate Android developer.

The certification process consists of a performance-based exam and an exit interview. The certification fee includes three exam attempts. The cost for certification is $149 USD, or 6500 INR if you reside in India. After payment, the exam will be available for download, and you have 48 hours to complete and submit it for grading.

In the exam, you will implement missing features and debug an Android app using Android Studio. If you pass, you will undergo an exit interview where you will answer questions about your exam and demonstrate your knowledge of Associate Android Developer competencies.

Check out this short video for a quick overview of the Associate Android Developer certification process:
Earning your AAD Certification signifies that you possess the skills expected of an Associate Android developer, as determined by Google. You can showcase your credential on your resume and display your digital badge on your social media profiles. As a member of the AAD Alumni Community, you will also have access to program benefits focused on increasing your visibility as a certified developer.

Test your Android development skills and earn the title of Google Certified Associate Android Developer. Visit g.co/devcertification to get started!


Categories: Programming
