
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Updates to end user consent for 3rd-party apps and Single Sign-on providers

Google Code Blog - Mon, 04/03/2017 - 20:00
Originally Posted on G Suite Developers Blog
Posted by Rodrigo Paiva, Product Manager & Nicholas Watson, Software Engineer, Identity, and Wesley Chun, Developer Advocate, G Suite


At Google, we're mindful of keeping our users' data and account information secure. So whether you're writing an app that requires access to user data or helping your users change their passwords, we'll keep you up to date on policy changes, and today that means changes to consent for 3rd-party applications. Starting April 5, 2017, if you're an application developer or a 3rd-party Single Sign-On (SSO) provider, your G Suite users may encounter a redirect when they authenticate with your identity service, to make it clear which account they're authenticating with as well as the permissions they're granting to applications.

These changes will occur on these platforms:
  • Google and 3rd-party applications on iOS
  • Mobile browsers on iOS and Android
  • Web browsers (Chrome, Firefox and other modern browsers)
Note that Android applications that use the standard authentication libraries are already prompting users to select appropriate account information, so they're not impacted by these changes.
More visibility with new permission requests for your application 
It's important that your users are presented with account information and credential consent, and apps should make this process easy and clear. One new change that you may now see is that only non-standard permission requests will be presented in the secondary consent screen in your application.

Currently when an application requests permissions, all of them are displayed together. However, users should have greater visibility into permissions being requested beyond the standard "email address" and "profile" consent. By clicking to select their account, a user consents to these core permissions. The secondary consent screen will appear only if additional permissions are requested by the application.

Only non-standard permissions will be presented in the secondary consent screen that the user must approve.
Along with these changes, your application name will be more visible to users, and they can click-through to get your contact information. We recommend application developers use a public-facing email address so that users can quickly contact you for support or assistance. For more details, check out this developer guide.

If your application may also be used by G Suite customers that employ a 3rd-party Single Sign-On (SSO) service, we recommend that you utilize the hd and/or login_hint parameters, if applicable. Even with the changes to the 3rd-party SSO auth flow, these parameters will be respected if provided. You can review the OpenID Connect page in the documentation for more information.


An application that uses the hd parameter to specify the domain name automatically
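As a rough sketch of what passing those parameters can look like when building an OAuth/OpenID Connect authorization request (the client id, redirect URI, domain, and account below are placeholders, not values from this post):

from urllib.parse import urlencode

params = {
    'client_id': 'YOUR_CLIENT_ID.apps.googleusercontent.com',  # placeholder
    'redirect_uri': 'https://example.com/oauth2callback',      # placeholder
    'response_type': 'code',
    'scope': 'openid email profile',
    'hd': 'example.com',                # G Suite domain the user is expected to belong to
    'login_hint': 'alice@example.com',  # pre-fills the expected account
}

auth_url = 'https://accounts.google.com/o/oauth2/v2/auth?' + urlencode(params)
print(auth_url)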
Changes coming for 3rd-party SSO redirection
G Suite users may also notice redirection when signing into 3rd-party SSO providers. If no accounts are signed in, the user must confirm the account after signing in to the 3rd-party SSO provider to ensure that they're signed in with the correct G Suite account:
The end user who has just signed in with one Google account should select that account as confirmation.
As mentioned, by clicking to select their account, a user is opting into "email address" and "profile" consent. Once the user consents to any additional non-standard permissions that may be requested, they will be redirected back to your application.

If the user is already signed in to one or more accounts that match the hd hint, the Account Chooser will display all of the accounts and require the user to select the appropriate G Suite account before being redirected to the 3rd-party SSO provider, then back to your application:

A user who is signed into several Google accounts will be required to choose the appropriate account.
See updates starting April 2017
These changes will help your users understand their permissions more clearly across all platforms, whether they're using Google or a 3rd-party SSO provider for authentication. We've started to roll out the new interstitial page on iOS devices, and changes for browsers will begin to roll out starting April 5, 2017.

Categories: Programming

Code Health: Google's Internal Code Quality Efforts

Google Testing Blog - Mon, 04/03/2017 - 19:46
By Max Kanat-Alexander, Tech Lead for Code Health and Author of Code Simplicity

There are many aspects of good coding practices that don't fall under the normal areas of testing and tooling that most Engineering Productivity groups focus on in the software industry. For example, having readable and maintainable code is about more than just writing good tests or having the right tools - it's about having code that can be easily understood and modified in the first place. But how do you make sure that engineers follow these practices while still allowing them the independence that they need to make sound engineering decisions?

Many years ago, a group of Googlers came together to work on this problem, and they called themselves the "Code Health" group. Why "Code Health"? Well, many of the other terms used for this in the industry - engineering productivity, best practices, coding standards, code quality - have connotations that could lead somebody to think we were working on something other than what we wanted to focus on. What we cared about was the processes and practices of software engineering in full - any aspect of how software was written that could influence the readability, maintainability, stability, or simplicity of code. We liked the analogy of having "healthy" code as covering all of these areas.

This is a field that many authors, theorists, and conference speakers touch on, but not an area that usually has dedicated resources within engineering organizations. Instead, in most software companies, these efforts are pushed by a few dedicated engineers in their extra time or led by the senior tech leads. However, every software engineer is actually involved in code health in some way. After all, we all write software, and most of us care deeply about doing it the "right way." So why not start a group that helps engineers with that "right way" of doing things?

This isn't to say that we are prescriptive about engineering practices at Google. We still let engineers make the decisions that are most sensible for their projects. What the Code Health group does is work on efforts that universally improve the lives of engineers and their ability to write products with shorter iteration time, decreased development effort, greater stability, and improved performance. Everybody appreciates their code getting easier to understand, their libraries getting simpler, etc. because we all know those things let us move faster and make better products.

But how do we accomplish all of this? Well, at Google, Code Health efforts come in many forms.

There is a Google-wide Code Health Group composed of 20% contributors who work to make engineering at Google better for everyone. The members of this group maintain internal documents on best practices and act as a sounding board for teams and individuals who wonder how best to improve practices in their area. Once in a while, for critical projects, members of the group get directly involved in refactoring code, improving libraries, or making changes to tools that promote code health.

For example, this central group maintains Google's code review guidelines, writes internal publications about best practices, organizes tech talks on productivity improvements, and generally fosters a culture of great software engineering at Google.

Some of the senior members of the Code Health group also advise engineering executives and internal leadership groups on how to improve engineering practices in their areas. It's not always clear how to implement effective code health practices in an area‚ÄĒsome people have more experience than others making this happen broadly in teams, and so we offer our consulting and experience to help make simple code and great developer experiences a reality.

In addition to the central group, many products and teams at Google have their own Code Health group. These groups tend to work more closely on actual coding projects, such as addressing technical debt through refactoring, making tools that detect and prevent bad coding practices, creating automated code formatters, or making systems for automatically deleting unused code. Usually these groups coordinate and meet with the central Code Health group to make sure that we aren't duplicating efforts across the company and so that great new tools and systems can be shared with the rest of Google.

Throughout the years, Google's Code Health teams have had a major impact on the ability of engineers to develop great products quickly at Google. But code complexity isn't an issue that only affects Google - it affects everybody who writes software, from one person writing software on their own time to the largest engineering teams in the world. So in order to help out everybody, we're planning to release articles in the coming weeks and months that detail specific practices that we encourage internally - practices that can be applied everywhere to help your company, your codebase, your team, and you. Stay tuned here on the Google Testing Blog for more Code Health articles coming soon!

Categories: Testing & QA

Introducing the Mobile Sites certification, for web developers

Google Code Blog - Mon, 04/03/2017 - 18:00
Posted by Chris Hohorst, Head of Mobile Sites Transformation

Mobile now accounts for over half of all web traffic[1], making performance on small screens more important than ever.

Despite this increase, a recent study by Google found that the average time it takes to load a mobile landing page is 22 seconds. When you consider that 53% of mobile site visitors will leave a site if it takes more than three seconds to load, it's clear why conversion rates are consistently lower on mobile than desktop.

Website visitors now expect their mobile experience to be as flawless as desktop, and the majority of online businesses are failing to deliver.

With this in mind, we're introducing the new Google Mobile Sites certification. Passing the Mobile Sites exam signals that you have a demonstrated ability to build and optimize high-quality sites, and allows you to promote yourself as a Google accredited mobile site developer.

Through codifying best practice in mobile site development, we hope to improve the general standard of mobile design and speed, and make it easier to find the best talent.
What the exam covers
To pass the exam, you'll need to show proficiency across mobile site design, mobile UX best practice, mobile site speed optimization, and advanced web technologies. We've put together a study guide that covers everything you'll need to know.
What are the benefits?
We know that a lot of web developers are doing great work on mobile sites - this certification is a way of promoting them to a wider audience. Being certified means being recognized by Google as an expert in mobile site optimization, which will make you more accessible and attractive to potential clients looking for a good match for those services.

The certification will display on your Partners profile, helping you stand out to businesses looking for mobile site development, and can also be shared across social media.

How to sign up

Check out our study guide to get started. Then, to take the exam, please click on the Mobile Sites certification link and log in to your Google Partners account. If you're not signed up yet, you can create a Partners user profile by registering here.
The exam is open to all web developers globally in English and, once completed, the certification will remain valid for 12 months.

[1] Google Analytics data, U.S., Q1 2016, from Find Out How You Stack Up to Industry Benchmarks for Mobile Page Speed
Categories: Programming

AWS Lambda: Encrypted environment variables

Mark Needham - Mon, 04/03/2017 - 06:49

Continuing on from my post showing how to create a ‘Hello World’ AWS lambda function I wanted to pass encrypted environment variables to my function.

The following function takes in both an encrypted and unencrypted variable and prints them out.

Don’t print out encrypted variables in a real function, this is just so we can see the example working!

import boto3
import os

from base64 import b64decode

def lambda_handler(event, context):
    encrypted = os.environ['ENCRYPTED_VALUE']
    decrypted = boto3.client('kms').decrypt(CiphertextBlob=b64decode(encrypted))['Plaintext']

    # Don't print out your decrypted value in a real function! This is just to show how it works.
    print("Decrypted value:", decrypted)

    plain_text = os.environ["PLAIN_TEXT_VALUE"]
    print("Plain text:", plain_text)

Now we’ll zip up our function into HelloWorldEncrypted.zip, ready to send to AWS.

zip HelloWorldEncrypted.zip HelloWorldEncrypted.py

Now it’s time to upload our function to AWS and create the associated environment variables.

If you’re using a Python editor then you’ll need to install boto3 locally to keep the editor happy but you don’t need to include boto3 in the code you send to AWS Lambda – it comes pre-installed.

Now we write the following code to automate the creation of our Lambda function:

import boto3
from base64 import b64encode

fn_name = "HelloWorldEncrypted"
kms_key = "arn:aws:kms:[aws-zone]:[your-aws-id]:key/[your-kms-key-id]"
fn_role = 'arn:aws:iam::[your-aws-id]:role/lambda_basic_execution'

lambda_client = boto3.client('lambda')
kms_client = boto3.client('kms')

encrypt_me = "abcdefg"
encrypted = b64encode(kms_client.encrypt(Plaintext=encrypt_me, KeyId=kms_key)["CiphertextBlob"])

plain_text = 'hijklmno'

lambda_client.create_function(
        FunctionName=fn_name,
        Runtime='python2.7',
        Role=fn_role,
        Handler="{0}.lambda_handler".format(fn_name),
        Code={ 'ZipFile': open("{0}.zip".format(fn_name), 'rb').read(),},
        Environment={
            'Variables': {
                'ENCRYPTED_VALUE': encrypted,
                'PLAIN_TEXT_VALUE': plain_text,
            }
        },
        KMSKeyArn=kms_key
)

The tricky bit for me here was figuring out that the value I pass in as the environment variable needs to be the base64 encoded version of the output of the KMS client's encrypt call. The KMS client relies on a KMS key that we need to set up. We can see a list of all our KMS keys by running the following command:

$ aws kms list-keys

The format of these keys is arn:aws:kms:[zone]:[account-id]:key/[key-id].
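If you don't have a KMS key yet, a minimal sketch of creating one with boto3 (the description is arbitrary) looks like this:

import boto3

kms_client = boto3.client('kms')

# Creates a new customer master key and prints its ARN
response = kms_client.create_key(Description='lambda-env-encryption')
print(response['KeyMetadata']['Arn'])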

Now let's run that script to create our Lambda function:

$ python CreateHelloWorldEncrypted.py

Let’s check it got created:

$ aws lambda list-functions --query "Functions[*].FunctionName"
[
    "HelloWorldEncrypted", 
]

And now let’s execute the function:

$ aws lambda invoke --function-name HelloWorldEncrypted --invocation-type RequestResponse --log-type Tail /tmp/out | jq ".LogResult"
"U1RBUlQgUmVxdWVzdElkOiA5YmNlM2E1MC0xODMwLTExZTctYjFlNi1hZjQxZDYzMzYxZDkgVmVyc2lvbjogJExBVEVTVAooJ0RlY3J5cHRlZCB2YWx1ZTonLCAnYWJjZGVmZycpCignUGxhaW4gdGV4dDonLCAnaGlqa2xtbm8nKQpFTkQgUmVxdWVzdElkOiA5YmNlM2E1MC0xODMwLTExZTctYjFlNi1hZjQxZDYzMzYxZDkKUkVQT1JUIFJlcXVlc3RJZDogOWJjZTNhNTAtMTgzMC0xMWU3LWIxZTYtYWY0MWQ2MzM2MWQ5CUR1cmF0aW9uOiAzNjAuMDQgbXMJQmlsbGVkIER1cmF0aW9uOiA0MDAgbXMgCU1lbW9yeSBTaXplOiAxMjggTUIJTWF4IE1lbW9yeSBVc2VkOiAyNCBNQgkK"

That's a bit hard to read, so some decoding is needed:

$ echo "U1RBUlQgUmVxdWVzdElkOiA5YmNlM2E1MC0xODMwLTExZTctYjFlNi1hZjQxZDYzMzYxZDkgVmVyc2lvbjogJExBVEVTVAooJ0RlY3J5cHRlZCB2YWx1ZTonLCAnYWJjZGVmZycpCignUGxhaW4gdGV4dDonLCAnaGlqa2xtbm8nKQpFTkQgUmVxdWVzdElkOiA5YmNlM2E1MC0xODMwLTExZTctYjFlNi1hZjQxZDYzMzYxZDkKUkVQT1JUIFJlcXVlc3RJZDogOWJjZTNhNTAtMTgzMC0xMWU3LWIxZTYtYWY0MWQ2MzM2MWQ5CUR1cmF0aW9uOiAzNjAuMDQgbXMJQmlsbGVkIER1cmF0aW9uOiA0MDAgbXMgCU1lbW9yeSBTaXplOiAxMjggTUIJTWF4IE1lbW9yeSBVc2VkOiAyNCBNQgkK" | base64 --decode
START RequestId: 9bce3a50-1830-11e7-b1e6-af41d63361d9 Version: $LATEST
('Decrypted value:', 'abcdefg')
('Plain text:', 'hijklmno')
END RequestId: 9bce3a50-1830-11e7-b1e6-af41d63361d9
REPORT RequestId: 9bce3a50-1830-11e7-b1e6-af41d63361d9	Duration: 360.04 ms	Billed Duration: 400 ms 	Memory Size: 128 MB	Max Memory Used: 24 MB	

And it worked, hoorah!

The post AWS Lambda: Encrypted environment variables appeared first on Mark Needham.

Categories: Programming

Systems Engineering

Herding Cats - Glen Alleman - Mon, 04/03/2017 - 03:45

For non-trivial problems in any domain, Systems Engineering provides a starting framework for identifying problems, assessing possible solutions, implementing those solutions, measuring the performance of the efforts to deliver the solutions and the effectiveness of those solutions. Here's the collective wisdom of Systems Engineering from Mitre.


This book has many resources for a variety of problems in many domains, including agile development. This text speaks to managing in the presence of uncertainty and the processes needed to make decisions including estimating.

Systems engineering and policy analysis must account for costs and affordability. An elegant engineering solution that the customer cannot afford is useless; so too is a policy option that would make many people happy, but at a prohibitive cost. Therefore, careful efforts to estimate the cost of a particular option and the risk that the actual cost may exceed the estimate are necessary for systems engineering and policy analysis. Engineers who design products for commercial sale are familiar with the concept of "price points," and a manufacturer may wish to produce several products with similar purposes, each of which is optimal for its own selling price. In the case of systems engineering for the government, it may be necessary to conduct a policy analysis to determine how much the government is willing to spend, before conducting a systems engineering analysis to arrive at the technically "best" solution at that cost level.

Related articles: Capabilities Based Planning First Then Requirements; Systems Thinking, System Engineering, and Systems Management; Essential Reading List for Managing Other People's Money; Release Early and Release Often; Making Conjectures Without Testable Outcomes
Categories: Project Management

SPaMCAST 436 - Incrementalism, UAT and Agile, Systems Thinking

Software Process and Measurement Cast - Mon, 04/03/2017 - 01:59

The Software Process and Measurement Cast 436 features our essay titled Change Fatigue, Tunnel Vision, and Watts Humphrey, in which we answer the question of whether the state and culture of the organization or team can have a large impact on whether a Big Bang approach or an incremental approach to change makes sense.

Our second column is from Jeremy Berriault. Jeremy discusses user acceptance testing and Agile. There are lots of different ways to accomplish user acceptance testing in an Agile environment.  The only wrong way is not to do UAT in Agile.  Jeremy  blogs at https://jberria.wordpress.com/  

Jon M Quigley brings his column, The Alpha and Omega of Product Development, to the Cast. This week Jon puts all the pieces together and discusses systems thinking.  One of the places you can find Jon is at Value Transformation LLC.

Re-Read Saturday News

This week we wrap up our re-read of Carol Dweck's Mindset: The New Psychology of Success (buy your copy and read along).  In the wrap-up, we discuss overall impressions of the book and suggest a set of exercises to reinforce your growth mindset.

The next book in the series will be Holacracy (Buy a copy today) by Brian J. Robertson. After my recent interview with Jeff Dalton on Software Process and Measurement Cast 433, I realized that I had only read extracts from Holacracy, therefore we will read the whole book together.

Every week we discuss a chapter then consider the implications of what we have "read" from the point of view of both someone pursuing an organizational transformation and using the material when coaching teams.

Visit the Software Process and Measurement Cast blog to participate in this and previous re-reads.

Next SPaMCAST

The next Software Process and Measurement Cast will feature our discussion with Steven Adams on our recent re-read of  The Five Dysfunctions of a Team by Patrick Lencioni (Jossey-Bass, Copyright 2002, 33rd printing).  Steven provides insight and some ideas on how to get the most from the re-read feature!

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: "This book will prove that software projects should not be a tedious process, for you or your team." Support SPaMCAST by buying the book here. Available in English and Chinese.

Categories: Process Management



AWS Lambda: Programmatically create a Python 'Hello World' function

Mark Needham - Sun, 04/02/2017 - 23:11

I’ve been playing around with AWS Lambda over the last couple of weeks and I wanted to automate the creation of these functions and all their surrounding config.

Let’s say we have the following Hello World function:

def lambda_handler(event, context):
    print("Hello world")

To upload it to AWS we need to put it inside a zip file so let’s do that:

$ zip HelloWorld.zip HelloWorld.py
$ unzip -l HelloWorld.zip 
Archive:  HelloWorld.zip
  Length     Date   Time    Name
 --------    ----   ----    ----
       61  04-02-17 22:04   HelloWorld.py
 --------                   -------
       61                   1 file

Now we’re ready to write a script to create our AWS lambda function.

import boto3

lambda_client = boto3.client('lambda')

fn_name = "HelloWorld"
fn_role = 'arn:aws:iam::[your-aws-id]:role/lambda_basic_execution'

lambda_client.create_function(
    FunctionName=fn_name,
    Runtime='python2.7',
    Role=fn_role,
    Handler="{0}.lambda_handler".format(fn_name),
    Code={'ZipFile': open("{0}.zip".format(fn_name), 'rb').read(), },
)

[your-aws-id] needs to be replaced with the identifier of our AWS account. We can find that out by running the following command against the AWS CLI:

$ aws ec2 describe-security-groups --query 'SecurityGroups[0].OwnerId' --output text
123456789012
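Alternatively, if you'd rather stay in Python, a quick sketch using boto3's STS client returns the same account id:

import boto3

# Returns the 12 digit AWS account id for the current credentials
account_id = boto3.client('sts').get_caller_identity()['Account']
print(account_id)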

Now we can create our function:

$ python CreateHelloWorld.py


And if we test the function we’ll get the expected output:

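If you'd rather check from code than from the console, a minimal sketch with boto3 (assuming the function name used above) is:

import boto3
from base64 import b64decode

lambda_client = boto3.client('lambda')

response = lambda_client.invoke(
    FunctionName='HelloWorld',
    InvocationType='RequestResponse',
    LogType='Tail'
)

# The log tail comes back base64 encoded and should contain "Hello world"
print(b64decode(response['LogResult']))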

The post AWS Lambda: Programmatically create a Python 'Hello World' function appeared first on Mark Needham.

Categories: Programming

Own the Future of Management, with Me

NOOP.NL - Jurgen Appelo - Sun, 04/02/2017 - 08:21
Sorry, the deadline has passed. I’m not selling shares at this time.

As of today, I am no longer the only owner of the Management 3.0 business. I have co-owners. Yay!! And you can now join me as a co-owner too.

Management 3.0 is about better management with fewer managers. The ideas and practices help leaders and other change agents with managing for more happiness at work. And the brand was named the leader of the Third Wave of Agile, because of our focus on the entire business, rather than just on development practices and projects. By improving the management of organizations, we hope we are helping to make the world a better place.

As a co-owner of Management 3.0, you support and participate in my adventure. Either passively or actively, you help our team to offer healthy games, tools, and practices to more managers in more organizations.

Since last week, a Foundation owns the Management 3.0 business model and this Foundation has issued virtual shares. The business has grown by more than 40% each year in 2014, 2015, and 2016. In other words, the ownership of shares not only contributes to happier people in healthier organizations. It is also a smart business investment!

Until 1 April 2017, I am selling virtual shares (officially: certificates) for EUR 50 per share. On 1 April 2017, I will stop selling them for a while. I may continue selling more shares later, but probably at a higher price. And there are more reasons not to wait!

When you buy 10 or more shares before 1 April 2017, I send you a free copy of the book Managing for Happiness, personally signed by me on a unique hand-drawn bookplate.

When you buy 100 or more shares before 1 April 2017, you are entitled to a free one-hour keynote on location (excluding travel and accommodation expenses).

When you buy 1,000 or more shares before 1 April 2017, you gain the status of honored business partner, with special privileges and exclusive access to the team and me.

And everyone who buys shares has a chance to win one of my last eight copies of #Workout, the exclusive Limited Edition. Some people sell it for $2000+ on Amazon.

It is important to know that Management 3.0 is a global brand. I prefer that ownership is distributed across the world. I reserve the right not to sell too many shares to people in the same country. (And yes, it's first come, first served.)

What are the next steps?

1. Check out my FAQ for all details (read it here);
2. Fill out the application form (APPLY HERE);
3. Sign the simple agreement (I will send it);
4. Pay the share price (information will follow).

I asked the notary and my accountant to make it so simple that it’s five minutes of work and you could be co-owner in one day.

When this simple procedure is complete, we add you to the exclusive list of Management 3.0 owners. You can proudly wear a bragging badge on your website, and the team will inform you about new developments on a regular basis.

Don’t wait too long!

This offer is valid until 1 April 2017. The available shares per country are limited.

OWN THE FUTURE OF MANAGEMENT – APPLY NOW

The post Own the Future of Management, with Me appeared first on NOOP.NL.

Categories: Project Management

Mindset: The New Psychology of Success: Re-Read Week 10, Wrap-up

Mindset Book Cover

Next week we will begin our read of Holacracy.  Buy a copy today and read along!  In preparation, I suggest listening to the interview with Jeff Dalton on Software Process and Measurement Cast 433; Jeff has practical experience with using the concepts of holacracy in his company and as a tool in his consultancy.  Today we wrap up our re-read of Mindset.  If you have not read the book or have not read each of our installments, please use the links at the bottom of this entry and enjoy.

My big takeaway from the book is how to recognize mindsets and then how to use mindsets as a tool to help coach and lead organizations to deliver more value. One of the powerful aspects of the book is the examples.  The examples are helpful to provide analogies that can be used as a yardstick to aid recognition and reaction.  As a coach, mindsets give me a tool to help predict, understand and guide behavior.

Nearly everyone that I have discussed the book with finds the concept of mindset useful as a tool.  However, there are those that suggest that the concept is pseudo-science / popular psychology and can't be relied on as an absolute. A quick literature search will unearth an undercurrent of articles that question the science underlying the concept of mindsets. The one argument that resonates with me is that the concepts are very difficult to test in a controlled manner (there are too many variables to account for).  The negative articles are offset by equally weighty articles in support of the science behind mindsets.  Because the literature is unsettled, I would suggest that practitioners approach mindsets less as settled fact and more as a tool whose results need to be evaluated based on context rather than blindly accepted.  Even if the science were less contentious, most coaches are not trained psychologists; our evaluations can be colored by our own interpretations and personal biases and should therefore be used cautiously.

As an experiment as we wrap up the re-read, I would like to challenge the readers of this blog to see if they can shift their mindset to be even more growth oriented (you would not be reading this column if you had an extreme version of a fixed mindset).  

Try taking the following steps:

  1. Stop justifying and rationalizing outcomes or behaviors that don't meet your expectations.  Step back, work harder, practice more and try new behaviors.
  2. Expand your circle of influence.  I once read a book that suggested that you never eat lunch alone and that you eat lunch with someone different each day.  Talk and network with people that have growth mindsets.
  3. Dream. Do something new.  I have begun challenging myself to do something uncomfortable every day. While I don't always succeed, I would like to think I have expanded what I think is possible... just a bit.
  4. Admit that you will never be perfect, but always strive to be better. One of the reasons I read, blog and interview people is a realization that I can never know everything.
  5. The journey is just as important as the destination.  Focus on the journey rather than the end.

I would like to hear your ideas for fostering and using a growth mindset.  I will continue using mindsets as one of the tools in my change toolbox, and even though the science is unsettled, at least there is an academic debate on the topic with observational and experimental data.  If there weren't, wouldn't we be following a fixed mindset pattern?

Previous Entries of the re-read of Mindset:

Basics and Introduction

Chapter 1: Mindsets

Chapter 2: Inside the Mindsets

Chapter 3: The Truth About Ability and Accomplishment

Chapter 4: Sports: The Mindset of a Champion

Chapter 5: Business: Mindset and Leadership

Chapter 6: Relationships: Mindsets in Love (or Not)

Chapter 7:  Parents, Teachers, Coaches: Where Do Mindsets Come From?

Chapter 8: Changing Mindsets: A Workshop


Categories: Process Management

Principles, Processes, and Practices of Project Success

Herding Cats - Glen Alleman - Sat, 04/01/2017 - 00:30

Principles are timeless. Practices and Process are Fads.

A Principle is a fundamental truth or proposition that serves as the foundation for a system of belief or behavior or for a chain of reasoning. A Practice is an application or use of an idea, belief, or method, rather than just the Principle of such application or use. A Process is a series of steps and decisions involved in the way work is completed.

For the project management domain, what are some Principles - fundamental truths that serve as the foundation for a system of belief? Here's my version, in the form of questions that, when answered, form the foundation for the Practices and Processes.

  1. What does Done look like in units of measure meaningful to the decision makers?
  2. What is the Plan and the Schedule for the work in that Plan to reach Done for the needed cost on the needed date to deliver the needed Value to those paying for the work?
  3. What are the resources - time, talent and treasure - needed to reach Done as planned?
  4. What impediments will be encountered along the way to Done and what work must be performed to handle these impediments?
  5. What are the units of measures of progress to plan for each deliverable?

With these 5 Principles, there are 5 Processes needed to implement them

Each of these processes has Practices that have been shown to be widely applicable in many domains, including Agile development

Let's start with Identifying the needed Capabilities. Capabilities are what the customer bought. They are the ability to get something done.


Once the Capabilities are identified, we need the technical and operational requirements of the solution that implements the Capabilities


With the Requirements in place - and these can be a list of Features held in the Product Roadmap and Release Plan for agile - we need to lay out how they will be built.


Then we need to execute this baseline


With execution underway, managing the risks of the project becomes our focus beyond the engineering work


With all this in place - to whatever scale is appropriate for the problem at hand - we need the final pieces.


There, now we've got Principles, Processes, and Practices all connected.

Related articles: Two Books in the Spectrum of Software Development; We've Been Doing This for 20 Years ...; Applying the Right Ideas to the Wrong Problem; Estimating Processes in Support of Economic Analysis; Managing in Presence of Uncertainty
Categories: Project Management

Stuff The Internet Says On Scalability For March 31st, 2017

Hey, it's HighScalability time:

 

What lies beneath? Networks...of blood vessels. (Wellcome Image Awards)
If you like this sort of Stuff then please support me on Patreon.
  • 5000: node (150,000 pod) clusters in Kubernetes 1.6; 15 years: time to @spacex launch with a recycled rocket booster; 174 mbps: Internet speed in Dublin; 10 nm: Intel's new Moore approved process; 30 minutes: to create Samsung's S8; 50 billion: of your cells replaced each day; 2 million: new red blood cells per second; 3dbm: attenuation of human body, same as a wall; 12: hours of tardis sounds; 350: pages to stop a bullet; 2: meters of DNA pack in a space .000006m wide; 

  • Quotable Quotes:
    • @swardley: Having met many "leaders" in technology & business, I wouldn't bet on the future survival of humanity. If anything AI might help the odds
    • Francis Pouliot: Any contentious hard fork of the Bitcoin blockchain shall be considered an alternative cryptocurrency (altcoin), regardless of the relative hashing power on the forked chain.
    • @coda: WhatsApp: 900M users, built w/ < 35 devs, using #erlang Krispy Kreme: 1004 locations, 3700 employees, original glazed is 190 #calories
    • @BenedictEvans: Still think it's interesting Instagram shifted emphasis from interests to friends. Is that a law of nature for social if you want scale?
    • @johnrobb: "each robot per thousand workers decreased employment by 6.2 workers and wages by 0.7 percent"
    • Alex Woodie: The Hadoop dream of unifying data and compute in a distributed manner has all but failed in a smoking heap of cost and complexity, according to technology experts and executives who spoke to Datanami.
    • @RichRogersIoT: "First you learn the value of abstraction, then you learn the cost of abstraction, then you are ready to engineer." - @KentBeck
    • @codemanship: Don't explain code quality to execs. Explain high cost of change. Explain slowing down of innovation. Explain longer cycle times.
    • @malwareunicorn: Bad malware pickup lines: Hey girl, I heard you like sandboxes. I would never try to escape yours ;)
    • dkhenry: The selling of data isn't the policy you need to fight. The monopoly power of ISP's is the problem you must push back on. 
    • @MaxWendkos: An SEO expert walks into a bar, bars, pub, tavern, public house, Irish pub, drinks, beer, alcohol
    • Barry Lampert: the point of Amazon isn't to offer a consumer the absolute lowest price possible; it's to offer the lowest price possible given the convenience that Amazon offers
    • Daniel Lemire: Let us make the statement precise: Most performance or memory optimizations are useless.
    • @sarahmei: People run into trouble with DRY because it doesn't tell you *what* not to repeat. People assume syntax, but it's actually concepts.
    • Dan Rayburn: China suffers from 9.2% transfer failure rate (similar to Malaysia, India and Brazil), and a high packet loss.  These two parameters have severe impact on content download time and overall performance.
    • Daniel Lemire: I submit to you that it is no accident if the StackOverflow list of top-paying programming languages is made of obscure languages. They are comparing the average of a niche against the average of a large population
    • For even more Quotable Quotes please click through to the main article.

  • For good WiFi you don't necessarily need one big powerful router bristling with antenna like a radiation mutated ant. 802.eleventy what? A deep dive into why Wi-Fi kind of suck and New Screen Savers (@20 min). You want a true mesh network (Plume). WiFi should whisper, use 5G to create pools of WiFi in each room so signals don't penetrate between rooms. Lots of little access points can automatically find a path through your house. Use a wired backhaul for best performance. Raw throughput isn't the best measure. How does it perform with many people using many devices? Roaming isn't always well supported. Consider how well the system hands-off devices as you walk through the house. 

  • BloomCON 2017 Videos are now available. You might like Honey, I Stole Your C2 [Command-and-control] Server: A dive into attacker infrastructure.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

A new issue tracker for G Suite developers

Google Code Blog - Fri, 03/31/2017 - 07:36
Originally Posted on the G Suite Developers Blog
Posted by Ryan Roth, Developer Programs Engineer & Wesley Chun (@Wescpy), Developer Advocate, G Suite

You may have read recently that the Google Cloud Platform team upgraded to Issue Tracker, the same system that Google uses internally. This allows for improved collaboration between all of us and all of you. Issues you file will have better exposure internally, and you get improved transparency in terms of seeing the issues we're actively working on. Starting today, G Suite developers will also have a new issue tracker to which we've already migrated existing issues from previous systems.

Whether it's a bug that you've found, or if you wish to submit a favorite feature request, the new issue tracker is here for you. Heads up, you need to be logged in with your Google credentials to view or update issues in the tracker.



The new issue tracker for G Suite developers.
Each G Suite API and developer tool has its own "component" number that you can search. For your convenience, below is the entire list. You may browse for issues relevant to the Google APIs that you're using, or click on the convenience links to report an issue or request a new/missing feature:
To get started, take a look at the documentation pages, as well as the FAQ. For more details, be sure to check out the Google Cloud Platform announcement, too. We look forward to working more closely with all of you soon!
Categories: Programming

Storytelling Techniques: Business Obituaries

An obituary was written when a queen was interred

In keeping with a slightly morbid bend in storytelling techniques, we add to the premortem technique the idea of a business (substitute project) obituary. An obituary is a specialized form of news story that communicates the key points in the life of a person, organization, event or project. During my college years, I spent time in college radio stations on air, both playing music and doing the news (where do you think the podcasting came from? Check out the Software Process and Measurement Cast). In the newsroom we largely knew how to put together an obituary. We kept obituaries for a few critical local celebrities written and ready just in case (in the business, this is called a morgue). Just like any story, an obituary is composed of a set of attributes. A typical (simplified) set of components found in obituaries (Chapter 51 from the News Manual - Obituaries) includes:

  Formula B

  1. Name, identity, time of death, cause of death and place of death.
  2. Where the person came from and age.
  3. Most newsworthy achievements.
  4. The rest of their life and achievements, in chronological order.
  5. Funeral arrangements.
  6. Survivors.

Business obituaries, while fiction, follow many of the same conventions.  The process for writing a project obituary is different than the one we used back in WLSU’s newsroom.  A team-oriented process to write a project obituary includes the following steps:

  1. Assemble the project team and break them into pairs.
  2. Ask each pair to identify two large concerns that could lead to project failure. If the pairs have problems identifying large risks, ask "what keeps you up at night?"
  3. Have each team develop an obituary based on the News Manual outline, using the risk they identified as the shock that killed the project. Timebox this step to 30 minutes.
  4. Ask each team to develop a headline for their obituary. Generating a headline is often useful to help the team distil the central message.
  5. Have each team debrief in a round robin format. As each team debriefs, have the rest of the teams listen to how the writers perceived the impact of the risk. Timebox this task to 30 minutes.
  6. When done debriefing, ask the whole team how they would avoid the risks and the impacts foreseen in the obituaries (for risk aficionados, this step is about developing a risk mitigation plan).

The story in the obituary is useful for identifying risks and providing information needed for risk mitigation.  At the same time, the story can help the team to identify the true nature of the project, the attributes that the team perceives will make it special.

An alternate approach is to ask the team to write the obituary as if the project was a wild success. This approach takes a positive rather than a negative path and might be useful in organizations that do not want to discuss possible failure.

Most project personnel, regardless of whether they are software developers, testers, business subject matter experts or business analysts, are inherently optimistic.  The idea of developing an obituary helps get people out of their comfort zone so they are free to think about what could go wrong and what the consequences could be if those risks are not dealt with.


Categories: Process Management

Principles, Processes, and Practices of Project Success

Herding Cats - Glen Alleman - Thu, 03/30/2017 - 23:39

In the project management world, everyone is selling something to solve some problem. This includes product vendors, consulting firms, and internal providers. I'm on the internal provider side most of the time. Other times, I'm on the consulting firm side, acting as an internal provider. I'm not on the vendor side.

Over the years (30-something years) I have come to understand and write about the Principles, Processes, and Practices of project success in a wide variety of domains: from software systems to heavy construction, to metal bending companies, petrochemicals, pulp and paper, drugs, consumer products, industrial products, and intellectual property.

Over this time I've been guided by project planning and control people, engineers, sales people, marketing people, and senior business leaders. 

Here's what I've learned. There are 5 Principles, 5 Practices, and 5 Process that can be applied to increase the probability of success of all projects.

The notion that we should move our focus to Products and away from Projects ignores (sometimes willfully) what the role of a project is. Those suggesting we don't need projects (#NoProjects) probably don't understand the actual role of project work, but that's another post.

Here's the summary of these Principles, Practices, and Processes. 


Each of the Principles, Practices, and Processes is independent in the following manner - again, no matter the domain, context, or engineering or development method, from putting in the Spring Garden, to flying to Mars, to building the flight avionics for the spacecraft flying to Mars.


So when someone suggests some new way of managing a project, use this to check whether their new way covers the Principles, Processes, and Practices.

Related articles: Risk Management is How Adults Manage Projects; Root Cause of Project Failure; Want To Learn How To Estimate?; Essential Reading List for Managing Other People's Money
Categories: Project Management

My top 10 technology podcasts

Mark Needham - Thu, 03/30/2017 - 23:38

For the last six months I’ve been listening to 2 or 3 technology podcasts every day while out running and on my commute and I thought it’d be cool to share some of my favourites.

I listen to all of these on the Podbean android app which seems pretty good. It can’t read the RSS feeds of some podcasts but other than that it’s worked well.

Anyway, on with the podcasts:

Software Engineering Daily

This is the most reliable of all the podcasts I’ve listened to and a new episode is posted every weekday.

It sweeps across lots of different areas of technology – there’s a bit of software development, a bit of data engineering, and a bit of infrastructure.

Every now and then there’s a focus on a particular topic area or company which I find really interesting e.g. in 2015 there was a week of Bitcoin focused episodes and more recently there’s been a bunch of episodes about Stripe.

Partially Derivative

This one is more of a data science podcast and covers lots of different areas in that space, but thankfully keeps the conversation at a level that a non data scientist like me can understand.

I especially liked the post US election episode where they talked about the problems with polling and how most election predictions had ended up being wrong.

There’s roughly one new episode a week.

O’Reilly Bots podcast

I didn't know anything about bots before I listened to this podcast and it was quite addictive - I powered through all the episodes in a few weeks.

They cover all sorts of topics that I'd have never thought of - why have developers got interested in bots? How do UIs differ from ones in apps? How do users find out about bots?

I really enjoy listening to this one but it’s been a bit quiet recently.

Datanauts

I found this one really useful for getting the hang of infrastructure topics. I wanted to learn a bit more about Kubernetes a few months ago and they had an episode which gives an overview as well as more detailed episodes.

One neat feature of this podcast is that after each part of an interview the hosts summarise what they picked up from that segment. I like that it gives you a few seconds to think about what you picked up and whether it matches the summary.

Some of the episodes go really deep into specific infrastructure topics and I struggle to follow along but there are enough other ones to keep me happy.

Becoming a Data Scientist

This one mirrors the journey of Renee Teate getting into data science and bringing everyone along on the journey.

Each episode is paired with a learning exercise for the listener to try, and although I haven't tried any of the learning exercises yet, I like how some interviews are structured around them, e.g. Sebastien Rashka was interviewed about model accuracy in the week that topic was being explored in the learning club.

If you’re interested in data science topics but aren’t a data scientist yourself this is a good one to listen to.

This Week In Machine Learning and AI Podcast

This one mostly goes well over my head but it’s still interesting to listen to other people talk about stuff they’re working on.

There's a lot of focus on Deep Learning so I think I need to learn a bit more about that and then the episodes will make more sense.

The last episode with Evan Wright was much more accessible. I need more like that one!

The Women in Tech Show

I came across Edaena Salinas on Software Engineering Daily and didn’t initially realise that Edaena had a podcast until a couple of weeks ago.

There’s lots of interesting content on this one. The episodes on data driven marketing and unconscious bias are my favourites of the ones I’ve listened to so far.

The Bitcoin Podcast

I listened to a few shows about bitcoin on Software Engineering Daily and found this podcast while trying to learn more.

Some of the episodes are general enough that I can follow along, but others use a lot of blockchain-specific terminology that leaves me feeling a bit lost.

I especially liked the episode that featured Greg Walker of learnmeabitcoin fame. Greg uses Neo4j as part of the website and presented at the London Neo4j meetup earlier this week.

Go Time

This one has a chat based format that I really like. They have a cool section called 'free software Friday' at the end of each show where everybody calls out a piece of software or maintainer that they're grateful for.

I was playing around with Go in November/December last year so it was really helpful in pointing me in the right direction. I haven’t done any stuff recently so it’s more a general interest show for now.

Change Log

This one covers lots of different topics, mostly around different open source projects.

The really cool thing about this one is they get every guest to explain their ‘origin story’ i.e. how did they get into software and what was their path to the current job. The interview with Nathan Sobo about Atom was particularly good in this respect.

It’s always interesting to hear how other people got started and contrast it with my own experiences.

Another cool feature of this podcast is that they sometimes have episodes where they interview people at open source conferences.

That’s it folks

That’s all for now. Hopefully there’s one or more in there that you haven’t listened to before.

If you’ve got any suggestions for other ones I should listen to let me know in the comments or send me a message on twitter @markhneedham

The post My top 10 technology podcasts appeared first on Mark Needham.

Categories: Programming

Working with AWS CodeDeploy

Agile Testing - Grig Gheorghiu - Thu, 03/30/2017 - 23:18
As usual when I make a breakthrough after bumping my head against the wall for a few days trying to get something to work, I hasten to write down my notes here so I can remember what I've done ;) In this case, the head-against-the-wall routine was caused by trying to get AWS CodeDeploy to work within the regular code deployment procedures that we have in place using Jenkins and Capistrano.

Here is the 30,000 foot view of how the deployment process works using a combination of Jenkins, Docker, Capistrano and AWS CodeDeploy:
  1. Code gets pushed to GitHub
  2. Jenkins deployment job fires off either automatically (for development environments, if so desired) or manually
    • Jenkins spins up a Docker container running Capistrano and passes it several environment variables such as GitHub repository URL and branch, target deployment directory, etc.
    • The Capistrano Docker image is built beforehand and contains rake files that specify how the code it checks out from GitHub is supposed to be built
    • The Capistrano Docker container builds the code and exposes the target deployment directory as a Docker volume
    • Jenkins archives the files from the exposed Docker volume locally as a tar.gz file
    • Jenkins uploads the tar.gz to an S3 bucket
    • For good measure, Jenkins also builds a Docker image of a webapp container which includes the built artifacts, tags the image and pushes it to Amazon ECR so it can be later used if needed by an orchestration system such as Kubernetes
  3. AWS CodeDeploy runs a code deployment (via the AWS console currently, using the awscli soon) while specifying the S3 bucket and the tar.gz file above as the source of the deployment and an AWS AutoScaling group as the destination of the deployment
  4. Everybody is happy 
You may ask: why Capistrano? Why not use a shell script or some other way of building the source code into artifacts? Several reasons:
  • Capistrano is still one of the most popular deployment tools. Many developers are familiar with it.
  • You get many good features for free just by using Capistrano. For example, it automatically creates a releases directory under your target directory, creates a timestamped subdirectory under releases where it checks out the source code, builds the source code, and if everything works well creates a 'current' symlink pointing to the releases/timestamped subdirectory
  • This strategy is portable. Instead of building the code locally and uploading it to S3 for use with AWS CodeDeploy, you can use the regular Capistrano deployment and build the code directly on a target server via ssh. The rake files are the same, only the deploy configuration differs.
I am not going to go into details for the Jenkins/Capistrano/Docker setup. I've touched on some of these topics in previous posts.

I will go into details for the AWS CodeDeploy setup. Here goes.

Create IAM policies and roles

There are two IAM roles that need to be created for AWS CodeDeploy to work. One is attached (as an instance profile) to the EC2 instances you want to deploy to, so that the CodeDeploy agent running on each instance can download build archives from S3. The other is a service role used by the CodeDeploy service itself to interact with your Auto Scaling Groups and load balancers.

- Create the following IAM policy for EC2 instances, which allows those instances to list S3 buckets and download objects from them (in this case the permissions cover all S3 buckets, but you can restrict them to specific buckets in the Resource element):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}

- Attach the above policy to an IAM role and name the role e.g. CodeDeploy-EC2-Instance-Profile
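
If you prefer scripting this instead of clicking through the IAM console, here's a rough, untested sketch of the same steps with the awscli; the trust policy file name and the inline policy name are placeholders, and s3-read-policy.json is assumed to contain the S3 policy shown above:

# Trust policy that lets EC2 instances assume the role
cat > ec2-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name CodeDeploy-EC2-Instance-Profile \
  --assume-role-policy-document file://ec2-trust.json

# Attach the S3 read policy shown above as an inline policy
aws iam put-role-policy \
  --role-name CodeDeploy-EC2-Instance-Profile \
  --policy-name CodeDeploy-EC2-S3-Read \
  --policy-document file://s3-read-policy.json

# EC2 instances reference an instance profile rather than the role directly
aws iam create-instance-profile \
  --instance-profile-name CodeDeploy-EC2-Instance-Profile
aws iam add-role-to-instance-profile \
  --instance-profile-name CodeDeploy-EC2-Instance-Profile \
  --role-name CodeDeploy-EC2-Instance-Profile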

- Create the following IAM policy for the service role used by CodeDeploy (this is what lets CodeDeploy manage the Auto Scaling Group and load balancer during deployments):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:CompleteLifecycleAction",
                "autoscaling:DeleteLifecycleHook",
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeLifecycleHooks",
                "autoscaling:PutLifecycleHook",
                "autoscaling:RecordLifecycleActionHeartbeat",
                "autoscaling:CreateAutoScalingGroup",
                "autoscaling:UpdateAutoScalingGroup",
                "autoscaling:EnableMetricsCollection",
                "autoscaling:DescribePolicies",
                "autoscaling:DescribeScheduledActions",
                "autoscaling:DescribeNotificationConfigurations",
                "autoscaling:SuspendProcesses",
                "autoscaling:ResumeProcesses",
                "autoscaling:AttachLoadBalancers",
                "autoscaling:PutScalingPolicy",
                "autoscaling:PutScheduledUpdateGroupAction",
                "autoscaling:PutNotificationConfiguration",
                "autoscaling:DescribeScalingActivities",
                "autoscaling:DeleteAutoScalingGroup",
                "ec2:DescribeInstances",
                "ec2:DescribeInstanceStatus",
                "ec2:TerminateInstances",
                "tag:GetTags",
                "tag:GetResources",
                "sns:Publish",
                "cloudwatch:DescribeAlarms",
                "elasticloadbalancing:DescribeLoadBalancers",
                "elasticloadbalancing:DescribeInstanceHealth",
                "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
                "elasticloadbalancing:DeregisterInstancesFromLoadBalancer"
            ],
            "Resource": "*"
        }
    ]
}
- Attach the above policy to an IAM role and name the role e.g. CodeDeployServiceRole
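
The service role can be created the same way from the command line; the main difference is the trust policy, which has to allow the CodeDeploy service to assume the role. A sketch, with the file names as placeholders and codedeploy-service-policy.json assumed to contain the policy above:

# Trust policy that lets the CodeDeploy service assume the role
cat > codedeploy-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "codedeploy.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name CodeDeployServiceRole \
  --assume-role-policy-document file://codedeploy-trust.json

# Attach the autoscaling/ec2/elb policy shown above as an inline policy
aws iam put-role-policy \
  --role-name CodeDeployServiceRole \
  --policy-name CodeDeployServicePolicy \
  --policy-document file://codedeploy-service-policy.json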

Create a 'golden image' AMI

The whole purpose of AWS CodeDeploy is to act in conjunction with Auto Scaling Groups so that the app server layer of your infrastructure becomes horizontally scalable. You need to start somewhere, so I recommend the following:
  • set up an EC2 instance for your app server the old-fashioned way, either with Ansible/Chef/Puppet or with Terraform
  • configure this EC2 instance to talk to any other layers it needs, i.e. the database layer (either running on EC2 instances or, if you are in AWS, on RDS), the caching layer (dedicated EC2 instances running Redis/memcached, or AWS ElastiCache), etc. 
  • deploy some version of your code to the instance and make sure your application is fully functioning
If all this works as expected, take an AMI image from this EC2 instance. This image will serve as the 'golden image' that all other instances launched by the Auto Scaling Group / Launch Configuration will be based on.
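
You can take the AMI from the console, or with something along these lines via the awscli (the instance id and image name below are placeholders):

aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "webapp-golden-image" \
  --no-reboot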

Create Application Load Balancer (ALB) and Target Group

The ALB will be the entry point into your infrastructure. For now just create an ALB and an associated Target Group. Make sure you add your availability zones into the AZ pool of the ALB.

If you want the ALB to handle the SSL certificate for your domain, add the SSL cert to AWS Certificate Manager and add a listener on the ALB mapping port 443 to the Target Group. Of course, also add a listener for port 80 on the ALB and map it to the Target Group.

I recommend creating a dedicated Security Group for the ALB and allowing ports 80 and 443, either from everywhere or from a restricted subnet if you want to test it first.

For the Target Group, make sure you set the correct health check for your application (something like requesting a special file healthcheck.html over port 80). No need to select any EC2 instances in the Target Group yet.
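
For reference, here's a rough awscli sketch of the same ALB / Target Group setup; the names, VPC/subnet/security group ids and the health check path are placeholders that need to match your environment:

# Target Group with the application health check
aws elbv2 create-target-group \
  --name webapp-tg \
  --protocol HTTP --port 80 \
  --vpc-id vpc-xxxxxxxx \
  --health-check-protocol HTTP \
  --health-check-path /healthcheck.html

# ALB spanning at least two availability zones, using the dedicated security group
aws elbv2 create-load-balancer \
  --name webapp-alb \
  --subnets subnet-aaaaaaaa subnet-bbbbbbbb \
  --security-groups sg-xxxxxxxx

# Listener forwarding port 80 to the Target Group (repeat for 443, adding --certificates)
aws elbv2 create-listener \
  --load-balancer-arn <alb-arn> \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<tg-arn>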

Create Launch Configuration and Auto Scaling Group

Here are the main elements to keep in mind when creating a Launch Configuration to be used in conjunction with AWS CodeDeploy:
  • AMI ID: specify the AMI ID of the 'golden image' created above
  • IAM Instance Profile: specify CodeDeploy-EC2-Instance-Profile (role created above)
  • Security Groups: create a Security Group that allows access to ports 80 and 443 from the ALB Security Group above 
  • User data: each newly launched EC2 instance based on your golden image AMI will have to get the AWS CodeDeploy agent installed. Here's the user data for an Ubuntu-based AMI (taken from the AWS CodeDeploy documentation):
#!/bin/bash
apt-get -y update
apt-get -y install awscli
apt-get -y install ruby
cd /home/ubuntu
aws s3 cp s3://aws-codedeploy-us-west-2/latest/install . --region us-west-2
chmod +x ./install
./install auto

Alternatively, you can run these commands on your initial EC2 instance, then take the golden image AMI based off of that instance. That way you make sure that the CodeDeploy agent will be running on each new EC2 instance that gets provisioned via the Launch Configuration. In this case, there is no need to specify a User data section for the Launch Configuration.

Once the Launch Configuration is created, you'll be able to create an Auto Scaling Group (ASG) associated with it. Here are the main configuration elements for the ASG:
  • Launch Configuration: the one defined above
  • Target Groups: the Target Group defined above
  • Min/Max/Desired: up to you to define the EC2 instance count for each of these. You can start with 1/1/1 to test
  • Scaling Policies: you can start with a static policy (corresponding to the Min/Max/Desired counts) and add policies based on alarms triggered by various CloudWatch metrics such as CPU usage, memory usage, etc., as measured on the EC2 instances comprising the ASG
Once the ASG is created, depending on the Desired instance count, that many EC2 instances will be launched.
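
Both can also be created from the awscli. Here's a minimal, untested sketch assuming a 1/1/1 instance count; the AMI id, instance type, security group, subnets and Target Group ARN are placeholders, and install-codedeploy-agent.sh is assumed to contain the user data script above:

aws autoscaling create-launch-configuration \
  --launch-configuration-name webapp-lc \
  --image-id ami-xxxxxxxx \
  --instance-type t2.medium \
  --iam-instance-profile CodeDeploy-EC2-Instance-Profile \
  --security-groups sg-yyyyyyyy \
  --user-data file://install-codedeploy-agent.sh

aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name webapp-asg \
  --launch-configuration-name webapp-lc \
  --min-size 1 --max-size 1 --desired-capacity 1 \
  --vpc-zone-identifier "subnet-aaaaaaaa,subnet-bbbbbbbb" \
  --target-group-arns <tg-arn>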

Create AWS CodeDeploy Application and Deployment Group

We finally get to the meat of this post. Go to the AWS CodeDeploy page and create a new Application. You also need to create a Deployment Group while you are at it. For Deployment Type, you can start with 'In-place deployment' and, once you are happy with that, move to 'Blue/green deployment', which is more complex but better from a high-availability and rollback perspective.

In the Add Instances section, choose 'Auto scaling group' as the tag type, and the name of the ASG created above as the key. Under 'Total matching instances' below the Tag and Key you should see a number of EC2 instances corresponding to the Desired count in your ASG.

For Deployment Configuration, you can start with the default value, which is OneAtATime, then experiment with other types such as HalfAtATime (I don't recommend AllAtOnce unless you know what you're doing).

For Service Role, you need to specify the CodeDeployServiceRole service role created above.
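
If you'd rather do this from the command line, here's a sketch of the equivalent awscli calls; the application and deployment group names, the ASG name and the account id in the role ARN are placeholders:

aws deploy create-application --application-name webapp

aws deploy create-deployment-group \
  --application-name webapp \
  --deployment-group-name webapp-asg-group \
  --auto-scaling-groups webapp-asg \
  --deployment-config-name CodeDeployDefault.OneAtATime \
  --service-role-arn arn:aws:iam::123456789012:role/CodeDeployServiceRole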

Create scaffolding files for the AWS CodeDeploy Application lifecycle

At a minimum, the tar.gz or zip archive of your application's built code also needs to contain what is called an AppSpec file, which is a YAML file named appspec.yml. The file needs to be in the root directory of the archive. Here's what I have in mine:

version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/mydomain.com/
hooks:
  BeforeInstall:
    - location: before_install
      timeout: 300
      runas: root
  AfterInstall:
    - location: after_install
      timeout: 300
      runas: root

The before_install and after_install scripts (you can name them anything you want) are shell scripts that will be executed after the archive is downloaded on the target EC2 instance.

The before_install script will be run before the files inside the archive are copied into the destination directory (as specified in the destination variable /var/www/mydomain.com). You can do things like create certain directories that need to exist, or change the ownership/permissions of certain files and directories.

The after_install script will be run after the files inside the archive are copied into the destination directory. You can do things like create symlinks, run any scripts that need to complete the application installation (such as scripts that need to hit the database), etc.

One note specific to archives obtained from code built by Capistrano: it's customary to have Capistrano tasks create symlinks for directories such as media or var to volumes outside of the web server document root (when media files are mounted over NFS/EFS, for example). When these symlinks are unarchived by CodeDeploy, they tend to turn into regular directories, and the contents of potentially large mounted file systems get copied into them. Not what you want. I ended up creating all the symlinks I need in the after_install script, and not creating them in Capistrano.
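
To make this concrete, here's an illustration of what these two hook scripts could look like for a setup like the one above; the directories, user and symlink targets are hypothetical and will obviously differ per application:

#!/bin/bash
# before_install (sketch): make sure the destination directory exists with the right ownership
mkdir -p /var/www/mydomain.com
chown -R www-data:www-data /var/www/mydomain.com

#!/bin/bash
# after_install (sketch): recreate the symlinks that the Capistrano tasks would normally manage
ln -sfn /mnt/nfs/shared/media /var/www/mydomain.com/media
chown -h www-data:www-data /var/www/mydomain.com/media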

There are other points in the Application deploy lifecycle where you can insert your own scripts. See the AppSpec hook documentation.


Deploy the application with AWS CodeDeploy

Once you have an Application and its associated Deployment Group, you can select this group and choose 'Deploy new revision' from the Action drop-down. For the Revision Type, choose 'My application is stored in Amazon S3'. For the Revision Location, type in the name of the S3 bucket where Jenkins uploaded the tar.gz of the application build. You can play with the other options according to the needs of your deployment.

Finally, hit the Deploy button, baby! If everything goes well, you'll see a nice green bar showing success.
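
I plan to move from the console to the awscli for this step; as a sketch, the equivalent command would look something like the following, with the bucket name and archive key as placeholders for whatever Jenkins uploaded:

aws deploy create-deployment \
  --application-name webapp \
  --deployment-group-name webapp-asg-group \
  --s3-location bucket=my-build-bucket,key=webapp-build.tar.gz,bundleType=tgz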


If everything does not go well, you can usually troubleshoot things pretty well by looking at the logs of the Events associated with that particular Deployment. Here's an example of an error log:

ScriptFailed
Script Name after_install
Message Script at specified location: after_install run as user root failed with exit code 1 
Log Tail [stderr]chown: changing ownership of '/var/www/mydomain.com/shared/media/images/85.jpg':
Operation not permitted
 
In this case, the 'shared' directory was mounted over NFS, so I had to make sure the permissions and ownership of the source file system on the NFS server were correct.

I am still experimenting with AWS CodeDeploy and haven't quite used it 'in anger' yet, so I'll report back with any other findings.

Update your app to take advantage of the larger aspect ratio on new Android flagship devices

Android Developers Blog - Thu, 03/30/2017 - 18:40
Posted by Neto Marin, Developer Advocate, Google Play

To deliver more engaging viewing experiences to their users, many Android OEMs are experimenting with new, super widescreen smartphones. Samsung has just announced a new flagship device, the Samsung Galaxy S8, featuring a new display format with an aspect ratio of 18.5:9. At the Mobile World Congress earlier this year, LG also launched their new flagship device, the LG G6, with an expanded screen aspect ratio of 18:9.
(Left) An app with a maximum aspect ratio set at 16:9 on an 18.5:9 device  (Right) An app with a maximum aspect ratio set at or over 18.5:9 on an 18.5:9 device

In order to take full advantage of the larger display formats on these devices, you should consider increasing your app's maximum supported aspect ratio. To do so, simply declare an android.max_aspect <meta-data> element in the app's <application> element:

<meta-data android:name="android.max_aspect"
    android:value="ratio_float"/>
Where ratio_float is the maximum aspect ratio your app can support, expressed as (longer dimension / shorter dimension) in decimal form.

We recommend that you design your app to support aspect ratios of 2.1 or higher. For this, you would add the following to the <application> element:

<meta-data android:name="android.max_aspect" android:value="2.1" />
Note: if you don't set a value, and android:resizeableActivity is not true, then the maximum aspect ratio defaults to 1.86 (roughly 16:9) and your app will not take advantage of the extra screen space.

As more super widescreen Android devices, like the Samsung Galaxy S8 and the LG G6, become available, you'll have more opportunities to display more content and create more engaging experiences with your app.

For more details about how to support multiple screens on Android, please visit the page Supporting Multiple Screens.
Categories: Programming