
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Programming

Go vs Python: Parsing a JSON response from an HTTP API

Mark Needham - 8 hours 56 min ago

As part of a recommendations-with-Neo4j talk that I’ve presented a few times over the last year, I have a set of scripts that download data from the meetup.com API.

They’re all written in Python but I thought it’d be a fun exercise to see what they’d look like in Go. My eventual goal is to try and parallelise the API calls.

This is the Python version of the script:

import requests
import os

key = os.environ['MEETUP_API_KEY']
lat = "51.5072"
lon = "0.1275"

seed_topic = "nosql"
uri = "https://api.meetup.com/2/groups?&topic={0}&lat={1}&lon={2}&key={3}".format(seed_topic, lat, lon, key)

r = requests.get(uri)
all_topics = [topic["urlkey"] for result in r.json()["results"] for topic in result["topics"]]

for topic in all_topics:
    print topic

We’re using the requests library to send a request to the meetup API to get the groups which have the topic ‘nosql’ in the London area. We then parse the response and print out the topics.

Now to do the same thing in Go! The first bit of the script is almost identical:

package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
	"time"
)

func handleError(err error) {
	if err != nil {
		fmt.Println(err)
		log.Fatal(err)
	}
}

func main() {
	var httpClient = &http.Client{Timeout: 10 * time.Second}

	seedTopic := "nosql"
	lat := "51.5072"
	lon := "0.1275"
	key := os.Getenv("MEETUP_API_KEY")

	uri := fmt.Sprintf("https://api.meetup.com/2/groups?&topic=%s&lat=%s&lon=%s&key=%s", seedTopic, lat, lon, key)

	response, err := httpClient.Get(uri)
	handleError(err)
	defer response.Body.Close()
	fmt.Println(response)
}

If we run that this is the output we see:

$ go run cmd/blog/main.go

So far so good. Now we need to parse the response that comes back.

Most of the examples that I came across suggest creating a struct with all the fields that you want to extract from the JSON document, but that feels a bit overkill for such a simple script.

Instead we can just create maps of (string -> interface{}) and then apply type assertions where appropriate. I ended up with the following code to extract the topics:

import "encoding/json"

var target map[string]interface{}
decoder := json.NewDecoder(response.Body)
decoder.Decode(&target)

for _, rawGroup := range target["results"].([]interface{}) {
    group := rawGroup.(map[string]interface{})
    for _, rawTopic := range group["topics"].([]interface{}) {
        topic := rawTopic.(map[string]interface{})
        fmt.Println(topic["urlkey"])
    }
}

It’s more verbose than the Python version because we have to explicitly assert the type of each value we take out of the map at every stage, but it’s not too bad. This is the full script:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
	"time"
)

func handleError(err error) {
	if err != nil {
		fmt.Println(err)
		log.Fatal(err)
	}
}

func main() {
	var httpClient = &http.Client{Timeout: 10 * time.Second}

	seedTopic := "nosql"
	lat := "51.5072"
	lon := "0.1275"
	key := os.Getenv("MEETUP_API_KEY")

	uri := fmt.Sprintf("https://api.meetup.com/2/groups?&topic=%s&lat=%s&lon=%s&key=%s", seedTopic, lat, lon, key)

	response, err := httpClient.Get(uri)
	handleError(err)
	defer response.Body.Close()

	var target map[string]interface{}
	decoder := json.NewDecoder(response.Body)
	decoder.Decode(&target)

	for _, rawGroup := range target["results"].([]interface{}) {
		group := rawGroup.(map[string]interface{})
		for _, rawTopic := range group["topics"].([]interface{}) {
			topic := rawTopic.(map[string]interface{})
			fmt.Println(topic["urlkey"])
		}
	}
}
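For comparison, here is roughly what the struct-based approach suggested by most examples might look like. This is just a sketch: the type names are my own, and only the JSON field names (results, topics, urlkey) come from the response we decoded above.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
	"time"
)

// We only declare the fields we care about; encoding/json ignores the rest.
type topic struct {
	URLKey string `json:"urlkey"`
}

type group struct {
	Topics []topic `json:"topics"`
}

type groupsResponse struct {
	Results []group `json:"results"`
}

func main() {
	httpClient := &http.Client{Timeout: 10 * time.Second}

	uri := fmt.Sprintf("https://api.meetup.com/2/groups?&topic=%s&lat=%s&lon=%s&key=%s",
		"nosql", "51.5072", "0.1275", os.Getenv("MEETUP_API_KEY"))

	response, err := httpClient.Get(uri)
	if err != nil {
		log.Fatal(err)
	}
	defer response.Body.Close()

	var target groupsResponse
	if err := json.NewDecoder(response.Body).Decode(&target); err != nil {
		log.Fatal(err)
	}

	for _, g := range target.Results {
		for _, t := range g.Topics {
			fmt.Println(t.URLKey)
		}
	}
}

The trade-off is a few extra type declarations in exchange for dropping all of the type assertions, which arguably reads better once the script grows.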

Once I’ve got these topics the next step is to make more API calls to get the groups for those topics.

I want to make those API calls in parallel while making sure I don’t exceed the API’s rate limit restrictions, and I think I can make use of goroutines, channels, and timers to do that. But that’s for another post!
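For what it’s worth, one rough shape that could take is to use a time.Tick channel as a simple rate limiter and fan the requests out across goroutines, collecting the results on a channel. Everything below (the topic list, the tick interval, and what gets printed) is an illustrative assumption on my part, not the actual follow-up; the key, lat, and lon parameters are omitted for brevity.

package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

func main() {
	// Hypothetical topics taken from the previous step.
	topics := []string{"nosql", "graph-databases", "big-data"}
	httpClient := &http.Client{Timeout: 10 * time.Second}

	// Allow a new request to start at most once every 500ms.
	limiter := time.Tick(500 * time.Millisecond)

	var wg sync.WaitGroup
	results := make(chan string, len(topics))

	for _, t := range topics {
		<-limiter // wait for the next tick before firing the request
		wg.Add(1)
		go func(topic string) {
			defer wg.Done()
			uri := fmt.Sprintf("https://api.meetup.com/2/groups?&topic=%s", topic)
			response, err := httpClient.Get(uri)
			if err != nil {
				results <- fmt.Sprintf("%s: error: %v", topic, err)
				return
			}
			defer response.Body.Close()
			results <- fmt.Sprintf("%s: %s", topic, response.Status)
		}(t)
	}

	wg.Wait()
	close(results)

	for line := range results {
		fmt.Println(line)
	}
}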

Categories: Programming

Southeast Asian indie game developers find success on Google Play

Android Developers Blog - Fri, 01/20/2017 - 18:46
Posted by Vineet Tanwar, Business Development Manager, Google Play

Indie game developers bring high quality, artistic, and innovative content to Google Play and raise the bar for all developers in the process. In fact, they also make up a large portion of our 'Editor's Choice' recommended titles.
Southeast Asia, in particular, has a vibrant indie game developer ecosystem, and we've been working closely with them to provide tools that help them build successful businesses on Google Play. Today, we're sharing stories from three Indie developers based in Singapore, Vietnam, and Indonesia, who joined us at our 'Indie Game Developers Day' workshops in May 2016 and all of whom have experienced significant growth since.

Inzen Studio from Singapore learned how to use store listing experiments and has improved the conversion rate of their newly launched game Dark Dot by 25%. Indonesia based studio, Niji Games, creator of Cute Munchies, implemented 'Saved Games' and 'Events and Quests' from Google Play games services to significantly improve user retention, and also earned an 'Editor's Choice' badge in the process. Ho Chi Minh City based developer, VGames, optimized monetization and introduced new paid products for their game Gungun online, and grew revenue by over 100%.


Indie game developers who are interested in meeting members of Google Play and who would like to work more closely with us are invited to join our next round of SEA workshops in March 2017. To apply for these events, just fill in this form and we will reach out to you.


Categories: Programming

App Security Improvements: Looking back at 2016

Android Developers Blog - Thu, 01/19/2017 - 23:46
Posted by Rahul Mishra, Android Security Program Manager
In April 2016, the Android Security team described how the Google Play App Security Improvement (ASI) program has helped developers fix security issues in 100,000 applications. Since then, we have detected and notified developers of 11 new security issues and provided developers with resources and guidance to update their apps. Because of this, over 90,000 developers have updated over 275,000 apps!
ASI now notifies developers of 26 potential security issues. To make this process more transparent, we introduced a new page where developers can find information about all these security issues in one place. This page includes links to help center articles containing instructions and additional support contacts. Developers can use this page as a resource to learn about new issues and keep track of all past issues.

Make sure to check out our new Security for Android Developers page, which highlights the latest security posts, security best practices documents and security checklist. These resources are all aimed at improving your understanding of general security concepts and giving you examples that can help you address app-specific issues.

How you can help:
  • For feedback or questions, please reach out to us through the Google Play Developer Help Center.
  • To report potential security issues in apps, email us at security+asi@android.com.
Categories: Programming

Android Developer Story: Wallapop improves user conversions with store listing experiments on Google Play

Android Developers Blog - Thu, 01/19/2017 - 17:42
Posted by Lily Sheringham, Developer Marketing, Google Play
Wallapop is a mobile app developer based in Barcelona, Spain. The app gives users a platform for buying and selling things with people nearby, a virtual flea market powered by geolocation. Wallapop now has over 70% of their user base on Android.

Watch Agus Gomez, Co-Founder & CEO, and Marta Gui, Growth Hacking Manager, explain how using store listing experiments has increased their conversion rate by 17%, and has allowed them to optimize organic installs.


Learn more about store listing experiments. Get the Playbook for Developers app to stay up-to-date with more features and best practices that will help you grow a successful business on Google Play.


Categories: Programming

Tips from developers Peak and Soundcloud on how to grow your startup on Google Play

Android Developers Blog - Thu, 01/19/2017 - 10:02

Posted by Francesca Di Felice, Developer Marketing at Google Play
At Playtime 2016, Google Play's series of developer events, we met with top app and game developers from around the world to share learnings on how to build successful businesses on Google Play. Several startups, including game developer Peaklabs and audio platform SoundCloud, presented on stage their own best practices for growth, which you might find helpful.

Testing for growth, by Peak

Hear from Kevin Shanahan, Product Manager from Peak, a brain training app, on how to grow sustainably.



  • Test lots of ideas: You can't be sure of what will work and what won't, so you need to test lots of ideas. Peak ran four different tests to try to increase conversions to Pro (their subscriber offering):
  1. Made the ability to replay games a Pro feature
  2. Reduced price of Pro by 25% in top 2 markets
  3. Bundled add-on modules from partners into Pro
  4. Showed a preview of Pro-only content
  One of these tests resulted in a 50% increase in conversions.

  • Get the basics right: Start with a great product and have a data-informed culture. Don't only test app features; experimenting with your store listing using store listing experiments is also important.
  • Build a robust A/B testing process: Having a well-defined A/B testing process and a system for tracking your experiments is key to testing quickly and effectively.

Improving user retention, by SoundCloud

Andy Carvell, former Product Manager at SoundCloud, an online audio distribution platform that enables its users to upload, record, promote, and share their originally-created sounds, explains how they focus on retention to improve growth.

 
  • Design your retention strategy: Apps with poor retention grow slowly. To increase your retention you should:
    • Convert new users to repeat visitors by providing a strong onboarding experience for new users and taking a high-touch approach during the first days and weeks.
    • Increase visit frequency within this group by providing frequent, timely, and relevant messaging about content or activity on the platform.
    • Target returning users who haven't been seen recently, the 'at risk of churn' group, by giving them reasons to come back for another session before you lose them.
    • Re-activate lapsed (long-term churned) users with campaigns to remind them about your app and offer an incentive to return.
  • Build 'growth machines': Create repeatable processes that testing has shown to positively impact retention, keep users engaged, and prevent churn.
  • Use activity notifications in a personalised and effective way: At SoundCloud there are plenty of things that happen when users are not in the app that might be relevant to them, for example new content releases or social interactions. They tested 5 new notification types, always keeping a control group to better keep track of the impact, and managed to increase retention by 5%. Watch the video above for more of Andy's tips on making better use of notifications.

Other speakers, such as Silicon Valley VC Greylock, have also shared their tips for startup growth. Watch more sessions from this year's Playtime events to learn best practices from other apps and game partners, and the Google Play team. Get the Playbook for Developers app to stay up to date with news and tips to help you grow a successful business on Google Play.

Categories: Programming

Project vs product teams

Actively Lazy - Wed, 01/18/2017 - 22:45

One of the hardest things for companies trying to be agile is how to structure teams. Back in the bad-old days, teams would form around a project. Then six months later, everyone would dissipate and go onto new teams. By the time a team has formed and become effective it is ripped apart again. You get no sense of ownership, no continuity.


Nowadays everyone knows that projects are bad, you need scrum teams instead. So a scrum team is formed with a product owner to prioritise the work. But what often happens is that what gets prioritised onto the backlog is a project in bite-size pieces. For example, I saw one team that ran out of work to do. The backlog was empty because, except for bugs, none of the outstanding projects had been signed off. There’s that word again. Project.

Behind the scenes a scrum team often becomes a slightly better way of delivering projects. You get the benefits of team consistency and continuity and the added benefit that the business can carry on thinking of the work in terms of projects. The downside of this approach is the scrum team can lack clear focus: there’s no overarching goal for the team. From sprint to sprint the focus might change as the relative importance of different projects changes. This makes it hard for the team to feel committed to a big idea, to some greater purpose. It ends up an endless procession through the backlog.

Why does this happen? I think it comes down to money. Somebody, somewhere is watching the money. Somebody wants to know “if I spend £x here, how much am I going to make back and by when?” The idea of the project is very easy to fit into this model. The team costs £x per day. The project is estimated to take n days. It’s expected to deliver £y profit. From this we can calculate the expected return on our investment. The trouble is, most of these numbers are entirely made up. If not fundamentally unknowable.

Let’s start with the obvious one: how long is the project going to take? Really, we still actually ask this question? Have we learnt nothing from agile? It seems not: many, many people still think about the world in terms of delivery dates and certainties. When will we learn that the best way is always to deliver a little, inspect the results; then decide whether to keep on the same path or deliver something different. You can’t have an end date with this approach – it’s not even meaningful. Keep on delivering one thing until there’s something better you could be doing, then go do that. Rinse, repeat.

What about the other question: how much profit will this project make? Well, let’s assume for now that the entire project, as originally conceived, will actually be delivered (as if this ever actually happens in software). Can you tell how much money it’s made you? Really? Independent of every other change that the organisation has made at the same time? From software to operations to marketing?

Now sometimes you can come up with a good estimate of expected returns, but often it’s just a pipe dream. But, if you’re vigorously disagreeing with me: I assume you’re religiously tracking actual costs and feed that back into future project planning? I have seen very, very few companies actually do this. If you’re not actually measuring how much you made from a project, how do you know your original estimates were any good?

So we have two made up numbers, both almost certainly unachievable in practice – but we use this to dictate the team’s priority order. I once saw a project signed off and jump to the top of the priority order because it predicted something like a 10% uplift in revenue for the company. This was a very large number for a single project and clearly ridiculous to everyone involved, but it was signed off and duly implemented. Revenue projections later that year were re-estimated downwards and downwards due to difficult market conditions. And some blatant over-estimation. And yet, this non-science is what passes for return on investment planning in all-too-many organisations.

What’s the alternative? The best teams I’ve seen have been structured around products. Give the team complete ownership of one or more products. Any and all changes to those products go via the product team. A product owner guides product direction. As an area expert they are entrusted to decide what are the most important things to work on. They can discuss long term directions with the team and have a consistent, coherent vision for where the product will evolve towards. While, inevitably, some changes are large and sufficiently inter-dependent that they become a project (if one part is delivered then it all must be); the team understands the business benefit of the solution and can evolve the implementation to meet the underlying business need, instead of trying to satisfy some arbitrary internal project deadline. This gives teams the complete freedom to inspect and adapt each iteration. With an understanding of the business priorities for their products they can make sensible trade-offs as each iteration surfaces more information.

What about the money? It’s hard, but let’s be honest about it: return on investment is not clear with the project model of software delivery, so accept that it isn’t clear. The hard thing is working out which products are making you money and which could make more money if more was invested. The trouble is I’ve worked in teams where, honestly, the product was so profitable with so little scope for uplift that the most cost-effective thing to do would have been to fire the dev team and just keep milking the cash cow.

So how can we decide where to spend our money? I think the empirical model of agile could fit here perfectly well. Let’s assume for a minute that the amount of money you have for the delivery team as a whole is fixed – your only choice is where to put it. How much to spend on product A vs how much on product B. Can you estimate how much money each product is making for the business? How is it changing over time?

If one product is making more profit each month – if it’s a growing product – then invest more resources there, to accelerate the growth. If a product is slowing down, with smaller increases in profit each month, or even with profit decreasing – then stop spending so much money on it. This naturally means that your money goes where it seems to be delivering the biggest return. Put your money where it seems to be delivering results.

The hardest thing with this is that it takes time to get the feedback: changing resource allocation could take months to show up on the bottom line. But at least we’re being honest about the impact our decisions have. Instead of trying to micro-manage delivery via projects, manage where resources are put and let the product owner manage the priority order.


Categories: Programming, Testing & QA

Welcoming Fabric to Google

Google Code Blog - Wed, 01/18/2017 - 22:26
Originally posted on the Firebase Blog

Posted by Francis Ma, Firebase Product Manager

Almost eight months ago, we launched the expansion of Firebase to help developers build high-quality apps, grow their user base, and earn more money across iOS, Android and the Web. We've already seen great adoption of the platform, which brings together the best of Google's core businesses from Cloud to mobile advertising.

Our ultimate goal with Firebase is to free developers from so much of the complexity associated with modern software development, giving them back more time and energy to focus on innovation.

As we work towards that goal, we've continued to improve Firebase, working closely with our user community. We recently introducedmajor enhancements to many core features, including Firebase Analytics, Test Lab and Cloud Messaging, as well as added support for game developers with a C++ SDK and Unity plug-in.


We're deeply committed to Firebase and are doubling down on our investment to solve developer challenges.
Fabric and Firebase Joining Forces

Today, we're excited to announce that we've signed an agreement to acquire Fabric to continue the great work that Twitter put into the platform. Fabric will join Google's Developer Product Group, working with the Firebase team. Our missions align closely: help developers build better apps and grow their business.
As a popular, trusted tool over many years, we expect that Crashlytics will become the main crash reporting offering for Firebase and will augment the work that we have already done in this area. While Fabric was built on the foundation of Crashlytics, the Fabric team leveraged its success to launch a broad set of important tools, including Answers and Fastlane. We'll share further details in the coming weeks after we close the deal, as we work closely together with the Fabric team to determine the most efficient ways to further combine our strengths. During the transition period, Digits, the SMS authentication service, will be maintained by Twitter.


The integration of Fabric is part of our larger, long-term effort of delivering a comprehensive suite of features for iOS, Android and mobile Web app development.

This is a great moment for the industry and a unique opportunity to bring the best of Firebase with the best of Fabric. We're committed to making mobile app development seamless, so that developers can focus more of their time on building creative experiences.
Categories: Programming

Get the guide to finding success in new markets on Google Play

Android Developers Blog - Wed, 01/18/2017 - 20:48
Posted by Lily Sheringham, Developer Marketing at Google Play


With just a few clicks, you can publish an app to Google Play and access a global audience of more than 1 billion 30-day active users. Finding success in global markets means considering how each market differs, planning for high quality localization, and tailoring your activity to the local audience. The new Going Global Playbook provides best practices and tips, with advice from developers who've successfully gone global.

This guide includes advice to help you plan your approach to going global, prepare your app for new markets, and take your app to market; it also includes data and insights for key countries and other useful resources.

This ebook joins others that we've recently published including The Building for Billions Playbook and The News Publisher Playbook. All of our ebooks are promoted in the Playbook for Developers app, which is where you can stay up to date with all the news and best practices you need to find success on Google Play.

Categories: Programming

We’re Listening

Recently, the IEEE Computer Society’s popular SE Radio podcast included a sponsored advertising campaign that sparked a negative reaction among many of those involved with producing SE Radio, as well as among a number of listeners. In response to that reaction, the Computer Society has reviewed the advertisement and removed it from the podcast. In […]
Categories: Programming

Meet the 20 finalists of the Google Play Indie Games Contest

Android Developers Blog - Wed, 01/18/2017 - 09:17
Posted by Matteo Vallone, Google Play Games Business Development

Back in November, we launched the Google Play Indie Games Contest for developers from 15 European countries, to celebrate the passion and innovation of the indie community in the region. The contest will reward the winners with exposure to industry experts and players worldwide, as well as other prizes that will showcase their art and help them grow their business on Android and Google Play.

Thank you to the nearly 1000 of you who submitted high quality games in all types of genres! Your creativity, enthusiasm and dedication have once again impressed us and inspired us. We had a very fun time testing and judging the games based on fun, innovation, design excellence and technical and production quality, and it was challenging to select only 20 finalists:

Meet the 20 finalists
(In alphabetical order)

  • Blind Drive (coming soon) by Lo-Fi People (Israel)
  • Causality (coming soon) by Loju (United Kingdom)
  • Crap! I'm Broke: Out of Pocket by Arcane Circus (Netherlands)
  • Egz by Lonely Woof (France)
  • Ellipsis by Salmi GmbH (Germany)
  • Gladiabots by GFX47 (France)
  • Happy Hop: Kawaii Jump by Platonic Games (Spain)
  • Hidden Folks (coming soon) by Adriaan de Jongh (Netherlands)
  • Lichtspeer (coming soon) by Lichthund (Poland)
  • Lost in Harmony by Digixart Entertainment (France)
  • Mr Future Ninja (coming soon) by Huijaus Studios (Finland)
  • Paper Wings by Fil Games (Turkey)
  • PinOut by Mediocre (Sweden)
  • Power Hover by Oddrok (Finland)
  • Reigns by Nerial (United Kingdom)
  • Rusty Lake: Roots by Rusty Lake (Netherlands)
  • Samorost 3 by Amanita Design (Czech Republic)
  • The Battle of Polytopia by Midjiwan AB (Sweden)
  • twofold inc. by Grapefrukt games (Sweden)
  • Unworded (coming soon) by Bento Studio (France)

Check out the prizes

All 20 finalists are getting:
  • The opportunity to exhibit and showcase their game at the final event held at the Saatchi Gallery in London, on 16th February 2017.
  • Promotion of their game on a London billboard for one month.
  • Two tickets to attend a 2017 Playtime event. This is an invitation-only event for top apps and games developers on Google Play.
  • One Pixel XL smartphone.
At the event at Saatchi, the finalists will also have a chance to make it to the next rounds and win additional prizes, including:
  • YouTube influencer campaigns worth up to 100,000 EUR.
  • Premium placements on Google Play.
  • Tickets to Google I/O 2017 and other top industry events.
  • Promotions on our channels.
  • Special prizes for the best Unity game.
  • And more!

Come support them at the final event

At the final event, attendees will have a say in which 10 of these finalists get to pitch their games to the jury, which will then decide on the contest winners who receive the top prizes.

Register now to join us in London, meet the developers, check out their great games, vote for your favourites, and have fun with various industry experts and indie developers.



A big thank you again to everyone who entered and congratulations to the finalists. We look forward to seeing you at the Saatchi Gallery in London on 16th February.
Categories: Programming

Silence speaks louder than words when finding malware

Android Developers Blog - Tue, 01/17/2017 - 22:59
Posted by Megan Ruthven, Software Engineer
In Android Security, we're constantly working to better understand how to make Android devices operate more smoothly and securely. One security solution included on all devices with Google Play is Verify apps. Verify apps checks if there are Potentially Harmful Apps (PHAs) on your device. If a PHA is found, Verify apps warns the user and enables them to uninstall the app.
But sometimes devices stop checking up with Verify apps. This may happen for a non-security related reason, like buying a new phone, or it could mean something more concerning is going on. When a device stops checking up with Verify apps, it is considered Dead or Insecure (DOI). An app with a high enough percentage of DOI devices downloading it is considered a DOI app. We use the DOI metric, along with the other security systems, to help determine if an app is a PHA in order to protect Android users. Additionally, when we discover vulnerabilities, we patch Android devices with our security update system.

This blog post explores the Android Security team's research to identify the security-related reasons that devices stop working and prevent it from happening in the future.
Flagging DOI Apps

To understand this problem more deeply, the Android Security team correlates app install attempts and DOI devices to find apps that harm the device in order to protect our users.
With these factors in mind, we then focus on 'retention'. A device is considered retained if it continues to perform periodic Verify apps security check ups after an app download. If it doesn't, it's considered potentially dead or insecure (DOI). An app's retention rate is the percentage of all retained devices that downloaded the app in one day. Because retention is a strong indicator of device health, we work to maximize the ecosystem's retention rate.

Therefore, we use an app DOI scorer, which assumes that all apps should have a similar device retention rate. If an app's retention rate is a couple of standard deviations lower than average, the DOI scorer flags it. A common way to calculate the number of standard deviations from the average is called a Z-score. The equation for the Z-score is below.

  • N = Number of devices that downloaded the app.
  • x = Number of retained devices that downloaded the app.
  • p = Probability that a device that downloads any app will be retained.
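Assuming the standard normal approximation to the binomial, which lines up with the three quantities defined above, the score works out to Z = (x - Np) / sqrt(Np(1 - p)). Here is a minimal sketch of that calculation; the function name, threshold check, and example numbers are mine, not Google's.

package main

import (
	"fmt"
	"math"
)

// doiScore computes the Z-score of an app's retention rate, assuming the
// standard normal approximation to the binomial: Z = (x - Np) / sqrt(Np(1-p)).
// n is the number of devices that downloaded the app, x the number of those
// that were retained, and p the probability that any downloading device is retained.
func doiScore(n, x, p float64) float64 {
	return (x - n*p) / math.Sqrt(n*p*(1-p))
}

func main() {
	// Hypothetical numbers: 10,000 downloads, 9,000 retained, against an
	// ecosystem-wide retention probability of 0.95.
	score := doiScore(10000, 9000, 0.95)
	fmt.Printf("DOI score: %.2f\n", score)

	// A score much less than -3.7 would flag the app for further review.
	if score < -3.7 {
		fmt.Println("flagged: statistically significant low retention")
	}
}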

In this context, we call the Z-score of an app's retention rate its DOI score. The DOI score indicates that an app has a statistically significantly lower retention rate if the Z-score is much less than -3.7. This means that if the null hypothesis were true, there would be much less than a 0.01% chance of the Z-score's magnitude being that high. Here, the null hypothesis is that the app's correlation with a lower retention rate is accidental and independent of what the app actually does.
This allows for percolation of extreme apps (with low retention rate and high number of downloads) to the top of the DOI list. From there, we combine the DOI score with other information to determine whether to classify the app as a PHA. We then use Verify apps to remove existing installs of the app and prevent future installs of the app.

Difference between a regular and DOI app download on the same device.


Results in the wild
Among others, the DOI score flagged many apps in three well-known malware families — Hummingbad, Ghost Push, and Gooligan. Although they behave differently, the DOI scorer flagged over 25,000 apps in these three families of malware because they can degrade the Android experience to such an extent that a non-negligible number of users factory reset or abandon their devices. This approach provides us with another perspective to discover PHAs and block them before they gain popularity. Without the DOI scorer, many of these apps would have escaped the extra scrutiny of a manual review.
The DOI scorer and all of Android's anti-malware work is one of multiple layers protecting users and developers on Android. For an overview of Android's security and transparency efforts, check out our page.


Categories: Programming

An Inferno on the Head of a Pin

Coding Horror - Jeff Atwood - Tue, 01/17/2017 - 12:37

Today's processors contain billions of heat-generating transistors in an ever shrinking space. The power budget might go from:

  • 1000 watts on a specialized server
  • 100 watts on desktops
  • 30 watts on laptops
  • 5 watts on tablets
  • 1 or 2 watts on a phone
  • 100 milliwatts on an embedded system

That's four orders of magnitude. Modern CPU design is the delicate art of placing an inferno on the head of a pin.

Look at the original 1993 Pentium compared to the 20th anniversary Pentium:

Pentium (1993)
66 MHz, 16 KB L1 cache, 3.2 million transistors

Pentium G3258 20th Anniversary Edition (2014)
3.2 GHz × 2 cores, 128 KB L1 / 512 KB L2 / 3 MB L3 cache, 1.4 billion transistors

I remember cooling the early CPUs with simple heatsinks; no fan. Those days are long gone.

A roomy desktop computer affords cooling opportunities (and thus a watt budget) that a laptop or tablet could only dream of. How often will you be at peak load? For most computers, the answer is "rarely". The smaller the space, the higher the required performance, the more … challenging your situation gets.

Sometimes, I build servers.

Inspired by Google and their use of cheap, commodity x86 hardware to scale on top of the open source Linux OS, I also built our own servers. When I get stressed out, when I feel the world weighing heavy on my shoulders and I don't know where to turn … I build servers. It's therapeutic.

Servers are one of those situations where you may be at full CPU load more often than not. I prefer to build 1U servers which is the smallest rack mountable unit, at 1.75" total height.

You get plenty of cores on a die these days, so I build single CPU servers. One reason is price; the other reason is that clock speed declines proportionally to the number of cores on a die (this is for the Broadwell Xeon V4 series):

  CPU       Cores  Clock    Price
  E5-1630   4      3.7 GHz  $406
  E5-1650   6      3.6 GHz  $617
  E5-1680   8      3.4 GHz  $1723
  E5-2680   12     2.4 GHz  $1745
  E5-2690   14     2.6 GHz  $2090
  E5-2697   18     2.3 GHz  $2702

Yes, there are server CPUs with even more cores, but if you have to ask how much they cost, you definitely can't afford them … and they're clocked even slower. What we do is serviced better by a smaller number of super fast cores than a larger number of slow cores, anyway.

With that in mind, consider these two Intel Xeon server CPUs:

As you can see from the official Intel product pages for each processor, they both have a TDP heat budget of 140 watts. I'm scanning the specs, thinking maybe this is an OK tradeoff.

Unfortunately, here's what I actually measured with my trusty Kill-a-Watt for each server build as I performed my standard stability testing, with completely identical parts except for the CPU:

  • E5-1630: 40w idle, 170w mprime
  • E5-1650: 55w idle, 250w mprime

I am here to tell you that Intel's TDP figure of 140 watts for the 6 core version of this CPU is a terrible, scurrilous lie!

This caused a bit of a problem for me as our standard 1U server build now overheats, alarms, and throttles with the 6 core CPU — whereas the 4 core CPU was just fine. Hey Intel! From my home in California, I stab at thee!

But, you know..

Better Heatsink

The 1.75" maximum height of the 1U server form factor doesn't leave a lot of room for creative cooling of a CPU. But you can switch from an Aluminum cooler to a Copper one.

Copper is significantly more expensive, plus heavier and harder to work with, so it's generally easier to throw an ever-larger mass of aluminum at the cooling problem when you can. But when space is a constraint, as it is with a 1U server, copper dissipates more heat in the same form factor.

The famous "Ninja" CPU cooler came in identical copper and aluminum versions so we can compare apples to apples:

  • Aluminum Ninja — 24°C rise over ambient
  • Copper Ninja — 17°C rise over ambient

You can scale the load and the resulting watts of heat by spinning up MPrime threads for the exact number of cores you want to "activate", so that's how I tested:

  • Aluminum heatsink — stable at 170w (mprime threads=4), but heat warnings with 190w (mprime threads=5)
  • Copper heatsink — stable at 190w (mprime threads=5) but heat warnings with 230w (mprime threads=6)

Each run has to be overnight to be considered successful. This helped, noticeably. But we need more.

Better Thermal Interface

When it comes to server builds, I stick with the pre-applied grey thermal interface pad that comes on the heatsinks. But out of boredom and a desire to experiment, I …

  • Removed the copper heatsink.
  • Used isopropyl alcohol to clean both CPU and heatsink.
  • Applied fancy "Ceramique" thermal compound I have on hand, using an X shape pattern.

I wasn't expecting any change at all, but to my surprise with the new TIM applied it took 5x longer to reach throttle temps with mprime threads=6. Before, it would thermally throttle within a minute of launching the test, and after it took ~10 minutes to reach that same throttle temp. The difference was noticeable.

That's a surprisingly good outcome, and it tells us the default grey goop that comes pre-installed on heatsinks is ... not great. Per this 2011 test, the difference between worst and best thermal compounds is 4.3°C.

But as Dan once bravely noted while testing Vegemite as a thermal interface material:

If your PC's so marginal that a CPU running three or four degrees Celsius warmer will crash it [or, for modern CPUs, cause the processor to auto-throttle itself and substantially reduce system performance], the solution is not to try to edge away from the precipice with better thermal compound. It's to make a big change to the cooling system, or just lower the darn clock speed.

An improved thermal interface just gets you there faster (or slower); it doesn't address the underlying problem. So we're not done here.

Ducted Airflow

Most, but not all, of the SuperMicro cases I've used have included a basic fan duct / shroud that lies across the central fans and the system. Given that the case fans are pretty much directly in front of the CPU anyway, I've included the shroud in the builds out of a sense of completeness more than any conviction that it was doing anything for the cooling performance.

This particular server case, though, did not include a fan duct. I didn't think much about it at the time, but considering the heat stress this 6-core CPU and its 250 watt heat generation was putting on our 1U build, I decided I should build a quick duct out of card stock and test it out.

(I know, I know, it's a super janky duct! But I was prototyping!)

Sure enough, this duct, combined with the previous heatsink and TIM changes, enabled the server to remain stable overnight with a full MPrime run of 12 threads.

I think we've certainly demonstrated the surprising (to me, at least) value of fan shrouds. But before we get too excited, let's consider one last thing.

Define "CPU Load"

Sometimes you get so involved with solving the problem at hand that you forget to consider whether you are, in fact, solving the right problem.

In these tests, we defined 100% CPU load using MPrime. Some people claim MPrime is more of a power virus than a real load test, because it exerts so much heat pressure on the CPUs. I initially dismissed these claims since I've used MPrime (and its Windows cousin, Prime95) for almost 20 years to test CPU stability, and it's never let me down.

But I did more research and I found that MPrime, since 2011, uses AVX2 instructions extensively on newer Intel CPUs:

The newer versions of Prime load in a way that they are only safe to run at near stock settings. The server processors actually downclock when AVX2 is detected to retain their TDP rating. On the desktop we're free to play and the thing most people don't know is how much current these routines can generate. It can be lethal for a CPU to see that level of current for prolonged periods.

That's why most stress test programs alternate between different data pattern types. Depending on how effective the rotation is, and how well that pattern causes issues for the system timing margin, it will, or will not, catch potential for instability. So it's wise not to hang one's hat on a single test type.

This explains why I saw such a large discrepancy between other CPU load programs like BurnP6 and MPrime.

MPrime does an amazing job of generating the type of CPU load that causes maximum heat pressure. But unless your servers regularly chew through zillions of especially power-hungry AVX2 instructions this may be completely unrepresentative of any real world load your server would actually see.

Your Own Personal Inferno

Was this overkill? Probably. Even with the aluminum heatsink, no change to thermal interface material, and zero ducting, we'd probably see no throttling under normal use in our server rack. But I wanted to be sure. Completely sure.

Is this extreme? Putting 140W TDP of CPU heat in a 1U server? Not really. Nick at Stack Overflow told me they just put two 22-core, 145W TDP Xeon 2699v4 CPUs and four 300W TDP GPUs in a single Dell C4130 1U server. I'd sure hate to be in the room when those fans spin up. I'm also a little afraid to find out what happens if you run MPrime plus full GPU load on that box.

Servers are an admittedly rare example of big CPU performance heat and size tradeoffs, one of the few left. It is fun to play at the extremes, but the SoC inside your phone makes the same tradeoffs on a smaller scale. Tiny infernos in our pockets, each and every one.

Categories: Programming

Agile Results Refresher for 2017

I’ve put together a quick refresher on Agile Results for 2017:

Agile Results Refresher for 2017

I tried to keep it simple and to the point, but at the same time help new folks who don’t know what Agile Results is to really sink their teeth into it.

For example, one important idea is that it’s effectively a system to use your best energy for your best results.

I’ve seen people struggle with getting results for years, and one of the most common patterns I see is they use their worst energy for their most important activities.

Worse, they don’t know how to change their energy.

So now they are doing work they hate, because they feel like crap, and this feeling becomes a habit.

The irony is that they would enjoy their work if they just knew how to flip the switch and reimagine their work as an opportunity to experiment and explore their full potential.

Work is actually one of the ultimate forms of self-expression.

Your work can be your dojo where you practice building your abilities, creating your competencies, and sharpening your skills in all areas of your life.

But the real key is to bridge work and life through your values.

If you can find a way to bake your values into how you show up each day, whether at home or in the office, that’s the real secret to living the good life.

But what’s the key to living the great life?

The key to living the great life is to give your best where you have your best to give in the service of others.

Agile Results is a way to help you do that.

Check out the refresher on Agile Results and use the Rule of Three to rule your day.

If you already know Agile Results, teach three people and help them live and lead a more inspired life.

Game on.

Categories: Architecture, Programming

Where to Look for Trends and Insights

“The best is yet to come.”

It can be tough creating the future among the chaos.

The key is to get a good handle on the real and durable trends that lie beneath the change and churn that’s all around you.

But how do you get a good handle on the key disruptions, the key trends, and the macro-level patterns that matter?

Draw from multiple sources that help you see the big picture in a simple way.

To get started, I’m going to share the key sources for trends and insights that I draw from (beyond my own experience and what I learn from working with customers and colleagues from around the world).

Here are the key sources for trends and insights that I draw from:

  1. Age of Context (Book), by Robert Scoble and Shel Israel.  Age of Context provides a walkthrough of 5 technological forces shaping our world: 1) mobile devices, 2) social media, 3) big data, 4) sensors, 5) location-based services.
  2. Cognizant – A global leader in business and technology services, helping clients bring the future of work to life — today.
  3. DaVinci Institute – The DaVinci Institute is a non-profit futurist think tank. But unlike traditional research-based consulting organizations, the DaVinci Institute operates as a working laboratory for the future human experience: a community of entrepreneurs and visionary thinkers intent on discovering the (future) opportunities created when cutting-edge technology meets the rapidly changing human world.
  4. Faith Popcorn – The “Trend Oracle.”  Faith is a key strategist for BrainReserve and trusted advisor to the CEOs of The Fortune 500.  She’s identified movements such as, “Cocooning,” “AtmosFear,” “Anchoring,” “99 Lives,” “Icon Toppling” and “Vigilante Consumer.”
  5. Fjord – Fjord produces an annual report to help guide you through challenges, experiences, and opportunities you, your organization, employees, customers, and stakeholders will likely face.  Check out the Fjord Trends 2017 report on SlideShare.
  6. Foresight Factory (Formerly called Future Foundation) – Future focused, applied, global consumer insight. Universal trends that shape tastes and determine demand the world over; sector trends that are critical to success in specific industries; custom reports produced in partnership with clients and focus reports on key markets, regions and topics.
  7. Forrester – Research to help you make better decisions in a world where technology is radically changing your customer.
  8. Gartner – The world’s leading information technology research and advisory company.
  9. Global Goals – In September 2015, 193 world leaders agreed to 17 Global Goals for Sustainable Development. If these Goals are completed, it would mean an end to extreme poverty, inequality and climate change by 2030.
  10. IBM Executive Exchange – An issues-based portal providing news, thought leadership, case studies, solutions, and social media exchange for C-level executives.
  11. Jim Carroll – A world-leading futurist, trends, and innovation expert, with a track record for strategic insight.  He is author of the book The Future Belongs to Those Who Are Fast, and he shares major trends, as well as trends by industry, on his site.
  12. Motley Fool – To educate, amuse, and enrich.
  13. No Ordinary Disruption (Book) – This is a deep dive into the future, backed with data, stories, and insight.  It highlights four forces colliding and transforming the global economy: 1) the rise of emerging markets, 2) the accelerating impact of technology on the natural forces of market competition, 3) an aging world population, 4) accelerating flows of trade, capital, people, and data.
  14. O’Reilly Ideas – Insight, analysis, and research about emerging technologies.
  15. Richard Watson – A futurist author, speaker and scenario planner, and the chart maker behind The Table of Trends and Technologies for the World in 2020 (PDF). Watson is author of the What’s Next Top Trends Blog. Watson is the author of 4 books: Future Files, Future Minds, Futurevision, and The Future: 50 Ideas You Really Need to Know.
  16. Sandy Carter — Sandy Carter is IBM Vice President  of Social Business and Collaboration, and author of The New Language of Marketing 2.0, The New Language of Business, and Get Bold: Using Social Media to Create a New Type of Social Business.  She’s not just fun to read or watch – she has some of the best insight on social innovation.
  17. The Industries of the Future (Book), by Alec Ross.  Alec Ross explains what’s next for the world: the advances and stumbling blocks that will emerge in the next ten years, and how we can navigate them.
  18. The Second Machine Age, by Erik Brynjolfsson and Andrew McAfee.  Erik Brynjolfsson and Andrew McAfee identify the best strategies for survival and offer a new path to prosperity amid exponential technological change. These include revamping education so that it prepares people for the next economy instead of the last one, designing new collaborations that pair brute processing power with human ingenuity, and embracing policies that make sense in a radically transformed landscape.
  19. ThoughtWorks Technology Radar – Thoughts from the ThoughtWorks team on the technology and trends that are shaping the future.
  20. Trend Hunter – Each day, Trend Hunter features a daily dose of micro-trends, viral news and pop culture. The most popular micro-trends are featured on Trend Hunter TV and later grouped into clusters of inspiration in our Trend Reports, a series of tools for professional innovators and entrepreneurs.
  21. Trends and Technologies for the World in 2020 (PDF) – Table of trends and technologies shaping the world in 2020.
  22. Trendwatching.com – Trendwatching.com helps forward-thinking business professionals in 180+ countries understand the new consumer and subsequently uncover compelling, profitable innovation opportunities.

While it might look like a short-list, it’s actually pretty deep.

It’s like a Russian nesting doll in that each source might lead you to more sources or might be the trunk of a tree that has multiple branches.

These sources of trends and insights have served me well and continue to serve me as I look to the future and try to figure out what’s going on.

But more importantly, they all inspire me in some way to create the future, rather than wait for it to just happen.

I’m a big fan of making things happen … you play the world, or the world plays you.

You Might Also Like

All Digital Transformation Articles

Digital Transformation Books

Consumer Trend Canvas

Trend Framework

101 Hacks for a Better Year

Categories: Architecture, Programming

Don't Build That Product

Xebia Blog - Sun, 01/15/2017 - 12:06
At the Agile Chef Conference I facilitated a workshop where participants could experience how Aikido can be used to resolve conflicts on the work floor as well by applying verbal Aikido. At the end of the session someone asked me to demonstrate the best defence against a sword attack; I responded by turning around and

Consumer Trend Canvas

Consumer Trends are a key building block for innovation.

If you are stuck coming up with innovation opportunities, part of the problem is that you are missing sources of insight.

And one of the best sources of insight is actually consumer trends.

One tool for helping you turn consumer trends into innovation opportunities is the Consumer Trend Canvas, by Trendwatching.com.

What I like about it is the simplicity, the elegance, and the fact that it’s similar in format to the Business Model Canvas.

The Consumer Trend Canvas is broken down into two simple sections:

  1. Analyze
  2. Apply

Pretty simple.

In terms of the overall canvas, it’s actually a map of the following 7 components:

  1. Basic Needs
  2. Drivers of Change
  3. Emerging Customer Expectations
  4. Inspiration
  5. Innovation Potential
  6. Who
  7. Your Innovations

From a narrative standpoint, you can think of it in terms of pains, needs, and desired outcomes for a particular persona, along with the innovation opportunities that flow from that simple frame.

The real beauty of the Consumer Trend Canvas is that it’s a question-driven approach to revealing innovation opportunities.

Here are the questions within each of the parts of the Consumer Trend Canvas:

  1. Which deep consumer needs & desires does this trend address?
  2. Why is this trend emerging now? What’s changing?
  3. What new consumer needs, wants, and expectations are created by the changes identified above? Where and how does this trends satisfy them?
  4. How are other businesses applying this trend?
  5. How and where could you apply this trend to your business?
  6. To which (new) customer groups could you apply this trend? What would you have to change?

When you put it all together, you have a quick and simple view of how a trend can lead to some potential innovations.

The power is in the simplicity and in the consolidation.

You Might Also Like

Trend Framework

8 Big Trends

10 High-Value Activities in the Enterprise

Hack a Happy New Year

Continuous Value Delivery the Agile Way

Categories: Architecture, Programming

Trend Framework

It’s that time of year when I like to take the balcony view to figure out where the world is going, at least some of the key trends.

I’ve long been a fan of the idea that while you can’t predict the future, you can take the long view and play out multiple future scenarios so you are ready for (most) anything.

But I’m an even bigger fan of the idea that rather than predict the future—create the future.

To do that, it helps to have a solid handle on the trends shaping the world.

To help make sense of the trends, I like to use mind tools and frameworks that help me see things more clearly.

One of my favorite tools for trends is the Trend Framework by Trendwatching.com

Trendwatching.com uses a framework to sort and catalog trends. 

To understand the future of consumerism, they use a framework of 16 Mega-Trends:

  1. Status Seekers.  The relentless, often subconscious, yet ever present force that underpins almost all consumer behavior.
  2. Betterment.  The universal quest for self-improvement.
  3. Human Brands.  Why personality and purpose will mean profit.
  4. Better Business.  Why “good” business will be good for business.
  5. Youniverse.  Make your consumers the center of their Youniverse.
  6. Local Love.  Why “local” is in, and will remain, loved.
  7. Ubitech.  The ever-greater pervasiveness of technology.
  8. Infolust.  Why consumers’ voracious appetite for (even more) information will only grow.
  9. Playsumers.  Who said business has to be boring?
  10. Ephemeral.  Why consumers will embrace the here, the now, and the soon-to-be-gone.
  11. Fuzzynomics.  The divisions between producers and consumers, brands, and customers will continue to blur.
  12. Pricing Pandemonium.  Pricing more fluid and flexible than ever.
  13. Helpful. Be part of the solution, not the problem.
  14. Joyning.  The eternal desire for connection, and the many (new) ways it can be satisfied.
  15. Post-Demographics.  The age of disrupted demographics.
  16. Remapped.  The epic power shifts in the global economy.

I’ve used these 16 Mega-Trends from the Trend Framework as a filter (well, maybe more accurately as idiot-guards and bumper-rails) for guiding how I look at consumer behaviors shaping the market.

In fact, this was one of the most helpful frameworks I used when putting together my Trends for 2016: The Year of the Bold.

As I create my master list of Trends for 2017, I’m finding this simple list of 16 Mega-Trends to be useful once again, to better understand all of the micro-trends that emerge on top of this foundation.

The Trend Framework makes it easier to see the graph of trends and to quickly make sense of why things are shaping the way they are.

Categories: Architecture, Programming