
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

SPaMCAST 435 – Allan Kelly, #NoProjects, Value

SPaMCAST Logo

http://www.spamcast.net

Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

The Software Process and Measurement Cast 435 features our interview with Allan Kelly.  Our discussion touched on the concepts behind #NoProjects.  Allan describes how the concept of a project leads to a number of unintended consequences.  Those consequences aren’t pretty.

Allan makes digital development teams more effective and improves delivery with continuous agile approaches to reduce delay and risk while increasing value delivered. He helps teams and smaller companies – including start-ups and scale-ups – with advice, coaching and training. Managers, product, and technical staff are all involved in his improvements. He is the originator of Retrospective Dialogue Sheets and Value Poker, the author of four books, including “Xanpan – team-centric Agile Software Development” and “Business Patterns for Software Developers”. On Twitter he is @allankellynet.

Re-Read Saturday News

This week we tackle Chapter 8 of Carol Dweck's Mindset: The New Psychology of Success (buy your copy and read along). Chapter 8 is titled "Changing Mindsets." The whole concept of mindsets would be an interesting footnote if we did not believe they could change. Chapter 8 drives home the point that has been made multiple times in the book: mindsets are malleable with self-awareness and a lot of effort. The question of whether all people want to be that self-aware will be addressed next week as we wrap up our re-read.

We are quickly closing in on the end of our re-read of Mindset.  I anticipate one more week.   The next book in the series will be Holacracy (Buy a copy today). After my recent interview with Jeff Dalton on Software Process and Measurement Cast 433, I realized that I had only read extracts from Holacracy by Brian J. Robertson, therefore we will read (first time for me) the whole book together.

Every week we discuss a chapter, then consider the implications of what we have "read" from the point of view of both pursuing an organizational transformation and using the material when coaching teams.

Remember to buy a copy of Carol Dweck’s Mindset and start the re-read from the beginning!

Visit the Software Process and Measurement Cast blog to participate in this and previous re-reads.

Next SPaMCAST

The next Software Process and Measurement Cast will feature our essay on incremental change approaches.  We will also have columns from Jeremy Berriault, who blogs at https://jberria.wordpress.com/, and from Jon M Quigley, who brings his column, the Alpha and Omega of Product Development, to the Cast. One of the places you can find Jon is at Value Transformation LLC.

 


Categories: Process Management


Own the Future of Management, with Me

NOOP.NL - Jurgen Appelo - Sun, 03/26/2017 - 16:53
You have just 5 days left to become my business partner.

As of today, I am no longer the only owner of the Management 3.0 business. I have co-owners. Yay!! And you can now join me as a co-owner too.

Management 3.0 is about better management with fewer managers. The ideas and practices help leaders and other change agents with managing for more happiness at work. And the brand was named the leader of the Third Wave of Agile, because of our focus on the entire business, rather than just on development practices and projects. By improving the management of organizations, we hope we are helping to make the world a better place.

As a co-owner of Management 3.0, you support and participate in my adventure. Either passively or actively, you help our team to offer healthy games, tools, and practices to more managers in more organizations.

As of last week, a Foundation owns the Management 3.0 business model, and this Foundation has issued virtual shares. The business has grown by more than 40% each year in 2014, 2015, and 2016. In other words, ownership of shares not only contributes to happier people in healthier organizations; it is also a smart business investment!

Until 31 March 2017, I sell virtual shares (officially: certificates) for EUR 50 per share. On 1 April 2017, I stop selling them for a while. I may continue selling more shares later, but probably at a higher price. And there are more reasons not to wait!

When you buy 10 or more shares before 1 April 2017, I send you a free copy of the book Managing for Happiness, personally signed by me on a unique hand-drawn bookplate.

When you buy 100 or more shares before 1 April 2017, you are entitled to a free one-hour keynote on location (excluding travel and accommodation expenses).

When you buy 1,000 or more shares before 1 April 2017, you gain the status of honored business partner, with special privileges and exclusive access to the team and me.

And everyone who buys shares has a chance to win one of my last eight copies of #Workout, the exclusive Limited Edition. Some people sell it for $2000+ on Amazon.

It is important to know that Management 3.0 is a global brand. I prefer that ownership is distributed across the world. I reserve the right not to sell too many shares to people in the same country. (And yes, it’s first come, first served.)

What are the next steps?

1. Check out my FAQ for all details (read it here);
2. Fill out the application form (APPLY HERE);
3. Sign the simple agreement (I will send it);
4. Pay the share price (information will follow).

I asked the notary and my accountant to make it so simple that it’s five minutes of work and you could be a co-owner in one day.

When this simple procedure is complete, we add you to the exclusive list of Management 3.0 owners. You can proudly wear a bragging badge on your website, and the team will inform you about new developments on a regular basis.

Don’t wait too long!

This offer is valid until 31 March 2017. The available shares per country are limited.

OWN THE FUTURE OF MANAGEMENT – APPLY NOW

The post Own the Future of Management, with Me appeared first on NOOP.NL.

Categories: Project Management

Mindset: The New Psychology of Success: Re-Read Week 9, Chapter 8, Changing Mindsets: A Workshop

Mindset Book Cover

Next week we will complete our re-read of Mindset with a round-up and some thoughts on using the concepts in this book in a wholesale manner.  The next book in the series will be Holacracy.  Buy a copy today and read along!  I have had a couple of questions about why I did not do a poll for this re-read.  As I noted last week, after my recent interview with Jeff Dalton on Software Process and Measurement Cast 433, I realized that I had only read extracts from Holacracy by Brian J. Robertson.  I think many of us are looking for an organizational paradigm for Agile organizations.  Hierarchies and matrix organizations have clear and immediate drawbacks.  Holacracy might be one tool to address this problem, which is why we will read this book.

One more thing — If you are going to be at QAI Quest 2017 April 3 – 7, please come hear me speak and track me down for a coffee or adult beverage and we can talk shop!

Chapter 8: Changing Mindsets

The whole concept of mindsets would be an interesting footnote if we did not believe they could change. Chapter 8 drives home the point that has been made multiple times in the book, that mindsets are malleable with self-awareness and a lot of effort. The question of whether all people want to be that self-aware will be addressed next week as we wrap up our re-read.

Dr. Dweck opens the chapter by using the metaphor of surgery to illustrate why change is difficult. For example, if you have a wart, a doctor will freeze it or cut it off.  It is gone.  Old behaviors don't lend themselves to surgical removal. They are always still lurking in the background and can come back.  They are never excised.  If we wanted a medical metaphor, behaviors are more like the virus that causes shingles, which enters the body as chicken pox, runs its course and then lingers forever after to potentially reemerge over and over (PS – get the vaccination).  When I was young, I smoked.  I don't know how many times I quit only to relapse.  Every time I relapsed I knew I shouldn't buy that pack or bum a smoke, but did it anyway.  If I had branded myself as weak, I would never have climbed back on the wagon and learned from the triggering event. Dweck points out that our mind is always keeping track and interpreting, keeping a running account of our actions based on our mindset. That accounting process can be the difference between seeing a missed goal, such as smoking cessation, as a learning event or branding yourself a failure.  Our mindsets generate internal dialogs that can empower or "unpower" us (I made up this word).  A growth mindset generates a different internal dialog than a fixed mindset; a growth mindset looks for the learning opportunity.

Mindsets are not fixed.  In studies presented in Chapter 8, just learning that you have or lean toward a fixed mindset can cause change.  The act of learning provides knowledge that can be helpful in confronting the self-destructive behaviors at the heart of a fixed mindset.  Knowing, however, is not always a sufficient mechanism for change.

Chapter 8 provides insights into several academic and commercial approaches Dweck has used to effect change.  The common thread in all of the effective approaches outlined in the chapter is a belief that you are in charge of your mind and that your mind can grow (metaphorically).  Change, however, is difficult.  In order to change, an individual has to be able to give up their current self-image and replace it.  Replacing your self-image is frightening.  You have to give up something known and replace it with something else that might sound better but that you have no experience with.

The process of changing from a fixed to a growth mindset begins with making a "vivid, concrete, growth-oriented plan" that includes specific "when, where, and how" components.  Execution needs to be coupled with feedback, support and mentoring.  None of the techniques and examples provided will work without willpower and the ability to learn from feedback.

Using Mindsets:

Organizational Transformation:  Mindsets provide a tool for considering how an organizational transformation will be perceived and for predicting the unintended consequences of change. For example, if an organization were trying to shift from a risk-averse culture to a more innovative culture, messaging would tend to focus on growth opportunities, failing fast and learning.  To people within the organization with growth mindsets, these concepts would make sense and be easily absorbed (assuming the organization's actions supported the words for the most part).  However, those with a fixed mindset (potentially some key players and top individual performers) would first need to recognize that their behavior has to change.  The organization would need to actively provide growth plans to support their transition.  Using mindsets in organizational transformation plans is useful for change management, messaging and risk planning. At an organizational level, using mindsets in planning is an important thought exercise that can guide other activities, including team-level coaching.

Team Coaching: The phrase "a vivid, concrete, growth-oriented plan" reflects one of the more important tactical realities that must be remembered when using mindsets at a team or personal level.  Teams and organizations don't change; it is all about the people.  Individual people change, which then influences the team or organization.  Coaches should begin any team coaching activities by targeting leaders and influencers.  Think of the game Jenga: pieces are removed until the key piece is exposed and, when it is removed, the tower falls.  Transforming a team is much akin to anti-Jenga.  The goal is to find the critical piece and help that person change. Change requires self-realization, a plan, effort and support.

Previous Entries of the re-read of Mindset:

Basics and Introduction

Chapter 1: Mindsets

Chapter 2: Inside the Mindsets

Chapter 3: The Truth About Ability and Accomplishment

Chapter 4: Sports: The Mindset of a Champion

Chapter 5: Business: Mindset and Leadership

Chapter 6: Relationships: Mindsets in Love (or Not)

Chapter 7: Parents, Teachers, Coaches: Where Do Mindsets Come From?

 


Categories: Process Management

Luigi: An ExternalProgramTask example – Converting JSON to CSV

Mark Needham - Sat, 03/25/2017 - 15:09

I’ve been playing around with the Python library Luigi, which is used to build pipelines of batch jobs, and I struggled to find an example of an ExternalProgramTask, so this is my attempt at filling that void.

Luigi - the Python data library for building data science pipelines

I’m building a little data pipeline to get data from the meetup.com API and put it into CSV files that can be loaded into Neo4j using the LOAD CSV command.

The first task I created calls the /groups endpoint and saves the result into a JSON file:

import luigi
import requests
import json
from collections import Counter

class GroupsToJSON(luigi.Task):
    key = luigi.Parameter()
    lat = luigi.Parameter()
    lon = luigi.Parameter()

    def run(self):
        seed_topic = "nosql"
        uri = "https://api.meetup.com/2/groups?&topic={0}&lat={1}&lon={2}&key={3}".format(seed_topic, self.lat, self.lon, self.key)

        r = requests.get(uri)
        all_topics = [topic["urlkey"]  for result in r.json()["results"] for topic in result["topics"]]
        c = Counter(all_topics)

        topics = [entry[0] for entry in c.most_common(10)]

        groups = {}
        for topic in topics:
            uri = "https://api.meetup.com/2/groups?&topic={0}&lat={1}&lon={2}&key={3}".format(topic, self.lat, self.lon, self.key)
            r = requests.get(uri)
            for group in r.json()["results"]:
                groups[group["id"]] = group

        with self.output().open('w') as groups_file:
            json.dump(list(groups.values()), groups_file, indent=4, sort_keys=True)

    def output(self):
        return luigi.LocalTarget("/tmp/groups.json")

We define a few parameters at the top of the class which will be passed in when this task is executed. The most interesting lines of the run function are the last couple where we write the JSON to a file. self.output() refers to the target defined in the output function which in this case is /tmp/groups.json.
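
If you want to sanity-check this task on its own before wiring up the rest of the pipeline, you can run it programmatically with Luigi's local scheduler. This is just a sketch, not part of the original post, and "xxx" is a placeholder for a real meetup.com API key:

# Minimal sketch: run only GroupsToJSON with the local scheduler
import luigi

if __name__ == "__main__":
    luigi.build(
        [GroupsToJSON(key="xxx", lat="51.5072", lon="0.1275")],
        local_scheduler=True,
    )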

Now we need to create a task to convert that JSON file into CSV format. The jq command line tool does this job well so we’ll use that. The following task does the job:

from luigi.contrib.external_program import ExternalProgramTask

class GroupsToCSV(luigi.contrib.external_program.ExternalProgramTask):
    file_path = "/tmp/groups.csv"
    key = luigi.Parameter()
    lat = luigi.Parameter()
    lon = luigi.Parameter()

    def program_args(self):
        return ["./groups.sh", self.input()[0].path, self.output().path]

    def output(self):
        return luigi.LocalTarget(self.file_path)

    def requires(self):
        yield GroupsToJSON(self.key, self.lat, self.lon)

groups.sh

#!/bin/bash

in=${1}
out=${2}

echo "id,name,urlname,link,rating,created,description,organiserName,organiserMemberId" > ${out}
jq -r '.[] | [.id, .name, .urlname, .link, .rating, .created, .description, .organizer.name, .organizer.member_id] | @csv' ${in} >> ${out}

I wanted to call jq directly from the Python code, but I couldn't figure out how to do it, so putting that code in a shell script is my workaround.
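
If you'd rather avoid the wrapper script, one alternative (just a sketch of the idea, not something from the original post) is a plain luigi.Task that shells out to jq with subprocess and writes the captured stdout to the output target itself:

import subprocess

class GroupsToCSVDirect(luigi.Task):
    # Hypothetical variant of GroupsToCSV that invokes jq directly instead of groups.sh
    key = luigi.Parameter()
    lat = luigi.Parameter()
    lon = luigi.Parameter()

    def requires(self):
        return GroupsToJSON(self.key, self.lat, self.lon)

    def output(self):
        return luigi.LocalTarget("/tmp/groups.csv")

    def run(self):
        header = "id,name,urlname,link,rating,created,description,organiserName,organiserMemberId"
        jq_filter = ('.[] | [.id, .name, .urlname, .link, .rating, .created, '
                     '.description, .organizer.name, .organizer.member_id] | @csv')
        # jq reads the JSON written by GroupsToJSON and prints one CSV row per group
        csv_rows = subprocess.check_output(
            ["jq", "-r", jq_filter, self.input().path],
            universal_newlines=True)
        with self.output().open('w') as csv_file:
            csv_file.write(header + "\n")
            csv_file.write(csv_rows)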

The last piece of the puzzle is a wrapper task that launches the others:

import os

class Meetup(luigi.WrapperTask):
    def run(self):
        print("Running Meetup")

    def requires(self):
        key = os.environ['MEETUP_API_KEY']
        lat = os.getenv('LAT', "51.5072")
        lon = os.getenv('LON', "0.1275")

        yield GroupsToCSV(key, lat, lon)

Now we’re ready to run the tasks:

$ PYTHONPATH="." luigi --module blog --local-scheduler Meetup
DEBUG: Checking if Meetup() is complete
DEBUG: Checking if GroupsToCSV(key=xxx, lat=51.5072, lon=0.1275) is complete
INFO: Informed scheduler that task   Meetup__99914b932b   has status   PENDING
DEBUG: Checking if GroupsToJSON(key=xxx, lat=51.5072, lon=0.1275) is complete
INFO: Informed scheduler that task   GroupsToCSV_xxx_51_5072_0_1275_e07372cebf   has status   PENDING
INFO: Informed scheduler that task   GroupsToJSON_xxx_51_5072_0_1275_e07372cebf   has status   PENDING
INFO: Done scheduling tasks
INFO: Running Worker with 1 processes
DEBUG: Asking scheduler for work...
DEBUG: Pending tasks: 3
INFO: [pid 4452] Worker Worker(salt=970508581, workers=1, host=Marks-MBP-4, username=markneedham, pid=4452) running   GroupsToJSON(key=xxx, lat=51.5072, lon=0.1275)
INFO: [pid 4452] Worker Worker(salt=970508581, workers=1, host=Marks-MBP-4, username=markneedham, pid=4452) done      GroupsToJSON(key=xxx, lat=51.5072, lon=0.1275)
DEBUG: 1 running tasks, waiting for next task to finish
INFO: Informed scheduler that task   GroupsToJSON_xxx_51_5072_0_1275_e07372cebf   has status   DONE
DEBUG: Asking scheduler for work...
DEBUG: Pending tasks: 2
INFO: [pid 4452] Worker Worker(salt=970508581, workers=1, host=Marks-MBP-4, username=markneedham, pid=4452) running   GroupsToCSV(key=xxx, lat=51.5072, lon=0.1275)
INFO: Running command: ./groups.sh /tmp/groups.json /tmp/groups.csv
INFO: [pid 4452] Worker Worker(salt=970508581, workers=1, host=Marks-MBP-4, username=markneedham, pid=4452) done      GroupsToCSV(key=xxx, lat=51.5072, lon=0.1275)
DEBUG: 1 running tasks, waiting for next task to finish
INFO: Informed scheduler that task   GroupsToCSV_xxx_51_5072_0_1275_e07372cebf   has status   DONE
DEBUG: Asking scheduler for work...
DEBUG: Pending tasks: 1
INFO: [pid 4452] Worker Worker(salt=970508581, workers=1, host=Marks-MBP-4, username=markneedham, pid=4452) running   Meetup()
Running Meetup
INFO: [pid 4452] Worker Worker(salt=970508581, workers=1, host=Marks-MBP-4, username=markneedham, pid=4452) done      Meetup()
DEBUG: 1 running tasks, waiting for next task to finish
INFO: Informed scheduler that task   Meetup__99914b932b   has status   DONE
DEBUG: Asking scheduler for work...
DEBUG: Done
DEBUG: There are no more tasks to run at this time
INFO: Worker Worker(salt=970508581, workers=1, host=Marks-MBP-4, username=markneedham, pid=4452) was stopped. Shutting down Keep-Alive thread
INFO: 
===== Luigi Execution Summary =====

Scheduled 3 tasks of which:
* 3 ran successfully:
    - 1 GroupsToCSV(key=xxx, lat=51.5072, lon=0.1275)
    - 1 GroupsToJSON(key=xxx, lat=51.5072, lon=0.1275)
    - 1 Meetup()

This progress looks 🙂 because there were no failed tasks or missing external dependencies

===== Luigi Execution Summary =====

Looks good! Let’s quickly look at our CSV file:

$ head -n10 /tmp/groups.csv 
id,name,urlname,link,rating,created,description,organiserName,organiserMemberId
1114381,"London NoSQL, MySQL, Open Source Community","london-nosql-mysql","https://www.meetup.com/london-nosql-mysql/",4.28,1208505614000,"

Meet others in London interested in NoSQL, MySQL, and Open Source Databases.

","Sinead Lawless",185675230 1561841,"Enterprise Search London Meetup","es-london","https://www.meetup.com/es-london/",4.66,1259157419000,"

Enterprise Search London is a meetup for anyone interested in building search and discovery experiences — from intranet search and site search, to advanced discovery applications and beyond.

Disclaimer: This meetup is NOT about SEO or search engine marketing.

What people are saying:

  • ""Join this meetup if you have a passion for enterprise search and user experience that you would like to share with other able-minded practitioners."" — Vegard Sandvold
  • ""Full marks for vision and execution. Looking forward to the next Meetup."" — Martin White
  • "Consistently excellent" — Helen Lippell

Sweet! And what if we run it again?

$ PYTHONPATH="." luigi --module blog --local-scheduler Meetup
DEBUG: Checking if Meetup() is complete
INFO: Informed scheduler that task   Meetup__99914b932b   has status   DONE
INFO: Done scheduling tasks
INFO: Running Worker with 1 processes
DEBUG: Asking scheduler for work...
DEBUG: Done
DEBUG: There are no more tasks to run at this time
INFO: Worker Worker(salt=172768377, workers=1, host=Marks-MBP-4, username=markneedham, pid=4531) was stopped. Shutting down Keep-Alive thread
INFO: 
===== Luigi Execution Summary =====

Scheduled 1 tasks of which:
* 1 present dependencies were encountered:
    - 1 Meetup()

Did not run any tasks
This progress looks 🙂 because there were no failed tasks or missing external dependencies

===== Luigi Execution Summary =====

As expected, nothing happens since our dependencies are already satisfied, and we have our first Luigi pipeline up and running.
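
That behaviour comes from Luigi's default completeness check: a task is considered complete when its output target already exists. If you ever want to force the whole pipeline to run again, a minimal sketch (assuming the /tmp paths used above) is to delete the generated targets first:

import os

# Deleting the output targets makes Luigi treat the tasks as incomplete again
for path in ["/tmp/groups.json", "/tmp/groups.csv"]:
    if os.path.exists(path):
        os.remove(path)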

The post Luigi: An ExternalProgramTask example – Converting JSON to CSV appeared first on Mark Needham.

Categories: Programming

Stuff The Internet Says On Scalability For March 24th, 2017

Hey, it's HighScalability time:

 This is real and oh so eerie. Custom microscope takes a 33 hour time lapse of a tadpole egg dividing.
If you like this sort of Stuff then please support me on Patreon.
  • 40Gbit/s: indoor optical wireless networks; 15%: energy produced by wind in Europe; 5: new tasty particles; 2000: Qubits are easy; 30 minutes: flight time for electric helicopter; 42.9%: of heathen StackOverflowers prefer tabs;

  • Quotable Quotes:
    • @RichRogersIoT: "Did you know? The collective noun for a group of programmers is a merge-conflict." - @omervk
    • @tjholowaychuk: reviewed my dad's company AWS expenses, devs love over-provisioning, by like 90% too, guess that's where "serverless" cost savings come in
    • @karpathy: Nature is evolving ~7 billion ~10 PetaFLOP NI agents in parallel, and has been for ~10M+s of years, in a very realistic simulator. Not fair.
    • @rbranson: This is funny, but legit. Production software tends to be ugly because production is ugly. The ugliness outpaces our ability to abstract it.
    • @joeweinman: @harrietgreen1 : Watson IoT center opened in Munich... $200 million dollar investment; 1000 engineers #ibminterconnect
    • David Gerard: This [IBM Blockchain Service] is bollocks all the way down.
    • digi_owl: Sometimes it seems that the diff between a CPU and a cluster is the suffix put on the latency times.
    • Scott Aaronson: I’m at an It from Qubit meeting at Stanford, where everyone is talking about how to map quantum theories of gravity to quantum circuits acting on finite sets of qubits, and the questions in quantum circuit complexity that are thereby raised.
    • Founder Collective: Firebase didn’t try to do everything at once. Instead, they focused on a few core problems and executed brilliantly. “We built a nice syntax with sugar on top,” says Tamplin. “We made real-time possible and delightful.” It is a reminder that entrepreneurs can rapidly add value to the ecosystem if they really focus.
    • Elizabeth Kolbert: Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups. 
    • Western Union: the ‘telephone’ has too many shortcomings to be seriously considered as a means of communication.
    • Arthur Doskow: being fair, being humane may cost money. And this is the real issue with many algorithms. In economists’ terms, the inhumanity associated with an algorithm could be referred to as an externality. 
    • Francis: The point is that even if GPUs will support lower precision data types exclusively for AI, ML and DNN, they will still carry the big overhead of the graphics pipeline, hence lower efficiency than an FPGA (in terms of FLOPS/WATT). The winner? Dedicated AI processors, e.g. Google TPU
    • James Glasnapp: When we move out of the physical space to a technological one, how is the concept of a “line” assessed by the customer who can’t actually see the line? 
    • Frank: On the other hand, if institutionalized slavery still existed, factories would be looking at around $7,500 in annual costs for housing, food and healthcare per “worker”.
    • Baron Schwartz: If anyone thought that NoSQL was just a flare-up and it’s died down now, they were wrong...In my opinion, three important areas where markets aren’t being satisfied by relational technologies are relational and SQL backwardness, time series, and streaming data. 
    • CJefferson: The problem is, people tell me that if I just learn Haskell, Idris, Closure, Coffescript, Rust, C++17, C#, F#, Swift, D, Lua, Scala, Ruby, Python, Lisp, Scheme, Julia, Emacs Lisp, Vimscript, Smalltalk, Tcl, Verilog, Perl, Go... then I'll finally find 'programming nirvana'.
    • @spectatorindex: Scientists had to delete Urban Dictionary's data from the memory of IBM's Watson, because it was learning to swear in its answers.
    • Animats: [Homomorphically Encrypted Deep Learning] is a way for someone to run a trained network on their own machine without being able to extract the parameters of the network. That's DRM.
    • Dino Dai Zovi: Attackers will take the least cost path through an attack graph from their start node to their goal node.
    • @hshaban: JUST IN: Senate votes to repeal web privacy rules, allowing broadband providers to sell customer data w/o consent including browsing history
    • KBZX5000: The biggest problem you face, as a student, when taking a programming course at a University level, is that the commercially applicable part of it is very limited in scope. You tend to become decent at writhing algorithms. A somewhat dubious skill, unless you are extremely gifted in mathematics and / or somehow have access to current or unique hardware IP's (IP as in Intellectual Property).
    • Brian Bailey: The increase in complexity of the power delivery network (PDN) is starting to outpace increases in functional complexity, adding to the already escalating costs of modern chips. With no signs of slowdown, designers have to ensure that overdesign and margining do not eat up all of the profit margin.
    • rbanffy: Those old enough will remember the AS/400 (now called iSeries) computers map all storage to a single address space. You had no disk - you had just an address space that encompassed everything and an OS that dealt with that.
    • @disruptivedean: Biggest source of latency in mobile networks isn't milliseconds in core, it's months or years to get new cell sites / coverage installed
    • Greg Ferro: Why Is 40G Ethernet Obsolete? Short Answer: COST. The primary issue is that 40G Ethernet uses 4x10G signalling lanes. On UTP, 40G uses 4 pairs at 10G each. 
    • @adriaanm: "We chose Scala as the language because we wanted the latest features of Spark, as well as [...] types, closures, immutability [...]"
    • ajamesm: There's a difference between (A) locking (waiting, really) on access to a critical section (where you spinlock, yield your thread, etc.) and (B) locking the processor to safely execute a synchronization primitive (mutexes/semaphores).
    • @evan2645: "Chaos doesn't cause problems, it reveals them" - @nora_js #SREcon17Americas #SRECon17
    • chrissnell: We've been running large ES clusters here at Revinate for about four years now. I've found the sweet spot to be about 14-16 data nodes, plus three master-only nodes. Right now, we're running them under OpenStack on top of our own bare metal with SAS disks. It works well but I have been working on a plan to migrate them to live under Kubernetes like the rest of our infrastructure. I think the answer is to put them in StatefulSets with local hostPath volumes on SSD.
    • @beaucronin: Major recurring theme of deep learning twitter is how even those 100% dedicated to the field can't keep up with progress.
    • Chris McNab: VPN certificates and keys are often found within and lifted from email, ticketing, and chat services.
    • @bodil: And it took two hours where the Rust version has taken three days and I'm still not sure it works.
    • azirbel: One thing that's generalizable (though maybe obvious) is to explicitly define the SLAs for each microservice. There were a few weeks where we gave ourselves paging errors every time a smaller service had a deploy or went down due to unimportant errors.
    • bigzen: I'm worn out on articles dissing the performance of SQL databases without quoting any hard numbers and then proceeding to replace the systems with no thanks of development in the latest and great tech. I have nothing against spark, but I find it very hard to believe that alarm code is now readable than SQL. In fact, my experience is just the opposite.
    • jhgg: We are experimenting with webworkers to power a very complicated autocomplete and scoring system in our client. So far so good. We're able to keep the UI running at 60fps while we match, score and sort results in a web-worker.
    • DoubleGlazing: NoSQL doesn't reduce development effort. What you gain from not having to worry about modifying schemas and enforcing referential integrity, you lose from having to add more code to your app to check that a DB document has a certain value. In essence you are moving responsibility for data integrity away from the DB and in to your app, something I think is quite dangerous.
    • Const-me: Too bad many computer scientists who write books about those algorithms prefer to view RAM in an old-fashioned way, as fast and byte-addressable.
    • Azur: It always annoys me a bit when tardigrades are described as extremely hardy: they are not. It is ONLY in the desiccated, cryptobiotic, form they are resistant to adverse conditions.
    • rebootthesystem: Hardware engineers can design FPGA-based hardware optimized for ML. A second set of engineers then uses these boards/FPGA's just as they would GPU's. They write code in whatever language to use them as ML co-processors. This second group doesn't have to be composed of hardware engineers. Today someone using a GPU doesn't have to be a hardware engineer who knows how to design a GPU. Same thing.

  • There should be some sort of Metcalfe's law for events. Maybe: the value of a platform is proportional to the square of the number of scriptable events emitted by unconnected services in the system. CloudWatch Events Now Supports AWS Step Functions as a Target. @ben11kehoe: This is *really* useful: Automate your incident response processes with bulletproof state machines #aws

  • Cute faux O'Reilly book cover. Solving Imaginary Scaling Issues.

  • Intel's Optane SSD is finally out, and though not quite meeting its initial "this will change everything" promise, it still might change a lot of things. Intel’s first Optane SSD: 375GB that you can also use as RAM. 10x DRAM latency. 1/1000 NAND latency. 2400MB/s read, 2000MB/s write. 30 full-drive writes per day. 2.5x better density. $4/GB (1/2 RAM cost). 1.5TB capacity. 500k mixed random IOPS. Great random write response. Targeted at power users with big files, like databases. NDAs are still in place so there's more to learn later. PCPerspective: comparing a server with 768GB of DRAM to one with 128GB of DRAM combined with a pair of P4800X's, 80% of the transactions per second were possible (with 1/6th of the DRAM). More impressive was that matrix multiplication of the data saw a 1.1x *increase* in performance. This seems impossible, as Optane is still slower than DRAM, but the key here was that in the case of the DRAM-only configuration, half of the database was hanging off of the 'wrong' CPU.  foboz1: For anyone think that this a solution looking for a problem, think about two things: Big Data and mobile/embedded. Big Data has an endless appetite for large quantities for memory and fast storage; 3D XPoint plays into the memory hierarchy nicely. At the extreme other end of the scale, it may be fast enough to obviate the need for having DRAM+NAND in some applications. raxx7: And 3D XPoint isn't free of limitations yet. RAM has 50-100 ns latency, 50 GB/s bandwidth (128 bit interface) and unlimited write endurance. If 3D XPoint NVDIMM can't deliver this, we'll still need to manage the difference between RAM and 3D XPoint NVDIMM. zogus: The real breakthrough will come, I think, when the OS and applications are re-written so that they no longer assume that a computer's memory consists of a small, fast RAM bank and a huge, slow persistent set of storage--a model that had held true since just about forever. VertexMaster: Given that DRAM is currently an order of magnitude faster (and several orders vs this real-world x-point product) I really have a hard time seeing where this fits in. sologoub: we built a system using Druid as the primary store of reporting data. The setup worked amazingly well with the size/cardinality of the data we had, but was constantly bottlenecked at paging segments in and out of RAM. Economically, we just couldn't justify a system with RAM big enough to hold the primary dataset...I don't have access to the original planning calculations anymore, but 375GB at $1520 would definitely have been a game changer in terms of performance/$, and I suspect be good enough to make the end user feel like the entire dataset was in memory.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Thunderbolting Your Video Card

Coding Horror - Jeff Atwood - Fri, 03/24/2017 - 10:08

When I wrote about The Golden Age of x86 Gaming, I implied that, in the future, it might be an interesting, albeit expensive, idea to upgrade your video card via an external Thunderbolt 3 enclosure.

I'm here to report that the future is now.

Yes, that's right, I paid $500 for an external Thunderbolt 3 enclosure to fit a $600 video card, all to enable a plug-in upgrade of a GPU on a Skull Canyon NUC that itself cost around $1000 fully built. I know, it sounds crazy, and … OK fine, I won't argue with you. It's crazy.

This matters mostly because of 4k, aka 2160p, aka 3840 × 2160, aka Ultra HD.

4k compared to 1080p

Plain old regular HD, aka 1080p, aka 1920 × 1080, is one quarter the size of 4k, and ¼ the work. By today's GPU standards HD is pretty much easy mode these days. It's not even interesting. No offense to console fans, or anything.

Late in 2016, I got a 4k OLED display and it … kind of blew my mind. I have never seen blacks so black, colors so vivid, on a display so thin. It made my previous 2008 era Panasonic plasma set look lame. It's so good that I'm now a little angry that every display that my eyes touch isn't OLED already. I even got into nerd fights over it, and to be honest, I'd still throw down for OLED. It is legitimately that good. Come at me, bro.

Don't believe me? Well, guess which display in the below picture is OLED? Go on, guess:

Guess which screen is OLED?

@andrewbstiles if it was physically possible to have sex with this TV I.. uh.. I'd take it on long, romantic walks

— Jeff Atwood (@codinghorror) August 13, 2016

There's a reason every site that reviews TVs had to recalibrate their results when they reviewed the 2016 OLED sets.

In my extended review at Reference Home Theater, I call it "the best looking TV I've ever reviewed." But we aren't alone in loving the E6. Vincent Teoh at HDTVtest writes, "We're not even going to qualify the following endorsement: if you can afford it, this is the TV to buy." Rtings.com gave the E6 OLED the highest score of any TV the site has ever tested. Reviewed.com awarded it a 9.9 out of 10, with only the LG G6 OLED (which offers the same image but better styling and sound for $2,000 more) coming out ahead.

But I digress.

Playing games at 1080p in my living room was already possible. But now that I have an incredible 4k display in the living room, it's a whole other level of difficulty. Not just twice as hard – and remember current consoles barely manage to eke out 1080p at 30fps in most games – but four times as hard. That's where external GPU power comes in.

The cool technology underpinning all of this is Thunderbolt 3. The thunderbolt cable bundled with the Razer Core is rather … diminutive. There's a reason for this.

Is there a maximum cable length for Thunderbolt 3 technology?

Thunderbolt 3 passive cables have maximum lengths.

  • 0.5m TB 3 (40Gbps)
  • 1.0m TB 3 (20Gbps)
  • 2.0m TB 3 (20Gbps)

In the future we will offer active cables which will provide 40Gbps of bandwidth at longer lengths.

40Gbps is, for the record, an insane amount of bandwidth. Let's use our rule of thumb based on ultra common gigabit ethernet, that 1 gigabit = 120 megabytes/second, and we arrive at 4.8 gigabytes/second. Zow.
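
For the skeptical, here's that back-of-the-envelope math written out (a quick sketch, not part of the original post), using the same 1 gigabit ≈ 120 megabytes/second rule of thumb:

# Thunderbolt 3 throughput estimate using the rule of thumb above
link_gbps = 40         # passive 0.5 m Thunderbolt 3 cable
mb_per_gbit = 120      # ~120 MB/s of usable throughput per gigabit of link speed
print(link_gbps * mb_per_gbit / 1000.0, "GB/s")  # -> 4.8 GB/s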

That's more than enough bandwidth to run even the highest of high end video cards, but it is not without overhead. There's a mild performance hit for running the card externally, on the order of 15%. There's also a further performance hit of 10% if you are in "loopback" mode on a laptop where you don't have an external display, so the video frames have to be shuttled back from the GPU to the internal laptop display.

This may look like a gamer-only thing, but surprisingly, it isn't. What you get is the general purpose ability to attach any PCI express card to any computer with a Thunderbolt 3 port and, for the most part, it just works!

Linus breaks it down and answers all your most difficult questions:

Please watch the above video closely if you're actually interested in this stuff; it is essential. I'll add some caveats of my own after working with the Razer Core for a while:

  • Make sure the video card you plan to put into the Razer Core is not too tall, or too wide. You can tell if a card is going to be too tall by looking at pictures of the mounting rear bracket. If the card extends significantly above the standard rear mounting bracket, it won't fit. If the card takes more than 2 slots in width, it also won't fit, but this is more rare. Depth (length) is rarely an issue.

  • There are four fans in the Razer Core and although it is reasonably quiet, it's not super silent or anything. You may want to mod the fans. The Razer Core is a remarkably simple device, internally, it's really just a power supply, some Thunderbolt 3 bridge logic, and a PCI express slot. I agree with Linus that the #1 area Razer could improve in the future, beyond generally getting the price down, is to use fewer and larger fans that run quieter.

  • If you're putting a heavy hitter GPU in the Razer Core, I'd try to avoid blower style cards (the ones that exhaust heat from the rear) in favor of those that cool with large fans blowing down and around the card. Dissipating 150w+ is no mean feat and you'll definitely need to keep the enclosure in open air … and of course within 0.5 meters of the computer it's connected to.

  • There is no visible external power switch on the Razer Core. It doesn't power on until you connect a TB3 cable to it. I was totally not expecting that. But once connected, it powers up and the Windows 10 Thunderbolt 3 drivers kick in and ask you to authorize the device, which I did (always authorize). Then it spun a bit, detected the new GPU, and suddenly I had multiple graphics card active on the same computer. I also installed the latest Nvidia drivers just to make sure everything was ship shape.

  • It's kinda ... weird having multiple GPUs simultaneously active. I wanted to make the Razer Core display the only display, but you can't really turn off the built in GPU – you can select "only use display 2", that's all. I got into several weird states where windows were opening on the other display and I had to mess around a fair bit to get things locked down to just one display. You may want to consider whether you have both "displays" connected for troubleshooting, or not.

And then, there I am, playing Lego Marvel in splitscreen co-op at glorious 3840 × 2160 UltraHD resolution on an amazing OLED display with my son. It is incredible.

Beyond the technical "because I could", I am wildly optimistic about the future of external Thunderbolt 3 expansion boxes, and here's why:

  • The main expense and bottleneck in any stonking gaming rig is, by far, the GPU. It's also the item you are most likely to need to replace a year or two from now.

  • The CPU and memory speeds available today are so comically fast that any device with a low-end i3-7100 for $120 will make zero difference in real world gaming at 1080p or higher … if you're OK with 30fps minimum. If you bump up to $200, you can get a quad-core i5-7500 that guarantees you 60fps minimum everywhere.

  • If you prefer a small system or a laptop, an external GPU makes it so much more flexible. Because CPU and memory speeds are already so fast, 99.9% of the time your bottleneck is the GPU, and almost any small device you can buy with a Thunderbolt 3 port can now magically transform into a potent gaming rig with a single plug. Thunderbolt 3 may be a bit cutting edge today, but more and more devices are shipping with Thunderbolt 3. Within a few years, I predict TB3 ports will be as common as USB3 ports.

  • A general purpose external PCI express enclosure will be usable for a very long time. My last seven video card upgrades were plug and play PCI Express cards that would have worked fine in any computer I've built in the last ten years.

  • External GPUs are not meaningfully bottlenecked by Thunderbolt 3 bandwidth; the impact is 15% to 25%, and perhaps even less over time as drivers and implementations mature. While Thunderbolt 3 has "only" PCI Express x4 bandwidth, many benchmarkers have noted that GPUs moving from PCI Express x16 to x8 has almost no effect on performance. And there's always Thunderbolt 4 on the horizon.

The future, as they say, is already here – it's just not evenly distributed.

I am painfully aware that costs need to come down. Way, way down. The $499 Razer Core is well made, on the vanguard of what's possible, a harbinger of the future, and fantastically enough, it does even more than what it says on the tin. But it's not exactly affordable.

I would absolutely love to see a modest, dedicated $200 external Thunderbolt 3 box that included an inexpensive current-gen GPU. This would clobber any onboard GPU on the planet. Let's compare my Skull Canyon NUC, which has Intel's fastest ever, PS4 class embedded GPU, with the modest $150 GeForce GTX 1050 Ti:

1920 × 1080 high detail:

  • Bioshock Infinite: 15 → 79 fps
  • Rise of the Tomb Raider: 12 → 49 fps
  • Overwatch: 43 → 114 fps

As predicted, that's a 3x-5x stompdown. Mac users lamenting their general lack of upgradeability, hear me: this sort of box is exactly what you want and need. Imagine if Apple was to embrace upgrading their laptops and all-in-one systems via Thunderbolt 3.

I know, I know. It's a stretch. But a man can dream … of externally upgradeable GPUs. That are too expensive, sure, but they are here, right now, today. They'll only get cheaper over time.

Categories: Programming

Thunderbolting Your Video Card

Coding Horror - Jeff Atwood - Fri, 03/24/2017 - 10:08

When I wrote about The Golden Age of x86 Gaming, I implied that, in the future, it might be an interesting, albeit expensive, idea to upgrade your video card via an external Thunderbolt 3 enclosure.

I'm here to report that the future is now.

Yes, that's right, I paid $500 for an external Thunderbolt 3 enclosure to fit a $600 video card, all to enable a plug-in upgrade of a GPU on a Skull Canyon NUC that itself cost around $1000 fully built. I know, it sounds crazy, and … OK fine, I won't argue with you. It's crazy.

This matters mostly because of 4k, aka 2160p, aka 3840 × 2160, aka Ultra HD.

4k compared to 1080p

Plain old regular HD, aka 1080p, aka 1920 × 1080, is one quarter the size of 4k, and ¼ the work. By today's GPU standards HD is pretty much easy mode these days. It's not even interesting. No offense to console fans, or anything.

Late in 2016, I got a 4k OLED display and it … kind of blew my mind. I have never seen blacks so black, colors so vivid, on a display so thin. It made my previous 2008 era Panasonic plasma set look lame. It's so good that I'm now a little angry that every display that my eyes touch isn't OLED already. I even got into nerd fights over it, and to be honest, I'd still throw down for OLED. It is legitimately that good. Come at me, bro.

Don't believe me? Well, guess which display in the below picture is OLED? Go on, guess:

Guess which screen is OLED?

@andrewbstiles if it was physically possible to have sex with this TV I.. uh.. I'd take it on long, romantic walks

— Jeff Atwood (@codinghorror) August 13, 2016

There's a reason every site that reviews TVs had to recalibrate their results when they reviewed the 2016 OLED sets.

In my extended review at Reference Home Theater, I call it ‚Äúthe best looking TV I‚Äôve ever reviewed.‚ÄĚ But we aren‚Äôt alone in loving the E6. Vincent Teoh at HDTVtest writes, ‚ÄúWe‚Äôre not even going to qualify the following endorsement: if you can afford it, this is the TV to buy.‚ÄĚ Rtings.com gave the E6 OLED the highest score of any TV the site has ever tested. Reviewed.com awarded it a 9.9 out of 10, with only the LG G6 OLED (which offers the same image but better styling and sound for $2,000 more) coming out ahead.

But I digress.

Playing games at 1080p in my living room was already possible. But now that I have an incredible 4k display in the living room, it's a whole other level of difficulty. Not just twice as hard – and remember current consoles barely manage to eke out 1080p at 30fps in most games – but four times as hard. That's where external GPU power comes in.

The cool technology underpinning all of this is Thunderbolt 3. The thunderbolt cable bundled with the Razer Core is rather … diminutive. There's a reason for this.

Is there a maximum cable length for Thunderbolt 3 technology?

Thunderbolt 3 passive cables have maximum lengths.

  • 0.5m TB 3 (40Gbps)
  • 1.0m TB 3 (20Gbps)
  • 2.0m TB 3 (20Gbps)

In the future we will offer active cables which will provide 40Gbps of bandwidth at longer lengths.

40Gbps is, for the record, an insane amount of bandwidth. Let's use our rule of thumb based on ultra common gigabit ethernet, that 1 gigabit = 120 megabytes/second, and we arrive at 4.8 gigabytes/second. Zow.

That's more than enough bandwidth to run even the highest of high end video cards, but it is not without overhead. There's a mild performance hit for running the card externally, on the order of 15%. There's also a further performance hit of 10% if you are in "loopback" mode on a laptop where you don't have an external display, so the video frames have to be shuttled back from the GPU to the internal laptop display.

This may look like a gamer-only thing, but surprisingly, it isn't. What you get is the general purpose ability to attach any PCI express card to any computer with a Thunderbolt 3 port and, for the most part, it just works!

Linus breaks it down and answers all your most difficult questions:

Please watch the above video closely if you're actually interested in this stuff; it is essential. I'll add some caveats of my own after working with the Razer Core for a while:

  • Make sure the video card you plan to put into the Razer Core is not too tall, or too wide. You can tell if a card is going to be too tall by looking at pictures of the mounting rear bracket. If the card extends significantly above the standard rear mounting bracket, it won't fit. If the card takes more than 2 slots in width, it also won't fit, but this is more rare. Depth (length) is rarely an issue.

  • There are four fans in the Razer Core and although it is reasonably quiet, it's not super silent or anything. You may want to mod the fans. The Razer Core is a remarkably simple device, internally, it's really just a power supply, some Thunderbolt 3 bridge logic, and a PCI express slot. I agree with Linus that the #1 area Razer could improve in the future, beyond generally getting the price down, is to use fewer and larger fans that run quieter.

  • If you're putting a heavy hitter GPU in the Razer Core, I'd try to avoid blower style cards (the ones that exhaust heat from the rear) in favor of those that cool with large fans blowing down and around the card. Dissipating 150w+ is no mean feat and you'll definitely need to keep the enclosure in open air … and of course within 0.5 meters of the computer it's connected to.

  • There is no visible external power switch on the Razer Core. It doesn't power on until you connect a TB3 cable to it. I was totally not expecting that. But once connected, it powers up and the Windows 10 Thunderbolt 3 drivers kick in and ask you to authorize the device, which I did (always authorize). Then it spun a bit, detected the new GPU, and suddenly I had multiple graphics card active on the same computer. I also installed the latest Nvidia drivers just to make sure everything was ship shape.

  • It's kinda ... weird having multiple GPUs simultaneously active. I wanted to make the Razer Core display the only display, but you can't really turn off the built in GPU – you can select "only use display 2", that's all. I got into several weird states where windows were opening on the other display and I had to mess around a fair bit to get things locked down to just one display. You may want to consider whether you have both "displays" connected for troubleshooting, or not.

And then, there I am, playing Lego Marvel in splitscreen co-op at glorious 3840 × 2160 UltraHD resolution on an amazing OLED display with my son. It is incredible.

Beyond the technical "because I could", I am wildly optimistic about the future of external Thunderbolt 3 expansion boxes, and here's why:

  • The main expense and bottleneck in any stonking gaming rig is, by far, the GPU. It's also the item you are most likely to need to replace a year or two from now.

  • The CPU and memory speeds available today are so comically fast that any device with a low-end i3-7100 for $120 will make zero difference in real world gaming at 1080p or higher … if you're OK with 30fps minimum. If you bump up to $200, you can get a quad-core i5-7500 that guarantees you 60fps minimum everywhere.

  • If you prefer a small system or a laptop, an external GPU makes it so much more flexible. Because CPU and memory speeds are already so fast, 99.9% of the time your bottleneck is the GPU, and almost any small device you can buy with a Thunderbolt 3 port can now magically transform into a potent gaming rig with a single plug. Thunderbolt 3 may be a bit cutting edge today, but more and more devices are shipping with Thunderbolt 3. Within a few years, I predict TB3 ports will be as common as USB3 ports.

  • A general purpose external PCI express enclosure will be usable for a very long time. My last seven video card upgrades were plug and play PCI Express cards that would have worked fine in any computer I've built in the last ten years.

  • External GPUs are not meaningfully bottlenecked by Thunderbolt 3 bandwidth; the impact is 15% to 25%, and perhaps even less over time as drivers and implementations mature. While Thunderbolt 3 has "only" PCI Express x4 bandwidth, many benchmarkers have noted that GPUs moving from PCI Express x16 to x8 has almost no effect on performance. And there's always Thunderbolt 4 on the horizon.

The future, as they say, is already here – it's just not evenly distributed.

I am painfully aware that costs need to come down. Way, way down. The $499 Razer Core is well made, on the vanguard of what's possible, a harbinger of the future, and fantastically enough, it does even more than what it says on the tin. But it's not exactly affordable.

I would absolutely love to see a modest, dedicated $200 external Thunderbolt 3 box that included an inexpensive current-gen GPU. This would clobber any onboard GPU on the planet. Let's compare my Skull Canyon NUC, which has Intel's fastest ever, PS4 class embedded GPU, with the modest $150 GeForce GTX 1050 Ti:

1920 × 1080 high detail Bioshock Infinite15 ‚Üí 79 fps Rise of the Tomb Raider12 ‚Üí 49 fps Overwatch43 ‚Üí 114 fps

As predicted, that's a 3x-5x stompdown. Mac users lamenting their general lack of upgradeability, hear me: this sort of box is exactly what you want and need. Imagine if Apple was to embrace upgrading their laptops and all-in-one systems via Thunderbolt 3.

I know, I know. It's a stretch. But a man can dream … of externally upgradeable GPUs. That are too expensive, sure, but they are here, right now, today. They'll only get cheaper over time.

[advertisement] Find a better job the Stack Overflow way - what you need when you need it, no spam, and no scams.
Categories: Programming

Thunderbolting Your Video Card

Coding Horror - Jeff Atwood - Fri, 03/24/2017 - 10:08

When I wrote about The Golden Age of x86 Gaming, I implied that, in the future, it might be an interesting, albeit expensive, idea to upgrade your video card via an external Thunderbolt 3 enclosure.

I'm here to report that the future is now.

Yes, that's right, I paid $500 for an external Thunderbolt 3 enclosure to fit a $600 video card, all to enable a plug-in upgrade of a GPU on a Skull Canyon NUC that itself cost around $1000 fully built. I know, it sounds crazy, and … OK fine, I won't argue with you. It's crazy.

This matters mostly because of 4k, aka 2160p, aka 3840 × 2160, aka Ultra HD.

4k compared to 1080p

Plain old regular HD, aka 1080p, aka 1920 × 1080, is one quarter the size of 4k, and a quarter of the work. By today's GPU standards, HD is pretty much easy mode. It's not even interesting. No offense to console fans, or anything.
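For the skeptical, the "quarter the size" claim is just pixel arithmetic. Here's a quick back-of-the-envelope check (my own throwaway Java, not anything from the original post; the 60fps figure is only there to show the work per second):

    public class PixelMath {
        public static void main(String[] args) {
            long uhd = 3840L * 2160;   // 4k / Ultra HD pixels per frame
            long fhd = 1920L * 1080;   // 1080p / Full HD pixels per frame
            System.out.println("4k pixels per frame:    " + uhd);                // 8,294,400
            System.out.println("1080p pixels per frame: " + fhd);                // 2,073,600
            System.out.println("4k / 1080p ratio:       " + (double) uhd / fhd); // 4.0
            System.out.println("4k pixels per second at 60fps: " + uhd * 60);    // ~498 million
        }
    }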

Late in 2016, I got a 4k OLED display and it … kind of blew my mind. I have never seen blacks so black, colors so vivid, on a display so thin. It made my previous 2008 era Panasonic plasma set look lame. It's so good that I'm now a little angry that every display that my eyes touch isn't OLED already. I even got into nerd fights over it, and to be honest, I'd still throw down for OLED. It is legitimately that good. Come at me, bro.

Don't believe me? Well, guess which display in the below picture is OLED? Go on, guess:

Guess which screen is OLED?

@andrewbstiles if it was physically possible to have sex with this TV I.. uh.. I'd take it on long, romantic walks

— Jeff Atwood (@codinghorror) August 13, 2016

There's a reason every site that reviews TVs had to recalibrate their results when they reviewed the 2016 OLED sets.

In my extended review at Reference Home Theater, I call it "the best looking TV I've ever reviewed." But we aren't alone in loving the E6. Vincent Teoh at HDTVtest writes, "We're not even going to qualify the following endorsement: if you can afford it, this is the TV to buy." Rtings.com gave the E6 OLED the highest score of any TV the site has ever tested. Reviewed.com awarded it a 9.9 out of 10, with only the LG G6 OLED (which offers the same image but better styling and sound for $2,000 more) coming out ahead.

But I digress.

Playing games at 1080p in my living room was already possible. But now that I have an incredible 4k display in the living room, it's a whole other level of difficulty. Not just twice as hard – and remember current consoles barely manage to eke out 1080p at 30fps in most games – but four times as hard. That's where external GPU power comes in.

The cool technology underpinning all of this is Thunderbolt 3. The Thunderbolt cable bundled with the Razer Core is rather … diminutive. There's a reason for this.

Is there a maximum cable length for Thunderbolt 3 technology?

Thunderbolt 3 passive cables have maximum lengths.

  • 0.5m TB 3 (40Gbps)
  • 1.0m TB 3 (20Gbps)
  • 2.0m TB 3 (20Gbps)

In the future we will offer active cables which will provide 40Gbps of bandwidth at longer lengths.

40Gbps is, for the record, an insane amount of bandwidth. Let's use our rule of thumb based on ultra-common gigabit Ethernet, that 1 gigabit = 120 megabytes/second, and we arrive at 4.8 gigabytes/second. Zow.
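If you want to sanity-check that, the rule-of-thumb conversion is a single multiplication; a minimal sketch (the 120 MB/s-per-gigabit factor is the same rough figure used above, nothing more precise):

    public class BandwidthMath {
        public static void main(String[] args) {
            int gigabitsPerSecond = 40;             // Thunderbolt 3 link speed
            double mbPerGigabit = 120.0;            // rule-of-thumb megabytes/second per gigabit
            double mbPerSecond = gigabitsPerSecond * mbPerGigabit;  // 4800 MB/s
            System.out.println(mbPerSecond / 1000 + " GB/s");       // ~4.8 GB/s
        }
    }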

That's more than enough bandwidth to run even the highest of high end video cards, but it is not without overhead. There's a mild performance hit for running the card externally, on the order of 15%. There's also a further performance hit of 10% if you are in "loopback" mode on a laptop where you don't have an external display, so the video frames have to be shuttled back from the GPU to the internal laptop display.

This may look like a gamer-only thing, but surprisingly, it isn't. What you get is the general purpose ability to attach any PCI express card to any computer with a Thunderbolt 3 port and, for the most part, it just works!

Linus breaks it down and answers all your most difficult questions:

Please watch the above video closely if you're actually interested in this stuff; it is essential. I'll add some caveats of my own after working with the Razer Core for a while:

  • Make sure the video card you plan to put into the Razer Core is not too tall, or too wide. You can tell if a card is going to be too tall by looking at pictures of the mounting rear bracket. If the card extends significantly above the standard rear mounting bracket, it won't fit. If the card takes more than 2 slots in width, it also won't fit, but this is more rare. Depth (length) is rarely an issue.

  • There are four fans in the Razer Core and although it is reasonably quiet, it's not super silent or anything. You may want to mod the fans. The Razer Core is a remarkably simple device internally; it's really just a power supply, some Thunderbolt 3 bridge logic, and a PCI Express slot. I agree with Linus that the #1 area Razer could improve in the future, beyond generally getting the price down, is to use fewer and larger fans that run quieter.

  • If you're putting a heavy hitter GPU in the Razer Core, I'd try to avoid blower style cards (the ones that exhaust heat from the rear) in favor of those that cool with large fans blowing down and around the card. Dissipating 150w+ is no mean feat and you'll definitely need to keep the enclosure in open air … and of course within 0.5 meters of the computer it's connected to.

  • There is no visible external power switch on the Razer Core. It doesn't power on until you connect a TB3 cable to it. I was totally not expecting that. But once connected, it powers up and the Windows 10 Thunderbolt 3 drivers kick in and ask you to authorize the device, which I did (always authorize). Then it spun a bit, detected the new GPU, and suddenly I had multiple graphics cards active on the same computer. I also installed the latest Nvidia drivers just to make sure everything was ship shape.

  • It's kinda ... weird having multiple GPUs simultaneously active. I wanted to make the Razer Core display the only display, but you can't really turn off the built-in GPU – you can select "only use display 2", that's all. I got into several weird states where windows were opening on the other display and I had to mess around a fair bit to get things locked down to just one display. You may want to consider whether you have both "displays" connected for troubleshooting, or not.

And then, there I am, playing Lego Marvel in splitscreen co-op at glorious 3840 × 2160 UltraHD resolution on an amazing OLED display with my son. It is incredible.

Beyond the technical "because I could", I am wildly optimistic about the future of external Thunderbolt 3 expansion boxes, and here's why:

  • The main expense and bottleneck in any stonking gaming rig is, by far, the GPU. It's also the item you are most likely to need to replace a year or two from now.

  • The CPU and memory speeds available today are so comically fast that even a low-end $120 i3-7100 makes essentially zero difference in real-world gaming at 1080p or higher … if you're OK with a 30fps minimum. If you bump up to $200, you can get a quad-core i5-7500 that guarantees you a 60fps minimum everywhere.

  • If you prefer a small system or a laptop, an external GPU makes it so much more flexible. Because CPU and memory speeds are already so fast, 99.9% of the time your bottleneck is the GPU, and almost any small device you can buy with a Thunderbolt 3 port can now magically transform into a potent gaming rig with a single plug. Thunderbolt 3 may be a bit cutting edge today, but more and more devices are shipping with Thunderbolt 3. Within a few years, I predict TB3 ports will be as common as USB3 ports.

  • A general purpose external PCI express enclosure will be usable for a very long time. My last seven video card upgrades were plug and play PCI Express cards that would have worked fine in any computer I've built in the last ten years.

  • External GPUs are not meaningfully bottlenecked by Thunderbolt 3 bandwidth; the impact is 15% to 25%, and perhaps even less over time as drivers and implementations mature. While Thunderbolt 3 has "only" PCI Express x4 bandwidth, many benchmarkers have noted that GPUs moving from PCI Express x16 to x8 has almost no effect on performance. And there's always Thunderbolt 4 on the horizon.

The future, as they say, is already here – it's just not evenly distributed.

I am painfully aware that costs need to come down. Way, way down. The $499 Razer Core is well made, on the vanguard of what's possible, a harbinger of the future, and fantastically enough, it does even more than what it says on the tin. But it's not exactly affordable.

I would absolutely love to see a modest, dedicated $200 external Thunderbolt 3 box that included an inexpensive current-gen GPU. This would clobber any onboard GPU on the planet. Let's compare my Skull Canyon NUC, which has Intel's fastest ever, PS4 class embedded GPU, with the modest $150 GeForce GTX 1050 Ti:

1920 × 1080, high detail:

  • Bioshock Infinite: 15 → 79 fps
  • Rise of the Tomb Raider: 12 → 49 fps
  • Overwatch: 43 → 114 fps
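Divide those frame rates out and you get the speedups directly; a quick check using only the numbers from the table above (nothing else assumed):

    public class SpeedupCheck {
        public static void main(String[] args) {
            String[] games = { "Bioshock Infinite", "Rise of the Tomb Raider", "Overwatch" };
            int[][] fps = { {15, 79}, {12, 49}, {43, 114} };  // {embedded GPU, GTX 1050 Ti}
            for (int i = 0; i < fps.length; i++) {
                double speedup = (double) fps[i][1] / fps[i][0];
                System.out.printf("%s: %.1fx%n", games[i], speedup);  // ~5.3x, ~4.1x, ~2.7x
            }
        }
    }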

As predicted, that's a 3x-5x stompdown. Mac users lamenting their general lack of upgradeability, hear me: this sort of box is exactly what you want and need. Imagine if Apple was to embrace upgrading their laptops and all-in-one systems via Thunderbolt 3.

I know, I know. It's a stretch. But a man can dream … of externally upgradeable GPUs. That are too expensive, sure, but they are here, right now, today. They'll only get cheaper over time.

Categories: Programming

Monitor Your Mesos Cluster with StackState

Xebia Blog - Fri, 03/24/2017 - 09:28

This post is part 2 in a 4-part series about Container Monitoring. Post 1 dives into some of the new challenges containers and microservices create and the information you should focus on. This article describes how to monitor your Mesos cluster. Apache Mesos is a distributed systems kernel at the heart of the Mesosphere DC/OS and is designed for […]

The post Monitor Your Mesos Cluster with StackState appeared first on Xebia Blog.

Estimating Accuracy Mathematics

Herding Cats - Glen Alleman - Fri, 03/24/2017 - 01:55

In the estimating business, like many things in project management, there is confusion about principles, practices, and processes. And sometimes even outright misinformation. 

Here's an example used by the #NoEstimates advocates. In a book published in 1986, there is a sentence that says more or less what the slide below says.

A good estimation approach should provide estimates that are within 25% of the actual results, 75% of the time

The book this statement comes from is Conte, S. D., H. E. Dunsmore and V.Y. Shen. Software Engineering Metrics and Models. Menlo Park CA: The Benjamin/Cummings Publishing Company, Inc., 1986. The statement is on pages 172-175, open on my desk right now. Steve McConnell abstracted the original page content into those words.

 


And the words on pages 172 to 175 speak about the Magnitude and Mean Magnitude of Relative Error. The term "within 25%" refers to the Mean Relative Error; that is, the estimate is within 25% of the actual value - the real value compared to the estimated value.

So if the actual value - after we are done - is $25,000, and the estimate was within 25% of that actual - $18,750 or more - then that's a good start.

In other words, if the error of our estimate is less than 25% of the actual outcome, 75% of the time, we're doing pretty well early in the project - possibly on day one. In our NASA Software Intensive System of Systems business, we need an 80% confidence basis of estimate in the proposal - a 20% MRE. Conte, Dunsmore, and Shen's number is a 75% confidence level in 1986.
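To make that criterion concrete, here is a minimal sketch (my own illustration with invented numbers, not anything from Conte, Dunsmore, and Shen) of computing the Magnitude of Relative Error for a handful of completed projects and checking whether the estimates landed within 25% of actuals at least 75% of the time:

    public class MreCheck {
        public static void main(String[] args) {
            // Hypothetical (estimate, actual) pairs in dollars - illustrative only.
            double[][] projects = {
                {18750, 25000}, {95000, 90000}, {42000, 55000}, {110000, 102000}
            };
            int within = 0;
            for (double[] p : projects) {
                double estimate = p[0], actual = p[1];
                double mre = Math.abs(actual - estimate) / actual;  // Magnitude of Relative Error
                System.out.printf("estimate=%.0f actual=%.0f MRE=%.2f%n", estimate, actual, mre);
                if (mre <= 0.25) within++;
            }
            // The criterion discussed above: at least 75% of estimates within 25% of actuals.
            System.out.println("Fraction within 25%: " + (double) within / projects.length);
        }
    }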

We use Monte Carlo simulation tools and Method of Moments algorithms on very large historical databases - the holy grail of empirical forecasting - and apply analogous and parametric models for work that is new to get these numbers.
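For readers who have never run one, a Monte Carlo cost estimate in its simplest form is just repeated sampling and sorting; the sketch below uses invented three-point task ranges and a plain triangular distribution, not the tooling or historical databases described above:

    import java.util.Arrays;
    import java.util.Random;

    public class MonteCarloEstimate {
        public static void main(String[] args) {
            // Hypothetical (min, most likely, max) cost ranges per task, in dollars.
            double[][] tasks = { {8000, 10000, 15000}, {20000, 25000, 40000}, {5000, 6000, 9000} };
            int trials = 100_000;
            double[] totals = new double[trials];
            Random rng = new Random(42);
            for (int t = 0; t < trials; t++) {
                double total = 0;
                for (double[] task : tasks) {
                    total += sampleTriangular(rng, task[0], task[1], task[2]);
                }
                totals[t] = total;
            }
            Arrays.sort(totals);
            // One way to state an "80% confidence" number is the 80th percentile of simulated totals.
            System.out.printf("P50 = %.0f, P80 = %.0f%n", totals[trials / 2], totals[(int) (trials * 0.8)]);
        }

        // Standard inverse-CDF sampling for a triangular distribution.
        static double sampleTriangular(Random rng, double min, double mode, double max) {
            double u = rng.nextDouble();
            double cut = (mode - min) / (max - min);
            return u < cut
                ? min + Math.sqrt(u * (max - min) * (mode - min))
                : max - Math.sqrt((1 - u) * (max - min) * (max - mode));
        }
    }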

The notion used by #NoEstimates advocates does NOT mean that the estimate is within 25% of the actual. Rather, the Mean Relative Error of the estimate is within 25%. They would know that if they read the book and stopped echoing someone else's poorly translated mathematics.

This is a serious error in understanding the principles of estimating, and this error is repeated throughout the #NoEstimates community. It's time to put it right. 

Please go buy Software Engineering Metrics and Models; it's cheap and packed full of the mathematics needed to actually perform credible estimating on software intensive systems. And download the paper that followed, "A Software Metrics Survey." While you're at it, buy Estimating Software Intensive System of Systems and you too can start debunking the #NoEstimates hoax that decisions can be made in the presence of uncertainty without estimating the impact of those decisions.

The only way this can happen is if there is no uncertainty, the future is like the past, there is no risk - reducible or irreducible - and nothing changes.
 

 

 

Categories: Project Management

Change Fatigue, Tunnel Vision, and Watts Humphrey

Traffic in India

I recently spent a week in Mumbai. While stuck in traffic during a tour of some of the incredible sights, our guide stated that in Mumbai there were three certainties: death, taxes, and traffic. With the sound of auto and truck horns ringing in my ears, that statement rang true. On reflection, I would add change to the list of certainties, whether in Mumbai or as a general attribute of all human endeavors. Software development and maintenance are no different. Over the past few weeks, this blog has extolled and then pilloried the virtues of both big bang and incremental change approaches (and by inference everything in-between). In the end, there is no perfect approach that fits all scenarios. How can we decide which end of the change approach spectrum will work in any given scenario? The answer is not as straightforward as a checklist or decision tree; rather, three interrelated concepts must be weighed when deciding on a change approach. The three are the organization's propensity to fall prey to change fatigue, the possibility of tunnel vision, and the tolerance for dealing with Watts Humphrey's requirements uncertainty principle.

Change fatigue occurs when a high pace of change collides with a negative perception of the value or success of that change. One of the mantras of the software development industry is that the pace of technological change is fast and getting faster. One form of evidence of the pace of change is that nearly every person has to reinvent themselves at least once, and often many times. This constant change churn sets the stage for change fatigue to pop up at a moment's notice if people do not see the change they are working toward as successful or valuable (either to themselves directly or to their organization). In organizations with this smoldering change fatigue lurking below the surface, slowing down the rate of change by consolidating smaller (more incremental) changes can be a useful strategy.

Tunnel vision is a singular focus on one thing to the absolute exclusion of everything else. In medical terms, tunnel vision is not considered a positive. In the business and IT environment, the medical definition is often confused with the more sought-after concept of focus. While it is easy to see a relationship, they are not the same. When focus crosses the line and causes an organization to neglect or ignore other important priorities, the focus becomes destructive tunnel vision. One of the attributes of a big bang change program is that it is often too big to fail. When an organization develops tunnel vision and begins to ignore feedback, it has entered the danger zone. An organization's culture can be a powerful tool to predict whether tunnel vision is a major risk. Organizations that have an extreme fear of failure or pride themselves on persevering against all odds are apt to fall prey to tunnel vision. Early in my career, I worked for a firm that proudly stated that if any of their projects got into trouble they would "darken the skies with engineers." This almost always busted the budget and was career limiting. The two competing cultural components tended to cause leaders to block out negative feedback; having a vision had become tunnel vision. The managers and stakeholders put blinders on and ignored feedback until it was too late to correct their course. If an organization is at higher risk for tunnel vision due to its culture, incremental change approaches are a tool to create a mechanism to challenge the vision and to generate and interpret feedback.

Watts Humphrey, the founder of the Software Engineering Institute and contributor of some of the most significant thought leadership to software development over his lifetime, established the principle of requirements uncertainty. In its very simplest form, the principle can be stated as "they won't completely know it until they see it." The amount of uncertainty establishes how far into the future a project team or organization will comfortably commit to a direction. The more uncertainty, the shorter the amount of time into the future a team can commit. This requirements uncertainty principle screams incrementalism. As Todd Field suggested, a short commitment cycle (incrementalism) does not preclude a longer-term vision (watch the tunnel vision). Several frameworks for scaling Agile build this risk mitigation cycle into their methods. For example, the Scaled Agile Framework Enterprise (SAFe) is built around a process that recognizes uncertainty. SAFe includes a long-term roadmap for each product. The roadmap becomes less specific the further it extends into the future, reflecting uncertainty. Specific program increments (PIs) define what will be delivered over a 90(ish) day window, reflecting the greater certainty that comes from a short, fixed time horizon. In a further attempt to reduce uncertainty, the PI evolves based on the feedback and results from short sprints or iterations. SAFe represents a hybrid approach that mixes big bang and incremental concepts. The process provides organizations a way to define what they intend to deliver in the future while accepting that real life creates situations that need to be addressed. The more uncertainty a change program faces, the shorter the increment of commitment should be.

The state and culture of the organization or team can have a large impact on whether a big bang approach or an incremental approach makes sense. The most succinct answer I got when I asked which change approach made the most sense came from Lee Copeland, Talent Scout at TechWell: "depends."

 


Categories: Process Management

5 Tips for launching successful apps and games on Google Play

Android Developers Blog - Thu, 03/23/2017 - 19:24
Posted by Adam Gutterman, Go-To-Market Strategic Lead, Google Play Games

Last month at the Game Developers Conference (GDC), we held a developer panel focused on sharing best practices for building successful app and game businesses. Check out 5 tips for developers, both large and small, as shared by our gaming partners at Electronic Arts (EA), Hutch Games, Nix Hydra, Space Ape Games and Omnidrone.



1. Test, test, test
The best time to test is before you launch, so test boldly and test a lot! Nix Hydra recommends testing creative, including art style and messaging, as well as gameplay mechanics, onboarding flows and anything else you're not sure about. Gathering feedback from real users in advance of launching can highlight what's working and what can be improved to ensure your game's in the best shape possible at launch.
2. Store listing experiments
Run experiments on all of your store listing page assets. Taking bold risks instead of making assumptions allows you to see the impact of different variables with your actual user base on Google Play. Test in different regions to ensure your store listing page is optimized for each major market, as they often perform differently.

3. Early Access program

Space Ape Games recently used Early Access to test different onboarding experiences and gameplay control methods in their game. Finding the right combination led them to double-digit growth in D1 retention. Gathering these results in advance of launch helped the team fine tune and polish the game, minimizing risk before releasing to the masses.

"Early Access is cool because you can ask the big questions and get real answers from real players," Joe Raeburn, Founding Product Guy at Space Ape Games.
Watch the Android Developer Story below to hear how Omnidrone benefits from Early Access using strong user feedback to improve retention, engagement and monetization in their game.


Mobile game developer Omnidrone benefits from Early Access.
4. Pre-registration

Electronic Arts has run more than 5 pre-registration campaigns on Google Play. Pre-registration allows them to start marketing and build awareness for titles with a clear call-to-action before launch. This gives them a running start on launch day, having built a group of users to activate upon the game's release, resulting in a jump in D1 installs.

5. Seek feedback

All partners strongly recommended seeking feedback early and often. Feedback tells both sides of the story, by pointing out what's broken as well as what you're doing right. Find the right time and channels to request feedback, whether they be in-game, social, email, or even through reading and responding to reviews within the Google Play store.

If you're a startup who has an upcoming launch on Google Play or has launched an app or game recently and you're interested in opportunities like Early Access and pre-registration, get in touch with us so we can work with you.

Watch sessions from Google Developer Day at GDC17 on the Android Developers YT channel to learn tips for success. Also, visit the Android Developers website to stay up-to-date with features and best practices that will help you grow a successful business on Google Play.



Categories: Programming


Happy 10th Birthday Google Testing Blog!

Google Testing Blog - Wed, 03/22/2017 - 22:22
by Anthony Vallone

Ten years ago today, the first Google Testing Blog article was posted (official announcement 2 days later). Over the years, Google engineers have used this blog to help advance the test engineering discipline. We have shared information about our testing technologies, strategies, and theories; discussed what code quality really means; described how our teams are organized for optimal productivity; announced new tooling; and invited readers to speak at and attend the annual Google Test Automation Conference.

Google Testing Blog banner in 2007

The blog has enjoyed excellent readership. There have been over 10 million page views of the blog since it was created, and there are currently about 100 to 200 thousand views per month.

This blog is made possible by many Google engineers who have volunteered time to author and review content on a regular basis in the interest of sharing. Thank you to all the contributors and our readers!

Please leave a comment if you have a story to share about how this blog has helped you.

Categories: Testing & QA

Diverse protections for a diverse ecosystem: Android Security 2016 Year in Review

Android Developers Blog - Wed, 03/22/2017 - 18:49
Posted by Adrian Ludwig & Mel Miller, Android Security Team
Today, we're sharing the third annual Android Security Year In Review, a comprehensive look at our work to protect more than 1.4 billion Android users and their data.

Our goal is simple: keep our users safe. In 2016, we improved our abilities to stop dangerous apps, built new security features into Android 7.0 Nougat, and collaborated with device manufacturers, researchers, and other members of the Android ecosystem. For more details, you can read the full Year in Review report or watch our webinar.


Protecting you from PHAs
It's critical to keep people safe from Potentially Harmful Apps (PHAs) that may put their data or devices at risk. Our ongoing work in this area requires us to find ways to track and stop existing PHAs, and anticipate new ones that haven't even emerged yet.
Over the years, we've built a variety of systems to address these threats, such as application analyzers that constantly review apps for unsafe behavior, and Verify Apps which regularly checks users' devices for PHAs. When these systems detect PHAs, we warn users, suggest they think twice about downloading a particular app, or even remove the app from their devices entirely.

We constantly monitor threats and improve our systems over time. Last year's data reflected those improvements: Verify Apps conducted 750 million daily checks in 2016, up from 450 million the previous year, enabling us to reduce the PHA installation rate in the top 50 countries for Android usage.

Google Play continues to be the safest place for Android users to download their apps. Installs of PHAs from Google Play decreased in nearly every category:
  • Now 0.016 percent of installs, trojans dropped by 51.5 percent compared to 2015
  • Now 0.003 percent of installs, hostile downloaders dropped by 54.6 percent compared to 2015
  • Now 0.003 percent of installs, backdoors dropped by 30.5 percent compared to 2015
  • Now 0.0018 percent of installs, phishing apps dropped by 73.4 percent compared to 2015
By the end of 2016, only 0.05 percent of devices that downloaded apps exclusively from Play contained a PHA; down from 0.15 percent in 2015.

Still, there's more work to do for devices overall, especially those that install apps from multiple sources. While only 0.71 percent of all Android devices had PHAs installed at the end of 2016, that was a slight increase from about 0.5 percent in the beginning of 2015. Using improved tools and the knowledge we gained in 2016, we think we can reduce the number of devices affected by PHAs in 2017, no matter where people get their apps.
New security protections in Nougat
Last year, we introduced a variety of new protections in Nougat, and continued our ongoing work to strengthen the security of the Linux Kernel.
  • Encryption improvements: In Nougat, we introduced file-based encryption which enables each user profile on a single device to be encrypted with a unique key. If you have personal and work accounts on the same device, for example, the key from one account can't unlock data from the other. More broadly, encryption of user data has been required for capable Android devices since late 2014, and we now see that feature enabled on over 80 percent of Android Nougat devices.
  • New audio and video protections: We did significant work to improve security and re-architect how Android handles video and audio media. One example: We now store different media components into individual sandboxes, where previously they lived together. Now if one component is compromised, it doesn't automatically have permissions to other components, which helps contain any additional issues.
  • Even more security for enterprise users: We introduced a variety of new enterprise security features including "Always On" VPN, which protects your data from the moment your device boots up and ensures it isn't traveling from a work phone to your personal device via an insecure connection. We also added security policy transparency, process logging, improved wifi certification handling, and client certification improvements to our growing set of enterprise tools.
Working together to secure the Android ecosystem
Sharing information about security threats between Google, device manufacturers, the research community, and others helps keep all Android users safer. In 2016, our biggest collaborations were our monthly security updates program and ongoing partnership with the security research community.

Security updates are regularly highlighted as a pillar of mobile security - and rightly so. We launched our monthly security updates program in 2015, following the public disclosure of a bug in Stagefright, to help accelerate patching security vulnerabilities across devices from many different device makers. This program expanded significantly in 2016:
  • More than 735 million devices from 200+ manufacturers received a platform security update in 2016.
  • We released monthly Android security updates throughout the year for devices running Android 4.4.4 and up - that accounts for 86.3 percent of all active Android devices worldwide.
  • Our carrier and hardware partners helped expand deployment of these updates, releasing updates for over half of the top 50 devices worldwide in the last quarter of 2016.
We provided monthly security updates for all supported Pixel and Nexus devices throughout 2016, and we're thrilled to see our partners invest significantly in regular updates as well. There's still a lot of room for improvement however. About half of devices in use at the end of 2016 had not received a platform security update in the previous year. We're working to increase device security updates by streamlining our security update program to make it easier for manufacturers to deploy security patches and releasing A/B updates to make it easier for users to apply those patches.

On the research side, our Android Security Rewards program grew rapidly: we paid researchers nearly $1 million for their reports in 2016. In parallel, we worked closely with various security firms to identify and quickly fix issues that may have posed risks to our users.

We appreciate all of the hard work by Android partners, external researchers, and teams at Google that led to the progress the ecosystem has made with security in 2016. But it doesn't stop there. Keeping you safe requires constant vigilance and effort. We're looking forward to new insights and progress in 2017 and beyond.
Categories: Programming


Doors Now Open to the Better User Stories Advanced Video Training

Mike Cohn's Blog - Wed, 03/22/2017 - 13:05

This past week we’ve given away free online training and a number of resources to help you combat some of the most vexing problems agile teams encounter when writing user stories.

Now it’s time to open the doors to the full course: Better User Stories.

“In my 30 years of IT experience, this class has without question provided the most ‘bang for buck’ of any previous training course I have ever attended. If you or your organization are struggling with user stories, then this class is absolutely a must have. I simply can’t recommend it enough. 5 Stars!!” - Douglas Tooley

If you watched and enjoyed the free videos, you’ll love Better User Stories. It’s much more in-depth, with 9 modules of advanced training, worksheets, lesson transcripts, audio recordings, bonus materials, and quizzes to help cement the learning.

Registration for Better User Stories will only be open for one week

Click here to read more about the course and reserve your seat.

Because of the intense level of interest in this course, we're expecting a large number of people to sign up. That's why we're only opening the doors for one week, so that we have the time and resources to get everyone settled.

If demand is even higher than we expect, we may close the doors early, so if you already know you’re interested, the next step is to:

Choose one of 3 levels of access. Which is right for you?

I know when it comes to training, everyone has different needs, objectives, learning preferences and budgets.

That’s why you can choose from 3 levels of access when you register:

  • Professional - Get the full course with lifetime access to all materials and any future upgrades
  • Expert Access - Acquire the full course and become part of the Better User Stories online community, where you can discuss ideas, share tips and submit questions to live Q+A calls with Mike
  • Work With Mike - Secure all of the above, plus private, 1:1 time with Mike to work through any specific issues or challenges.

Click here to choose the best level for your situation

What people are already saying

We recently finished a beta launch where a number of agilists worked through all 9 modules, providing feedback along the way. This let us tweak, polish and finish the course to make it even more practical and valuable.

Here’s what people had to say:

Anne Aaroe

Thank you for an amazing course. Better User Stories is by far the best course I have had since I started my agile journey back in 2008.

Anne Aaroe

Packed full of humor, stories, and exercises the course is easy to take at one’s own leisure. Mike Cohn has a way of covering complex topics such as splitting user stories with easy to understand acronyms, charts and reinforces these concepts with quizzes and homework that really bring the learning objectives to life. So, whether you’re practicing scrum or just looking to learn more about user stories this course will provide you the roadmap needed to improve at any experience level, at a cost that everyone can appreciate.

Aaron Corcoran

Click here to read a full description of the course, and what you get with each of the 3 levels of access. Questions about the course?

Let me know in the comments below.

Docker container secrets on AWS ECS

Xebia Blog - Wed, 03/22/2017 - 08:42

Almost every application needs some kind of secret or secrets to do its work. There are all kinds of ways to provide these to containers, but it all comes down to the following five: save the secrets inside the image, provide the secrets through ENV variables, provide the secrets through volume mounts, use a secrets […]
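As a taste of the simplest of those options, the ENV-variable route just means the container process reads its environment at startup; a minimal sketch (the variable name is made up, and how it gets injected - ECS task definition, docker run -e, and so on - is up to you):

    public class SecretFromEnv {
        public static void main(String[] args) {
            // DB_PASSWORD is a hypothetical variable injected into the container environment;
            // the application never hard-codes the value or bakes it into the image.
            String dbPassword = System.getenv("DB_PASSWORD");
            if (dbPassword == null || dbPassword.isEmpty()) {
                throw new IllegalStateException("DB_PASSWORD is not set");
            }
            System.out.println("Secret loaded (" + dbPassword.length() + " characters)");
        }
    }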

The post Docker container secrets on AWS ECS appeared first on Xebia Blog.

O-MG, the Developer Preview of Android O is here!

Android Developers Blog - Wed, 03/22/2017 - 01:55

Posted by Dave Burke, VP of Engineering

Since the first launch in 2008, the Android project has thrived on the incredible feedback from our vibrant ecosystems of app developers and device makers, as well as of course our users. More recently, we've been pushing hard on improving our engineering processes so we can share our work earlier and more openly with our partners.

So, today, I'm excited to share a first developer preview of the next version of the OS: Android O. The usual caveats apply: it's early days, there are more features coming, and there's still plenty of stabilization and performance work ahead of us. But it's booting :).

Over the course of the next several months, we'll be releasing updated developer previews, and we'll be doing a deep dive on all things Android at Google I/O in May. In the meantime, we'd love your feedback on trying out new features, and of course testing your apps on the new OS.

What's new in O?

Android O introduces a number of new features and APIs to use in your apps. Here are just a few new things for you to start trying in this first Developer Preview:

Background limits: Building on the work we began in Nougat, Android O puts a big priority on improving a user's battery life and the device's interactive performance. To make this possible, we've put additional automatic limits on what apps can do in the background, in three main areas: implicit broadcasts, background services, and location updates. These changes will make it easier to create apps that have minimal impact on a user's device and battery. Background limits represent a significant change in Android, so we want every developer to get familiar with them. Check out the documentation on background execution limits and background location limits for details.

Notification channels: Android O also introduces notification channels, which are new app-defined categories for notification content. Channels let developers give users fine-grained control over different kinds of notifications — users can block or change the behavior of each channel individually, rather than managing all of the app's notifications together.

Notification channels let users control your app's notification categories

Android O also adds new visuals and grouping to notifications that make it easier for users to see what's going on when they have an incoming message or are glancing at the notification shade.
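As a rough illustration of the shape of the channels API (a minimal sketch against the O preview; the channel id and strings are invented, and preview APIs can still change before the final release):

    import android.app.Notification;
    import android.app.NotificationChannel;
    import android.app.NotificationManager;
    import android.content.Context;

    public class ChannelDemo {
        static void notifyOnChannel(Context context) {
            NotificationManager nm = context.getSystemService(NotificationManager.class);

            // Register the channel once; users can later block or tweak just this category.
            NotificationChannel channel = new NotificationChannel(
                    "chat_messages",                       // hypothetical channel id
                    "Chat messages",                       // user-visible channel name
                    NotificationManager.IMPORTANCE_DEFAULT);
            nm.createNotificationChannel(channel);

            // Notifications built on O name the channel they belong to.
            Notification notification = new Notification.Builder(context, "chat_messages")
                    .setSmallIcon(android.R.drawable.ic_dialog_info)
                    .setContentTitle("New message")
                    .setContentText("Hello from the O preview")
                    .build();
            nm.notify(1, notification);
        }
    }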

Autofill APIs: Android users already depend on a range of password managers to autofill login details and repetitive information, which makes setting up new apps or placing transactions easier. Now we are making this work more easily across the ecosystem by adding platform support for autofill. Users can select an autofill app, similar to the way they select a keyboard app. The autofill app stores and secures user data, such as addresses, user names, and even passwords. For apps that want to handle autofill, we're adding new APIs to implement an Autofill service.

PIP for handsets and new windowing features: Picture in Picture (PIP) display is now available on phones and tablets, so users can continue watching a video while they're answering a chat or hailing a car. Apps can put themselves in PiP mode from the resumed or a pausing state where the system supports it - and you can specify the aspect ratio and a set of custom interactions (such as play/pause). Other new windowing features include a new app overlay window for apps to use instead of system alert window, and multi-display support for launching an activity on a remote display.
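From an activity, the opt-in looks roughly like this (again a sketch against the preview APIs; the 16:9 ratio is only an example, and the activity also has to declare android:supportsPictureInPicture="true" in the manifest):

    import android.app.Activity;
    import android.app.PictureInPictureParams;
    import android.util.Rational;

    public class PipDemoActivity extends Activity {
        // Call when the user navigates away while, say, a video is playing.
        void enterPip() {
            PictureInPictureParams params = new PictureInPictureParams.Builder()
                    .setAspectRatio(new Rational(16, 9))   // example aspect ratio for the PiP window
                    .build();
            enterPictureInPictureMode(params);
        }
    }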

Font resources in XML: Fonts are now a fully supported resource type in Android O. Apps can now use fonts in XML layouts as well as define font families in XML — declaring the font style and weight along with the font files.

Adaptive icons: To help you integrate better with the device UI, you can now create adaptive icons that the system displays in different shapes, based on a mask selected by the device. The system also animates interactions with the icons, and uses them in the launcher, shortcuts, Settings, sharing dialogs, and in the overview screen.

Adaptive icons display in a variety of shapes across different device models.

Wide-gamut color for apps: Android developers of imaging apps can now take advantage of new devices that have a wide-gamut color capable display. To display wide gamut images, apps will need to enable a flag in their manifest (per activity) and load bitmaps with an embedded wide color profile (AdobeRGB, Pro Photo RGB, DCI-P3, etc.).

Connectivity: For the ultimate in audio fidelity, Android O now also supports high-quality Bluetooth audio codecs such as the LDAC codec. We're adding new Wi-Fi features as well, like Wi-Fi Aware, previously known as Neighbor Awareness Networking (NAN). On devices with the appropriate hardware, apps and nearby devices can discover and communicate over Wi-Fi without an Internet access point. We're working with our hardware partners to bring Wi-Fi Aware technology to devices as soon as possible.

The Telecom framework is extending ConnectionService APIs to enable third-party calling apps to integrate with System UI and operate seamlessly with other audio apps. For instance, apps can have their calls displayed and controlled in different kinds of UIs such as car head units.

Keyboard navigation: With the advent of Google Play apps on Chrome OS and other large form factors, we're seeing a resurgence of keyboard navigation use within these apps. In Android O we focused on building a more reliable, predictable model for "arrow" and "tab" navigation that aids both developers and end users.

AAudio API for Pro Audio: AAudio is a new native API that's designed specifically for apps that require high-performance, low-latency audio. Apps using AAudio read and write data via streams. In the Developer Preview we're releasing an early version of this new API to get your feedback.

WebView enhancements: In Android Nougat we introduced an optional multiprocess mode for WebView that moved the handling of web content into an isolated process. In Android O, we're enabling multiprocess mode by default and adding an API to let your app handle errors and crashes, for enhanced security and improved app stability. As a further security measure, you can now opt in your app's WebView objects to verify URLs through Google Safe Browsing.

Java 8 Language APIs and runtime optimizations: Android now supports several new Java Language APIs, including the new java.time API. In addition, the Android Runtime is faster than ever before, with improvements of up to 2x on some application benchmarks.
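If you've been waiting on java.time in particular, it reads the same as it does on a desktop JVM; a small sketch (the dates are just examples):

    import java.time.Duration;
    import java.time.LocalDate;
    import java.time.ZoneId;
    import java.time.ZonedDateTime;

    public class JavaTimeDemo {
        public static void main(String[] args) {
            ZonedDateTime now = ZonedDateTime.now(ZoneId.of("America/Los_Angeles"));
            LocalDate postDate = LocalDate.of(2017, 3, 22);   // the date on this post
            System.out.println("Now: " + now);
            System.out.println("Days since this post: "
                    + Duration.between(postDate.atStartOfDay(now.getZone()), now).toDays());
            System.out.println("One hour from now: " + now.plus(Duration.ofHours(1)));
        }
    }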

Partner platform contributions: Hardware manufacturers and silicon partners have accelerated fixes and enhancements to the Android platform in the O release. For example, Sony has contributed more than 30 feature enhancements including the LDAC codec and 250 bug fixes to Android O.

Get started in a few simple steps

First, make your app compatible to give your users a seamless transition to Android O. Just download a device system image or emulator system image, install your current app, and test -- the app should run and look great, and handle behavior changes properly. After you've made any necessary updates, we recommend publishing to Google Play right away without changing the app's platform targeting.

Building with Android O

When you're ready, dive in to O in depth to learn about everything you can take advantage of for your app. Visit the O Developer Preview site for details on the preview timeline, behavior changes, new APIs, and support resources.

Plan how your app will support background limits and other changes. Try out some of the great new features in your app -- notification channels, PIP, adaptive icons, font resources in XML, autosizing TextView, and many others. To make it easier to explore the new APIs in Android O, we've brought the API diff report online, along with the Android O API reference.

The latest canary version of Android Studio 2.4 includes new features to help you get started with Android O. You can download and set up the O preview SDK from inside Android Studio, then use Android O's XML font resources and autosizing TextView in the Layout Editor. Watch for more Android O support coming in the weeks ahead.

We're also releasing an alpha version of the 26.0.0 support library for you to try. This version adds a number of new APIs and increases the minSdkVersion to 14. Check out the release notes for details.

Preview updates

The O Developer Preview includes an updated SDK with system images for testing on the official Android Emulator and on Nexus 5X, Nexus 6P, Nexus Player, Pixel, Pixel XL and Pixel C devices. If you're building for wearables, there's also an emulator for testing Android Wear 2.0 on Android O.

We plan to update the preview system images and SDK regularly throughout the O Developer Preview. This initial preview release is for developers only and not intended for daily or consumer use, so we're making it available by manual download and flash only. Downloads and instructions are here.

As we get closer to a final product, we'll be inviting consumers to try it out as well, and we'll open up enrollments through Android Beta at that time. Stay tuned for details, but for now please note that Android Beta is not currently available for Android O.

Give us your feedback

As always, your feedback is crucial, so please let us know what you think — the sooner we hear from you, the more of your feedback we can integrate. When you find issues, please report them here. We've moved to a more robust tool, Issue Tracker, which is also used internally at Google to track bugs and feature requests during product development. We hope you'll find it easier to use.

Categories: Programming