
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


The No Estimates principle: The importance of knowing when you are wrong

Software Development Today - Vasco Duarte - Tue, 09/23/2014 - 04:00

You started the project. You spent hours, no: days! estimating the project. The project starts and your confidence in its success is high.

Everything goes well at the start, but at some point you find the project is late. What happened? How can you be wrong about estimates?

This story is very common in software projects. So common, in fact, that I bet you have lived through it many times yourself. I know I have!

Let’s get over it. We’re always wrong about estimation. Sometimes by more, sometimes by less, and very, very rarely are we wrong in a way that makes us happy: we overestimated something and can deliver the project ahead of (the inflated?) schedule.

We’re always wrong about estimation.

Being wrong about estimates is the status quo. Get over it. Now let’s take advantage of being wrong! You can save the project by being wrong. Here’s why...

The art of being wrong about software estimates

Knowing you are wrong about your estimates is not difficult after the fact, when you compare estimates to actuals. The difficult part is to make a prediction in a way that can be tested regularly, and very early on - when you still have time to change the project.

Software project estimates, as they are usually done, delay the feedback on “on time” performance to a point in time when there’s very little we can do about it. Goldratt grasped this problem and made a radical suggestion: cut all estimates in half, and use the recovered time as a project buffer. For example, a task estimated at four weeks is scheduled for two, with the other two weeks pooled into a shared buffer at the end of the project. Pretty crazy, huh? Well, it worked, because it forced projects to face their failures much earlier than they otherwise would. Failing to meet a deadline early in the life-cycle of the project gave them a very powerful tool in project management: time to react!

The #NoEstimates approach to being wrong...and learning from it

In this video I briefly explain how I make predictions about a possible release date for the project based on available data. Once I make a release date prediction, I validate it as soon as possible, and typically every week. This approach allows me to learn early enough when I’m wrong and then adjust the project as needed.

We’re always wrong, the important thing is to find out how wrong, as early as possible

After each delivery (whether it is a feature or a timebox like a sprint), I update my prediction for the release date of the project based on the lead time or throughput rate so far. After updating the release date projection, I can see whether it has changed enough to require a reaction by the project team. I can make this update to the project schedule without gathering the whole team (or "the chosen ones") into a room for an ungodly long estimation meeting.

If the date has not moved outside the original interval, or if the delivery rate is stable (see the video), then I don’t need to react.

When the release date projection moves outside the original interval, or the throughput rate becomes unstable (did you see the video?), then you need to react: first to investigate the situation, and later, if needed, to adjust the parameters of your project.
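To make that concrete, here is a minimal sketch of such a throughput-based projection, in Java with made-up numbers (it is not the exact calculation from the video): divide the remaining backlog by the average weekly throughput observed so far, and re-check the projected date after every delivery.

import java.time.LocalDate;

public class ReleaseProjection {
    // Project a release date from the weekly throughput (items finished per week) observed so far.
    static LocalDate project(int remainingItems, int[] weeklyThroughput, LocalDate today) {
        int delivered = 0;
        for (int week : weeklyThroughput) delivered += week;
        double itemsPerWeek = (double) delivered / weeklyThroughput.length;
        long weeksLeft = (long) Math.ceil(remainingItems / itemsPerWeek);
        return today.plusWeeks(weeksLeft);
    }

    public static void main(String[] args) {
        // Hypothetical data: 42 backlog items left, five weeks of delivery history.
        LocalDate projected = project(42, new int[] {4, 6, 5, 3, 5}, LocalDate.now());
        System.out.println("Projected release: " + projected);
        // Re-run this after each delivery and react only when the projection
        // moves outside the originally agreed interval.
    }
}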

Conclusion

The #NoEstimates approach I advocate will allow you to know when the project has changed enough to warrant a reaction. I make a prediction, and (at least) every week I review that prediction and take action.

Estimates, done the traditional way, also give you this information, but too late. This happens because of the big-batch thinking that reliance on estimates enables (larger work items seem acceptable if you estimate them), and because of the delayed dependency integration it allows (estimated projects typically let dependent teams work separately, trusting the agreed plan).

The #NoEstimates approach I advocate has one goal: to reduce the feedback cycle. These short feedback cycles will allow you to recognise early enough how wrong you were about your predictions, so you can make the necessary adjustments!

Picture credit: John Hammink, follow him on twitter

New D-Series of Azure VMs with 60% Faster CPUs, More Memory and Local SSD Disks

ScottGu's Blog - Scott Guthrie - Mon, 09/22/2014 - 19:19

Today I’m excited to announce that we just released a new set of VM sizes for Microsoft Azure. These VM sizes are now available to be used immediately by every Azure customer.

The new D-Series of VMs can be used with both Azure Virtual Machines and Azure Cloud Services.  In addition to offering faster vCPUs (approximately 60% faster than our A series) and more memory (up to 112 GB), the new VM sizes also all have a local SSD disk (up to 800 GB) to enable much faster IO reads and writes.

The new VM sizes available today include the following:

General Purpose D-Series VMs

Name         vCores  Memory (GB)  Local SSD Disk (GB)
Standard_D1  1       3.5          50
Standard_D2  2       7            100
Standard_D3  4       14           200
Standard_D4  8       28           400

High Memory D-Series VMs

Name          vCores  Memory (GB)  Local SSD Disk (GB)
Standard_D11  2       14           100
Standard_D12  4       28           200
Standard_D13  8       56           400
Standard_D14  16      112          800

For pricing information, please see Virtual Machine Pricing Details.

Local SSD Disk and SQL Server Buffer Pool Extensions

A temporary drive on the VMs (D:\ on Windows, /mnt or /mnt/resource on Linux) is mapped to the local SSDs exposed on the D-Series VMs, and provides a really good option for replicated storage workloads, like MongoDB, or for significantly increasing the performance of SQL Server 2014 by enabling its unique Buffer Pool Extensions (BPE) feature.

SQL Server 2014’s Buffer Pool Extensions feature allows you to extend the SQL Engine Buffer Pool onto local SSD disks to significantly improve the performance of SQL workloads. The Buffer Pool is a global memory resource used to cache data pages for much faster read operations. Without any code changes in your application, you can enable buffer pool support on the SSDs of the D-Series VMs using a simple T-SQL statement:

ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'D:\SSDCACHE\EXAMPLE.BPE', SIZE = <size> [ KB | MB | GB ]);

No code changes are required in your application, and all write operations will continue to be durably persisted to VM drives backed by Azure Storage. More details on configuring and using BPE can be found here.

Start Using the D-Series VMs Today

You can start using the new D-Series VM sizes immediately. They can be easily created and used via both the current Azure Management Portal and the Preview Portal, as well as from the Azure management command-line tools, scripts, and APIs.

To learn more about the D-Series please read this post which has even more details about them, as well as check out the Azure documentation center.

Hope this helps,

Scott

Categories: Architecture, Programming

Xebia KnowledgeCast Episode 4: Scrum Day Europe 2013, OpenSpace Knowledge Exchange, and Fun With Stickies!

Xebia Blog - Mon, 09/22/2014 - 16:44

The Xebia KnowledgeCast is a bi-weekly podcast about software architecture, software development, lean/agile, continuous delivery, and big data. Also, we'll have some fun with stickies!

In this fourth episode, we share some impressions of Scrum Day Europe 2013 and Xebia's OpenSpace Knowledge Exchange. And of course, Serge Beaumont will have Fun With Stickies! First, we interview Frank Bakker and Evelien Roos at Scrum Day Europe 2013. Then, Adriaan de Jonge and Jeroen Leenarts talk about continuous delivery and iOS development at the OpenSpace XKE. And in between, Serge Beaumont has Fun With Stickies!

Frank Bakker and Evelien Roos give their first impressions of the Keynotes at Scrum Day Europe 2013. Yes, that was last year, I know. New, more current interviews are coming soon. In fact, this is the last episode in which I use interviews that were recorded last year.

In this episode's Fun With Stickies, Serge Beaumont talks about hypothesis stories. Using those ensures you keep your Agile really agile. A very relevant topic, in my opinion, and it gels nicely with my missing line of the Agile Manifesto: Experimentation over implementation!

Adriaan de Jonge explains how automation in general, and test automation in particular, is useful for continuous delivery. He warns we should focus on the process and customer interaction, not the tool(s). That's right before I can't help myself and ask him which tool to use.

Jeroen Leenarts talks about iOS development. Listening to the interview, which was recorded a year ago, it's amazing to realize that, with the exception of iOS 8 having come out in the meantime, all of Jeroen's comments are as relevant today as they were last year. How's that for a world-class developer!

Want to subscribe to the Xebia KnowledgeCast? Subscribe via iTunes, or use our direct rss feed.

Your feedback is appreciated. Please leave your comments in the show notes. Better yet, use the Auphonic recording app to send in a voice message as an AIFF, WAV, or FLAC file so we can put you ON the show!


The Caffeinated Coder: Is Caffeine Good or Bad?

Making the Complex Simple - John Sonmez - Mon, 09/22/2014 - 15:00

For a long time I’ve wondered about the benefits or detriments of caffeine. I’ve always been one of those coffee drinkers who didn’t have to have coffee, but drank it when it was available. I’ve never really noticed how caffeine affected me, because I never really paid that much attention. But, I’ve always been curious, […]

The post The Caffeinated Coder: Is Caffeine Good or Bad? appeared first on Simple Programmer.

Categories: Programming

F#unctional Londoners 2014

Phil Trelford's Array - Sun, 09/21/2014 - 21:40

2014 has been another crazy year for the F#unctional Londoners meetup with over 20 sessions already. Thanks to our hosts Skills Matter we’ve been able to hold a meetup roughly once every 2 weeks.

Here’s a rundown of the year so far and what’s coming up.

January

Ross kicked off the year with a deep dive into his LINQ-enabled erasing SQL Type Provider.

Following on, in May, Ross left the sunny shores of Southend to tour the US east coast with the talk, covering NYC, Washington DC and Nashville along the way.


First seen at DunDDD in Dundee, Anthony’s excellent talk went on to be featured at CodeMesh London too.

With F# built-in to Xamarin Studio you can easily target iOS, Android and Mac.

February

Tomas returned to London to talk about his work on Deedle while at Blue Mountain Capital in New York.

As a follow on from the talk Tomas ran a hands on session using Deedle to explore world climate, the titanic, stock market trends and finally US debt.

March

There was a huge turnout for Scott’s hugely informative and at times somewhat amusing talk first seen at NDC London.


Eirik Tsarpalis and Jan Dzik, from Nessos, presented their work on MBrace, a programming model and cluster infrastructure for effectively defining and executing large-scale computations in the cloud.

In this hands on treasure hunt session, Tomas presented a series of data extraction tasks using type providers to find words to build a sentence.

April

Rob Lyndon introduced Deep Belief Networks and his GPU based implementation in Vulpes. This talk was repeated last week at the prestigious Strangeloop conference in St Louis!

May

Michael travelled up from Brighton for a hands on session on building type providers. Type Providers are a hot topic in the London group with a number of popular type providers produced by members including FSharp.Data, SQLProvider and Azure Storage.

Mixing biology and physics to understand stem cells and cancer (video)

Ben Hall from Microsoft Research Cambridge gave a fascinating talk about his work with a hybrid simulator in F# to explore how stem cells grow (and some worms!).

Stephen Channell gave a repeat of his excellent talk featured at FP Days and the F# in Finance conference on liquidity risk.

Ian was in town to run a session at the Progressive .Net Tutorials and gave a repeat of his excellent talk from DDD North.

June

F#unctional Londoners regular Isaac, aka the Cockney Coder, talked about his professional work with Azure including his Azure Storage type provider.

In this hands on session we used the material from Mathias Brandewinder’s session in San Francisco to have some fun drawing fractal trees.

In this session Gabriele Cocco talked about his work on FSCL, an F# to OpenCL compiler.

July

Borrowing material from Mathias again, we built a 2048 bot using the open source web testing library Canopy.



Grant popped down from Leeds to run a fun code golf session where the aim was to complete a task with the fewest characters.

August

Phil Nash talked about how he was using F# scripting at work alongside some of his C++ projects.

In this hands on session we looked at the popular parser combinator library FParsec, building a mini-Logo parser and interpreter.

September

James popped down from Edinburgh to talk about his work with Philip Wadler on the open source project FSharp.Linq.ComposableQuery.

Goswin Rothenthal talked about his work using F# scripting in the design of the Abu Dhabi Louvre building.

Coming up this Wednesday we have Evelina talking about some of her data science work at Cambridge.

November

On November 6-7th the Progressive F# Tutorials make a return with expert speakers including Don Syme, Tomas Petricek, Mark Seemann, Andrea Magnorsky, Michael Newton, Jérémie Chassaing, Mathias Brandewinder, Scott Wlaschin and Robert Pickering.

Don’t miss the special offer that runs up to the end of Evelina’s talk, giving a 20% discount to members and bringing the price down to a barmy 200 GBP; use code F#UNCTIONAL-20.

Categories: Programming

McKinsey on Unleashing the Value of Big Data Analytics

Big Data Analytics and Insights are changing the game, as more businesses introduce automated systems to support human judgment.

Add to this advanced visualizations of Big Data, throw in some power tools for motivated users, and you have a powerful way to empower the front line to better analyze, predict, and serve their customers.

McKinsey shares a framework and their insights on how advanced analytics can create and unleash new business value from Big Data, in their article:
Unleashing the value of advanced analytics in insurance

Creating World-Class Capabilities

The exciting part is how you can create a new world-class capability, as you bake Big Data Analytics and Insights into your business.

Via Unleashing the value of advanced analytics in insurance:

“Weaving analytics into the fabric of an organization is a journey. Every organization will progress at its own pace, from fragmented beginnings to emerging influence to world-class corporate capability.”

5-Part Framework for Unleashing the Value of Big Data Analytics

McKinsey's transformation framework involves five components: the source of business value, the data ecosystem, modeling the insights, work-flow integration, and adoption.

Via Unleashing the value of advanced analytics in insurance:

1. The source of business value. Every analytics project should start by identifying the business value that can lead to revenue growth and increased profitability (for example, selecting customers, controlling operating expenses, lowering risk, or improving pricing).

2. The data ecosystem. It is not enough for analytics teams to be “builders” of models. These advanced-analytics experts also need to be “architects” and “general contractors” who can quickly assess what resources are available inside and outside the company.

3. Modeling insights. Building a robust predictive model has many layers: identifying and clarifying the business problem and source of value, creatively incorporating the business insights of everyone with an informed opinion about the problem and the outcome, reducing the complexity of the solution path, and validating the model with data.

4. Transformation: Work-flow integration. The goal is always to design the integration of new decision-support tools to be as simple and user friendly as possible. The way analytics are deployed depends on how the work is done. A key issue is to determine the appropriate level of automation. A high-volume, low-value decision process lends itself to automation.

5. Transformation: Adoption. Successful adoption requires employees to accept and trust the tools, understand how they work, and use them consistently. That is why managing the adoption phase well is critical to achieving optimal analytics impact. All the right steps can be made to this point, but if frontline decision makers do not use the analytics the way they are intended to be used, the value to the business evaporates.

Big Data Analytics and Insights is a hot trend for good reason.  If you saw the movie Moneyball you know why.

Businesses are using analytics to identify their most profitable customers and offer them the right price, accelerate product innovation, optimize supply chains, and identify the true drivers of financial performance.

In the book, Competing on Analytics: The New Science of Winning, Thomas H. Davenport and Jeanne G. Harris share examples of how organizations like Amazon, Barclay’s, Capital One, Harrah’s, Procter & Gamble, Wachovia, and the Boston Red Sox, are using the power of Big Data Analytics and Insights to achieve new levels of performance and compete in the digital economy.

You can read it pretty quickly to get a good sense of how analytics can be used to change the business, and the more you expose yourself to the patterns, the more you can apply analytics to your own work and life.

You Might Also Like

10 High-Value Activities in the Enterprise

Cloud Changes the Game from Deployment to Adoption

Management Innovation is at the Top of the Innovation Stack

Categories: Architecture, Programming

Hands-on Test Automation Tools session wrap up - Part1

Xebia Blog - Sun, 09/21/2014 - 15:57

Last week we had our first Hands-on Test Automation sessions.
Developers and Testers were challenged to show and tell their experiences in Test Automation.
That resulted in lots of in depth discussions and hands-on Test Automation Tool shoot-outs.

In this blog post we'll share the outcome of the different sessions, like the famous Cucumber vs. FitNesse debate.

Stay tuned for upcoming updates!

Test Automation Frameworks

The following Test Automation Frameworks were demoed and discussed:

1. FitNesse

FitNesse is a test management and execution tool.
You'll have to write or use fixture code if you want to use Selenium / WebDriver, web services, or databases in your tests.

Pros and Cons
You can drill down nicely into test results.
You can make use of scenarios and scenario libraries to make test automation steps reusable.
But refactoring is hard when scenarios are used extensively, since there is no IDE support (yet).

2. Cucumber

Cucumber is a specification tool which describes how software should behave.
You'll have to write or use step definitions if you want to use Selenium / WebDriver, web services, or databases in your tests.

Pros and Cons
Cucumber forces you to write specifications / tests as scenarios (behaviour in human-readable language).
You can drill down into test results, but you'll need reporting libraries like Cucumber Reporting.
We recommend using IntelliJ IDEA with the Cucumber plugin, since it supports Cucumber seamlessly.
Refactoring becomes less problematic since you're using an IDE.

3. Selenium / WebDriver IDE

Selenium / WebDriver automates human interactions with a web browser.
With the Selenium IDE you can record and play back your tests in Firefox.

Pros and Cons
It can get you started very quickly: you can record and play back your test scripts without writing any code.
Reusability of test automation code is not possible; you'll need to export your scripts into an IDE to introduce reusability.

Must haves in Test Automation Tools

During the parallel sessions we've collected the following must haves in test automation tools.

Testers and Developers becoming best friends

When developers do not feel comfortable with the test automation tool, testers will try to fill the gap all by themselves. Most of the time these efforts result in hard to maintain test automation code. At some point in time test automation will become a bottleneck in Continuous Delivery. When picking a test automation tool consider each other's needs and pick a tool together. Feeling comfortable in writing and maintaining test automation code is really important to make test automation beneficial.

Separating What from How

Tools like FitNesse and Cucumber were designed to separate the What from the How in test automation. When you combine both in those tools, you'll end up getting lost in details and lose focus on what you are testing.
Use tools like FitNesse and Cucumber to describe What you are testing, and put all details about How you are testing it in test automation code (like fixture code and step definitions), as in the sketch below.
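As an illustration of that split, here is a minimal cucumber-jvm sketch; the Gherkin step, the class, and the BasketPage page object are hypothetical, not taken from the sessions. The feature file only states the What, while the step definition owns the How.

import cucumber.api.java.en.When;
import static org.junit.Assert.assertTrue;

// The feature file holds the What, e.g.:
//   When the customer adds "2048" to the basket
// This step definition holds the How:
public class BasketStepDefinitions {

    private final BasketPage basketPage = new BasketPage(); // hypothetical page object hiding WebDriver details

    @When("^the customer adds \"([^\"]*)\" to the basket$")
    public void theCustomerAddsToTheBasket(String product) {
        basketPage.add(product);                  // all browser interaction stays here,
        assertTrue(basketPage.contains(product)); // not in the feature file
    }
}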

Other interesting tools
  • Thucydides: Reporting tests and results (including functional coverage)
  • Vagrant: Provisioning System Under Test instances
  • Liquibase: Treating database schema changes as 'code'

Stay tuned for upcoming updates!

 

FsiBot: Assorted Tweets

Phil Trelford's Array - Sat, 09/20/2014 - 17:23

FsiBot is a cloud-hosted bot, built by Mathias Brandewinder, that evaluates F# expressions in mentions. Underneath it uses the Twitter API and the F# Compiler Services, all hosted on Azure.

In the beginning the F# community put a lot of effort into trying to bring it down, testing its security. Nowadays it has become more of a creative outlet for code golf enthusiasts, showcasing all sorts of things from ASCII art to math.

Christmas Tree

Tomas’s tree first appeared before @fsibot was cool or for that matter even existed:

@tomaspetricek " ++ ++++ ++++0+ +++0++++ ++0++++++0 +0++++++0+++ 0++++++0++++++ [...]

— fsibot (@fsibot) August 25, 2014

Circle

An early attempt at ASCII art; the trick on Twitter is to use characters that are roughly the same width:

@ptrelford " ────█──── ──█████── ─███████─ ─███████─ █████████ ─███████─ ─███████─ ──█████── ────█──── "

— fsibot (@fsibot) August 31, 2014

Invader

Along similar lines this expression uses a bitmap to produce an ASCII invader:

@brandewinder "─────── ─█████─ █──█──█ ─██─██─ ─█─█─█─"

— fsibot (@fsibot) September 1, 2014

FSharp Logo

Here Mathias uses the same technique to generate a logo:

@brandewinder "└┐ ██████└┐└┐██└┐██└┐ ██└┐└┐└┐██████████ ██████└┐└┐██└┐██└┐ ██└┐└┐└┐██████████ ██└┐└┐└┐└┐██└┐██└┐"

— fsibot (@fsibot) September 6, 2014

Bar chart

This expression charts Yes vs No in the recent vote on Scottish Independence:

.@ptrelford " NO ████████────────2001926 YES ███████─────────1617989"

— fsibot (@fsibot) September 19, 2014

Sequences

The hailstone sequence:

.@ptrelford [17; 52; 26; 13; 40; 20; 10; 5; 16; 8; 4; 2; 1]

— fsibot (@fsibot) September 19, 2014

Pi

Interesting Pi approximation using dates:

@ptrelford 3.141592654

— fsibot (@fsibot) September 8, 2014

Inception

Evaluating an F# expression inside an F# expression:

.@ptrelford 2

— fsibot (@fsibot) September 8, 2014

Rick Roll

The thing about a rick roll is that no-one should expect it:

@ptrelford "never gonna give you up"

— fsibot (@fsibot) August 30, 2014

Magic 8-Ball

Based on magic 8-ball here's a somewhat abridged version:

@ptrelford "Go for it!"

— fsibot (@fsibot) August 30, 2014

Have fun!

Categories: Programming

Get your app in the Google index - and be ready for the future of Search

Google Code Blog - Fri, 09/19/2014 - 19:15
Last October we worked with a small number of developers to launch App Indexing, the ability for Google to index apps just like websites. In June, we opened App Indexing to all Android developers, giving you the ability to add app deep links to search results, helping users find your content and re-engage with your app after they’ve installed it.

Today, we’d like to highlight two videos to help you set up App Indexing. Check them out below. You’re only a few steps away from driving re-engagement and unlocking new avenues of discovery for your app.

1998 all over again

In 1998, Google set out to index the web and make content easily accessible and discoverable.

Today mobile app adoption is growing rapidly. Similar to the early days of the internet, it can be frustrating for users to search through their device to find what they need, since the content within apps typically exists in silos. By implementing App Indexing, you can re-engage your users from search results.

Consider the scenario of users looking for details about a great place for some Spanish food. Users might already have a great mobile app for restaurant details installed, but they typically still rely on a search engine to pull up information about local places to eat. Wouldn’t it be great if they could jump straight into your app from those search results? App Indexing is all about connecting users with the best content, whether it’s in-app or on the web. You can add it to your app very quickly -- simply add deep linking to your app, and then verify your website with Google. To enable re-engagement with your users, you can generate query autocompletions in the Google app using the App Indexing API.

Learn More

To learn more about App Indexing, and how you can use the API for your apps, check out this DevByte from Lawrence Chang, Senior Product Manager for App Indexing at Google.
For an in-depth look at App Indexing, demonstrating the technology, and the opportunities available with it, watch this session from Google I/O 2014: “The Future of Apps and Search.”
We’re at the beginning of a new wave of how people find and interact with content, and it often starts with search. Is your app in the index? If not, follow the steps in this post, and get ready for the future of Apps and Search!

Posted by Laurence Moroney

Laurence is a Developer Advocate at Google, working on Google Play services for Android. When not having fun with this, he's also a best-selling author of several Young Adult novels and dozens of books on computer programming.
Categories: Programming

Installing Oracle on Docker (Part 1)

Xebia Blog - Fri, 09/19/2014 - 12:56

I spent Xebia’s Innovation Day last August experimenting with Docker in a team with two of my colleagues and a guest. We thought Docker sounded really cool, especially if your goal is to run software that doesn’t require lots of infrastructure and can be easily installed, e.g. because it runs from a jar file. We wondered, however, what would happen if we tried to run enterprise software, like an Oracle database: software that is notoriously difficult to install and choosy about the infrastructure it runs on. Hence our aim for the day: install an Oracle database on CoreOS and Docker.

We chose CoreOS because of its small footprint and the fact that it is easily installed in a VM using Vagrant (see https://coreos.com/docs/running-coreos/platforms/vagrant/). We used the default Vagrantfile and CoreOS files with one modification: $vb_memory = 2024 in config.rb, which gives the VM enough memory for Oracle's pre-installer to run. The config files we used can be found here: https://github.com/jvermeir/OraDocker/

Starting with a default CoreOS install we then implemented the steps described here: http://www.tecmint.com/oracle-database-11g-release-2-installation-in-linux/.
Below is a snippet from the first version of our Dockerfile (tag: b0a7b56).
FROM centos:centos6
# Step 1: Setting Hostname
ENV HOSTNAME oracle.docker.xebia.com
# Step 2
RUN yum -y install wget
RUN wget --no-check-certificate https://public-yum.oracle.com/public-yum-ol6.repo -O /etc/yum.repos.d/public-yum-ol6.repo
RUN wget --no-check-certificate https://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6 -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
RUN yum -y install oracle-rdbms-server-11gR2-preinstall

Note that this takes a while, because the pre-installer downloads a number of CentOS packages that are missing in CoreOS.
Execute this in a shell:
vagrant up
vagrant ssh core-01
cd ~/share/docker
docker build -t oradocker .

This seemed like a good time to do a commit to save our work in Docker:
docker ps # note the container-id and substitute for the last parameter in the line below this one.
docker commit -m "executed pre installer" 07f7389c811e janv/oradocker

At this point we studiously ignored some of the advice listed under ‘Step 2’ in Tecmint’s install manual, namely adding the HOSTNAME to /etc/sysconfig/network, allowing access to the xhost (why would anyone want that?) and mapping an IP address to a hostname in /etc/hosts (setting the hostname through ‘ENV HOSTNAME’ had no real effect as far as we could tell). We tried all that, but it didn’t seem to work. Denying reality and substituting our own, we just ploughed on…

Next we added Docker commands to the Dockerfile that create the oracle user, copy the relevant installer files, and unzip them. Docker starts by sending a build context to the Docker daemon. This takes quite some time because the Oracle installer files are large. There’s probably some way to avoid this, but we didn’t look into that. Unfortunately Docker copies the installer files each time you run docker build -t …, only to conclude later on that nothing changed.

The next version of our Dockerfile sort of works in the sense that it starts up the installer. The installer then complains about missing swap space. We fixed this temporarily at the CoreOS level by running the following commands:
sudo su -
swapfile=$(losetup -f)   # find the first free loop device
truncate -s 1G /swap     # create a sparse 1 GB backing file
losetup $swapfile /swap  # attach the file to the loop device
mkswap $swapfile         # format it as swap
swapon $swapfile         # enable it

found here: http://www.spinics.net/lists/linux-btrfs/msg28533.html
This works, but it doesn’t survive a reboot.
Now the installer continues, only to conclude that networking is configured improperly (one of those IgnoreAndContinue decisions coming back to bite us):
[FATAL] PRVF-0002 : Could not retrieve local nodename

For this to work you need to change /etc/hosts, which our Docker version doesn’t allow us to do. Apparently this is fixed in a later version, but we didn’t get around to testing that. And maybe changing /etc/sysconfig/network is even enough, but we didn't try that either.

The latest version of our work is on GitHub (tag: d87c5e0). The repository does not include the Oracle installer files. If you want to try for yourself you can download the files here: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index-092322.html and adapt the Dockerfile if necessary.

Below is a list of ToDo’s:

  1. Avoid copying large installer files if they’re not going to be used anyway.
  2. Find out what should happen when you call ‘ENV HOSTNAME oracle.docker.xebia.com’.
  3. Make the swap file setting permanent on CoreOS.
  4. Upgrade the Docker version so we can change /etc/hosts.

All in all this was a useful day, even though the end result was not a running database. We hope to continue working on the ToDo’s in the future.

Management Innovation is at the Top of the Innovation Stack

Management Innovation is at the top of the Innovation Stack.  

The Innovation Stack includes the following layers:

  1. Management Innovation
  2. Strategic Innovation
  3. Product Innovation
  4. Operational Innovation

While there is value in all of the layers, some layers of the Innovation Stack are more valuable than others in terms of overall impact.  I wrote a post that walks through each of the layers in the Innovation Stack.

I think it’s often a surprise to people that Product or Service Innovation is not at the top of the stack. Many people assume that if you figure out the ultimate product, then victory is yours.

History shows that’s not the case, and that Management Innovation is actually where you create a breeding ground for ideas and people to flourish.

Management Innovation is all about new ways of mobilizing talent, allocating resources, and building strategies.

If you want to build an extremely strong competitive advantage, then build a Management Innovation advantage. Management Innovation advantages are tough to copy or replicate.

If you’ve followed my blog, you know that I’m a fan of extreme effectiveness.   When it comes to innovation, I’ve had the privilege and pleasure of playing a role in lots of types of innovation over the years at Microsoft.   If I look back, the most significant impact has always been in the area of Management Innovation.

It’s the trump card.

Categories: Architecture, Programming

Full width iOS Today Extension

Xebia Blog - Thu, 09/18/2014 - 10:57

Apple decided that the new Today Extensions of iOS 8 should not be the full width of the notification center. They state in their documentation:

Because space in the Today view is limited and the expected user experience is quick and focused, you shouldn’t create a widget that's too big by default. On both platforms, a widget must fit within the width of the Today view, but it can increase in height to display more content.

Source: https://developer.apple.com/library/ios/documentation/General/Conceptual/ExtensibilityPG/NotificationCenter.html

This means that developers who create Today Extensions can only use a width of 273 points instead of the full 320 points (for iPhones before the iPhone 6) and have a left offset of the remaining 47 points. Though with the release of iOS 8, several apps like Dropbox and Evernote do seem to have a Today Extension that uses the full width. This raises the question whether Apple noticed this and how it got through the approval process. Does Apple not care?

Should you want to create a Today Extension with the full width yourself as well, here is how to do it (in Swift):

override func viewWillAppear(animated: Bool) {
    super.viewWillAppear(animated)
    if let superview = view.superview {
        var frame = superview.frame
        // Move the Today view to x = 0 and add the left margin to its width.
        frame = CGRectMake(0, CGRectGetMinY(frame), CGRectGetWidth(frame) + CGRectGetMinX(frame), CGRectGetHeight(frame))
        superview.frame = frame
    }
}

This changes the superview (the Today view) of your Today Extension's view. It doesn't use any private APIs, but Apple might reject it for not following their rules, so think carefully before you use it.

Become high performing. By being happy.  

Xebia Blog - Thu, 09/18/2014 - 03:59

The summer holidays are over. Fall is coming. Like the start of every new year, a good moment for new inspiration.

Recently, I went twice to the Boston area for a client of Xebia. There I met (I dislike the word “assessed”…) a number of experienced Scrum teams. They had an excellent understanding of Scrum, but were not able to convert this into excellent performance. Actually, they were somewhat frustrated, and their performance was slowly going down.

So, they were great teams with great team members, and their agile processes were running smoothly, but still not a single winning team. Which left, in my opinion, only one option: a lack of spirit. Spirit is the fertilizer of Scrum, and of every framework, methodology and innovation. But how to boost the spirit?

Until a few years ago, I would “just” organize team-building sessions to boost this, in parallel with fixing or escalating the root causes. Noble, but far from effective. It’s much more about mindset and happiness, and about taking your own responsibility there. Let me explain this a little more.

These are definitely awkward times. Terrible wars and epidemics we can’t turn our backs on anymore, an economic system that barely survives, and an ever-accelerating, highly demanding society. In all of this we have to find “time” for our friends, family, ourselves, and our job or study. The last ones are essential to regain balance in a more and more depressing world. But how?

One of the most important building blocks of the agile mindset, and of life, is happiness. Happiness is the fuel of acceleration and success. But what is happiness? Happiness is the ultimate moment when you’re not thinking, but enjoying the moment and forgetting the world around you. Craftsmanship, for example, will do this to you: losing track of time while exercising the craft you love.

But too often we prevent ourselves from being happy. Why should I be happy in this crazy world? With this mentality you’re held hostage by your worrying mind and ignore the state you were born in: pure, happy, and ready to explore the world (and make mistakes!). It’s not a bad thing to be egocentric sometimes and switch off your dominant mind now and then. Every human being has the ability to do this. But we do it too rarely.

On the other hand, it’s also not a bad thing to be angry, frightened or sad sometimes. These emotions will help you enjoy your moments of happiness more. But often your mind will resist these emotions. They are perceived as a sign of weakness, or as something negative you should avoid. A wrong assumption. The harder you try to resist these emotions, the longer they will stay in your system and prevent you from being happy.

Being aware of the mechanisms I’ve explained above, you’ll be happier, more productive, and better company for your family, friends and colleagues. Parties will no longer be a forced way of trying to create happiness, but a celebration of personal responsibility, success and happiness.

Episode 210: Stefan Tilkov on Architecture and Micro Services

Micro services are an emerging trend in software architecture that focuses on small, lightweight applications as a means to avoid large, unmaintainable, monolithic systems. This approach allows for individual technology stacks for each component and more resilient systems. Micro services use well-known communication schemes such as REST but also require new technologies for the implementation. […]
Categories: Programming

gcloud-node - a Google Cloud Platform Client Library for Node.js

Google Code Blog - Wed, 09/17/2014 - 21:18
This post originally appeared on the Google Cloud Platform blog 

Today we are announcing a new category of client libraries that has been built specifically for Google Cloud Platform. The very first library, gcloud-node, is idiomatic and intuitive for Node.js developers. With today’s release, you can begin integrating Cloud Datastore and Cloud Storage into your Node.js applications, with more Cloud Platform APIs and programming languages planned. The easiest way to get started is by installing the gcloud package using npm:
$ npm install gcloud
With gcloud installed, your Node.js code is simpler to write, easier to read, and cleaner to integrate with your existing Node.js codebase. Take a look at the code required to retrieve entities from Datastore:
var gcloud = require('gcloud');

var dataset = new gcloud.datastore.Dataset({
  projectId: 'my-project',
  keyFilename: '/path/to/keyfile.json' // details at
  // https://github.com/googlecloudplatform/gcloud-node#README
});

dataset.get(dataset.key('Product', 123), function(err, entity) {
  console.log(err, entity);
});
gcloud is open-sourced on GitHub; check out the code, file issues, and contribute a PR - contributors are welcome. Got questions? Post them on StackOverflow with the [gcloud-node] tag. Learn more about the Client Library for Node.js at http://googlecloudplatform.github.io/gcloud-node/ and try gcloud-node today.

Posted by JJ Geewax, Software Engineer

Node.js is a trademark of Joyent, Inc. and npm is a trademark of npm, Inc.
Categories: Programming

Messaging on Android Wear

Android Developers Blog - Wed, 09/17/2014 - 18:45

By Timothy Jordan, Developer Advocate

Sending messages on Android Wear feels as easy as it was to pass notes back in school. Remember when your friends always felt nearby? That feeling is why I love staying in touch with friends and family using my wearable.

Your messaging app likely already works on Android Wear. With just a few more lines of code you can unlock simple but powerful features that let your users communicate even more effortlessly.

Message notifications for free

If your Android app uses notifications to let the user know about new messages, these will work automatically on their wearable. That is, when you build notifications with the NotificationCompat.Builder class, the system takes care of displaying them properly, whether they appear on a handheld or wearable. Also, an "Open on phone" action will be added so it's easy for the user to reply via the app on their handheld.

Google+ Hangouts message.

Reply like a champ

Messages on Wear get really exciting when you can reply directly from the watch with your voice. In addition to being super convenient, this always gives me a Dick Tracy thrill… but maybe that's just me. =]

To add this functionality, it's as simple as adding an action that includes a RemoteInput to your notification via the WearableExtender. After the user replies, you just grab their voice input as a string from the RemoteInput included in the Intent. You can even include text responses the user can easily select from a list by passing an array of them to the setChoices method of the RemoteInput. More details and code can be found here.

WhatsApp message with the reply by voice action.
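As an illustration, here is a minimal sketch using the v4 support library; the reply key, resource ids, and PendingIntent are placeholders rather than code from any specific app.

import android.app.Notification;
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;
import android.support.v4.app.NotificationCompat;
import android.support.v4.app.RemoteInput;

public class WearReplyExample {
    static final String EXTRA_VOICE_REPLY = "extra_voice_reply";

    // Build a notification whose wearable action accepts a voice reply or a canned response.
    static Notification buildReplyNotification(Context context, PendingIntent replyIntent) {
        RemoteInput remoteInput = new RemoteInput.Builder(EXTRA_VOICE_REPLY)
                .setLabel("Reply")
                .setChoices(new String[] {"Yes", "No", "On my way"}) // optional list of text responses
                .build();
        NotificationCompat.Action replyAction =
                new NotificationCompat.Action.Builder(R.drawable.ic_reply, "Reply", replyIntent)
                        .addRemoteInput(remoteInput)
                        .build();
        return new NotificationCompat.Builder(context)
                .setSmallIcon(R.drawable.ic_message)
                .setContentTitle("New message")
                .extend(new NotificationCompat.WearableExtender().addAction(replyAction))
                .build();
    }

    // Later, in the component started by replyIntent, read the transcribed text back out.
    static CharSequence getReply(Intent intent) {
        Bundle results = RemoteInput.getResultsFromIntent(intent);
        return results != null ? results.getCharSequence(EXTRA_VOICE_REPLY) : null;
    }
}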

See who is texting

Messages are more meaningful when you are connected to the sender. That's why we recommend you include the photo of the sender as the background of the notification. As soon as the user taps into the message, they also see who it's from, which will make it matter more (or maybe that other thing, depending on who it is).

You should add a photo with a resolution of at least 400x400, but we recommend 640x400. With the larger size, the background will be given parallax scrolling. If the background is to be included in the apk, place it in the res/drawable-nodpi directory. Then call setBackground() on your WearableExtender and add it to your notification. More details and code can be found here.

Path Talk message with a clear picture of the sender.
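A short sketch of that recommendation, under the same assumptions as the example above (sender_photo is a placeholder drawable in res/drawable-nodpi):

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.support.v4.app.NotificationCompat;

public class WearBackgroundExample {
    // Attach the sender's photo (ideally 640x400 for parallax scrolling) as the wearable background.
    static NotificationCompat.WearableExtender withSenderPhoto(Context context) {
        Bitmap senderPhoto = BitmapFactory.decodeResource(
                context.getResources(), R.drawable.sender_photo); // placed in res/drawable-nodpi
        return new NotificationCompat.WearableExtender().setBackground(senderPhoto);
    }
}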

Custom actions

Basic notifications with reply by voice and a good background image are the most important parts to get done right away. But why stop there? It's easy to extend the unique parts of your service to the wearable. A simple first step is adding in a custom action the way Omlet does. These are just actions defined with the WearableExtender that raise an intent on the handheld.

Omlet includes two extra actions with every message: Like and Check-In. Check-In sends along the user's current location.

Custom Layouts

Custom interaction on the wearable, like the following example from TextMe, is straightforward to implement. They have what appears to be a simple notification with an action that allows the user to select an emoticon. However, to show this emoticon picker, they are actually issuing a notification from the wearable. The round trip looks something like this:

  1. The handheld gets a new message, issues a notification with setLocalOnly(true), and sends a message to the wearable using the Data Layer API (see the sketch after this list)
  2. The wearable receives that message using the WearableListenerService and issues a custom notification with a PendingIntent to launch an activity when the user views the notification
  3. That activity has a custom layout defined with the Wearable UI Library
  4. Once the user selects an emoticon, the wearable sends a message back to the handheld
  5. The handheld receives that message and sends it along to the server
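Here is a minimal sketch of step 1, assuming a GoogleApiClient already connected with the Wearable API; the path, node id, and resources are placeholders.

import android.app.Notification;
import android.content.Context;
import android.support.v4.app.NotificationCompat;
import android.support.v4.app.NotificationManagerCompat;
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.wearable.Wearable;

public class WearRoundTripExample {
    // Step 1: notify locally on the handheld only, then ping the wearable over the Data Layer.
    static void notifyAndPingWearable(Context context, GoogleApiClient client,
                                      String nodeId, String messageText) {
        Notification notification = new NotificationCompat.Builder(context)
                .setSmallIcon(R.drawable.ic_message)
                .setContentTitle("New message")
                .setLocalOnly(true) // keep this notification off the wearable
                .build();
        NotificationManagerCompat.from(context).notify(1, notification);

        // The wearable's WearableListenerService receives this message and issues
        // its own notification that launches the custom emoticon-picker activity.
        Wearable.MessageApi.sendMessage(client, nodeId, "/new-message", messageText.getBytes());
    }
}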

Custom layouts are documented in more depth here.

TextMe allows users to reply with a quick emoticon.

Next steps

Make your messaging service awesome by providing rich functionality on the user's wearable. It's easy to get started and easy to go further. It all starts at developer.android.com/wear.




Categories: Programming

Allowing end users to install your app from Google Apps Marketplace

Google Code Blog - Wed, 09/17/2014 - 17:32
by Chris Han, Product Manager Google Apps Marketplace
The Google Apps Marketplace brings together hundreds of third-party applications that integrate with and enhance Google Apps for Work. Previously, only administrators were able to install these applications directly for people at work. Now, any Google Apps user can install these applications by logging into Google Apps, clicking the app launcher icon, clicking More, and then clicking More from Apps Marketplace. By default, any Google Apps user can install apps from the Google Apps Marketplace, excluding K-12 EDU domains, where this is off by default. For more information, please see our Help Center.
If you have an app in the Google Apps Marketplace utilizing OAuth 2.0, you can follow the simple steps below to enable individual end users to install your app. If you’re not yet using OAuth 2.0, instructions to migrate are here.
1. Navigate to your Google Developer Console.
2. Select your Google Apps Marketplace project.
3. Click APIs under the APIs & auth section.
4. Click the gear icon next to Google Apps Marketplace SDK.
5. Check Allow Individual Install.
6. Click Save changes.
Categories: Programming

Continuous Delivery is about removing waste from the Software Delivery Pipeline

Xebia Blog - Wed, 09/17/2014 - 15:44

On October 22nd I will be speaking at the Continuous Delivery and DevOps Conference in Copenhagen, where I will share experiences from a very successful implementation of a new website serving about 20,000,000 page views a month.

Components and content for this site were developed by five(!) different vendors and for this project the customer took the initiative to work according to DevOps principles and implement a fully automated Software Delivery Process as they went along. This was a big win for the project, as development teams could now focus on delivering new software instead of fixing issues within the delivery process itself and I was the lucky one to implement this.

This blog post is about visualizing the 'waste' we addressed within the project; you might find the diagrams handy when communicating Continuous Delivery principles within your own organization.

To enable yourself to work according to Continuous Delivery principles, an effective starting point is to remove waste from the Software Delivery Process. If you look at a traditional Software Delivery Process you'll find that there are probably many areas in your existing process that do not add any value for the customer at all.

These areas should be seen as pure waste, not adding any value to your customer, costing you either time or money (or both) over and over again. Each time new features are developed and pushed to production, many people perform a lot of costly manual work and run into the same issues again and again. The diagram below provides an example of common areas where you might find waste in your existing Software Delivery Pipeline. Imagine this process repeating every time a development team delivers new software. Within your conversation, you might want to use a similar diagram to explain pain points within your current Software Delivery Process.

a traditional software delivery process

Automation of the Software Delivery Process within this project was all about eliminating known waste as much as possible. This resulted in setting up an Agile project structure and starting to work according to DevOps principles, enabling the team to deliver software on a frequent basis. Next to that, we automated the central build with Jenkins CI, which checks out code from a Git version management system, compiles the code using Maven, stores components in Apache Archiva, kicks off static, unit and functional tests covering both the JEE and PHP codebases, and creates Deployment Units for further processing down the line. Deployment automation itself was implemented by introducing XL Deploy. By doing so, every time a developer pushed new JEE or PHP code into the Git version management system, freshly baked deployment units were instantly deployed to the target landscape, which in turn was managed by Puppet. An abstract diagram of this approach and the chosen tooling is provided below.

overview of tooling for automating the software delivery process

When paving the way for Continuous Delivery, I often like to refer to this as working on the six A's: setting up Agile (product-focused) delivery teams, Automating the build, Automating tests, Automating deployments, Automating the provisioning layer, and clean, easy-to-handle software Architectures. The A for Architecture is about making sure that the software being delivered actually supports automation of the Software Delivery Process itself and puts the customer in the position to work according to Continuous Delivery principles. This A is not visible in the diagram.

After automation, the customer's Software Delivery Process behaved like the optimized process below, giving the team the opportunity to push out a constant, fluent flow of new features to the end user. Within your conversation, you might want to use this diagram to explain the advantages to your organization.

an optimized software delivery process

As we automated the Software Delivery Pipeline for this customer, we positioned them to go live at the press of a button. And on the go-live date, it was just that: a press of the button, and 5 minutes later the site was completely live, making this the most boring go-live event I've ever experienced. The project itself was really good fun though! :)

Needless to say, subsequent updates are now moved into the live state in a matter of minutes, as the whole process has become very reliable. Deploying code has become a non-event. More details on how we made this project a complete success, how we implemented this environment, the project setting, and the chosen tooling, along with technical details, I will happily share at the Continuous Delivery and DevOps Conference in Copenhagen. But of course you can also contact me directly. For now, I just hope to meet you there.

Michiel Sens.

Google Play Services 5.0

Android Developers Blog - Tue, 09/16/2014 - 22:45

Google Play services 5.0 is now rolled out to devices worldwide, and it includes a number of features you can use to improve your apps. This release introduces Android wearable services APIs, Dynamic Security Provider and App Indexing, whilst also including updates to the Google Play game services, Cast, Drive, Wallet, Analytics, and Mobile Ads.

Android wearable services

Google Play services 5.0 introduces a set of APIs that make it easier to communicate with your apps running on Android wearables. The APIs provide an automatically synchronized, persistent data store and a low-latency messaging interface that let you sync data, exchange control messages, and transfer assets.
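As a rough illustration of the synchronized data store, here is a hedged sketch; the "/step-count" path and "steps" key are invented, and the GoogleApiClient must already be connected with the Wearable API.

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.wearable.PutDataMapRequest;
import com.google.android.gms.wearable.PutDataRequest;
import com.google.android.gms.wearable.Wearable;

public class WearableDataSync {
    // Write a small piece of state; Play services replicates it to all connected nodes,
    // where a DataApi listener receives onDataChanged().
    static void syncStepCount(GoogleApiClient client, int steps) {
        PutDataMapRequest dataMap = PutDataMapRequest.create("/step-count");
        dataMap.getDataMap().putInt("steps", steps);
        PutDataRequest request = dataMap.asPutDataRequest();
        Wearable.DataApi.putDataItem(client, request);
    }
}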

Dynamic security provider

This release provides an API that apps can use to easily install a dynamic security provider. The dynamic security provider includes a replacement for the platform's secure networking APIs, which can be updated frequently for rapid delivery of security patches. The current version includes fixes for recent issues identified in OpenSSL.

Google Play game services

Quests are a new set of APIs to run time-based goals for players, and reward them without needing to update the game. To do this, you can send game activity data to the Quests service whenever a player successfully wins a level, kills an alien, or saves a rare black sheep, for example. This tells Quests what’s going on in the game, and you can use that game activity to create new Quests. By running Quests on a regular basis, you can create an unlimited number of new player experiences to drive re-engagement and retention.

Saved games lets you store a player's game progress to the cloud for use across many screens, using a new saved game snapshot API. Along with game progress, you can store a cover image, description and time played. Players never have to play level 1 again once their progress is stored with Google, and they can see where they left off when you attach a cover image and description. Adding cover images and descriptions provides additional context on the player’s progress and helps drive re-engagement through the Play Games app.

App Indexing API

The App Indexing API provides a way for you to notify Google about deep links in your native mobile applications and drive additional user engagement. Integrating with the App Indexing API allows the Google Search app to serve up your app’s history to users as instant Search suggestions, providing fast and easy access to inner pages in your app. The deep links reported using the App Indexing API are also used by Google to index your app’s content and surface them as deep links in Google search results.

Google Cast

The Google Cast SDK now includes media tracks that introduce closed caption support for Chromecast.

Drive

The Google Drive API adds the ability to sort query results, create folders offline, and select any mime type in the file picker by default.

Wallet

Wallet objects from Google take physical objects (like loyalty cards, offers) from your wallet and store them in the cloud. In this release, Wallet adds "Save to Wallet" button support for offers. When a user clicks "Save to Wallet" the offer gets saved and shows up in the user's Google Wallet app. Geo-fenced in-store notifications prompt the user to show and scan digital cards at point-of-sale, driving higher redemption. This also frees the user from having to carry around offers and loyalty cards.

Users can also now use their Google Wallet Balance to pay for Instant Buy transactions by providing split tender support. With split tender, if your Wallet Balance is not sufficient, the payment is split between your Wallet Balance and a credit/debit card in your Google Wallet.

Analytics

Enhanced Ecommerce provides visibility into the full customer journey, adding the ability to measure product impressions, product clicks, viewing product details, adding a product to a shopping cart, initiating the checkout process, internal promotions, transactions, and refunds. Together they help users gain deeper insights into the performance of their business, including how far users progress through the shopping funnel and where they are abandoning in the purchase process. Enhanced Ecommerce also allows users to analyze the effectiveness of their marketing and merchandising efforts, including the impact of internal promotions, coupons, and affiliate marketing programs.

Mobile Ads

Google Mobile Ads are a great way to monetise your apps and you now have access to better in-app purchase ads. We've now added a default implementation for consumable purchases using the Google Play In-app Billing service.

And that’s another release of Google Play services. The updated Google Play services SDK is now available through the Android SDK manager. For details on the APIs, please see New Features in Google Play services 5.0.






Categories: Programming

An important announcement for iOS developers using the GooglePlus SDK

Google Code Blog - Tue, 09/16/2014 - 21:10
Last week, Apple updated their app submission policy requiring that resource bundles not include binaries. In order for your apps to meet these new requirements, you must either replace your existing Google+ iOS SDK with the updated 1.7.1 Google+ iOS SDK that has the files removed or remove the following files from the GooglePlus bundle:

  • GooglePlus.bundle/GPPSignIn3PResources
  • GooglePlus.bundle/GPPCommonSharedResources.bundle/GPPCommonSharedResources
  • GooglePlus.bundle/GPPShareboxSharedResources.bundle/GPPShareboxSharedResources 

Please update your app immediately, or your app will be rejected by Apple. Because the files were only used for versioning, the change will have no impact on your app's functionality.

Posted by Mohamed Zoweil, Software Engineer, Google
Categories: Programming