
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools



R: Removing for loops

Mark Needham - 4 hours 40 min ago

In my last blog post I showed the translation of a likelihood function from Think Bayes into R, and in my first attempt at this function I used a couple of nested for loops.

likelihoods = function(names, mixes, observations) {
  scores = rep(1, length(names))
  names(scores) = names
 
  for(name in names) {
      for(observation in observations) {
        scores[name] = scores[name] *  mixes[[name]][observation]      
      }
    }  
  return(scores)
}
Names = c("Bowl 1", "Bowl 2")
 
bowl1Mix = c(0.75, 0.25)
names(bowl1Mix) = c("vanilla", "chocolate")
bowl2Mix = c(0.5, 0.5)
names(bowl2Mix) = c("vanilla", "chocolate")
Mixes = list("Bowl 1" = bowl1Mix, "Bowl 2" = bowl2Mix)
Mixes
 
Observations = c("vanilla", "vanilla", "vanilla", "chocolate")
l = likelihoods(Names, Mixes, Observations)
 
> l / sum(l)
  Bowl 1   Bowl 2 
0.627907 0.372093

We pass in a vector of bowls, a nested list describing the mix of cookies in each bowl, and the observations that we’ve made. The function tells us that there’s almost a 2/3 probability of the cookies coming from Bowl 1 and just over a 1/3 probability of them coming from Bowl 2.
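
As a quick sanity check, the same posterior can be worked out by hand: multiply the per-cookie probabilities for three vanilla cookies and one chocolate cookie from each bowl and then normalise. A minimal sketch (an aside, not part of the original post):

# Likelihood of 3 vanilla + 1 chocolate from each bowl; the equal priors cancel out
likBowl1 = 0.75 ^ 3 * 0.25
likBowl2 = 0.5 ^ 3 * 0.5
c("Bowl 1" = likBowl1, "Bowl 2" = likBowl2) / (likBowl1 + likBowl2)
# roughly 0.628 for Bowl 1 and 0.372 for Bowl 2, matching the result above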

In this case there probably won’t be much of a performance improvement by getting rid of the loops but we should be able to write something that’s more concise and hopefully idiomatic.

Let’s start by getting rid of the inner for loop. That can be replaced by a call to the Reduce function like so:

likelihoods2 = function(names, mixes, observations) {
  scores = rep(0, length(names))
  names(scores) = names
 
  for(name in names) {
    scores[name] = Reduce(function(acc, observation) acc * mixes[[name]][observation], observations, 1)
  }  
  return(scores)
}
l2 = likelihoods2(Names, Mixes, Observations)
 
> l2 / sum(l2)
  Bowl 1   Bowl 2 
0.627907 0.372093

So that’s good, we’ve still got the same probabilities as before. Now to get rid of the outer for loop. The Map function helps us out here:

likelihoods3 = function(names, mixes, observations) {
  scores = rep(0, length(names))
  names(scores) = names
 
  scores = Map(function(name) 
    Reduce(function(acc, observation) acc * mixes[[name]][observation], observations, 1), 
    names)
 
  return(scores)
}
 
l3 = likelihoods3(Names, Mixes, Observations)
> l3
$`Bowl 1`
  vanilla 
0.1054688 
 
$`Bowl 2`
vanilla 
 0.0625

We end up with a list instead of a vector which we need to fix by using the unlist function:

likelihoods3 = function(names, mixes, observations) {
  scores = rep(0, length(names))
  names(scores) = names
 
  scores = Map(function(name) 
    Reduce(function(acc, observation) acc * mixes[[name]][observation], observations, 1), 
    names)
 
  return(unlist(scores))
}
 
l3 = likelihoods3(Names, Mixes, Observations)
 
> l3 / sum(l3)
Bowl 1.vanilla Bowl 2.vanilla 
      0.627907       0.372093
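
The extra ‘vanilla’ appears because unlist prepends each list element’s name to the names of the vector inside it, joining them with a dot. A tiny illustration (an aside, not from the original post):

# unlist joins the outer list name and the inner vector name with a dot
unlist(list("Bowl 1" = c(vanilla = 0.105)))
# produces a single value named "Bowl 1.vanilla"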

Now we just have this annoying ‘vanilla’ in the name. That’s fixed easily enough:

likelihoods3 = function(names, mixes, observations) {
  scores = rep(0, length(names))
  names(scores) = names
 
  scores = Map(function(name) 
    Reduce(function(acc, observation) acc * mixes[[name]][observation], observations, 1), 
    names)
 
  result = unlist(scores)
  names(result) = names
 
  return(result)
}
 
l3 = likelihoods3(Names, Mixes, Observations)
 
> l3 / sum(l3)
  Bowl 1   Bowl 2 
0.627907 0.372093

A slightly cleaner alternative makes use of the sapply function:

likelihoods3 = function(names, mixes, observations) {
  scores = rep(0, length(names))
  names(scores) = names
 
  scores = sapply(names, function(name) 
    Reduce(function(acc, observation) acc * mixes[[name]][observation], observations, 1))
  names(scores) = names
 
  return(scores)
}
 
l3 = likelihoods3(Names, Mixes, Observations)
 
> l3 / sum(l3)
  Bowl 1   Bowl 2 
0.627907 0.372093
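
The Reduce call can be collapsed even further: the likelihood is just the product of the individual observation probabilities, so prod does the job on its own. A sketch of that variation (an aside, not in the original post):

likelihoods4 = function(names, mixes, observations) {
  # prod multiplies the per-observation probabilities looked up for each bowl
  sapply(names, function(name) prod(mixes[[name]][observations]))
}
 
l4 = likelihoods4(Names, Mixes, Observations)
l4 / sum(l4)
# gives the same 0.627907 / 0.372093 split as the versions above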

That’s the best I’ve got for now but I wonder if we could write a version of this using matrix operations somehow – but that’s for next time!
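
One way that matrix version might look (a sketch only, not necessarily the approach the author ends up taking): store the mixes as a matrix with one row per bowl, pick one column per observation by name, and take the product across each row:

# Rows are bowls, columns are flavours
mixMatrix = rbind("Bowl 1" = bowl1Mix, "Bowl 2" = bowl2Mix)
 
# Repeat the relevant column for every observation, then multiply along the rows
l5 = apply(mixMatrix[, Observations, drop = FALSE], 1, prod)
l5 / sum(l5)
# again roughly 0.628 for Bowl 1 and 0.372 for Bowl 2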

Categories: Programming

Life Quotes That Will Change Your Life

Life’s better with the right words.

And life quotes can help us live better.

Life quotes are a simple way to share some of the deepest insights on the art of living, and how to live well.

While some people might look for wisdom in a bottle, or in a book, or in a guru at the top of a mountain, surprisingly, a lot of the best wisdom still exists as quotes.

The problem is they are splattered all over the Web.

The Ultimate Life Quotes Collection

My ultimate Life Quotes collection is an attempt to put the best quotes right at your fingertips.

I wanted this life quotes collection to answer everything from “What is the meaning of life?” to “How do you live the good life?” 

I also wanted this life quotes collection to dive deep into all angles of life including dealing with challenges, living with regrets, how to find your purpose, how to live with more joy, and ultimately, how to live a little better each day.

The World’s Greatest Philosophers at Your Fingertips

Did I accomplish all that?

I’m not sure.  But I gave it the old college try.

I curated quotes on life from an amazing set of people including Dr. Seuss, Tony Robbins, Gandhi, Ralph Waldo Emerson, James Dean, George Bernard Shaw, Virginia Woolf, Buddha, Lao Tzu, Lewis Carroll, Mark Twain, Confucius, Jonathan Swift, Henry David Thoreau, and more.

Yeah, it’s a pretty serious collection of life quotes.

Don’t Die with Your Music Still In You

There are many messages and big ideas among the collection of life quotes.  But perhaps one of the most important messages is from the late, great Henry David Thoreau:

“Most men lead lives of quiet desperation and go to the grave with the song still in them.” 

And, I don’t think he meant play more Guitar Hero.

If you’re waiting for your chance to rise and shine, chances come to those who take them.

Not Being Dead is Not the Same as Being Alive

E.E. Cummings reminds us that there is more to living than simply existing:

“Unbeing dead isn’t being alive.” 

And the trick is to add more life to your years, rather than just add more years to your life.

Define Yourself

Life quotes teach us that living life on your terms starts by defining yourself.  Here are big, bold words from Harvey Fierstein that remind us of just that:

“Never be bullied into silence. Never allow yourself to be made a victim. Accept no one’s definition of your life; define yourself.”

Now is a great time to re-imagine all that you’re capable of.

We Regret the Things We Didn’t Do

It’s not usually the things that we do that we regret.  It’s the things we didn’t do:

“Of all sad words of tongue or pen, the saddest are these: ‘It might have been.’”  – John Greenleaf Whittier

Have you answered to your calling?

Leave the World a Better Place

One sure-fire way that many people find their path is they aim to leave the world at least a little better than they found it.

“To laugh often and much; to win the respect of intelligent people and the affection of children…to leave the world a better place…to know even one life has breathed easier because you have lived. This is to have succeeded.” -- Ralph Waldo Emerson

It’s a reminder that we can measure our life by the lives of the people we touch.

You Might Also Like

7 Habits of Highly Motivated People

10 Leadership Ideas to Improve Your Influence and Impact

Boost Your Confidence with the Right Words

The Great Inspirational Quotes Revamped

The Great Leadership Quotes Collection Revamped

Categories: Architecture, Programming

Is LeSS more than SAFe?

Xebia Blog - Fri, 04/17/2015 - 14:48

(Large) Dutch companies looking for a way to scale up the benefits their Agile teams bring mostly use the Scaled Agile Framework (SAFe) as a reference model. The model is set up to be very accessible, also for managers, and training courses and certified consultants are available. As early as 2009, Craig Larman and Bas Vodde described their experiences applying Scrum in large organisations (Nokia among others) in their books 'Scaling Lean & Agile Development' and 'Practices for Scaling Lean & Agile Development'. They called the method Large Scale Scrum, LeSS for short.
In recent years LeSS has led a rather low-profile existence. Recently it was decided to put this valuable body of thought more into the spotlight. A third book is coming this summer, the site less.works has been launched, a training tour has started, and Craig and Bas are making appearances at the leading conferences. Bas, for example, will give a keynote at Xebicon 2015 in Amsterdam on June 4. Is LeSS more or less than SAFe? Or more or less SAFe?

What is LeSS?
LeSS, then, is a method for structuring a large(r) organisation around Agile teams. As the name gives away, Scrum is the starting point. There are two flavours: 'plain' LeSS, for up to 8 teams, and LeSS Huge, for 8 teams and more. LeSS builds on mandatory rules, for example "An Overall Retrospective is held after the Team Retrospectives to discuss cross-team and system-wide issues, and create improvement experiments. This is attended by Product Owner, ScrumMasters, Team Representatives, and managers (if there are any).” In addition, LeSS has principles (design criteria). The principles form the frame of reference on which you base the right design decisions. Finally there are the Guidelines and Experiments, the things that have, or have not, proven successful in practice at organisations. Beyond the basic framework, LeSS goes deeper into:

  • Structure (the organisational structure)
  • Management (the changing role of management)
  • Technical Excellence (strongly based on XP and Continuous Delivery)
  • Adoption (the transformation towards the LeSS organisation).

LeSS in a nutshell
The foundation of LeSS is that Large Scale Scrum = Scrum! Just like SAFe, LeSS looks for ways to apply Scrum to a group of, say, 100 people. LeSS stays closest to Scrum: there is 1 sprint, with 1 Product Owner, 1 product backlog, 1 planning and 1 sprint review, in which 1 product is realised. This differs from SAFe, which defines an inflated sprint (the Product Increment). To pull off this single-sprint implementation you need, besides a very strong whole-product focus, for example also a technical platform that supports it. Where SAFe pragmatically allows a gradual introduction of Agile at Scale, LeSS is stricter in its ready-for-the-start requirements. A structure has to be put in place that breaks the culture of the 'contract game': the culture of over-asking, pressure, unclarity, surprises and blame-driven accountability.

LeSS is more and less SAFe
The recent effort to make LeSS accessible will undoubtedly lead to sharply growing attention for this appealing approach to setting up Agile at Scale. LeSS is different from SAFe, although the two models also have a lot in common, especially in their sources of inspiration.
The two models take a different angle on, for example:

  • how to apply Scrum to a cluster of teams
  • the approach to the transformation towards Agile at Scale
  • how solutions are offered: SAFe prescribes the solution, LeSS gives the pros and cons of the choices

It is also notable that SAFe (with its portfolio level) explains how the connection between strategy and backlogs should be made, whereas LeSS pays more attention to the transformation (Adoption) and to Agile at very large scale (LeSS Huge).

Whether an organisation chooses LeSS or SAFe will depend on what fits the organisation best: what fits its ambition for change and its 'agility' at the moment of starting. Strongly 'blue' organisations will choose SAFe; organisations that dare to take a convincing step towards an Agile organisation will sooner choose LeSS. In either case it pays to take note of the solutions the other method offers.

R: Think Bayes – More posterior probability calculations

Mark Needham - Thu, 04/16/2015 - 21:57

As I mentioned in a post last week I’ve been reading through Think Bayes and translating some of the examples from Python to R.

After my first post Antonios suggested a more idiomatic way of writing the function in R so I thought I’d give it a try to calculate the probability that combinations of cookies had come from each bowl.

In the simplest case we have this function which takes in the names of the bowls and the likelihood scores:

f = function(names,likelihoods) {
  # Assume each option has an equal prior
  priors = rep(1, length(names)) / length(names)
 
  # create a data frame with all info you have
  dt = data.frame(names,priors,likelihoods)
 
  # calculate posterior probabilities
  dt$post = dt$priors*dt$likelihoods / sum(dt$priors*dt$likelihoods)
 
  # specify what you want the function to return
  list(names=dt$names, priors=dt$priors, likelihoods=dt$likelihoods, posteriors=dt$post)  
}

We assume a prior probability of 0.5 for each bowl.
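
For a single vanilla cookie the arithmetic is easy to follow by hand: the posterior is prior times likelihood, normalised so the two bowls sum to one. A minimal sketch (an aside, not part of the original post):

priors = c(0.5, 0.5)    # equal prior for each bowl
liks   = c(0.75, 0.5)   # P(vanilla | Bowl 1) and P(vanilla | Bowl 2)
priors * liks / sum(priors * liks)
# gives 0.6 for Bowl 1 and 0.4 for Bowl 2, matching the call to f below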

Given the following probabilities of different cookies being in each bowl…

mixes = {
  'Bowl 1':dict(vanilla=0.75, chocolate=0.25),
  'Bowl 2':dict(vanilla=0.5, chocolate=0.5),
}

…we can simulate taking one vanilla cookie with the following parameters:

Likelihoods = c(0.75,0.5)
Names = c("Bowl 1", "Bowl 2")
res=f(Names,Likelihoods)
 
> res$posteriors[res$name == "Bowl 1"]
[1] 0.6
> res$posteriors[res$name == "Bowl 2"]
[1] 0.4

If we want to simulate taking 3 vanilla cookies and 1 chocolate one we’d have the following:

Likelihoods = c((0.75 ** 3) * (0.25 ** 1), (0.5 ** 3) * (0.5 ** 1))
Names = c("Bowl 1", "Bowl 2")
res=f(Names,Likelihoods)
 
> res$posteriors[res$name == "Bowl 1"]
[1] 0.627907
> res$posteriors[res$name == "Bowl 2"]
[1] 0.372093

That’s a bit clunky and the intent of ‘3 vanilla cookies and 1 chocolate’ has been lost. I decided to refactor the code to take in a vector of cookies and calculate the likelihoods internally.

First we need to create a data structure to store the mixes of cookies in each bowl that we defined above. It turns out we can do this using a nested list:

bowl1Mix = c(0.75, 0.25)
names(bowl1Mix) = c("vanilla", "chocolate")
bowl2Mix = c(0.5, 0.5)
names(bowl2Mix) = c("vanilla", "chocolate")
Mixes = list("Bowl 1" = bowl1Mix, "Bowl 2" = bowl2Mix)
 
> Mixes
$`Bowl 1`
  vanilla chocolate 
     0.75      0.25 
 
$`Bowl 2`
  vanilla chocolate 
      0.5       0.5
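
Individual probabilities can then be looked up with double brackets on the outer list followed by the flavour name on the inner vector (an aside, not in the original post):

# Probability of drawing a vanilla cookie from Bowl 1
Mixes[["Bowl 1"]]["vanilla"]
# returns the named value: vanilla 0.75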

Now let’s tweak our function to take in observations rather than likelihoods and then calculate those likelihoods internally:

likelihoods = function(names, mixes, observations) {
  scores = rep(1, length(names))
  names(scores) = names
 
  for(name in names) {
      for(observation in observations) {
        scores[name] = scores[name] *  mixes[[name]][observation]      
      }
    }  
  return(scores)
}
 
f = function(names,mixes,observations) {
  # Assume each option has an equal prior
  priors = rep(1, length(names)) / length(names)
 
  # create a data frame with all info you have
  dt = data.frame(names,priors)
 
  dt$likelihoods = likelihoods(names, mixes, observations)
 
  # calculate posterior probabilities
  dt$post = dt$priors*dt$likelihoods / sum(dt$priors*dt$likelihoods)
 
  # specify what you want the function to return
  list(names=dt$names, priors=dt$priors, likelihoods=dt$likelihoods, posteriors=dt$post)  
}

And if we call that function:

Names = c("Bowl 1", "Bowl 2")
 
bowl1Mix = c(0.75, 0.25)
names(bowl1Mix) = c("vanilla", "chocolate")
bowl2Mix = c(0.5, 0.5)
names(bowl2Mix) = c("vanilla", "chocolate")
Mixes = list("Bowl 1" = bowl1Mix, "Bowl 2" = bowl2Mix)
Mixes
 
Observations = c("vanilla", "vanilla", "vanilla", "chocolate")
 
res=f(Names,Mixes,Observations)
 
> res$posteriors[res$names == "Bowl 1"]
[1] 0.627907
 
> res$posteriors[res$names == "Bowl 2"]
[1] 0.372093

Exactly the same result as before! #win

Categories: Programming

Works with Google Cardboard: creativity plus compatibility

Google Code Blog - Thu, 04/16/2015 - 18:16

Posted by Andrew Nartker, Product Manager, Google Cardboard

All of us is greater than any single one of us. That’s why we open sourced the Cardboard viewer design on day one. And why we’ve been working on virtual reality (VR) tools for manufacturers and developers ever since. We want to make VR better together, and the community continues to inspire us.

For example: what began with cardboard, velcro and some lenses has become a part of toy fairs and art shows and film festivals all over the world. There are also hundreds of Cardboard apps on Google Play, including test drives, roller coaster rides, and mountain climbs. And people keep finding new ways to bring VR into their daily lives—from campus tours to marriage proposals to vacation planning.

It’s what we dreamed about when we folded our first piece of cardboard, and combined it with a smartphone: a VR experience for everyone! And less than a year later, there’s a tremendous diversity of VR viewers and apps to choose from. To keep this creativity going, however, we also need to invest in compatibility. That’s why we’re announcing a new program called Works with Google Cardboard.

At its core, the program enables any Cardboard viewer to work well with any Cardboard app. And the result is more awesome VR for all of us.

For makers: compatibility tools, and a certification badge

These days you can find Cardboard viewers made from all sorts of materials—plastic, wood, metal, even pizza boxes. The challenge is that each viewer may have slightly different optics and dimensions, and apps actually need this info to deliver a great experience. That’s why, as part of today’s program, we’re releasing a new tool that configures any viewer for every Cardboard app, automatically.

As a manufacturer, all you need to do is define your viewer’s key parameters (like focal length, input type, and inter-lens distance), and you’ll get a QR code to place on your device. Once a user scans this code using the Google Cardboard app, all their other Cardboard VR experiences will be optimized for your viewer. And that’s it.

Starting today, manufacturers can also apply for a program certification badge. This way potential users will know, at a glance, that a VR viewer works great with Cardboard apps and games. Visit the Cardboard website to get started.

The GoggleTech C1-Glass viewer works with Google Cardboard

For developers: design guidelines and SDK updates

Whether you’re building your first VR app, or you’ve done it ten times before, creating an immersive experience comes with a unique set of design questions like, “How should I orient users at startup?” Or “How do menus even work in VR?”

We’ve explored these questions (and many more) since launch, and today we’re sharing our initial learnings with the developer community. Our new design guidelines focus on overall usability, as well as common VR pitfalls, so take a look and let us know your thoughts.

Of course, we want to make it easier to design and build great apps. So today we're also updating the Cardboard SDKs for Android and Unity—including improved head tracking and drift correction. In addition, both SDKs support the Works with Google Cardboard program, so all your apps will play nice with all certified VR viewers.

For users: apps + viewers = choices

The number of Cardboard apps has quickly grown from dozens to hundreds, so we’re expanding our Google Play collection to help you find high-quality apps even faster. New categories include Music and Video, Games, and Experiences. Whether you’re blasting asteroids, or reliving the Saturday Night Live 40th Anniversary Special, there’s plenty to explore on Google Play.

New collections of Cardboard apps on Google Play

Today’s Works with Google Cardboard announcement means you’ll get the same great VR experience across a wide selection of Cardboard viewers. Find the viewer that fits you best, and then fire up your favorite apps.

For the future: Thrive Audio and Tilt Brush are joining the Google family

Most of today’s VR experiences focus on what you see, but what you hear is just as important. That’s why we’re excited to welcome the Thrive Audio team from the School of Engineering in Trinity College Dublin to Google. With their ambisonic surround sound technology, we can start bringing immersive audio to VR.

In addition, we’re thrilled to have the Tilt Brush team join our family. With its innovative approach to 3D painting, Tilt Brush won last year’s Proto Award for Best Graphical User Interface. We’re looking forward to having them at Google, and building great apps together.

Ultimately, today’s updates are about making VR better together. Join the fold, and let’s have some fun.

Categories: Programming

Drive app installs through App Indexing

Android Developers Blog - Thu, 04/16/2015 - 18:04

Posted by Lawrence Chang, Product Manager

You’ve invested time and effort into making your app an awesome experience, and we want to help people find the great content you’ve created. App Indexing has already been helping people engage with your Android app after they’ve installed it — we now have 30 billion links within apps indexed. Starting this week, people searching on Google can also discover your app if they haven’t installed it yet. If you’ve implemented App Indexing, when indexed content from your app is relevant to a search done on Google on Android devices, people may start to see app install buttons for your app in search results. Tapping these buttons will take them to the Google Play store where they can install your app, then continue straight on to the right content within it.

App installs through app indexing

With the addition of these install links, we are starting to use App Indexing as a ranking signal for all users on Android, regardless of whether they have your app installed or not. We hope that Search will now help you acquire new users, as well as re-engage your existing ones. To get started, visit g.co/AppIndexing and to learn more about the other ways you can integrate with Google Search, visit g.co/DeveloperSearch.

Categories: Programming

Announcing General Availability of Azure Premium Storage

ScottGu's Blog - Scott Guthrie - Thu, 04/16/2015 - 18:01

I’m very excited to announce the general availability release of Azure Premium Storage. It is now available with an enterprise grade SLA and is available for everyone to use.

Microsoft Azure now offers two types of storage: Premium Storage and Standard Storage. Premium Storage stores data durably on Solid State Drives (SSDs) and provides high performance, low latency, disk storage with consistent performance delivery guarantees.


Premium Storage is ideal for I/O-sensitive workloads - and is especially great for database workloads hosted within Virtual Machines.  You can optionally attach several premium storage disks to a single VM, and support up to 32 TB of disk storage per Virtual Machine and drive more than 64,000 IOPS per VM at less than 1 millisecond latency for read operations. This provides an incredibly fast storage option that enables you to run even more workloads in the cloud.

Using Premium Storage, Azure now offers the ability to run more demanding applications - including high-volume SQL Server, Dynamics AX, Dynamics CRM, Exchange Server, MySQL, Oracle Database, IBM DB2, MongoDB, Cassandra, and SAP solutions.

Durability

Durability of data is of utmost importance for any persistent storage option. Azure customers have critical applications that depend on the persistence of their data and high tolerance against failures. Premium Storage keeps three replicas of data within the same region, and ensures that a write operation will not be confirmed back until it has been durably replicated. This is a unique cloud capability provided only by Azure today.

In addition, you can also optionally create snapshots of your disks and copy those snapshots to a Standard GRS storage account - which enables you to maintain a geo-redundant snapshot of your data that is stored > 400 miles away from your primary Azure region for disaster recovery purposes.

Available Regions

Premium Storage is available today in the following Azure regions:

  • West US
  • East US 2
  • West Europe
  • East China
  • Southeast Asia
  • West Japan

We will expand Premium Storage to run in all Azure regions in the near future.

Getting Started

You can easily get started with Premium Storage starting today. Simply go to the Microsoft Azure Management Portal and create a new Premium Storage account. You can do this by creating a new Storage Account and selecting the “Premium Locally Redundant” storage option (note: this option is only listed if you select a region where Premium Storage is available).

Then create a new VM and select the “DS” series of VM sizes. The DS-series of VMs are optimized to work great with Premium Storage. When you create the DS VM you can simply point it at your Premium Storage account and you’ll be all set.

Learning More

Learn more about Premium Storage from Mark Russinovich's blog post on today's release.  You can also see a live 3 minute demo of Premium Storage in action by watching Mark Russinovich’s video on premium storage. In it Mark shows both a Windows Server and Linux VM driving more than 64,000 disk IOPS with low latency against a durable drive powered by Azure Premium Storage.


You can also visit the following links for more information:

Summary

We are very excited about the release of Azure Premium Storage. Premium Storage opens up so many new opportunities to use Azure to run workloads in the cloud – including migrating existing on-premises solutions.

As always, we would love to hear feedback via comments on this blog, the Azure Storage MSDN forum or send email to mastoragequestions@microsoft.com.

Hope this helps,

Scott

Categories: Architecture, Programming

Why Your Million Dollar Idea Is Worthless

Making the Complex Simple - John Sonmez - Thu, 04/16/2015 - 16:00

In this video, I talk about why I think some multimillion dollar ideas are worthless, and how some crappy ideas are worth millions. Full transcript: John: Hey, this is John Sonmez from Simpleprogrammer.com and today I want to tell you why your million dollar idea is worthless. So that’s right. Every time that I’m talking […]

The post Why Your Million Dollar Idea Is Worthless appeared first on Simple Programmer.

Categories: Programming

Creating Better User Experiences on Google Play

Android Developers Blog - Thu, 04/16/2015 - 03:44

Posted by Eunice Kim, Product Manager for Google Play

Whether it's a way to track workouts, chart the nighttime stars, or build a new reality and battle for world domination, Google Play gives developers a platform to create engaging apps and games and build successful businesses. Key to that mission is offering users a positive experience while searching for apps and games on Google Play. Today we have two updates to improve the experience for both developers and users.

A global content rating system based on industry standards

Today we’re introducing a new age-based rating system for apps and games on Google Play. We know that people in different countries have different ideas about what content is appropriate for kids, teens and adults, so today’s announcement will help developers better label their apps for the right audience. Consistent with industry best practices, this change will give developers an easy way to communicate familiar and locally relevant content ratings to their users and help improve app discovery and engagement by letting people choose content that is right for them.

Starting now, developers can complete a content rating questionnaire for each of their apps and games to receive objective content ratings. Google Play’s new rating system includes official ratings from the International Age Rating Coalition (IARC) and its participating bodies, including the Entertainment Software Rating Board (ESRB), Pan-European Game Information (PEGI), Australian Classification Board, Unterhaltungssoftware Selbstkontrolle (USK) and Classificação Indicativa (ClassInd). Territories not covered by a specific ratings authority will display an age-based, generic rating. The process is quick, automated and free to developers. In the coming weeks, consumers worldwide will begin to see these new ratings in their local markets.

To help maintain your apps’ availability on Google Play, sign in to the Developer Console and complete the new rating questionnaire for each of your apps. Apps without a completed rating questionnaire will be marked as “Unrated” and may be blocked in certain territories or for specific users. Starting in May, all new apps and updates to existing apps will require a completed questionnaire before they can be published on Google Play.

An app review process that better protects users

Several months ago, we began reviewing apps before they are published on Google Play to better protect the community and improve the app catalog. This new process involves a team of experts who are responsible for identifying violations of our developer policies earlier in the app lifecycle. We value the rapid innovation and iteration that is unique to Google Play, and will continue to help developers get their products to market within a matter of hours after submission, rather than days or weeks. In fact, there has been no noticeable change for developers during the rollout.

To assist in this effort and provide more transparency to developers, we’ve also rolled out improvements to the way we handle publishing status. Developers now have more insight into why apps are rejected or suspended, and they can easily fix and resubmit their apps for minor policy violations.

Over the past year, we’ve paid more than $7 billion to developers and are excited to see the ecosystem grow and innovate. We’ll continue to build tools and services that foster this growth and help the developer community build successful businesses.

Join the discussion on +Android Developers
Categories: Programming

Helping developers connect with families on Google Play

Android Developers Blog - Thu, 04/16/2015 - 03:43

Posted by Eunice Kim, Product Manager, Google Play

There are thousands of Android developers creating experiences for families and children — apps and games that broaden the mind and inspire creativity. These developers, like PBS Kids, Tynker and Crayola, carefully tailor their apps to provide high-quality, age-appropriate content, from optimizing user interface design for children to building interactive features that both educate and entertain.

Google Play is committed to the success of this emerging developer community, so today we’re introducing a new program called Designed for Families, which allows developers to designate their apps and games as family-friendly. Participating apps will be eligible for upcoming family-focused experiences on Google Play that will help parents discover great, age-appropriate content and make more informed choices.

Starting now, developers can opt in their app or game through the Google Play Developer Console. From there, our team will review the submission to verify that it meets the Designed for Families program requirements. In the coming weeks, we’ll be adding new ways to promote family content to users on Google Play — we’ll have more to share on this soon.

Join the discussion on +Android Developers
Categories: Programming

Power Great Gaming with New Analytics from Play Games

Android Developers Blog - Thu, 04/16/2015 - 03:30

By Ben Frenkel, Google Play Games team

A few weeks ago at the Game Developers Conference (GDC), we announced Play Games Player Analytics, a new set of free reports to help you manage your games business and understand in-game player behavior. Today, we’re excited to make these new tools available to you in the Google Play Developer Console.

Analytics is a key component of running a game as a service, which is increasingly becoming a necessity for running a successful mobile gaming business. When you take a closer look at large developers that do this successfully, you find that they do three things really well:

  • Manage their business to revenue targets
  • Identify hot spots in their business metrics so they can continuously focus on the game updates that will drive the most impact
  • Use analytics to understand how players are progressing, spending, and churning

“With player engagement and revenue data living under one roof, developers get a level of data quality that is simply not available to smaller teams without dedicated staff. As the tools evolve, I think Google Play Games Player Analytics will finally allow indie devs to confidently make data-driven changes that actually improve revenue.”

Kevin Pazirandeh
Developer of Zombie Highway 2

With Player Analytics, we wanted to make these capabilities available to the entire developer ecosystem on Google Play in a frictionless, easy-to-use way, freeing up your precious time to create great gaming experiences. Small studios, including the makers of Zombie Highway 2 and Bombsquad, have already started to see the benefits and impact of Player Analytics on their business.

Further, if you integrate with Google Play game services, you get this set of analytics with no incremental effort. But, for a little extra work, you can also unlock another set of high impact reports by integrating Google Play game services Events, starting with the Sources and Sinks report, a report to help you balance your in-game economy.

If you already have a game integrated with Google Play game services, go check out the new reports in the Google Play Developer Console today. For everyone else, enabling Player Analytics is as simple as adding a handful of lines of code to your game to integrate Google Play game services.

Manage your business to revenue targets

Set your spend target in Player Analytics by choosing a daily goal

To help assess the health of your games business, Player Analytics enables you to select a daily in-app purchase revenue target and then assess how you're doing against that goal through the Target vs Actual report depicted below. Learn more.

Identify hot spots using benchmarks with the Business Drivers report

Ever wonder how your game’s performance stacks up against other games? Player Analytics tells you exactly how well you are doing compared to similar games in your category.

Metrics highlighted in red are below the benchmark. Arrows indicate whether a metric is trending up or down, and any cell with the icon can be clicked to see more details about the underlying drivers of the change. Learn more.

Track player retention by new user cohort

In the Retention report, you can see the percentage of players who continued to play your game on each of the seven days after installing it.

Learn more.

See where players are spending their time, struggling, and churning with the Player Progression report

Measured by the number of achievements players have earned, the Player Progression funnel helps you identify where your players are struggling and churning to help you refine your game and, ultimately, improve retention. Add more achievements to make progression tracking more precise.

Learn more.

Manage your in-game economy with the Sources and Sinks report

The Sources and Sinks report helps you balance your in-game economy by showing the relationship between how quickly players are earning or buying and using resources.

For example, Eric Froemling, the one-man developer of BombSquad, used the Sources & Sinks report to help balance the rate at which players earned and spent tickets.

Read more about Eric’s experience with Player Analytics in his recent blog post.

To enable the Sources and Sinks report you will need to create and integrate Play game services Events that track sources of premium currency (e.g., gold coins earned), and sinks of premium currency (e.g., gold coins spent to buy in-app items).

Categories: Programming

Experimenting with Swift and UIStoryboardSegues

Xebia Blog - Wed, 04/15/2015 - 21:58

Lately I've been experimenting a lot with doing things differently in Swift. I'm still trying to find best practices and discover completely new ways of doing things. One example of this is passing objects from one view controller to another through a segue in a single line of code, which I will cover in this post.

Imagine two view controllers, a BookViewController and an AuthorViewController. Both are in the same storyboard and the BookViewController has a button that pushes the AuthorViewController on the navigation controller through a segue. To know which author we need to show on the AuthorViewController we need to pass an author object from the BookViewController to the AuthorViewController. The traditional way of doing this is giving the segue an identifier and then setting the object:

class BookViewController: UIViewController {

    var book: Book!

    override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
        if segue.identifier == "ShowAuthor" {
            let authorViewController = segue.destinationViewController as! AuthorViewController
            authorViewController.author = book.author
        }
    }
}

class AuthorViewController: UIViewController {

    var author: Author!
}

And in case we would use a modal segue that shows a AuthorViewController embedded in a navigation controller, the code would be slightly more complex:

if segue.identifier == "ShowAuthor" {
  let authorViewController = (segue.destinationViewController as! UINavigationController).viewControllers[0] as! AuthorViewController
  authorViewController.author = book.author
}

Now let's see how we can add an extension to UIStoryboardSegue that makes this a bit easier and works the same for both scenarios. Instead of checking the segue identifier we will just check on the type of the destination view controller. We assume that based on the type the same object is passed on, even when there are multiple segues going to that type.

extension UIStoryboardSegue {

    func destinationViewControllerAs<T>(cl: T.Type) -> T? {
        return destinationViewController as? T ?? (destinationViewController as? UINavigationController)?.viewControllers[0] as? T
    }
}

What we've done here is add the method destinationViewControllerAs to UIStoryboardSegue that checks if the destinationViewController is of the generic type T. If it's not, it will check if the destinationViewController is a navigation controller and if its first view controller is of type T. If it finds either one, it will return that instance of T. Since it can also be nil, the return type is an optional T.

It's now incredibly simple to pass on our author object to the AuthorViewController:

override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
  segue.destinationViewControllerAs(AuthorViewController.self)?.author = book.author
}

No more need to check identifiers, and less code. Now I'm not saying that this is the best way to do it or that it's even better than the traditional way of doing things. But it does show that Swift offers us new ways of doing things, and it's worth experimenting to find best practices.

The source code of the samples and extension is available on https://github.com/lammertw/StoryboardSegueExtension.

The Google Fit Developer Challenge winners

Google Code Blog - Wed, 04/15/2015 - 17:01

Posted by Angana Ghosh, Product Manager, Google Fit

Last year, we teamed up with adidas, Polar, and Withings to invite developers to create amazing fitness apps that integrated the new Google Fit platform. The community of Google Fit developers has flourished since then and to help get them inspired, we even suggested a few ideas for new, fun, innovative fitness apps. Today, we’re announcing the twelve grand prize winners, whose apps will be promoted on Google Play.

  • 7MinGym: All you need is this app, a chair, and a wall to start benefiting from 7 minute workouts at home. You can play music from your favorite music app and cast your workout to Chromecast or Android TV.
  • Aqualert: This app reminds you to stay hydrated throughout the day and lets you track your water intake.
  • Cinch Weight Loss and Fitness: Cinch gives you detailed information on your steps taken and calories burned. The app also supports heart-rate tracking with compatible Android Wear devices.
  • FitHub: FitHub lets you track your fitness activity from multiple accounts, including Google Fit, and multiple wearable devices, including Android Wear. You can also add your friends to compare your progress!
  • FitSquad: FitSquad turns fitness into a competition. Join your friends in a squad to compare progress, track achievements, and cheer each other on.
  • Instant - Quantified Self: Instant is a lifestyle app that helps you track not only your physical activity but your digital activity too, telling you how much you’re using your phone and apps. You can also set usage limits and reminders.
  • Jump Rope Wear Counter: This simple app lets you count your jump rope skips with an Android Wear device.
  • Move it!: This app packs one neat feature – it reminds you to get up and move about if you haven’t been active in the last hour.
  • Openrider - GPS Cycling Riding: Track and map your cycle routes with Openrider.
  • Running Buddies: In this run-tracking app, runners can choose to share their runs and stats with those around them, so they can find other runners like themselves to go running with.
  • Strength: Strength is a workout tracking app that also lets you choose from a number of routines, so you can get to your workout quickly and track it without manual data entry. Schedules and rest timers come included.
  • Walkholic: Walkholic is another way to see your Google Fit walking, cycling, and running data. You can also turn on notifications if you don’t meet your own preset goals.

We saw a wide range of apps that integrated Google Fit, and both the grand prize winners and the runner ups will be receiving some great devices from our challenge partners to help with their ongoing fitness app development: the X_CELL and SPEED_CELL from adidas, a new Android Wear device, a Loop activity tracker with a H7 heart rate sensor from Polar, and a Smart Body Analyzer from Withings.

We’re thrilled these developers chose to integrate the Google Fit platform into their apps, giving users one place to keep all their fitness activities. With the user’s permission, any developer can store or read the user’s data from Google Fit and use it to build powerful and useful fitness experiences. Find out more about integrating Google Fit into your app.

Categories: Programming

Spark: Generating CSV files to import into Neo4j

Mark Needham - Tue, 04/14/2015 - 23:56

About a year ago Ian pointed me at a Chicago Crime data set, which seemed like a good fit for Neo4j, and after much procrastination I’ve finally got around to importing it.

The data set covers crimes committed from 2001 until now. It contains around 4 million crimes and metadata about those crimes, such as the location, type of crime and year, to name a few.

The contents of the file follow this structure:

$ head -n 10 ~/Downloads/Crimes_-_2001_to_present.csv
ID,Case Number,Date,Block,IUCR,Primary Type,Description,Location Description,Arrest,Domestic,Beat,District,Ward,Community Area,FBI Code,X Coordinate,Y Coordinate,Year,Updated On,Latitude,Longitude,Location
9464711,HX114160,01/14/2014 05:00:00 AM,028XX E 80TH ST,0560,ASSAULT,SIMPLE,APARTMENT,false,true,0422,004,7,46,08A,1196652,1852516,2014,01/20/2014 12:40:05 AM,41.75017626412204,-87.55494559131228,"(41.75017626412204, -87.55494559131228)"
9460704,HX113741,01/14/2014 04:55:00 AM,091XX S JEFFERY AVE,031A,ROBBERY,ARMED: HANDGUN,SIDEWALK,false,false,0413,004,8,48,03,1191060,1844959,2014,01/18/2014 12:39:56 AM,41.729576153145636,-87.57568059471686,"(41.729576153145636, -87.57568059471686)"
9460339,HX113740,01/14/2014 04:44:00 AM,040XX W MAYPOLE AVE,1310,CRIMINAL DAMAGE,TO PROPERTY,RESIDENCE,false,true,1114,011,28,26,14,1149075,1901099,2014,01/16/2014 12:40:00 AM,41.884543798701515,-87.72803579358926,"(41.884543798701515, -87.72803579358926)"
9461467,HX114463,01/14/2014 04:43:00 AM,059XX S CICERO AVE,0820,THEFT,$500 AND UNDER,PARKING LOT/GARAGE(NON.RESID.),false,false,0813,008,13,64,06,1145661,1865031,2014,01/16/2014 12:40:00 AM,41.785633535413176,-87.74148516669783,"(41.785633535413176, -87.74148516669783)"
9460355,HX113738,01/14/2014 04:21:00 AM,070XX S PEORIA ST,0820,THEFT,$500 AND UNDER,STREET,true,false,0733,007,17,68,06,1171480,1858195,2014,01/16/2014 12:40:00 AM,41.766348042591375,-87.64702037047671,"(41.766348042591375, -87.64702037047671)"
9461140,HX113909,01/14/2014 03:17:00 AM,016XX W HUBBARD ST,0610,BURGLARY,FORCIBLE ENTRY,COMMERCIAL / BUSINESS OFFICE,false,false,1215,012,27,24,05,1165029,1903111,2014,01/16/2014 12:40:00 AM,41.889741146006095,-87.66939334853973,"(41.889741146006095, -87.66939334853973)"
9460361,HX113731,01/14/2014 03:12:00 AM,022XX S WENTWORTH AVE,0820,THEFT,$500 AND UNDER,CTA TRAIN,false,false,0914,009,25,34,06,1175363,1889525,2014,01/20/2014 12:40:05 AM,41.85223460427207,-87.63185047834335,"(41.85223460427207, -87.63185047834335)"
9461691,HX114506,01/14/2014 03:00:00 AM,087XX S COLFAX AVE,0650,BURGLARY,HOME INVASION,RESIDENCE,false,false,0423,004,7,46,05,1195052,1847362,2014,01/17/2014 12:40:17 AM,41.73607283858007,-87.56097809501115,"(41.73607283858007, -87.56097809501115)"
9461792,HX114824,01/14/2014 03:00:00 AM,012XX S CALIFORNIA BLVD,0810,THEFT,OVER $500,STREET,false,false,1023,010,28,29,06,1157929,1894034,2014,01/17/2014 12:40:17 AM,41.86498077118534,-87.69571529596696,"(41.86498077118534, -87.69571529596696)"

Since I wanted to import this into Neo4j I needed to do some massaging of the data since the neo4j-import tool expects to receive CSV files containing the nodes and relationships we want to create.


I’d been looking at Spark towards the end of last year and the pre-processing of the big initial file into smaller CSV files containing nodes and relationships seemed like a good fit.

I therefore needed to create a Spark job to do this. We’ll then pass this job to a Spark executor running locally and it will spit out CSV files.


We start by creating a Scala object with a main method that will contain our processing code. Inside that main method we’ll instantiate a Spark context:

import org.apache.spark.{SparkConf, SparkContext}
 
object GenerateCSVFiles {  
    def main(args: Array[String]) {    
        val conf = new SparkConf().setAppName("Chicago Crime Dataset")    
        val sc = new SparkContext(conf)  
    }
}

Easy enough. Next we’ll read in the CSV file. I found the easiest way to reference this was with an environment variable but perhaps there’s a more idiomatic way:

import java.io.File
import org.apache.spark.{SparkConf, SparkContext}
 
object GenerateCSVFiles {
  def main(args: Array[String]) {
    var crimeFile = System.getenv("CSV_FILE")
 
    if(crimeFile == null || !new File(crimeFile).exists()) {
      throw new RuntimeException("Cannot find CSV file [" + crimeFile + "]")
    }
 
    println("Using %s".format(crimeFile))
 
    val conf = new SparkConf().setAppName("Chicago Crime Dataset")
 
    val sc = new SparkContext(conf)
    val crimeData = sc.textFile(crimeFile).cache()
  }
}

The type of crimeData is RDD[String] – Spark’s way of representing the (lazily evaluated) lines of the CSV file. This also includes the header of the file so let’s write a function to get rid of that since we’ll be generating our own headers for the different files:

import org.apache.spark.rdd.RDD
 
// http://mail-archives.apache.org/mod_mbox/spark-user/201404.mbox/%3CCAEYYnxYuEaie518ODdn-fR7VvD39d71=CgB_Dxw_4COVXgmYYQ@mail.gmail.com%3E
def dropHeader(data: RDD[String]): RDD[String] = {
  data.mapPartitionsWithIndex((idx, lines) => {
    if (idx == 0) {
      lines.drop(1)
    }
    lines
  })
}

Now we’re ready to start generating our new CSV files so we’ll write a function which parses each line and extracts the appropriate columns. I’m using Open CSV for this:

import au.com.bytecode.opencsv.CSVParser
import org.apache.hadoop.fs.FileUtil
 
def generateFile(file: String, withoutHeader: RDD[String], fn: Array[String] => Array[String], header: String , distinct:Boolean = true, separator: String = ",") = {
  FileUtil.fullyDelete(new File(file))
 
  val tmpFile = "/tmp/" + System.currentTimeMillis() + "-" + file
  val rows: RDD[String] = withoutHeader.mapPartitions(lines => {
    val parser = new CSVParser(',')
    lines.map(line => {
      val columns = parser.parseLine(line)
      fn(columns).mkString(separator)
    })
  })
 
  if (distinct) rows.distinct() saveAsTextFile tmpFile else rows.saveAsTextFile(tmpFile)
}

We then call this function like this:

generateFile("/tmp/crimes.csv", withoutHeader, columns => Array(columns(0),"Crime", columns(2), columns(6)), "id:ID(Crime),:LABEL,date,description", false)

The output into ‘tmpFile’ is actually 32 ‘part files’ but I wanted to be able to merge those together into individual CSV files that were easier to work with.

I won’t paste the full job here but if you want to take a look it’s on github.

Now we need to submit the job to Spark. I’ve wrapped this in a script if you want to follow along but these are the contents:

./spark-1.1.0-bin-hadoop1/bin/spark-submit \
--driver-memory 5g \
--class GenerateCSVFiles \
--master local[8] \ 
target/scala-2.10/playground_2.10-1.0.jar \
$@

If we execute that we’ll see the following output…

Spark assembly has been built with Hive, including Datanucleus jars on classpath
Using Crimes_-_2001_to_present.csv
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/04/15 00:31:44 INFO SparkContext: Running Spark version 1.3.0
...
15/04/15 00:47:26 INFO TaskSchedulerImpl: Removed TaskSet 8.0, whose tasks have all completed, from pool
15/04/15 00:47:26 INFO DAGScheduler: Stage 8 (saveAsTextFile at GenerateCSVFiles.scala:51) finished in 2.702 s
15/04/15 00:47:26 INFO DAGScheduler: Job 4 finished: saveAsTextFile at GenerateCSVFiles.scala:51, took 8.715588 s
 
real	0m44.935s
user	4m2.259s
sys	0m14.159s

and these CSV files will be generated:

$ ls -alh /tmp/*.csv
-rwxrwxrwx  1 markneedham  wheel   3.0K 14 Apr 07:37 /tmp/beats.csv
-rwxrwxrwx  1 markneedham  wheel   217M 14 Apr 07:37 /tmp/crimes.csv
-rwxrwxrwx  1 markneedham  wheel    84M 14 Apr 07:37 /tmp/crimesBeats.csv
-rwxrwxrwx  1 markneedham  wheel   120M 14 Apr 07:37 /tmp/crimesPrimaryTypes.csv
-rwxrwxrwx  1 markneedham  wheel   912B 14 Apr 07:37 /tmp/primaryTypes.csv

Let’s have a quick check what they contain:

$ head -n 10 /tmp/beats.csv
id:ID(Beat),:LABEL
1135,Beat
1421,Beat
2312,Beat
1113,Beat
1014,Beat
2411,Beat
1333,Beat
2521,Beat
1652,Beat
$ head -n 10 /tmp/crimes.csv
id:ID(Crime),:LABEL,date,description
9464711,Crime,01/14/2014 05:00:00 AM,SIMPLE
9460704,Crime,01/14/2014 04:55:00 AM,ARMED: HANDGUN
9460339,Crime,01/14/2014 04:44:00 AM,TO PROPERTY
9461467,Crime,01/14/2014 04:43:00 AM,$500 AND UNDER
9460355,Crime,01/14/2014 04:21:00 AM,$500 AND UNDER
9461140,Crime,01/14/2014 03:17:00 AM,FORCIBLE ENTRY
9460361,Crime,01/14/2014 03:12:00 AM,$500 AND UNDER
9461691,Crime,01/14/2014 03:00:00 AM,HOME INVASION
9461792,Crime,01/14/2014 03:00:00 AM,OVER $500
$ head -n 10 /tmp/crimesBeats.csv
:START_ID(Crime),:END_ID(Beat),:TYPE
5896915,0733,ON_BEAT
9208776,2232,ON_BEAT
8237555,0111,ON_BEAT
6464775,0322,ON_BEAT
6468868,0411,ON_BEAT
4189649,0524,ON_BEAT
7620897,0421,ON_BEAT
7720402,0321,ON_BEAT
5053025,1115,ON_BEAT

Looking good. Let’s get them imported into Neo4j:

$ ./neo4j-community-2.2.0/bin/neo4j-import --into /tmp/my-neo --nodes /tmp/crimes.csv --nodes /tmp/beats.csv --nodes /tmp/primaryTypes.csv --relationships /tmp/crimesBeats.csv --relationships /tmp/crimesPrimaryTypes.csv
Nodes
[*>:45.76 MB/s----------------------------------|PROPERTIES(2)=============|NODE:3|v:118.05 MB/]  4M
Done in 5s 605ms
Prepare node index
[*RESOLVE:64.85 MB-----------------------------------------------------------------------------]  4M
Done in 4s 930ms
Calculate dense nodes
[>:42.33 MB/s-------------------|*PREPARE(7)===================================|CALCULATOR-----]  8M
Done in 5s 417ms
Relationships
[>:42.33 MB/s-------------|*PREPARE(7)==========================|RELATIONSHIP------------|v:44.]  8M
Done in 6s 62ms
Node --> Relationship
[*>:??-----------------------------------------------------------------------------------------]  4M
Done in 324ms
Relationship --> Relationship
[*LINK-----------------------------------------------------------------------------------------]  8M
Done in 1s 984ms
Node counts
[*>:??-----------------------------------------------------------------------------------------]  4M
Done in 360ms
Relationship counts
[*>:??-----------------------------------------------------------------------------------------]  8M
Done in 653ms
 
IMPORT DONE in 26s 517ms

Next I updated conf/neo4j-server.properties to point to my new database:

#***************************************************************
# Server configuration
#***************************************************************
 
# location of the database directory
#org.neo4j.server.database.location=data/graph.db
org.neo4j.server.database.location=/tmp/my-neo

Now I can start up Neo and start exploring the data:

$ ./neo4j-community-2.2.0/bin/neo4j start
MATCH (:Crime)-[r:CRIME_TYPE]->() 
RETURN r 
LIMIT 10

There are lots more relationships and entities that we could pull out of this data set – what I’ve done is just a start. So if you’re up for some more Chicago crime exploration, the code and instructions explaining how to run it are on github.

Categories: Programming

New course: Take Android app performance to the next level

Android Developers Blog - Tue, 04/14/2015 - 17:40

Posted by Jocelyn Becker, Developer Advocate

Building the next great Android app isn't enough. You can have the most amazing social integration, best API coverage, and coolest photo filters, but none of that matters if your app is slow and frustrating to use.

That's why we've launched our new online training course at Udacity, focusing entirely on improving Android performance. This course complements the Android Performance Patterns video series, focused on giving you the resources to help make fast, smooth, and awesome experiences for users.

Created by Android Performance guru Colt McAnlis, this course reviews the main pillars of performance (rendering, compute, and battery). You'll work through tutorials on how to use the tools in Android Studio to find and fix performance problems.

By the end of the course, you'll understand how common performance problems arise from your hardware, OS, and application code. Using profiling tools to gather data, you'll learn to identify and fix performance bottlenecks so users can have that smooth 60 FPS experience that will keep them coming back for more.

Take the course: https://www.udacity.com/course/ud825. Join the conversation and follow along on social at #PERFMATTERS.

Join the discussion on +Android Developers
Categories: Programming

Questions with a license to kill in the Sprint Review

Xebia Blog - Tue, 04/14/2015 - 09:19

A team I had been coaching held a sprint review to show what they had achieved and to get feedback from stakeholders. Among these were managers, other teams, enterprise architects, and other interested colleagues.

In the past sprint they had built and put into operation the automation of part of the Continuous Delivery pipeline. This was quite a big achievement for the team. The organization had been struggling for quite some time to get this working, and the team had achieved it in a couple of sprints!

Team - "Anyone has questions or wants to know more?"
Stakeholder - "Thanks for the demo. How does the shown solution deal with 'X'?"

The team replied with a straightforward answer to this relatively simple question.

Stakeholder - "I have more questions related to the presented solution and concerns on corporate level, but this is probably not the good time to go into details."

What just happened and how did the team respond?

First, let me describe how the dialogue continued.

Team - "Let's make time now because it is important. What do you have on your mind?"
Stakeholder - "On corporate level solution Y is defined to deal with the company's concern related to Z compliance. I am concerned with how your solution will deal with this. Please elaborate on this."
[Everybody in the organization knows that Z compliance has been a hot topic during the past year.]

Team - "We have thought of several alternatives to deal with this issue. One of these is to have a coupling with another system that will provide the compliance. Also we see possibilities in altering the ....."
Stakeholder - "The other system has issues coping with this and is not suited for what you want it to do. What are your plans for dealing with this?"

The team replied with more details after which the stakeholder asked even more detailed questions....

How did the team get itself out of this situation?

After a couple of questions and answers the team responded with "Look, the organisation has been struggling to find a working solution for quite some time now and hasn't succeeded. Therefore, we are trying a new and different approach. Since this is new, we don't have all the answers yet. The next steps will deal with your concerns."

Team - "Thanks for your feedback and see you all at the next demo!"

Killing a good idea

In the dialogue above between the team and one stakeholder during the sprint review, the stakeholder kept asking detailed questions about specific aspects of the solution. He also related these to well-known corporate issues whose importance was very clear to everyone. In doing so he, consciously or unconsciously, planted doubt in the audience about whether the approach chosen by the team is a good one and whether it should perhaps be abandoned.

This can be especially dangerous if it is not dealt with appropriately. For instance, managers who were at first supportive of the (good) idea might turn against the approach, even though the idea is a good one.

Dealing with these and other difficult questions

In his book 'Buy-in - saving your good idea from getting shot down' John Kotter describes 4 basic types of attack:

  1. Fear mongering
  2. Death by delay
  3. Confusion
  4. Ridicule

Attacks can be one of these four or any combination of these. The above attack is an example of a combination of 'Fear mongering' (relating to the fear that important organisational concerns are not properly addressed) and 'Confusion' (asking about many details that are not yet worked out).

In addition, Kotter describes 24 basic attacks. The attack as described above is an example of attack no. 6.

Don't worry. No need to remember all 24 responses; they all follow a very simple strategy that can be applied:

Step 1: Invite the stakeholder(s) to ask their questions,

Step 2: Respect the person asking the question by taking his point seriously,

Step 3: Respond in a reasonable and concise way.

The team did well by inviting the stakeholder to come forward with his questions. This is good marketing to the rest of the stakeholders: it shows that the team believes in the idea (their solution) and is confident enough to respond to any (critical) question.

Second, the team responded in a respectful way, taking the question seriously as a valid concern, and did so in a concise and reasonable manner.

As Kotter explains, it is not about convincing that one critical stakeholder; it is about not losing the rest of the stakeholders!

References

"Buy-in - saving your good idea from getting shot down" - John P. Kotter & Lorne A. Whitehead, https://hbr.org/product/buy-in-saving-your-good-idea-from-getting-shot-dow/an/12703-HBK-ENG

"24 attacks & responses" - John P. Kotter & Lorne Whitehead, http://nextgen.kotterinternational.com/our-principles/buy-in/24-attacks-and-24-responses

The Realtime API: In memory mode, debug tools, and more

Google Code Blog - Mon, 04/13/2015 - 21:20

Posted by Cheryl Simon Retzlaff, Software Engineer on the Realtime API team

Originally posted to the Google Apps Developer blog

Real-time collaboration is a powerful feature for getting work done inside Google docs. We extended that functionality with the Realtime API to enable you to create Google-docs style collaborative applications with minimal effort.

Integration of the API becomes even easier with a new in-memory mode, which allows you to manipulate a Realtime document using the standard API without being connected to our servers. No user login or authorization is required. This is great for building applications where Google login is optional, writing tests for your app, or experimenting with the API before configuring auth.

The Realtime debug console lets you view, edit and debug a Realtime model. To launch the debugger, simply execute gapi.drive.realtime.debug(); in the JavaScript console in Chrome.

Finally, we have refreshed the developer guides to make it easier for you to learn about the API as a new or advanced user. Check them out at https://developers.google.com/drive/realtime.

For details on these and other recent features, see the release note.

Categories: Programming

Mobile Sync for Mongo

Eric.Weblog() - Eric Sink - Mon, 04/13/2015 - 19:00

We here at Zumero have been exploring the possibility of a mobile sync solution for MongoDB.

We first released our Zumero for SQL Server product almost 18 months ago, and today there are bunches of people using mobile apps which sync using our solution.

But not everyone uses SQL Server, so we often wonder what other database backends we should consider supporting. In this blog entry, I want to talk about some progress we've made toward a "Zumero for Mongo" solution and "think out loud" about the possibilities.

Background: Mobile Sync

The basic idea of mobile sync is to keep a partial copy of the database on the mobile device so the app doesn't have to go back to the network for every single CRUD operation. The benefit is an app that is faster, more reliable, and works offline. The flip side of that coin is the need to keep the mobile copy of the database synchronized with the data on the server.

Sync is tricky, but as mobile continues its explosive growth, this approach is gaining momentum.

If the folks at Mongo are already working on something in this area, we haven't seen any sign of it. So we decided to investigate some ideas.

Pieces of the puzzle

In addition to the main database (like SQL Server or MongoDB or whatever), a mobile sync solution has three basic components:

Mobile database
  • Runs on the mobile device as part of the app

  • Probably an embedded database library

  • Keeps a partial replica of the main database

  • Wants to be as similar as possible to the main database

Sync server
  • Monitors changes made by others to the main database

  • Sends incremental changes back and forth between clients and the main database

  • Resolves conflicts, such as when two participants want to change the same data

  • Manages authentication and permissions for mobile clients

  • Filters data so that each client only gets what it needs

Sync client
  • Monitors changes made by the app to the mobile database

  • Talks over the network to the sync server

  • Pushes and pulls incremental changes to keep the mobile database synchronized

For this blog entry, I want to talk mostly about the mobile database. In our Zumero for SQL Server solution, this role is played by SQLite. There are certainly differences between SQL Server and SQLite, but on the whole, SQLite does a pretty good job pretending to be SQL Server.

What embedded database could play this role for Mongo?

This question has no clear answer, so we've been building a lightweight Mongo-compatible database. Right now it's just a prototype, but its development serves the purpose of helping us explore mobile sync for Mongo.

Embeddable Lite Mongo

Or "Elmo", for short.

Elmo is a database that is designed to be as Mongo-compatible as it can be within the constraints of mobile devices.

In terms of the status of our efforts, let me begin with stuff that does NOT work:

  • Sharding is an example of a Mongo feature that Elmo does not support and probably never will.

  • Elmo also has no plans to support any feature which requires embedding a JavaScript engine, since that would violate Apple's rules for the App Store.

  • We do hope to support full text search ($text, $meta, etc), but this is not yet implemented.

  • Similarly, we have not yet implemented any of the geo features, but we consider them to be within the scope of the project.

  • Elmo does not support capped collections, and we are not yet sure if it should.

Broadly speaking, except for the above, everything works. Mostly:

  • All documents are stored in BSON

  • Except for JS code, all BSON types are supported

  • Comparison and sorting of BSON values (including different types) works

  • All basic CRUD operations are implemented

  • The update command supports all the update operators except $isolated

  • The update command supports upsert as well

  • The findAndModify command includes full support for its various options

  • Basic queries are fully functional, including query operators, projection, and sorting

  • The matcher supports Mongo's notion of query predicates matching any element of an array

  • CRUD operations support resolution of paths into array subobjects, like x.y to {x:[{y:2}]}

  • Regex works, with support for the i, s, and m options

  • The positional operator $ works in update and projection

  • Cursors and batchSize are supported

  • The aggregation pipeline is supported, including all expression elements and all stages (except geo)

More caveats:

  • Support for indexes is being implemented, but they don't actually speed anything up yet.

  • The dbref format is tolerated, but is not [yet] resolved.

  • The $explain feature is not implemented yet.

  • For the purpose of storing BSON blobs, Elmo is currently using SQLite. Changing this later will be straightforward, as we're basically just using SQLite as a key-value store, so the API between all of Elmo's CRUD logic and the storage layer is not very wide.

Notes on testing:

  • Although mobile-focused Elmo does not need an actual server, it has one, simply so that we can run the jstests suite against it.

  • The only test suite sections we have worked on are jstests/core and jstests/aggregation.

  • Right now, Elmo can pass 311 of the test cases from jstests.

  • We have never tried contacting Elmo with any client driver except the mongo shell. So this probably doesn't work yet.

  • Elmo's server only supports the new style protocol, including OP_QUERY, OP_GET_MORE, OP_KILL_CURSORS, and OP_REPLY. None of the old "fire and forget" messages are implemented.

  • Where necessary to make a test case pass, Elmo tries to return the same error numbers as Mongo itself.

  • All effort thus far has been focused on making Elmo functional, with no effort spent on performance.

How Elmo should work:

  • In general, our spec for Elmo's behavior is the MongoDB documentation plus the jstests suite.

  • In cases where the Mongo docs seem to differ from the actual behavior of Mongo, we try to make Elmo behave like Mongo does.

  • In cases where the Mongo docs are silent, we often stick a proxy in front of the Mongo server and dump all the messages so we can see exactly what is going on.

  • We occasionally consult the Mongo server source code for reference purposes, but no Mongo code has been copied into Elmo.

Notes on the code:

  • Elmo is written in F#, which was chosen because it's an insanely productive environment and we want to move quickly.

  • But while F# is a great language for this exploratory prototype, it may not be the right choice for production, simply because it would confine Elmo use cases to Xamarin, and Miguel's world domination plan is not quite complete yet. :-)

  • The Elmo code is now available on GitHub at https://github.com/zumero/Elmo. Currently the license is GPLv3, which makes it incompatible with production use on mobile platforms, which is okay for now, since Elmo isn't ready for production use anyway. We'll revisit licensing issues later.

Next steps:

  • Our purpose in this blog entry is to start conversations with others who may be interested in mobile sync solutions for Mongo.

  • Feel free to post a question or comment or whatever as an issue on GitHub: https://github.com/zumero/Elmo/issues

  • Or email me: eric@zumero.com

  • Or Tweet: @eric_sink

  • If you're interested in a face-to-face conversation or a demo, we'll be at MongoDB World in NYC at the beginning of June.

Why Comments Are Stupid, a Real Example

Making the Complex Simple - John Sonmez - Mon, 04/13/2015 - 16:00

Nothing seems to stir up religious debate more so than when I write a post or do a YouTube video that mentions how most of the time comments are not necessary and are actually more harmful than helpful. I first switched sides in this debate when I read the second edition of Code Complete. In […]

The post Why Comments Are Stupid, a Real Example appeared first on Simple Programmer.

Categories: Programming

R: Creating an object with functions to calculate conditional probability

Mark Needham - Sun, 04/12/2015 - 08:55

I’ve been working through Allen Downey’s Think Bayes and I thought it’d be an interesting exercise to translate some of the code from Python to R.

The first example is a simple one about conditional probability and the author creates a class ‘Pmf’ (Probability Mass Function) to solve the following problem:

Suppose there are two bowls of cookies. Bowl 1 contains 30 vanilla cookies and 10 chocolate cookies. Bowl 2 contains 20 of each.

Now suppose you choose one of the bowls at random and, without looking, select a cookie at random. The cookie is vanilla.

What is the probability that it came from Bowl 1?
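
For reference, the answer is a direct application of Bayes' theorem (Bowl 1 is 30/40 = 0.75 vanilla, Bowl 2 is 20/40 = 0.5 vanilla):

P(Bowl 1 | vanilla) = P(vanilla | Bowl 1) * P(Bowl 1) / (P(vanilla | Bowl 1) * P(Bowl 1) + P(vanilla | Bowl 2) * P(Bowl 2))
                    = (0.75 * 0.5) / (0.75 * 0.5 + 0.5 * 0.5)
                    = 0.6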

In Python the code looks like this:

pmf = Pmf()
pmf.Set('Bowl 1', 0.5)
pmf.Set('Bowl 2', 0.5)
 
pmf.Mult('Bowl 1', 0.75)
pmf.Mult('Bowl 2', 0.5)
 
pmf.Normalize()
 
print pmf.Prob('Bowl 1')

The ‘Pmf’ class is defined here.

  • ‘Set’ defines the prior probability of picking a cookie from either bowl i.e. in our case it’s random.
  • ‘Mult’ defines the likelihood of picking a vanilla biscuit from either bowl
  • ‘Normalize’ applies a normalisation so that our posterior probabilities add up to 1.

We want to create something similar in R, and the actual calculation is straightforward:

pBowl1 = 0.5
pBowl2 = 0.5
 
pVanillaGivenBowl1 = 0.75
pVanillaGivenBowl2 = 0.5
 
> (pBowl1 * pVanillaGivenBowl1) / ((pBowl1 * pVanillaGivenBowl1) + (pBowl2 * pVanillaGivenBowl2))
0.6
 
> (pBowl2 * pVanillaGivenBowl2) / ((pBowl1 * pVanillaGivenBowl1) + (pBowl2 * pVanillaGivenBowl2))
0.4

The problem is we have quite a bit of duplication and it doesn’t read as cleanly as the Python version.
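
As an aside, a minimal vectorised sketch removes the duplication, although it doesn't give us the mutable Set/Mult/Normalize style API of the Python version:

# named vectors for the priors and the likelihood of vanilla from each bowl
priors = c("Bowl 1" = 0.5, "Bowl 2" = 0.5)
likelihoods = c("Bowl 1" = 0.75, "Bowl 2" = 0.5)
 
# element-wise product gives the unnormalised posteriors
unnormalised = priors * likelihoods
 
> unnormalised / sum(unnormalised)
Bowl 1 Bowl 2 
   0.6    0.4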

I’m not sure of the idiomatic way of handling this type of problem with mutable state in R, but it seems like we can achieve it using functions.

I ended up writing the following function which returns a list of other functions to call.

create.pmf = function() {
  # note: '<<-' here ends up defining 'priors' and 'likelihoods' as global variables (see below)
  priors <<- c()
  likelihoods <<- c()
  list(
    # store the prior probability of an option, e.g. "Bowl 1"
    prior = function(option, probability) {
      l = c(probability)
      names(l) = c(option)
      priors <<- c(priors, l)
    },
    # store the likelihood of the observation given that option
    likelihood = function(option, probability) {
      l = c(probability)
      names(l) = c(option)
      likelihoods <<- c(likelihoods, l)
    },
    # posterior = prior * likelihood, divided by the normalising constant
    posterior = function(option) {
      names = names(priors)
      normalised = 0.0
      for(name in names) {
        normalised = normalised + (priors[name] * likelihoods[name])
      }
 
      (priors[option] * likelihoods[option]) / normalised
    }
  )
}

I couldn’t work out how to get ‘priors’ and ‘likelihoods’ to be lexically scoped so I’ve currently got those defined as global variables. I’m using a list as a kind of dictionary following a suggestion on Stack Overflow.
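
As a sketch of one way to keep the state lexically scoped (a variant I'll call create.pmf2, not part of the book), we can initialise the vectors with '=' inside the outer function; the '<<-' in the inner functions then finds and updates them in the enclosing environment instead of creating globals:

create.pmf2 = function() {
  # 'priors' and 'likelihoods' live in this call's environment,
  # captured by the closures returned below
  priors = c()
  likelihoods = c()
  list(
    prior = function(option, probability) {
      l = c(probability)
      names(l) = option
      priors <<- c(priors, l)    # updates the enclosing environment, not the global one
    },
    likelihood = function(option, probability) {
      l = c(probability)
      names(l) = option
      likelihoods <<- c(likelihoods, l)
    },
    posterior = function(option) {
      # normalising constant: sum of prior * likelihood over all options
      normalised = sum(priors * likelihoods[names(priors)])
      (priors[option] * likelihoods[option]) / normalised
    }
  )
}
 
pmf2 = create.pmf2()
pmf2$prior("Bowl 1", 0.5)
pmf2$prior("Bowl 2", 0.5)
pmf2$likelihood("Bowl 1", 0.75)
pmf2$likelihood("Bowl 2", 0.5)
 
> pmf2$posterior("Bowl 1")
Bowl 1 
   0.6

Used like this it gives the same posteriors as the version above without leaving 'priors' and 'likelihoods' lying around in the global environment.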

The code doesn’t handle the unhappy path very well but it seems to work for the example from the book:

pmf = create.pmf()
 
pmf$prior("Bowl 1", 0.5)
pmf$prior("Bowl 2", 0.5)
 
pmf$likelihood("Bowl 1", 0.75)
pmf$likelihood("Bowl 2", 0.5)
 
> pmf$posterior("Bowl 1")
Bowl 1 
   0.6 
> pmf$posterior("Bowl 2")
Bowl 2 
   0.4

How would you solve this type of problem? Is there a cleaner/better way?

Categories: Programming