Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

Book Review: Message Not Received By Phil Simon

Making the Complex Simple - John Sonmez - Thu, 02/12/2015 - 15:00

A few weeks ago, I sent off an email to Phil Simon, with my phone number, asking him to give me a call to discuss book publicists. I had figured email wasn’t the best form of communication for this discussion—and I couldn’t have been more right. Little did I know that author and—oops I almost used the word technologist—Phil Simon, ... Read More

The post Book Review: Message Not Received By Phil Simon appeared first on Simple Programmer.

Categories: Programming

Control protected ranges and sheets in Google Sheets with Apps Script

Google Code Blog - Wed, 02/11/2015 - 21:00
A few weeks ago, the Google Sheets team introduced improved control over protected sheets and ranges. Developers have told us they're looking for the same power in Google Apps Script — after all, ever since we added data validation to the Spreadsheet service, programmatic control over protected ranges has been the most popular request on the Apps Script issue tracker.

Today, we are excited to give you that granular control.

With the new Protection class in the Spreadsheet service, your scripts can touch every aspect of range or sheet protection, just like in the new UI. (The older PageProtection class, which had more limited features, will be deprecated, but will stick around in case you need to work with older spreadsheets. The new Protection class only applies to the newer version of Sheets.)

Code samples
So let's see the new stuff in action. Let's say you want to prohibit anyone other than yourself from editing cells A1:B10:

// Protect range A1:B10, then remove all other users from the list of editors.
var ss = SpreadsheetApp.getActive();
var range = ss.getRange('A1:B10');
var protection = range.protect().setDescription('Sample protected range');

// Ensure the current user is an editor before removing others. Otherwise, if the user's edit
// permission comes from a group, the script will throw an exception upon removing the group.
var me = Session.getEffectiveUser();
protection.addEditor(me);
protection.removeEditors(protection.getEditors());
if (protection.canDomainEdit()) {
  protection.setDomainEdit(false);
}

Or maybe you want to remove all range protections in the whole spreadsheet:

// Remove all range protections in the spreadsheet that the user has permission to edit.
var ss = SpreadsheetApp.getActive();
var protections = ss.getProtections(SpreadsheetApp.ProtectionType.RANGE);
for (var i = 0; i < protections.length; i++) {
  var protection = protections[i];
  if (protection.canEdit()) {
    protection.remove();
  }
}

Or perhaps you want to protect an entire sheet, but carve out a small hole in it — an unprotected range within a protected sheet — that others can still edit:

// Protect the active sheet except B2:C5, then remove all other users from the list of editors.
var sheet = SpreadsheetApp.getActiveSheet();
var protection = sheet.protect().setDescription('Sample protected sheet');
var unprotected = sheet.getRange('B2:C5');
protection.setUnprotectedRanges([unprotected]);

// Ensure the current user is an editor before removing others. Otherwise, if the user's edit
// permission comes from a group, the script will throw an exception upon removing the group.
var me = Session.getEffectiveUser();
protection.addEditor(me);
protection.removeEditors(protection.getEditors());
if (protection.canDomainEdit()) {
  protection.setDomainEdit(false);
}

Bam! Easy. Hope you find this useful, and happy scripting!

Posted by Sam Berlin, engineer, Google Sheets
Categories: Programming

Trulia sees 30% more engagement using notifications and further innovates with Android Wear

Android Developers Blog - Wed, 02/11/2015 - 20:01

Posted by Laura Della Torre, Google Play team

Trulia’s mission is to make it as easy as possible for home buyers, sellers, owners and renters to navigate the real estate market. Originally a website-based company, Trulia is keenly aware that its users are migrating to mobile. Today, more than 50 percent of Trulia’s business comes from mobile and growth shows no sign of slowing, so they know that’s where they need to innovate.

In the following video, Jonathan McNulty, VP of Consumer Product, and Lauren Hirashima, Mobile Product Manager at Trulia, talk about how the company successfully leveraged notifications on Android to increase app engagement by 30 percent, and has seen twice the engagement on Android relative to other platforms:

Trulia continues to focus on improving their mobile experience, using Google’s geo-fencing technology to create Nearby Home Alerts, which lets users know when they walk near a new listing. Combined with Android Wear, Trulia now makes it possible for users to see details and photos about a property and call or email the agent, all directly from their watch.

Find out more about using rich notifications on Android and developing for Android Wear. And check out The Secrets to App Success on Google Play (ebook) which contains a chapter dedicated to the best practices and tools you can use to increase user engagement and retention in your app.

Join the discussion on +Android Developers
Categories: Programming

The Guardian — Understanding and engaging mobile users

Android Developers Blog - Wed, 02/11/2015 - 20:00

Posted by Leticia Lago, Google Play team

The Guardian is a global news organization with one of the world's largest quality English-speaking news websites, theguardian.com. It has more than 100 million monthly unique browsers and app users, two thirds of which come from outside the UK. With a longstanding reputation for agenda-setting journalism, the publication is most recently renowned for its Pulitzer Prize and Emmy-winning coverage of the disclosures made by whistleblower Edward Snowden. The Guardian’s early adoption of a digital-first policy and continued digital innovation means it has also become a respected name among developers and tech audiences. In the last year, it has launched a redesigned app and new website and been among a handful of publishers to develop its own Glassware.

The Guardian app is taking advantage of unique Google Play and Android features to drive user engagement. Their mobile app readers are now 10 to 20 times more engaged than their average web users. Improving engagement has also helped them lift the rating for their app from 4.0 to 4.4 on Google Play.

Anthony Sullivan, Director of Product, and Tom Grinsted, Product Manager, share some best practices for increasing app engagement in this video.

To learn more, be sure to check out these resources to better engage your users:

  • Convert installs to active users [video] — hear from Matteo Vallone, Partner Development Manager for Google Play, about the best practices for engaging and retaining users through intents, identity, context, and rich notifications as well as delivering a cross-platform user experience.
  • Adding Wearable Features to Notifications [tutorial] — learn how to add notifications to Android Wear devices, including how to make use of the Wear notification features: voice commands, stacks, and pages.
  • Beta testing [help] — discover how to make use of the alpha and beta testing features offered by Google Play, and get feedback from real users.
  • Build your community (of testers) [guide] — get advice on how to build communities on G+ or other social networks, then tap into their skills and enthusiasm to help with testing your app.
Categories: Programming

Rescuing an Outsourced Project from Collapse: 8 Problems Found and 8 Lessons Learned

If you are one of those people who think most of the products featured on HighScalability use way too many servers, then you'll love this story: 130 VMs serving fewer than 10,000 users daily were chopped down to just one machine.

Here's the setup. A smallish website was having problems. Users were unhappy. In the balance was not only the product, but the company. The site was built using Angular, Symfony2, Postgres, Redis, CentOS, 8 HP blades with 128 GB of RAM each, two racks, a very large HP 3PAR storage array, a 1 Gbps uplink, and VMware.

More than enough power for the task at hand. Yet the system couldn't handle the load. What would you do?

That's the story Jacques Mattheij tells in his very entertaining and educational Saving a Project and a Company article.

Jacques says much was right about the website, but time pressure and mismanagement created big problems at the system level. "A single clueless person in a position of trust with non technical management, an outsourced project and a huge budget, what could possibly go wrong?" Sound familiar? 

Problem 1: Virtualization Gone Crazy
Categories: Architecture

Quote of the Day

Herding Cats - Glen Alleman - Wed, 02/11/2015 - 17:13

Our mathematical models are far from complete, but they provide us with schemes that model reality with great precision - a precision enormously exceeding that of any description that is free of mathematics. - Roger Penrose, "What is Reality?", New Scientist, 2006

Anyone who suggests that "all models are wrong, some models are useful" without having George Box's book on the shelf and being able to point to the page that quote is on, or without having read Penrose, is speaking about that of which he is uninformed. Do not listen.

Related articles
  • The Actual Science in Management Science
  • Bayesian Statistics and Project Work
  • We Suck At Estimating
Categories: Project Management

Posted: Creating an Environment of Leadership

My most recent Pragmatic Manager, Creating an Environment of Leadership, is up.

If you like these tips and the ones in Discovering Your Leadership, check out the Influential Agile Leader.

Gil Broza and I are offering the Influential Agile Leader twice this year: once in San Francisco, and once in London. The early bird deadline for registration is Feb 15.

If you are trying to transition to agile and you’re having more challenges than you expected, you owe it to yourself to participate.

If you have questions, contact me. I’m happy to answer them.

Categories: Project Management

R: Weather vs attendance at NoSQL meetups

Mark Needham - Wed, 02/11/2015 - 08:09

A few weeks ago I came across a tweet by Sean Taylor asking for a weather data set with a few years worth of recording and I was surprised to learn that R already has such a thing – the weatherData package.

Winner is: @UTVilla!

library(weatherData)
df <- getWeatherForYear("SFO", 2013)
ggplot(df, aes(x = Date, y = Mean_TemperatureF)) + geom_line()

— Sean J. Taylor (@seanjtaylor) January 22, 2015

weatherData provides a thin veneer around the Wunderground API and was exactly what I'd been looking for to compare attendance at London's NoSQL meetups against the weather conditions on the day of each event.

The first step was to download the appropriate weather recordings and save them to a CSV file so I wouldn’t have to keep calling the API.

I thought I may as well download all the recordings available to me and wrote the following code to make that happen:

library(weatherData)
 
# London City Airport
getDetailedWeatherForYear = function(year) {
  getWeatherForDate("LCY", 
                    start_date= paste(sep="", year, "-01-01"),
                    end_date = paste(sep="", year, "-12-31"),
                    opt_detailed = FALSE,
                    opt_all_columns = TRUE)
}
 
df = rbind(getDetailedWeatherForYear(2011), 
      getDetailedWeatherForYear(2012),
      getDetailedWeatherForYear(2013),
      getDetailedWeatherForYear(2014),
      getWeatherForDate("LCY", start_date="2015-01-01",
                        end_date = "2015-01-25",
                        opt_detailed = FALSE,
                        opt_all_columns = TRUE))

I then saved that to a CSV file:

write.csv(df, 'weather/temp_data.csv', row.names = FALSE)
"Date","GMT","Max_TemperatureC","Mean_TemperatureC","Min_TemperatureC","Dew_PointC","MeanDew_PointC","Min_DewpointC","Max_Humidity","Mean_Humidity","Min_Humidity","Max_Sea_Level_PressurehPa","Mean_Sea_Level_PressurehPa","Min_Sea_Level_PressurehPa","Max_VisibilityKm","Mean_VisibilityKm","Min_VisibilitykM","Max_Wind_SpeedKm_h","Mean_Wind_SpeedKm_h","Max_Gust_SpeedKm_h","Precipitationmm","CloudCover","Events","WindDirDegrees"
2011-01-01,"2011-1-1",7,6,4,5,3,1,93,85,76,1027,1025,1023,10,9,3,14,10,NA,0,7,"Rain",312
2011-01-02,"2011-1-2",4,3,2,1,0,-1,87,81,75,1029,1028,1027,10,10,10,11,8,NA,0,7,"",321
2011-01-03,"2011-1-3",4,2,1,0,-2,-5,87,74,56,1028,1024,1019,10,10,10,8,5,NA,0,6,"Rain-Snow",249
2011-01-04,"2011-1-4",6,3,1,3,1,-1,93,83,65,1019,1013,1008,10,10,10,21,6,NA,0,5,"Rain",224
2011-01-05,"2011-1-5",8,7,5,6,3,0,93,80,61,1008,1000,994,10,9,4,26,16,45,0,4,"Rain",200
2011-01-06,"2011-1-6",7,4,3,6,3,1,93,90,87,1002,996,993,10,9,5,13,6,NA,0,5,"Rain",281
2011-01-07,"2011-1-7",11,6,2,9,5,2,100,91,82,1003,999,996,10,7,2,24,11,NA,0,5,"Rain-Snow",124
2011-01-08,"2011-1-8",11,7,4,8,4,-1,87,77,65,1004,997,987,10,10,5,39,23,50,0,5,"Rain",230
2011-01-09,"2011-1-9",7,4,3,1,0,-1,87,74,57,1018,1012,1004,10,10,10,24,16,NA,0,NA,"",242

If we want to read that back in future we can do so with the following code:

weather = read.csv("weather/temp_data.csv")
weather$Date = as.POSIXct(weather$Date)
 
> weather %>% sample_n(10) %>% select(Date, Min_TemperatureC, Mean_TemperatureC, Max_TemperatureC)
           Date Min_TemperatureC Mean_TemperatureC Max_TemperatureC
1471 2015-01-10                5                 9               14
802  2013-03-12               -2                 1                4
1274 2014-06-27               14                18               22
848  2013-04-27                5                 8               10
832  2013-04-11                6                 8               10
717  2012-12-17                6                 7                9
1463 2015-01-02                6                 9               13
1090 2013-12-25                4                 6                7
560  2012-07-13               15                18               20
1230 2014-05-14                9                14               19

The next step was to bring the weather data together with the meetup attendance data that I already had.

For simplicity’s sake I’ve got those saved in a CSV file as we can just read those in as well:

timestampToDate <- function(x) as.POSIXct(x / 1000, origin="1970-01-01", tz = "GMT")
 
events = read.csv("events.csv")
events$eventTime = timestampToDate(events$eventTime)
 
> events %>% sample_n(10) %>% select(event.name, rsvps, eventTime)
                                                           event.name rsvps           eventTime
36                                   London Office Hours - Old Street    10 2012-01-18 17:00:00
137                                          Enterprise Search London    34 2011-05-23 18:15:00
256                           MarkLogic User Group London: Jim Fuller    40 2014-04-29 18:30:00
117                                  Neural Networks and Data Science   171 2013-03-28 18:30:00
210                                  London Office Hours - Old Street     3 2011-09-15 17:00:00
443                                                      July social!    12 2014-07-14 19:00:00
322                                                   Intro to Graphs    39 2014-09-03 18:30:00
203                                  Vendor focus: Amazon CloudSearch    24 2013-05-16 17:30:00
17  Neo4J Tales from the Trenches: A Recommendation Engine Case Study    12 2012-04-25 18:30:00
55                                                London Office Hours    10 2013-09-18 17:00:00

Now that we’ve got our two datasets ready we can plot a simple chart of the average attendance and temperature grouped by month:

byMonth = events %>% 
  mutate(month = factor(format(eventTime, "%B"), levels=month.name)) %>%
  group_by(month) %>%
  summarise(events = n(), 
            count = sum(rsvps)) %>%
  mutate(ave = count / events) %>%
  arrange(desc(ave))
 
averageTemperatureByMonth = weather %>% 
  mutate(month = factor(format(Date, "%B"), levels=month.name)) %>%
  group_by(month) %>% 
  summarise(aveTemperature = mean(Mean_TemperatureC))
 
g1 = ggplot(aes(x = month, y = aveTemperature, group=1), data = averageTemperatureByMonth) + 
  geom_line( ) + 
  ggtitle("Temperature by month")
 
g2 = ggplot(aes(x = month, y = count, group=1), data = byMonth) + 
  geom_bar(stat="identity", fill="dark blue") +
  ggtitle("Attendance by month")
 
library(gridExtra)
grid.arrange(g1,g2, ncol = 1)

(plots: temperature by month and total attendance by month)

We can see a rough inverse correlation between the temperature and attendance, particularly between April and August – as the temperature increases, total attendance decreases.

But what about if we compare at a finer level of granularity such as a specific date? We can do that by adding a ‘day’ column to our events data frame and merging it with the weather one:

byDay = events %>% 
  mutate(day = as.Date(as.POSIXct(eventTime))) %>%
  group_by(day) %>%
  summarise(events = n(), 
            count = sum(rsvps)) %>%
  mutate(ave = count / events) %>%
  arrange(desc(ave))
weather = weather %>% mutate(day = Date)
merged = merge(weather, byDay, by = "day")

Now we can plot the attendance vs the mean temperature for individual days:

ggplot(aes(x =count, y = Mean_TemperatureC,group = day), data = merged) + 
  geom_point()
(scatter plot: daily attendance vs mean temperature)

Interestingly there now doesn’t seem to be any correlation between the temperature and attendance. We can confirm our suspicions by running a correlation:

> cor(merged$count, merged$Mean_TemperatureC)
[1] 0.008516294

Not even 1% correlation between the values! One way to confirm that non-correlation is to plot the average temperature against the average attendance rather than the total attendance:

g1 = ggplot(aes(x = month, y = aveTemperature, group=1), data = averageTemperatureByMonth) + 
  geom_line( ) + 
  ggtitle("Temperature by month")
 
g2 = ggplot(aes(x = month, y = ave, group=1), data = byMonth) + 
  geom_bar(stat="identity", fill="dark blue") +
  ggtitle("Attendance by month")
 
grid.arrange(g1,g2, ncol = 1)

(plots: temperature by month and average attendance by month)

Now we can see there’s not really that much of a correlation between temperature and month – in fact 9 of the months have a very similar average attendance. It’s only July, December and especially August where there’s a noticeable dip.

This could suggest there’s another variable other than temperature which is influencing attendance in these months. My hypothesis is that we’d see lower attendance in the weeks of school holidays – the main ones happen in July/August, December and March/April (which interestingly don’t show the dip!)

Another interesting thing to look into is whether the reason for the dip in attendance isn’t through lack of will from attendees but rather because there aren’t actually any events to go to. Let’s plot the number of events being hosted each month against the temperature:

g1 = ggplot(aes(x = month, y = aveTemperature, group=1), data = averageTemperatureByMonth) + 
  geom_line( ) + 
  ggtitle("Temperature by month")
 
g2 = ggplot(aes(x = month, y = events, group=1), data = byMonth) + 
  geom_bar(stat="identity", fill="dark blue") +
  ggtitle("Events by month")
 
grid.arrange(g1,g2, ncol = 1)

(plots: temperature by month and number of events by month)

Here we notice there’s a big dip in events in December – organisers are hosting less events and we know from our earlier plot that on average less people are attending those events. Lots of events are hosted in the Autumn, slightly fewer in the Spring and fewer in January, March and August in particular.

Again there’s no particular correlation between temperature and the number of events being hosted on a particular day:

ggplot(aes(x = events, y = Mean_TemperatureC,group = day), data = merged) + 
  geom_point()

(scatter plot: events per day vs mean temperature)

There’s not any obvious correlation from looking at this plot although I find it difficult to interpret plots where we have the values all grouped around very few points (often factor variables) on one axis and spread out (continuous variable) on the other. Let’s confirm our suspicion by calculating the correlation between these two variables:

> cor(merged$events, merged$Mean_TemperatureC)
[1] 0.0251698

Back to the drawing board for my attendance prediction model then!

If you have any suggestions for doing this analysis more effectively, or if I've made any mistakes, please let me know in the comments – I'm still learning how to investigate what data is actually telling us.

Categories: Programming

Ready to Develop

Ready to develop ensures that the team is ready to work as soon as they sit down.

The definition of done is an important tool to help guide programs and teams. It is the set of requirements that the software must meet to be considered complete. For example, a simple definition of done is:

All stories must be unit tested, a code review performed, integrated into the main build, integration tested, and release documentation completed.

The definition of done is generally agreed upon by the entire core team at the beginning of a project or program and stays roughly the same over the life of the project. It provides all team members with an outline of the macro requirements that all stories must meet, and therefore helps in estimating by suggesting many of the tasks that will be required. Another perspective on the definition of done is that it represents the requirements generated by an organization's policies, processes and methods. For example, the organization may have a policy that requires code to be scanned for security holes.

Bookending the concept of done is the concept of ready to develop (or just ready). Ready to develop is just as powerful a tool as the definition of done. Ready acts as a filter to determine whether a user story is ready to be passed to the team and taken into a sprint, and keeps teams from spinning their wheels working on stories that are unknown, malformed or not understood. The basic definition of ready I typically use is:

  1. The story is well-formed. The story follows the format of persona, goal, value/benefit. The classic user story format ensures the team knows who the story supports, the functionality it is planned to deliver AND the business value it is expected to deliver.
  2. The story fulfills the criteria encompassed by the acronym INVEST.
    1. Independent – Development of the story is not dependent on other incomplete or undeveloped stories. This does not mean that a story cannot build on previously completed stories.
    2. Negotiable – The story generates conversation and collaboration with the product owner and subject matter experts. Stories are not a contract; rather, they are a tool to ensure that business ideas are explored as the stories evolve from idea into code.
    3. Valuable – The story will deliver a demonstrable benefit when it is complete.
    4. Estimable – The team can accurately (enough) estimate the size of the story. The ability to estimate requires that the team has a decent understanding of what the story will require.
    5. Small – The story can be completed in a single sprint.
    6. Testable – The story can be easily unit and acceptance tested.
  3. A story must have acceptance criteria. Acceptance criteria are critical components in the definition of the story’s requirements. All acceptance criteria must be satisfied for the story to be complete (not necessarily done).
  4. Each story should have any external (not on the team) subject matter experts (SMEs) identified with contact details. External SMEs generally participate in the conversation and collaboration needed to deliver a story (N in INVEST).
  5. There are no external dependencies that will prevent the story from being completed. In projects in which the work is completed by a single team, this criterion is generally subsumed by the independent criterion in INVEST. However, in larger projects, interactions with other teams, applications or factors can generate dependencies. Those dependencies need to be identified and cleared before a story enters the development process.

The definition of ready is a hurdle in much the same manner as the definition of done. The definition of done ensures that a team delivers functionality that meets not only the requirements of the product owner and business, but also the requirements of the organization. The definition of ready makes sure that the stories a team is asked to plan and develop are ready, so the team doesn't waste time and can deliver the maximum amount of value possible. Getting stories ready for development does not happen by magic; rather, the process to prepare stories is typically part of planning and grooming . . . but that’s up next.


Categories: Process Management

Random thoughts on big data

I began blogging in 2005; back then I managed to post something new almost every day. Now, 10 years later, I hardly post anything. I was beginning to think I didn't have anything left to say, but I recently noticed I have quite a few posts in various states of "draft". I guess that I am spending too much time thinking about how to get a polished idea out there, rather than just going ahead and writing what's on my mind. This post is an attempt to change that by putting out some thoughts I have (on big data in this case) without worrying too much about how complete and polished they are.

Anyway, here we go:

  1. All data is time-series – When data is added to the big data store (Hadoop or otherwise) it is already historical, i.e. it is being imported from a transactional system, even if it is being streamed into the platform. If you treat the data as historical and somehow version it (the most simplistic way is adding a timestamp) before you store it, you can see how the data changes over time – and when you relate it to other data, you can see both the way the system was at the time a particular piece of data (e.g. an event) was created as well as get the latest version and see its state now. Essentially, treating all data as slowly changing dimensions gives you enhanced capabilities when you want to analyse the data later.
  2. Enrich data with “foreign keys” before persisting it (or close to it) – Usually a data source does not stand alone and can be related to other data, either from the same source or otherwise. Resolving some of these correlations while the data is fresh and the context is known can save a lot of time later, both because you'd otherwise probably do these correlations multiple times instead of once, and because when the data is ingested the context and relations are more obvious than when, say, a year later you try to make sense of the data and recall its relations to other data.
  3. Land data in a queue – This ties nicely to the previous two points, as the types of enrichment mentioned above are well suited to handling in streaming. If you land all data in a queue you gain a unified ingestion pipeline for both batch and streaming data. Naturally, not all computations can be handled in streaming, but you'd be able to share at least some of the pipeline.
  4. Lineage is important (and doesn’t get enough attention) – Raw data is just that; to get insights you need to process, enrich and aggregate it, which often creates a disconnect between the end result and the original data. Understanding how insights were generated is important both for debugging problems and for ensuring compliance (for actions that demand it).
  5. Not everything is big data – Big data is all the rage today, but a lot of problems don't require it. When you move to a distributed system you complicate the solution and, more importantly, slow the processing (until, of course, you hit the threshold where the data can't be handled by a single machine). This is even truer for big data systems, where you have a lot of distributed nodes, so the management (coordination etc.) is more costly (back at Nice we referred to the initialization time for Hadoop jobs as “waking the elephant”, and we wanted to make sure the wait was worth it).
  6. Don’t underestimate scale-up – A related point to the above. Machines today are quite powerful and when the problem is only “bigish” it might be that a scale-up solution would solve it better and cheaper. Read for example “Scalability! But at what COST?” by Frank McSherry and “Command-line tools can be 235x faster than your Hadoop cluster” by Adam Drake as two examples for points 5 & 6

This concludes this batch of thoughts. Comments and questions are welcome.

image by Nilay Gandhi

Categories: Architecture

Debut your app during SxSW with Google Cloud Platform Build-Off

Google Code Blog - Tue, 02/10/2015 - 19:31

Posted by Greg Wilson, Head of Developer Advocacy for Google Cloud Platform

Originally posted to the Google Cloud Platform blog

From popular mobile apps (Foursquare) to acclaimed indie films (The Grand Budapest Hotel), some of the world’s most creative ideas have debuted at the annual SxSW Festival in Austin, Texas. For over 25 years, SxSW's goal has been to bring together the most creative people and companies to meet and share ideas. We think one of those next great ideas could be yours, and we’d like to help it be a part of SxSW.

Do you have an idea for a new app that you think is SxSW worthy? Enter it in the Google Cloud Platform Build-Off. Winners will receive up to $5,000 in cash. First prize also includes $100,000 in Google Cloud Platform credit and 24/7 support, and select winners will have the chance to present their app to 200 tech enthusiasts during the Build-Off awards ceremony at SxSW.

Here’s how it works:

  • Develop an app on Google Cloud Platform that pushes the boundary on what technology can do for music, film or gaming
  • Enter on your own or form teams of up to 3 members
  • Submit your application between 5 - 28 February 2015
  • Apps will be evaluated based on their originality, effectiveness in addressing a need, use of Google tools, and technical merit

Visit the official Build-Off website to see the full list of rules and FAQs and follow us at #GCPBuildOff on G+ and Twitter for more updates. We cannot wait to see what innovation your creativity leads to next.

Categories: Programming

Software Engineering is a Verb

Herding Cats - Glen Alleman - Tue, 02/10/2015 - 17:36

I found this picture on the web; the OP didn't know where it came from, so I have no attribution. It speaks volumes about the gap between knowing and doing. This notion of knowing and doing is at the heart of engineered solutions to complex problems, versus created solutions. Engineered solutions are certainly created, in that creativity is mandatory for the engineered solution to provide the needed capabilities the customer ordered.

HDS simple minded plan

But the inverse may not be true. Engineering is a broad term:

Engineering (from Latin ingenium, meaning "cleverness" and ingeniare, meaning "to contrive, devise") is the application of scientific, economic, social, and practical knowledge in order to invent, design, build, maintain, research, and improve structures, machines, devices, systems, materials and processes.

Software Engineering does not stand alone; it is part of Systems Engineering,

Which is an interdisciplinary field of engineering that focuses on how to design and manage complex engineering projects over their life cycles. Issues such as reliability, logistics, coordination of different teams (requirements management), evaluation measurements, and other disciplines become more difficult when dealing with large or complex projects. Systems engineering deals with work-processes, optimization methods, and risk management tools in such projects. It overlaps technical and human-centered disciplines such as control engineering, industrial engineering, organizational studies, and project management. Systems engineering ensures that all likely aspects of a project or system are considered, and integrated into a whole.

Coding software, in the absence of Computer Science or Software Engineering, is not likely part of any engineering discipline, but is a skill of turns ideas into code.

Software Engineering is

The study and an application of engineering to the design, development, and maintenance of software. Typical formal definitions of software engineering are: the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software.

So If You're in the Engineering Business...

If you're in the engineered solutions business, systems engineering, software engineering, business process engineering, would the picture at the top be considered credible by your customers, your professors in college, your business development team, your board of directors?

The picture at the top says several things:

  • You've never do a project like this before, or anything like it. Why would someone give you money?
  • You were not paying attention in that probability and statistics class in college (engineering or computer science) when the prof explained all elements of projects are random variables.
  • You've come to believe that the externalities of microeconomics of software development are applicable to your project.
  • You're a sole proprietor with tons of money and don't really care how much it costs, when you'll be done, or what you get when you're done

The picture at the botton says several things:

  • You have experience in the development of projects like this and know that uncertainty creates risk, and risk management is how adults manage projects.
  • You actually know how to apply probability and statistics, have used Bayesian forecasting - not just the word, but the actual math - to forecast a probabilistic outcome from your past experiences.
  • You paid attention in that engineering principles class where they taught you how to move forward in the design and development process with incremental outcomes. Yes this incremental development is the basis of all good engineering practices. The Big Design Up Front strawman went away 40 years ago.
Categories: Project Management

Software Engineering is a Verb

Herding Cats - Glen Alleman - Tue, 02/10/2015 - 16:31

I found this picture on the web. The OP didn't know where it came from so I have no attribution. It speaks volumes to the gap between knowing and doing. The notion of knowing and doing is at the heart of Engineered solutions to complex problems, versus created solutions. Of course engineered solutions are created - in that creativity is mandatory for the engineered solution to provide the needed capabilities the customer ordered. But the engineered aspects are the framework in which that creativity is performed.

[Image: HDS simple minded plan]

But the inverse may not be true. Engineering is a broad term.

Engineering (from Latin ingenium, meaning "cleverness" and ingeniare, meaning "to contrive, devise") is the application of scientific, economic, social, and practical knowledge in order to invent, design, build, maintain, research, and improve structures, machines, devices, systems, materials and processes.

Software Engineering is not always stand-alone, but might be considered part of Systems Engineering. Software in many cases is embedded in a physical object used by people or by other physical objects, or embedded in physical processes used by people. These objects are systems, and maybe even Systems of Systems.

Systems Engineering is an interdisciplinary field of engineering that focuses on how to design and manage complex engineering projects over their life cycles. Issues such as reliability, logistics, coordination of different teams (requirements management), evaluation measurements, and other disciplines become more difficult when dealing with large or complex projects. Systems engineering deals with work-processes, optimization methods, and risk management tools in such projects. It overlaps technical and human-centered disciplines such as control engineering, industrial engineering, organizational studies, and project management. Systems engineering ensures that all likely aspects of a project or system are considered, and integrated into a whole.

Coding software, in the absence of Computer Science or Software Engineering, is not likely part of any engineering discipline, but is the skill of turning ideas into code.

Software Engineering is

The study and application of engineering to the design, development, and maintenance of software. A typical formal definition: the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software.

So If You're in the Engineered Solutions Business...

If you're in the engineered solutions business, systems engineering, software engineering, business process engineering, would the picture at the top be considered credible by your customers, your professors in college, your business development team, your board of directors?

The picture at the top says several things:

  • You've not likely done a project like this before. Why would someone give you money to learn now? (Remember, we're engineering the solution, not conducting research.)
  • You were not paying attention in that probability and statistics class in college (engineering or computer science) when the prof explained all elements of projects are random variables.
  • You've come to believe that the externalities and microeconomics of software development do not apply to your project.
  • You're a sole proprietor with tons of money and don't really care how much it costs, when you'll be done, or what you get when you're done. You just want to produce something meaningful to you.

The picture at the bottom says several things:

  • You have experience in the development of systems like this and know that uncertainty creates risk, and risk management is how adults manage projects.
  • You actually know how to apply probability and statistics, and have used Bayesian forecasting - not just tossed the term around, but applied the actual math - to forecast a probabilistic outcome from your past experience.
  • You paid attention in that engineering principles class where they taught you how to move forward in the design and development process with incremental outcomes. Yes, this incremental development is the basis of all good engineering practice. The Big Design Up Front straw man went away 40 years ago.
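The claim that all elements of projects are random variables can be made concrete with a minimal sketch. The task names and duration estimates below are invented for illustration, and a simple triangular distribution stands in for whatever model your past-performance data actually supports; the point is that the output is a probabilistic forecast (a P50 and a P80 duration), not a single deterministic number:

```python
import random

# Hypothetical tasks: (optimistic, most likely, pessimistic) durations
# in days. A triangular distribution is a common, simple way to turn
# three-point expert estimates into a random variable.
tasks = [(3, 5, 10), (2, 4, 8), (5, 8, 15)]

def simulate_project(tasks, trials=10_000):
    """Return sorted samples of total project duration (Monte Carlo)."""
    totals = []
    for _ in range(trials):
        # Each trial draws one duration per task and sums them.
        totals.append(sum(random.triangular(lo, hi, mode)
                          for lo, mode, hi in tasks))
    return sorted(totals)

def percentile(samples, p):
    """Duration within which p% of simulated outcomes finish."""
    return samples[int(len(samples) * p / 100)]

samples = simulate_project(tasks)

# A probabilistic forecast: "80% confident we finish within N days",
# rather than a single-point estimate.
print(f"P50: {percentile(samples, 50):.1f} days")
print(f"P80: {percentile(samples, 80):.1f} days")
```

Swapping in distributions fitted from your own reference-class data - the actual math, not just the word - is what separates this kind of forecast from guessing.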

Here's some background on this topic

Related articles:

  • Overarching Paradigm of Project Success
  • The Actual Science in Management Science
  • We Suck At Estimating
  • Building a Credible Performance Measurement Baseline
  • Turning Editorials into Root Cause Analysis
Categories: Project Management

Homer Simpson, Difficult Stakeholder

Software Requirements Blog - Seilevel.com - Tue, 02/10/2015 - 16:00
I am a firm believer that any situation in life or in business can be traced to an episode of either The Simpsons or Seinfeld. In The Simpsons season 2 episode “Oh Brother, Where Art Thou,” the patriarch of the Simpson family designs a car, which provides a surprisingly sophisticated look into project team dynamics. […]
Categories: Requirements

Using Scrum to Plan Your Wedding

Mike Cohn's Blog - Tue, 02/10/2015 - 16:00

In my ScrumMaster classes I always make the point that Scrum is a general purpose framework that can be applied to projects of all sorts. I’ve seen it applied to building construction, marketing, legal cases, families, restaurant renovations, and, of course, all sorts of product development. One of my favorite examples is using Scrum to plan a wedding.

Think about how perfect that is, though. Scrum excels at projects (yes, planning a wedding is a project) that are complex and novel. And planning a wedding is both.

I came across a great website last week called ScrumYourWedding.com, which is a free resource for using Scrum to plan your wedding. Built by Hannah Kane and Julia Smith, the site is a great example of helping move Scrum outside its normal domain. Even if you’re already hitched, check it out for great insights on things like a Minimum Viable Wedding (a great example of that concept!) and sample Wedding Backlogs.

If you are getting married soon, congratulations, and check out their contest to win five hours of free coaching on using Scrum to plan your wedding.

Quote of the Month February 2015

From the Editor of Methods & Tools - Tue, 02/10/2015 - 13:47
The worrying thing about writing tests is that there are numerous accounts of people introducing tests into their development process and… ending up with even more junk code to support than they had to begin with! Why? I guess, mainly because they lacked the knowledge and understanding needed to write good tests: tests, that is, that will be an asset, not a burden. Reference: Bad Tests, Good Tests, Tomek Kaczanowski, http://practicalunittesting.com/btgt.php
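The asset-versus-burden distinction the quote draws can be sketched with a small, invented example (the function and the test names are hypothetical, not taken from the referenced book). A brittle test that re-implements the code under test becomes junk to maintain, while tests that state the observable contract with concrete values document intent and survive refactoring:

```python
def format_price(amount_cents):
    """Format a price in cents as a dollar string, e.g. 1050 -> '$10.50'."""
    return f"${amount_cents / 100:.2f}"

# Burden: this test duplicates the formatting logic instead of stating
# intent. It passes today, but any legitimate internal change (say,
# switching to decimal arithmetic) forces you to update the test without
# telling you whether observable behavior actually changed.
def test_price_brittle():
    assert format_price(1050) == "$" + "%.2f" % (1050 / 100)

# Asset: these tests pin down the contract with concrete expected values.
def test_price_dollars_and_cents():
    assert format_price(1050) == "$10.50"

def test_price_zero():
    assert format_price(0) == "$0.00"

test_price_brittle()
test_price_dollars_and_cents()
test_price_zero()
print("all tests passed")
```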

How to Design a Book… Make It an Experience

NOOP.NL - Jurgen Appelo - Tue, 02/10/2015 - 13:21
I’m reading the book Steve Jobs, the exclusive biography by Walter Isaacson. Granted, I’m not a big fan of Steve Jobs, nor have I purchased any of the products he has offered to the world, but there are reasons to admire him. One thing that I can admire is his well-known passion for design and detail. It is, I believe, a talent (or behavior or competence) that is lacking, sadly, among most self-publishing authors.

The post How to Design a Book… Make It an Experience appeared first on NOOP.NL.

Categories: Project Management

Episode 219: Apache Kafka with Jun Rao

Jeff Meyerson talks to Jun Rao, a software engineer and researcher (formerly of LinkedIn). Jun has spent much of his time researching MapReduce, scalable databases, query processing, and other facets of the data warehouse. For the past three years, he has been a committer to the Apache Kafka project. Jeff and Jun first compare streaming […]
Categories: Programming

Mentoring Organization Applications Now Being Accepted for Google Summer of Code 2015!

Google Code Blog - Mon, 02/09/2015 - 20:44

Posted by Carol Smith, Open Source Team

Originally posted to the Google Open Source Blog

Do you represent a free or open source software organization looking for new contributors? Do you love the challenge and reward of mentoring new developers in your community? Apply to be a mentoring organization in the Google Summer of Code program! The organization application period is now open.

Now in its 11th year, Google Summer of Code is a program designed to pair university students from around the world with mentors at open source projects in such varied fields as operating systems, language translations, content management systems, games, and scientific software. Since 2005, over 8,500 students from more than 100 countries have completed the Google Summer of Code program with the support of over 480 mentoring organizations. Students gain exposure to real-world software development while earning a stipend for their work and an opportunity to explore areas related to their academic pursuits during their school break. In return, mentoring organizations have the opportunity to identify and attract new developers to their projects as these students often continue their work with the organizations after Google Summer of Code concludes.

The deadline for applying to be a mentoring organization for Google Summer of Code is Friday, February 20 at 19:00 UTC (11am PST). The list of accepted organizations will be posted on the Google Summer of Code site on Monday, March 2nd. Students will then have two weeks to reach out to the accepted organizations to discuss their project ideas before we begin accepting student applications on March 16th.

Please visit our Frequently Asked Questions page for more details on the program. For more information you can check out the Mentor Manual, timeline and join the discussion group. You can also check out the Melange Manual for more information on using the website. Good luck to all of our mentoring organization applicants!

Categories: Programming

Vinted Architecture: Keeping a busy portal stable by deploying several hundred times per day

This is a guest post by Nerijus Bendžiūnas and Tomas Varaneckas of Vinted.

Vinted is a peer-to-peer marketplace to sell, buy and swap clothes. It allows members to communicate directly and has the features of a social networking service.

Started in 2008 as a small community for Lithuanian girls, it developed into a worldwide project that serves over 7 million users in 8 different countries, and is growing non-stop, handling over 200M requests per day.

Stats
Categories: Architecture