
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

The Agile Team Charter Is a Team Tool

In the age of empires, kings and queens granted charters to companies.


Project managers and developers have long been at odds over deliverables like the project charter. The project management view is that the information is needed, albeit by parties outside the project team, while project teams see their effort being siphoned off. Generally, the charter has been considered a control document, and therefore the bastion of stakeholders and project managers. Agile techniques, and the institution of the Agile team charter, have changed that; however, all of the parties that consumed the information in the classic charter still have information needs. There are three audiences for the information in the classic charter, and Agile, while still generating and sharing that information, provides it via different channels.

Agile teams are the primary consumer of the Agile team charter. The charter is a tool to help the team identify the project goal in their own words and then to concentrate the team’s attention specifically on that goal. Attention is an asset that is never overabundant, but it is critical if a team is going to deliver the stories to which it has committed. Along with direction, the charter also serves as a tool to guide the team’s behavior: the team identifies norms that establish an environment that is conducive to performance.

In the past, charters have been developed or framed by stakeholders or sponsors. Because of that authorship, the charter often takes on the feel of a contract that constrains and binds. Agile projects, and by extension Agile team charters, are flexible and dynamic to enhance the team’s ability to respond to user and product owner needs as they are discovered.

In many circumstances what stakeholders are looking for in the charter is a communication channel, or at the very least, a place at the table to influence and guide the project. In the past they have either written the charter or signed off on its contents. Agile projects respect and embrace this need by providing techniques that generate involvement. Inclusion of the product owner as a direct part of the team, and the participation of a wide range of stakeholders and users in demonstrations where the team seeks approval and feedback, are examples of an Agile team’s recognition of the business’ place at the table. Instead of asking stakeholders to define up front in the charter what is going to be delivered, Agile uses inclusion to dynamically define the project’s outcome. This ensures that as needs change, so do the goals of the project.

The classic project charter provides executives with an understanding of the important projects within their organizations. The need for information about strategic projects, whether Agile or any other method, is no different. Charters are often used in organizations to authorize or approve a project; when an executive signs off on the charter it signals the beginning of a project. Authorization and notification are taken care of with one signature. The Agile team charter is not the right tool for authorization because the charter now guides the team rather than authorizing it. Agile organizations separate the team charter from authorization and typically develop a simple authorization form that is separate from the charter.

In the age of empires, kings and queens granted charters to companies. These charters identified the organization’s rights and obligations, and typically established the governance structure for the endeavor. Today, many project charters follow in the same proud tradition. However, when practicing Agile, much of the structure of classic project charters is overhead or is shifted to other, more interactive techniques. In today’s environment we rarely need to approach projects as if we were chartering the East India Trading Company. Agile team charters work hand-in-hand with other Agile techniques, such as including the product owner as a team member and holding demonstrations that seek input, to share information with a wider community.

 


Categories: Process Management

Quote of the Day

Herding Cats - Glen Alleman - Fri, 07/11/2014 - 23:44

Gentlemen, we have run out of money; now we have to think - Winston Churchill

The role of estimating in project and product development is manifold...

  • For products, the cost of development must be recouped over the life cycle of the product. Knowing the sunk cost of the product gives the business the decision-making information to determine whether the target margin will be achieved, and on what day.
  • For projects, the cost of development is part of the ROI equation: ROI = (Value - Cost) / Cost. (A worked example follows this list.)
  • For day-to-day business operations, cash flow is the actual cost of producing outcomes. Budget is not the same as cost. We define a budget for our work, but some forecast of the cost of that work, gathered from current operations or past performance, lets us know whether we have sufficient budget.
  • For products, when marginal cost exceeds marginal profit, we're going to go out of business if we don't do something about controlling the cost. Our cost forecast and revenue forecast are the steering points that provide feedback for making choices.
  • For projects, the marginal cost and the marginal benefits obey the same rules of microeconomics.
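
As a worked illustration of the ROI bullet above (the dollar figures are invented for the example, not taken from any real project): suppose development is forecast to cost $200,000 and the monetized value of the delivered capability is forecast at $300,000. Then:

    ROI = (Value - Cost) / Cost = ($300,000 - $200,000) / $200,000 = 0.5 = 50%

Both numbers in that calculation are forecasts, which is why the confidence statements below matter.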

In both cases the future cost and future monetized value are probabilistic numbers.

  • This project or product will cost $257,000 or less with an 80% confidence
  • This project or product will complete on or before May 2015 with a 75% confidence

With both these numbers and their Probability Distribution Function, decisions can be made about options - choices that can be made to influence the probability of project or product success.
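
One minimal way to produce confidence statements like those above is a Monte Carlo simulation over a cost model. The sketch below (Python) is illustrative only; the three work items and their (low, most likely, high) triangular estimates are invented for the example:

    import random

    # Hypothetical work items: (low, most likely, high) cost estimates in dollars.
    TASKS = [
        (40_000, 55_000, 90_000),
        (30_000, 45_000, 70_000),
        (60_000, 80_000, 130_000),
    ]

    def simulate_total_cost():
        """One trial: draw each task's cost from a triangular distribution."""
        return sum(random.triangular(low, high, mode) for low, mode, high in TASKS)

    trials = sorted(simulate_total_cost() for _ in range(10_000))

    # The 80th percentile is the cost we will not exceed with 80% confidence.
    p80 = trials[int(0.80 * len(trials))]
    print(f"This work will cost ${p80:,.0f} or less with 80% confidence")

Run over task durations instead of dollars, the same approach yields the "complete on or before May 2015 with 75% confidence" style of statement.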

Without this information, the microeconomics of writing software for money is not possible, and the foundation of business processes is abandoned.

In order to make these estimates of the cost, schedule, and technical performance of the project or product, some model is needed, along with the underlying uncertainty of the elements of that model. These uncertainties come in two forms:

  • Reducible (epistemic uncertainty) - money can be spent to reduce this uncertainty, through testing, prototypes and incremental development.
  • Irreducible (aleatory uncertainty) - this is the normal variance in the process or technical components, the Deming uncertainty. The only protection against this uncertainty is margin: cost margin, schedule margin and technical margin. The cost margin is then part of the total project or product budget, and the schedule margin part of the total period of performance for the project or the planned release date for the product. (A sketch of sizing schedule margin from past performance follows this list.)
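
As that sketch, here is one simple, hedged way to size schedule margin from past performance; the duration samples below are stand-ins invented for the example, not real data. Plan to the median of historical durations and hold the gap to a higher percentile as margin:

    import statistics

    # Hypothetical durations (in days) of similar past work items; stand-in data.
    past_durations = [18, 22, 19, 25, 21, 30, 20, 24, 27, 23]

    median = statistics.median(past_durations)
    p80 = sorted(past_durations)[int(0.8 * len(past_durations))]

    # Schedule margin protects the plan against normal (irreducible) variance.
    schedule_margin = p80 - median
    print(f"Plan to {median} days; hold {schedule_margin} days of schedule margin")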

To suggest decisions can be made without knowing this future information violates the principles of the microeconomics of business.

Categories: Project Management

Agile Charters: More Barnacles!

The bigger the ships, the more barnacles.


As we noted in Agile Charter Barnacles, the basic Agile team charter is brief and to the point. However, there is a natural tendency to add components back from classic charters. The charter is a tool for the team, so the team needs to find the components that add value to how they work; when a component adds value to the team(s) doing the work, adding it makes sense. A number of readers of the Software Process and Measurement blog have asked whether adding specific charter sections makes sense in an Agile project, or whether there are other techniques to address their needs.

  1. Roles: Classic charters typically include a section that describes the roles that team members will play on the project team. Agile teams are self-organizing, and roles change to ensure that the team can deliver the work to which they commit. In large Agile programs, identifying areas of focus for teams is often valuable to help direct communication. Projects that are using classic project management methods with a smattering of Agile techniques will need to document roles. However, at the team level, documenting regimented roles does not make sense for most Agile projects.
  2. Resources: Resources represent the assets that a team can draw from to deliver value and function effectively. Resources can range from physical space to knowledge capital. In general I only recommend identifying and documenting resources when they are outside of the norm and could be easily overlooked. For example, I was once asked if it was important to identify and document that the project team was going to use a new team room. Since the team would tend not to forget where they worked, I suggested that they probably did not need to document the room as a resource. While the statement sounds cynical, it turned the conversation to who the target audience for the charter was: the team, or others outside of the team. An Agile team charter is a team-level tool, not an external communication vehicle.
  3. Context: The context for the project can be helpful for the team(s) involved in developing and delivering a solution. In a very real sense, the product owner is the physical instantiation of context. In small projects, access to the product owner will generally suffice as a means to share context. In larger projects or programs with broad or complicated business objectives, capturing context is important. In cases where capturing the narrative context is important, I generally recommend that a separate document (this is generally not flip-chart territory) be generated and that the product owner spearhead its creation.
  4. Milestones and Delivery Cadence: The Agile release plan documents the strategy that the team intends to use for delivering the project. Release plans identify whether functionality will be delivered to production on a continuous basis, at the end of every sprint, or after a number of sprints as a release. Identify the strategy in the charter only if it is outside the normal pattern the team follows. The release plan itself should be documented and maintained separately from the Agile team charter.
  5. Assumptions: Assumptions are things that are accepted as true, or certain to happen, without proof. Teams should always reflect on what they accept or believe to be true that is not supported by evidence. Assumptions that are outside of the norm need to be treated as potential blockers: captured and pursued by the scrum master/coach to verify, or treated as risks (documented as user stories and included in the backlog).

To add components to the Agile team charter or not to add components, that is the question. Since the Agile team charter is a tool that helps focus and guide the project team, the answer is: add a component only if it adds value to the team.


Categories: Process Management

Stuff The Internet Says On Scalability For July 11th, 2014

Hey, it's HighScalability time:


Yesterday in history: Nikola Tesla's Birthday, born in 1856. The greatest geek who ever lived?
  • 10Gbps: New world record broadband speed of 10 Gbps over copper.
  • Quotable Quotes:
    • @BenedictEvans: There were 40m internet users when Netscape IPOed. The time's not far off when a startup with 40m users will be too small to get funded.
    • Scott Aaronson: In any case, the question I asked myself about CLEVER/PageRank was not the one that, maybe in retrospect, I should have asked: namely, “how can I leverage the fact that I know the importance of this idea before most people do, in order to make millions of dollars?”
    • chub79: µservices aren't technological as much as they are cultural.
    • @Elmood: I thought of a new term when talking about code: "It's made from unmaintainium."
    • @lxt: Amazing how quickly a bunch of nines go up in smoke.
    • @martinrue: Knock knock. Race condition. Who's there?

  • The Master Switch: The Rise and Fall of Information Empires by Tim Wu: History shows a typical progression of information technologies: from somebody’s hobby to somebody’s industry; from jury-rigged contraption to slick production marvel; from a freely accessible channel to one strictly controlled by a single corporation or cartel—from open to closed system. History also shows that whatever has been closed too long is ripe for ingenuity’s assault: in time a closed industry can be opened anew, giving way to all sorts of technical possibilities and expressive uses for the medium before the effort to close the system likewise begins again.

  • Tim Freeman indulges a well developed Technothantos Complex and comes up with a great big list of outage postmortems. You'll find the usual, outages from configuration issues, failover failures, quorumnesia, protocol flapping, bugs in not your stuff that causes bugs in your stuff, power outages, capacity problems, JPOBs (just plain old bugs), DDOS attacks, and good old operator error. 

  • Pinterest describes PinLater, An asynchronous job execution system. PinLater executes hundreds of different job types at a processing rate of over 100,000 per second. So you may say yet another async job system, but it's clear keeping such a critical part of their infrastructure in house makes sense. The article is a good explanation of a fairly standard approach. It used Thrift for the API, it's written in Java, Twitter’s Finagle is used for the RPC framework. MySQL is "used for relatively low throughput use cases and those that schedule jobs over long periods and thus can benefit from storing jobs on disk rather than purely in memory." Redis is "used for high throughput job queues that are normally drained in real time." Horizontal scaling is via sharding. 

  • In science class we did this one day, but I just couldn't do it. Dissecting Message Queues. Tyler Treat looks at both brokerless and brokered queues through throughput benchmarks, latency benchmarks, and qualitative analysis. No winner was declared, but if you are making a choice in this area it's well worth reading. 

  • 40 Million hits a day on WordPress using a $10 VPS. Sure, it's a static site, but still a good example of what can be done these days. Stack: Nginx + PHP-FPM (aka LEMP Stack) + Microcaching. 

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Why Little's Law Works...Always

Xebia Blog - Fri, 07/11/2014 - 12:00

On the internet there is much information on Little's Law. It is described and explained in many places [Vac]. Recently, the conditions under which it is true got attention [Ram11]. As will be explained in this blog, the conditions under which the law holds are very mild. It will be shown that for teams working on backlog items there are virtually no conditions at all.

Why does it work? Under what conditions does it hold?

Airplane Folding

In the previous post (Applying Little's Law in Agile Games) I described how to calculate the quantities in Little's Law. As an example, this was applied to the Agile game of folding airplanes, consisting of 3 rounds of folding.


Let's look in more detail in which round an airplane was picked up and in which round it was completed. This is depicted in the following figure.

The horizontal axis shows the number of rounds. The vertical axis describes each airplane to fold. The picture is then interpreted as follows. Airplane no. 2 is picked up in round 1 and completed in the same round. It has a waiting time of 1 round. This is indicated at the right of the lowest shaded rectangle.
Airplane no. 8 was picked up in round 1 and finished in round 3: a waiting time of 3 rounds. Airplane no. 12 (the topmost shaded area) was picked up in round 3 and is unfinished; up to round 3 it has a waiting time of 1 round.

The numbers 3, 5, and 10 denote the cumulative number of completed airplanes at the end of rounds 1, 2, and 3 respectively.

Waiting Times

The waiting times are determined by counting the number of 'cells' in a row.

The pictures show that we have 12 airplanes (12 'rows'): 3 completed in the first round, 2 more completed in the second round, and 5 additional airplanes folded in the third and last round, giving a total of 10 finished paper airplanes.

All twelve airplanes have waiting times of 1, 1, 1, 2, 2, 3, 3, 3, 3, 3, 3, and 1 respectively.

Work in Progress

In the figure below, the number of airplanes picked up by the team in each round is indicated with red numbers above the round.
In round 1 the team has taken up the folding of 11 airplanes (of which 3 are completed). In round 2 the team was folding 8 airplanes (of which 2 were completed), and in round 3 the team was folding 7 airplanes (of which it completed 5).

Work in progress is determined by counting the number of 'cells' in a column.

Little's Law.....Again

Now that we have determined the waiting times and amount of work in progress, let's calculate the average waiting time and average work in progress.

Average Waiting Time. This quantity we get by adding all waiting times and dividing by the number of items. This gives 26/12.

Average Work in Progress. This quantity is equal to (11+8+7)/3 = 26/3.

Average input rate. This is equal to 12 (the height of the third column) divided by 3 which gives 4.

Again we find that: Average Waiting Time = Average Work in Progress / Average input rate.
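
In symbols, with L the average work in progress, λ the average input rate, and W the average waiting time, this is exactly Little's Law:

    W = L / λ, which here gives 26/12 = (26/3) / 4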

Why It Works

Little's Law works....always....because the average waiting time is obtained by adding the lengths of all the rows and dividing by the number of rows, so it is proportional to the size of the shaded area in the picture to the right.

The average work in progress is obtained by adding the heights of the columns in the shaded area and dividing by the number of columns, so it is also proportional to the size of the shaded area.

Both the waiting time and the work in progress relate to the size of the shaded area: one by adding the rows, the other by adding the columns. The constant of proportionality corresponds to the average input rate.
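
A minimal check of this argument in code (Python); the waiting times and per-round work in progress are the ones counted from the airplane figures above:

    # Per-airplane waiting times (rows) and per-round work in progress (columns).
    waiting_times = [1, 1, 1, 2, 2, 3, 3, 3, 3, 3, 3, 1]   # 12 airplanes
    wip_per_round = [11, 8, 7]                              # 3 rounds

    avg_waiting_time = sum(waiting_times) / len(waiting_times)   # 26/12
    avg_wip = sum(wip_per_round) / len(wip_per_round)            # 26/3
    avg_input_rate = len(waiting_times) / len(wip_per_round)     # 12/3 = 4

    # Both averages share the same numerator: the size of the shaded area (26).
    assert abs(avg_waiting_time - avg_wip / avg_input_rate) < 1e-9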

Conditions

What assumptions did we make? None...well, that is not exactly true. The only assumptions we make in this calculation are:

  • We count discrete items
  • There are a finite number of rounds (or sprints)
  • Items enter and possibly leave the system.

That's it. It doesn't need to be stable, ageing (items having increasingly larger waiting times) is not a problem, prioritisation/scheduling of items (also known as queueing discipline), etc. Only the above assumptions need to be true.

Note: The second condition especially is important, i.e. Little's Law is measured over a finite time interval. For an infinite time interval, additional conditions need to be fulfilled.

Note: When applying this to agile teams we always consider finite time intervals, e.g. 6 months, 1 year, 8 sprints, etc.

Conclusion

Little's Law is true because the average waiting time is proportional to the size of the shaded area (see figure) and the average work in progress is also proportional to the size of the same shaded area.

Only 3 basic conditions need to be met for Little's Law to be true.

References

[Vac] Little’s Law Part I, Dan Vacanti, http://corporatekanban.com/littles-law-part-i/

[Ram11] Little’s Law – It’s not about the numbers, Agile Ramblings, http://agileramblings.com/2012/12/11/littles-law-its-not-about-the-numbers/

Building Successful Global App Businesses

Android Developers Blog - Fri, 07/11/2014 - 06:34

By: Purnima Kochikar, Director, Google Play Apps & Games

With over 1 billion active Android users, an increasing number of developers like you are building successful global businesses on Google Play. Since the last Google I/O, we’ve also paid out more than $5 billion to developers.

This week at Google I/O, we announced new ways to help you build a successful business. These solutions work together at scale to help you find more users, understand and engage them, and effectively convert your active users into buyers.

Build an engaging app

Last year, Google Play became an even better place to try new ideas. Since May 2013, Google Play has offered Alpha and Beta Testing so that you can engage users early to get feedback on your new app. Feedback provided by users is private, allowing you to fix issues before publicly launching the app, and without impacting your public ratings and reviews. Over 80,000 apps on Google Play are actively using beta testing. You can also ensure new versions get a positive response by updating through staged rollouts.

Scale operations

As your app business grows, you dedicate more time to release management. Today we announced the Google Play Developer Publishing API to help you scale your release operations. The new API will let you upload APKs, manage your in-app products and localized store listings. You will be able to integrate publishing operations with your release processes and toolchain through a RESTful API. With the Google Play Developer Publishing API you’ll spend less time managing your releases and more time managing your business. This API is currently in closed beta and we look forward to making it available to all developers.

Actionable insights

The Google Play Developer Console now offers more actionable insights into your app’s performance by sending you email notifications for Alerts and providing Optimization Tips. We’re also offering new revenue metrics including number of buyers and average revenue per paying user. You’ll also be able to export user reviews for further analysis. Click on Announcements in the Developer Console for a list of new features.

For game developers, we recently launched enhanced Play Games statistics on the Google Play Developer Console. You get a daily dashboard that visualizes player and engagement statistics for signed in users, including daily active users, retention analysis, and achievement and leaderboard performance.

Enhance discovery and engagement

With AdWords, we're building a robust platform to help you promote your app and drive re-engagement. This week we are launching Installed App Category Targeting, a new way to promote your app to new users. It helps you reach potential customers across the AdMob network who have already installed apps from related categories on Google Play and other app stores. For example, an action-oriented game developer may wish to reach users who have previously installed apps from the category Action & Adventure Games.

Ads can also remind users about the apps they already have. Through deep linking in Google mobile display and search ads, you can re-engage users who have already installed your Android app by taking them directly to specific pages in the app. Let’s say someone has the Hotel Tonight app installed on their phone. If they search Google for “hotels in San Francisco," they'll see an ad that will open the Hotel Tonight app and take them directly to a list of San Francisco hotels.

This deep-linking is also available through search for all apps that implement app indexing. If a user with the Walmart Android app searches for “Chromecast where to buy”, they’ll go directly to the Chromecast page in the Walmart app. The new App Indexing API is now open to all Android developers, globally. Get started now.

New services for game developers

For game developers using Play Games, we announced a new Game Profile that is automatically customized based on the gameplay and achievements earned in those games. Since its launch last year, users have loved saving their game progress in the cloud. We’re now evolving this feature into Saved Games, where users can save up to 3 “bookmarks” of their progress in the Play Games app, complete with images and descriptions. Finally, we announced a new service called Quests, which lets you run online, time-based goals in your game; for example, players can collect a bunch of in-game items on a specific day, and the Quests service coordinates with your game to know who completed the goal. These APIs run events for your players, and reward them, without the need to update your game.

New monetization tools

Today, we announced that users who have set up Direct Carrier Billing on their smartphone can also make purchases on Google Play from their tablet, charging to the same mobile phone bill. In addition to our recent launch of payments through PayPal, these new user payment options expand monetization opportunities for your apps.

As announced earlier this year, Google Analytics is now directly available in the AdMob interface, giving you powerful segmentation tools to determine the best monetization strategy for each user. For example, you might want to display in-app purchase ads to users most interested in buying, while showing regular ads to those less likely to buy right now. Once you’ve segmented your audience in this way, you can use AdMob to build interstitial ads that promote in-app purchase items to users at a point in your app that’s useful to them. This creates a more customized experience for users, which can help prolong engagement and grow in-app purchase revenue. Learn more.

Join us

If you're at Google I/O 2014, please join us at our breakout sessions today and tomorrow, where we'll be talking about these features in much more detail. (Add us to your calendar!) And if you can't make I/O, you can always join us on the livestream or watch the videos online later.


Categories: Programming

Building an Agile Charter

Flip charts!


Building an Agile team charter is generally one of the first events that marks the beginning of a new endeavor. The charter helps a team clearly capture the project’s goals and definition of success. For most projects, project initiation is a time full of possibilities, a time before all the mundane issues and the day-to-day work begin. The process of framing the charter must acknowledge the excitement of the team without letting that excitement lead them astray. A standard, moderated process to create a charter provides focus and direction for a team raring to deliver value.

There are two primary prerequisites to starting an Agile Charter.

  • A product owner or stakeholder has conceived of a project and gotten approval to at least create a charter.
  • A project team has been assigned to the work (product owner, scrum master/coach, team).

Without a team or a project, creating a charter does not make sense.

A simple process for building a charter is shown below:

  1. Before convening the entire team, the product owner and scrum master should decide which components will be included in the Agile team charter (see Agile Team Charter and Agile Charter Barnacles). The scrum master/coach will moderate the Agile team charter session and should do some work in preparation.
  2. For each section that will be included, prep the flip charts.  For example, label a single flip chart for each component and for the elevator speech put the outline down for the team.
  3. Create a set of framing questions for each component.  These questions can be used to facilitate the discussion.
  4. Convene the meeting to build the Agile team charter. This session should typically be scheduled for half a day. All team members should attend in person; when that is not possible, ensure that good tele- or videoconference packages are used. Include lunch if at all possible. It is imperative that all team members, including the product owner, participate. The Agile team charter is critical for focusing a team.
  5. Before beginning work on the charter, review with the team why they are there, any ground rules (e.g. no smart phones) and the components that are being proposed as part of the charter. Tailor the list of components based on feedback from the team. I almost always suggest discussing risk in the session held to build the charter, but never include the risks in the charter itself (add them immediately to the backlog).
  6. Pass out markers, sticky notes, voting flags or any other items that the team will use during the session.  No one should have to spend time looking for office supplies during the session.  Generally when there are remote participants the moderator or someone helping the moderator will act as an intermediary to scribe their comments.
  7. Iteratively complete the components in the Agile team charter. Have team members scribe their own comments on the flip charts. One mark of a good session is multiple handwriting styles on each flip chart, because it is a reminder of the whole team’s engagement in creating the charter.
  8. Tape the completed components to the wall in the team room in a prominent place so that every team member can review the charter as needed.

The scrum master/coach is in the room to help the team complete the charter, not to complete it for them. The moderator should manage the flow of discussion and the clock. Defining the charter is time-boxed so that it is completed in one session.

The process of building a typical Agile team charter occurs once, as a project begins. This is no different than in a classic project. However, in an Agile project the Agile team charter is referenced on a daily basis. I recommend reviewing the charter at least once every ninety days, or whenever significant changes happen on the team.

An Agile team charter provides direction for the whole team. A standard process harnesses the energy teams have to build a charter and begin the project moving in the same direction.


Categories: Process Management

Bootstrapping and monitoring multiple processes in Docker using monit

Xebia Blog - Thu, 07/10/2014 - 23:01

If you have ever tried to start a Docker container and keep it running, you must have encountered the problem that this is no easy task. Most things I like to start in a container are things like HTTP servers, application servers and various other middleware components, which tend to have start scripts that daemonize the program. Starting a single process is a pain; starting multiple processes becomes nasty. My advice is to use monit to start all but the most simple Docker application containers!

When I found monit while delving through the inner workings of Cloud Foundry, I was ecstatic about it! It was so elegant, small and fast, with a beautiful DSL, that I thought it was the hottest thing since sliced bread! I was determined to blog it off the rooftops. Until.... I discovered that the first release dated from somewhere in 2002, so it was not hot and new; clearly I had been lying under a UNIX rock for quite a while. Still, the time was right to write about it!

Most of the middleware components I want to start in a Docker container have a habit of starting the process, daemonizing it and exiting immediately, with the Docker container on its tail. My first attempt to circumvent this while starting a Tomcat server in Docker looked something like this:

/bin/bash -c "service tomcat7 start; while service tomcat7 status; do sleep 1; done"

Quite horrific. Imagine the ugliness when you have to start multiple processes. A better solution is needed. With the zabbix docker container, the problem was solved using simplevisor. As you can read in this post, that was not a pleasant experience either. As I knew little about simplevisor and could not solve the problem, I put in an issue and resorted to a plain installation. But a voice in my head started nagging: "Why don't you fix it and send a pull request?" (actually, it was the voice of my colleague Arjan Molenaar). Then I remembered, from my earlier explorations of the inner workings of Cloud Foundry, a tool that would be really suitable for the job: monit. Why? It will:

  1. Give you a beautiful, readable specification file stating which processes to start
  2. Make sure that your processes will keep on running
  3. Deliver you a clean and crisp monitoring application
  4. Reduce all your Docker starts to a single command!

In the case of the Zabbix server there were six processes to start: the zabbix server, the zabbix agent, the zabbix java gateway, apache (httpd), mysql and sshd. In monit this looks as follows:

check process mysqld with pidfile /var/run/mysqld/mysqld.pid
        start program = "/sbin/service mysqld start"
        stop program = "/sbin/service mysqld stop"

check process zabbix-server with pidfile /var/run/zabbix/zabbix_server.pid
        start program = "/sbin/service zabbix-server start"
        stop program = "/sbin/service zabbix-server stop"
        depends on mysqld

check process zabbix-agent with pidfile /var/run/zabbix/zabbix_agentd.pid
        start program = "/sbin/service zabbix-agent start"
        stop program = "/sbin/service zabbix-agent stop"

check process zabbix-java-gateway with pidfile /var/run/zabbix/zabbix_java.pid
        start program = "/sbin/service zabbix-java-gateway start"
        stop program = "/sbin/service zabbix-java-gateway stop"

check process httpd with pidfile /var/run/httpd/httpd.pid
        start program = "/sbin/service httpd start"
        stop program = "/sbin/service httpd stop"
        depends on zabbix-server

check process sshd with pidfile /var/run/sshd.pid
        start program = "/sbin/service sshd start"
        stop program = "/sbin/service sshd stop"

Normally, when you start monit, it will run as a daemon. Fortunately, you can prevent this with the following configuration:

set init

Your Dockerfile CMD can now always look the same:

    monit -d 10 -Ic /etc/monitrc

Finally, by adding the following statement to the configuration, you get an application to view the status of your container processes:

set httpd
     port 2812
     allow myuser:mypassword

After starting the container, surf to port 2812 and you will get a beautiful page showing the state of your processes and the ability to stop and restart them.

(Screenshots: monit overview; monit process control.)

Just delve into the documentation of monit and you will find many more features that will allow you to monitor network ports and files, take corrective actions and send out alerts.

Monit is true to its UNIX heritage: it is elegant and promotes an autonomous monitoring system. Monit is cool!

New Cross-Platform Tools for Game Developers

Android Developers Blog - Thu, 07/10/2014 - 19:25

By Ben Frenkel, Google Play Games team

There was a lot of excitement at Google I/O around Google Play Games, and today we’re delighted to share that the following tools are now available:

  • Updated Play Games cross-platform C++ SDK
  • Updated Play Games SDK for iOS
  • New game services alerts in the Developer Console

Here's a quick look at the cool new stuff for developers.

Updated Play Games C++ SDK

We've updated the Google Play Games C++ SDK with more cross-platform support for the new services and experiences we announced at I/O. Learn more»

The new C++ SDK now supports all of the following:

Cocos2D-x, a popular game engine, is an early adopter of the Play Games C++ SDK and is bringing the power of Play Games to their developers. Additionally, the Cocos2D-x team created Wagon War, a prototype game showcasing the capabilities of the Cocos2D-x engine with Play Games C++ SDK integration.

Wagon War is also a powerful reference for developers — it gives you immediately usable code samples to accelerate your C++ implementations. You can browse or download the game sources on the Wagon War page on GitHub.

Updated Play Games iOS SDK

The Play Games iOS SDK is now updated with support for Quests and Saved Games, enabling iOS developers to integrate the latest services and experiences with the Objective-C based tool-chains they are already familiar with. Learn more»

The new Play Games SDK for iOS now supports all of the following:

  • Quests and Events. Learn more»
  • Saved Games. Learn more»
  • Game Profile and related Player XP APIs — the SDK now also provides the UI for Game Profile and access to Player XP data for players.

New types of games services alerts

Last, you can now see new types of games services alerts in the Developer Console to learn about issues that might be affecting your users' gameplay experiences. For example, if your app implements Game Gifts, you'll now see an alert when players are unable to send a gift; if your app implements Multiplayer, you'll now see an alert when players are unable to join a match. Learn more»




Categories: Programming

Design Sprints for Developers

Google Code Blog - Thu, 07/10/2014 - 19:04
By Nadya Direkova, Staff Designer and Design Evangelist at Google[x]

At Google and throughout the industry, we all agree that two things matter: design and speed. But how can we do great design quickly? For our teams, one of our most important tools is the design sprint.

While a typical product design process takes months or years, a design sprint compresses this into a week or less. The design sprint combines key design and research methods and focuses on a single challenge or multiple challenges in parallel. It brings all the stakeholders—designers, developers, product managers, and other decision makers—into one place to work together on a short deadline. It often leads to insights and solutions more quickly than anyone thought possible. At Google, we've been using design sprints for over four years, from external projects like Ads, Glass and Project Loon to our internal tools.

One team has even run a huge sprint with 175 participants in 23 teams. How did that feel? As Cordell Ratzlaff, User Experience Director for Ads & Commerce, says: “When you participate in a sprint, you either win or you learn.” Our latest Google Design Minutes short tells this story:

Design sprints at scale: Cordell Ratzlaff and team on the importance of constraints

We’re really excited about sharing our design sprint methods more broadly. Design sprints were an important theme in the “Design, Develop, Distribute” message at Google I/O 2014, where developers got a chance to learn about and experience short sprints in person.

The design sprint: from Google Ventures to Google[x]; Daniel Burka, Jake Knapp, Nadya Direkova share insights with developers at Google I/O 2014

However, this was just a first glimpse; over the summer, we’ll be hosting design sprints for select developers in the Bay Area, helping developers design for platforms like Glass and Android Wear or build with the material design approach. To get updates when these limited-seating events become available, sign up here.

No matter what your challenge and design process, design sprints can help you reduce the time it takes to create great ideas. So make great things, and make them quickly!

Categories: Programming

Neo4j: LOAD CSV – Processing hidden arrays in your CSV documents

Mark Needham - Thu, 07/10/2014 - 15:54

I was recently asked how to process an ‘array’ of values inside a column in a CSV file using Neo4j’s LOAD CSV tool and although I initially thought this wouldn’t be possible as every cell is treated as a String, Michael showed me a way of working around this which I thought was pretty neat.

Let’s say we have a CSV file representing people and their friends. It might look like this:

name,friends
"Mark","Michael,Peter"
"Michael","Peter,Kenny"
"Kenny","Anders,Michael"

And what we want is to have the following nodes:

  • Mark
  • Michael
  • Peter
  • Kenny
  • Anders

And the following friends relationships:

  • Mark -> Michael
  • Mark -> Peter
  • Michael -> Peter
  • Michael -> Kenny
  • Kenny -> Anders
  • Kenny -> Michael

We’ll start by loading the CSV file and returning each row:

$ load csv with headers from "file:/Users/markneedham/Desktop/friends.csv" AS row RETURN row;
+------------------------------------------------+
| row                                            |
+------------------------------------------------+
| {name -> "Mark", friends -> "Michael,Peter"}   |
| {name -> "Michael", friends -> "Peter,Kenny"}  |
| {name -> "Kenny", friends -> "Anders,Michael"} |
+------------------------------------------------+
3 rows

As expected the ‘friends’ column is being treated as a String which means we can use the split function to get an array of people that we want to be friends with:

$ load csv with headers from "file:/Users/markneedham/Desktop/friends.csv" AS row RETURN row, split(row.friends, ",") AS friends;
+-----------------------------------------------------------------------+
| row                                            | friends              |
+-----------------------------------------------------------------------+
| {name -> "Mark", friends -> "Michael,Peter"}   | ["Michael","Peter"]  |
| {name -> "Michael", friends -> "Peter,Kenny"}  | ["Peter","Kenny"]    |
| {name -> "Kenny", friends -> "Anders,Michael"} | ["Anders","Michael"] |
+-----------------------------------------------------------------------+
3 rows

Now that we’ve got them as an array we can use UNWIND to get pairs of friends that we want to create:

$ load csv with headers from "file:/Users/markneedham/Desktop/friends.csv" AS row 
  WITH row, split(row.friends, ",") AS friends 
  UNWIND friends AS friend 
  RETURN row.name, friend;
+-----------------------+
| row.name  | friend    |
+-----------------------+
| "Mark"    | "Michael" |
| "Mark"    | "Peter"   |
| "Michael" | "Peter"   |
| "Michael" | "Kenny"   |
| "Kenny"   | "Anders"  |
| "Kenny"   | "Michael" |
+-----------------------+
6 rows

And now we’ll introduce some MERGE statements to create the appropriate nodes and relationships:

$ load csv with headers from "file:/Users/markneedham/Desktop/friends.csv" AS row 
  WITH row, split(row.friends, ",") AS friends 
  UNWIND friends AS friend  
  MERGE (p1:Person {name: row.name}) 
  MERGE (p2:Person {name: friend}) 
  MERGE (p1)-[:FRIENDS_WITH]->(p2);
+-------------------+
| No data returned. |
+-------------------+
Nodes created: 5
Relationships created: 6
Properties set: 5
Labels added: 5
373 ms

And now if we query the database to get back all the nodes + relationships…

$ match (p1:Person)-[r]->(p2) RETURN p1,r, p2;
+------------------------------------------------------------------------+
| p1                      | r                  | p2                      |
+------------------------------------------------------------------------+
| Node[0]{name:"Mark"}    | :FRIENDS_WITH[0]{} | Node[1]{name:"Michael"} |
| Node[0]{name:"Mark"}    | :FRIENDS_WITH[1]{} | Node[2]{name:"Peter"}   |
| Node[1]{name:"Michael"} | :FRIENDS_WITH[2]{} | Node[2]{name:"Peter"}   |
| Node[1]{name:"Michael"} | :FRIENDS_WITH[3]{} | Node[3]{name:"Kenny"}   |
| Node[3]{name:"Kenny"}   | :FRIENDS_WITH[4]{} | Node[4]{name:"Anders"}  |
| Node[3]{name:"Kenny"}   | :FRIENDS_WITH[5]{} | Node[1]{name:"Michael"} |
+------------------------------------------------------------------------+
6 rows

…you’ll see that we have everything.

If instead of a comma separated list of people we have a literal array in the cell…

name,friends
"Mark", "[Michael,Peter]"
"Michael", "[Peter,Kenny]"
"Kenny", "[Anders,Michael]"

…we’d need to tweak the part of the query which extracts our friends to strip off the first and last characters:

$ load csv with headers from "file:/Users/markneedham/Desktop/friendsa.csv" AS row 
  RETURN row, split(substring(row.friends, 1, length(row.friends) -2), ",") AS friends;
+-------------------------------------------------------------------------+
| row                                              | friends              |
+-------------------------------------------------------------------------+
| {name -> "Mark", friends -> "[Michael,Peter]"}   | ["Michael","Peter"]  |
| {name -> "Michael", friends -> "[Peter,Kenny]"}  | ["Peter","Kenny"]    |
| {name -> "Kenny", friends -> "[Anders,Michael]"} | ["Anders","Michael"] |
+-------------------------------------------------------------------------+
3 rows

And then if we put the whole query together we end up with this:

$ load csv with headers from "file:/Users/markneedham/Desktop/friendsa.csv" AS row 
  WITH row, split(substring(row.friends, 1, length(row.friends) -2), ",") AS friends 
  UNWIND friends AS friend  
  MERGE (p1:Person {name: row.name}) 
  MERGE (p2:Person {name: friend}) 
  MERGE (p1)-[:FRIENDS_WITH]->(p2);
+-------------------+
| No data returned. |
+-------------------+
Nodes created: 5
Relationships created: 6
Properties set: 5
Labels added: 5
Categories: Programming

Fitting In With Corporate Culture

Making the Complex Simple - John Sonmez - Thu, 07/10/2014 - 15:00

In this video I answer a question about fitting into corporate culture when you come from a different background.

The post Fitting In With Corporate Culture appeared first on Simple Programmer.

Categories: Programming

Registration Opens for More Workshops

NOOP.NL - Jurgen Appelo - Thu, 07/10/2014 - 09:51

I’m very happy to say that the Management 3.0 Workout sessions so far have been well received, and they’re getting better every time! I’m almost ready for the summer break, but registration has opened for the remaining workshops this year. Please remember, I will personally visit each city only once!
It was a very joyful and insightful workshop

The post Registration Opens for More Workshops appeared first on NOOP.NL.

Categories: Project Management

Agile Charter Barnacles

Barnacles slow down the boat.

Barnacles slow down the boat.

The basic Agile charter is brief and to the point. This brevity seems to beg practitioners to add components from classic charters. A sample of the types of additions includes:

  1. Out of Scope: Sometimes it is important to establish boundaries for a team. Identifying what is not in scope sets limits for the team, and becomes more important as projects get larger. Programs will almost always need to spend time defining which groups should focus on what areas. I often keep this data outside of the charter and post copies (on flip charts) in team rooms.
  2. Involved Groups: This possible addition identifies other teams, groups and stakeholders that the team will have to interface with to deliver the solution. I generally find this section to be of value for new teams, for teams involved with a business solution outside their normal area of expertise, or for programs. A quick test for whether the section is needed is to ask whether it can be completed by cutting and pasting the information from somewhere else. If you can complete the involved-groups section with a cut and paste from the last charter, the section is not needed.
  3. Risks: Risks are potential problems that could affect the team’s ability to deliver value, or the value of that delivery. While it makes sense to identify and discuss risks when crafting a charter, I do not recommend adding risks to ANY charter. In classic projects, add risks directly to the risk plan; in Agile projects, add risks directly to the backlog. Having information in multiple locations can require extra time to maintain or lead to losing information.
  4. Proposed Release Plan: Developing a release plan helps a team answer questions about when the project will be delivered. Embedding release plans in the charter generates an anchoring bias (setting a date or a set of dates in the minds of the team and stakeholders), and therefore can be problematic. Recognize that release plans developed early in a project are subject to high levels of variability, since significant discovery occurs as a project develops.
  5. Proposed Solution: In most cases I question how the solution can be known before the project begins; therefore I tend to resist including it in the charter.
  6. Constraints: Everyone on the team should understand the constraints the team faces. Capturing known constraints on an additional flip chart makes sense and does not add significantly to the burden of the process. Constraints can include fixed delivery dates, budgets, key personnel absences or hardware/software constraints, such as COTS usage requirements.

For another view on what can or can’t be in an Agile charter, take a look at the Agile Warrior’s blog, which has a longer list of items that compose his proposed inception deck/charter.

The decision tree I use for deciding whether to include anything other than the basics in an Agile charter (which I recommend creating as flip charts and posting in the project’s team room) is:

  1. Will the team be able to use the information to guide their performance?
    If yes, consider including and go to next question.
  2. Is the data available somewhere else?
    If yes, don’t include it. If no, consider including and go to the next question.
  3. Will I have to replicate the data in another document or tool later?
    If no, I will typically include it; if yes, I will not.

My default answer to adding stuff to an Agile charter is no, and I require a significant level of convincing to change my mind.  The Agile team charter is a tool to help the team focus on achieving a goal and delivering specific value. Anything that does not help achieve that goal does not belong in the charter.


Categories: Process Management

Should the Product Owner test everything?

Good Requirements - Jeffrey Davidson - Thu, 07/10/2014 - 00:24

A scrum master I’ve coached recently sent me this question and I wanted to share my answer. Would you have answered the same way? What did I miss? What do you ask (demand?) from your product owner?

 

Question: Hi, Jeffrey,

Quick question for you: Does Product Owner (PO) approval need to be on a per story basis, per feature basis, or both?

We are facing a situation where some of the system environments were not in place, and completed work has remained in Dev until today. We learned today that the Test environment is ready. The Stage environment is due to be completed at some point in the near future. Meanwhile, our team has modified the team’s “Definition of Done” so that the completion criteria are more aligned with our capabilities while the system environments are incomplete. Hence the above question.

Signed,
The Client

 

Answer: Hello, Client.

First, it makes sense that the PO is included in the conversation around the “Definition of Done.” I’m not sure, based on the question, whether they are in the loop or not. I say this because the team is building to and meeting expectations set by the PO. It’s the polite thing to notify them and explain the new definition. In some cases, it may be more appropriate to ask their permission to change rather than simply notify them of the change.

Second, this change makes sense to me; you didn’t have the right environments previously and now you do. It makes sense that the definition should change to match the environments and how the team is working.

Third, what’s happened to date, and how much trust is there between the PO and the team? If the PO has already tested all the existing stories, then they may not want to do more than audit the existing stories in the new environment(s). If the PO has trust in the team and the testers, they may never do more than audit the stories. If the PO doesn’t have time, they may never get to more than auditing stories. In the end, it’s a great big “it depends” kind of answer.

What do I want from the PO? I want more involvement, as much as I can get. I want the PO to test every story as it’s finished and at least audit functionality and features as they are delivered. I don’t often get it, but it’s my request.

Categories: Requirements

One view or many?

Coding the Architecture - Simon Brown - Wed, 07/09/2014 - 22:16

In Diagramming Spring MVC webapps, I presented an approach that allows you to create a fairly comprehensive model of a software system in code. It starts with you creating a simple base model that includes software systems, people and containers. With this in place, all of the components can then be automatically populated into the model via a scan of the compiled Java code. This is all based upon Software architecture as code.

Once you have a model to work with, it's relatively straightforward to visualise it via a number of views. In the Spring PetClinic example, three separate views (one each of a system context, containers and components view) are sufficient to show everything. With larger software systems, however, this isn't the case.

As an example, here's what a single component diagram for the web application of my techtribes.je system looks like.

A mess

Yup, it's a mess. The components around the left, top and right edges are Spring MVC controllers, while those in the centre are the core components. There are clearly three hotspots here - the LoggingComponent, ActivityComponent and ContentSourceComponent. The reason for the first should be obvious, in that almost all components use the LoggingComponent. The latter two are used by all controllers, simply because some common information is displayed on the header of all pages on the website. I don't mind excluding the LoggingComponent from the view, but I'd quite like to keep the other two. That aside, even excluding the ActivityComponent and ContentSourceComponent doesn't actually solve the problem here. The resulting diagram is still a mess because it's showing far too much information. Instead, another approach is needed.

With this in mind, what I've done instead is use a programmatic approach to create a number of views for the techtribes.je web application, one per Spring MVC controller. The code looks like this.

The result is a larger number of simple diagrams, but I think that the trade-off is worth it. It's a much better way to navigate a large model.

Not so much of a mess

And here's an example component diagram that focusses on a single Spring MVC controller.

Not so much of a mess

The JSON representing the techtribes.je model can be found on GitHub and you can copy-paste it into my (still in-progress) diagramming tool if you'd like to explore the model yourself. I'm still experimenting with much of this but I really like the opportunities provided by having the software architecture model in code. This really is "software architecture for developers". :-)

Categories: Architecture

Update on Android Wear Paid Apps

Android Developers Blog - Wed, 07/09/2014 - 22:10

We have a workaround to enable paid apps (and other apps that use Google Play's forward-lock mechanism) on Android Wear. The assets/ directory of those apps, which contains the wearable APK, cannot be extracted or read by the wearable installer. The workaround is to place the wearable APK in the res/raw directory instead.

As per the documentation, there are two ways to package your wearable app: use the “wearApp” Gradle rule to package your wearable app or manually package the wearable app. For paid apps, the workaround is to manually package your apps with the following two changes, and you cannot use the “wearApp” Gradle rule. To manually package the wearable APK into res/raw, do the following:

  1. Copy the signed wearable app into your handheld project's res/raw directory and rename it to wearable_app.apk; it will be referred to as wearable_app.
  2. Create a res/xml/wearable_app_desc.xml file that contains the version and path information of the wearable app:
    <wearableApp package="wearable app package name">
        <versionCode>1</versionCode>
        <versionName>1.0</versionName>
        <rawPathResId>wearable_app</rawPathResId>
    </wearableApp>

    The package, versionCode, and versionName are the same as values specified in the wearable app's AndroidManifest.xml file. The rawPathResId is the static variable name of the resource. If the filename of your resource is wearable_app.apk, the static variable name would be wearable_app.

  3. Add a <meta-data> tag to your handheld app's <application> tag to reference the wearable_app_desc.xml file.
    <meta-data android:name="com.google.android.wearable.beta.app"
               android:resource="@xml/wearable_app_desc"/>
  4. Build and sign the handheld app.

We will be updating the “wearApp” Gradle rule in a future update to the Android SDK build tools to support APK embedding into res/raw. In the meantime, paid apps will need to follow the manual steps outlined above. We will also be updating the documentation to reflect the workaround. We're working to make this easier for you in the future, and we apologize for the inconvenience.

Categories: Programming

Putting your Professional Group on the Map

Google Code Blog - Wed, 07/09/2014 - 17:30
By Sarah Maddox, Google Developer Relations team

People love to know what's happening in their area of expertise around the world. What better way to show it than on a map? Tech Comm on a Map puts technical communication tidbits onto an interactive map, together with the data and functionality provided by Google Maps.


I'm a technical writer at Google. In this post I share a project that uses the new Data layer in the Google Maps JavaScript API, with a Google Sheets spreadsheet as a data source and a location search provided by Google Places Autocomplete.

Although this project is about technical communication, you can easily adapt it for other special interest groups too. The code is on GitHub.

The map in action

Visit Tech Comm on a Map to see it in action. Here's a screenshot:
The colored circles indicate the location of technical communication conferences, societies, groups and businesses. The 'other' category is for bits and pieces that don't fit into any of the categories. You can select and deselect the checkboxes at top left of the map, to choose the item types you're interested in.

When you hover over a circle, an info window pops up with information about the item you chose. If you click a circle, the map zooms in so that you can see where the event or group is located. You can also search for a specific location, to see what's happening there.

Let's look at the building blocks of Tech Comm on a Map.
Getting hold of a map

I'm using the Google Maps JavaScript API to display and interact with a map.

Where does the data come from?

When planning this project, I decided that I wanted technical communicators to be able to add data (conferences, groups, businesses, and so on) themselves, and that the data should be immediately visible on the map.

I needed a data entry and storage tool that provided a data entry UI, user management and authorization, so that I didn't have to code all that myself. In addition, contributors shouldn't need to learn a new UI or a new syntax in order to add data items to the map. I needed a data entry mechanism that is familiar to most people - a spreadsheet, for example.

In an episode of Google Maps Developer Shortcuts, Paul Saxman shows how to pull data from Google Drive into your JavaScript app. That's just what I needed. Here's how it works.

The data for Tech Comm on a Map is in a Google Sheets spreadsheet. It looks something like this:


Also in the spreadsheet is a Google Apps Script that outputs the data in JSON format:

var SPREADSHEET_ID = '[MY-SPREADSHEET-ID]';
var SHEET_NAME = 'Data';

function doGet(request) {
  var callback = request.parameters.jsonp;
  var range = SpreadsheetApp
      .openById(SPREADSHEET_ID)
      .getSheetByName(SHEET_NAME)
      .getDataRange();
  var json = callback + '(' +
      Utilities.jsonStringify(range.getValues()) + ')';
  return ContentService
      .createTextOutput(json)
      .setMimeType(ContentService.MimeType.JAVASCRIPT);
}


Follow these steps to add the script to the spreadsheet and make it available as a web service:
  1. In Google Sheets, choose 'Tools' > 'Script Editor'.
  2. Add a new script as a blank project.
  3. Insert the above code.
  4. Choose 'File' > 'Manage Versions', and name the latest version of the script.
  5. Choose 'Publish' >  'Deploy as web app'. Make it executable by 'anyone, even anonymous'. Note: This means anyone will be able to access the data in this spreadsheet via a script.
  6. Choose 'Deploy'.
  7. Copy the URL of the web service. You'll need to paste it into the JavaScript on your web page.

In your JavaScript, define a variable to contain the URL of the Google Apps script, and add the JSONP callback parameter:
var DATA_SERVICE_URL =
    "https://script.google.com/macros/s/[MY-SCRIPT-ID]/exec?jsonp=?";

Then use jQuery's Ajax function to fetch and process the rows of data from the spreadsheet. Each row contains the information for an item: type, item name, description, website, start and end dates, address, latitude and longitude.

$.ajax({
  url: DATA_SERVICE_URL,
  dataType: 'jsonp',
  success: function(data) {
    // Get the spreadsheet rows one by one.
    // First row contains headings, so start the index at 1 not 0.
    for (var i = 1; i < data.length; i++) {
      map.data.add({
        properties: {
          type: data[i][0],
          name: data[i][1],
          description: data[i][2],
          website: data[i][3],
          startdate: data[i][4],
          enddate: data[i][5],
          address: data[i][6]
        },
        geometry: {
          lat: data[i][7],
          lng: data[i][8]
        }
      });
    }
  }
});

The new Data layer in the Maps JavaScript API

Now that I could pull the tech comm information from the spreadsheet into my web page, I needed a way to visualize the data on the map. The new Data layer in the Google Maps JavaScript API is designed for just such a purpose. Notice the method map.data.add() in the above code. This is an instruction to add a feature in the Data layer.

With the basic JavaScript API you can add separate objects to the map, such as a polygon, a marker, or a line. But by using the Data layer, you can define a collection of objects and then manipulate and style them as a group. (The Data layer is also designed to play well with GeoJSON, but we don't need that aspect of it for this project.)
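
To make that difference concrete, here's an illustrative comparison (the coordinates and names are invented):

// Classic approach: each point on the map is a standalone Marker object.
var marker = new google.maps.Marker({
  position: {lat: -33.87, lng: 151.21},
  map: map
});

// Data layer approach: the point becomes a feature that can be styled
// and filtered together with every other feature in the layer.
map.data.add({
  properties: {name: 'Example item'},
  geometry: {lat: -33.87, lng: 151.21}
});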

The tech comm data is represented as a series of features in the Data layer, each with a set of properties (type, name, address, etc) and a geometry (latitude and longitude).

Style the markers on the map, with different colors depending on the data type (conference, society, group, etc):


function techCommItemStyle(feature) {
  var type = feature.getProperty('type');
  var style = {
    icon: {
      path: google.maps.SymbolPath.CIRCLE,
      fillOpacity: 1,
      strokeWeight: 3,
      scale: 10
    },
    // Show the markers for this type if
    // the user has selected the corresponding checkbox.
    visible: (checkboxes[type] != false)
  };
  // Set the marker colour based on type of tech comm item.
  switch (type) {
    case 'Conference':
      style.icon.fillColor = '#c077f1';
      style.icon.strokeColor = '#a347e1';
      break;
    case 'Society':
      style.icon.fillColor = '#f6bb2e';
      style.icon.strokeColor = '#ee7b0c';
      break;
    // ... snipped some data types for brevity ...
    default:
      style.icon.fillColor = '#017cff';
      style.icon.strokeColor = '#0000ff';
  }
  return style;
}
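
The wiring isn't shown in the snippets above, but a style function like this is applied to the whole layer with the Data layer's setStyle() method, presumably along these lines:

// The Data layer calls techCommItemStyle once per feature to compute its
// appearance; calling setStyle() again re-evaluates the styles (useful
// when a checkbox is toggled).
map.data.setStyle(techCommItemStyle);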

Set listeners to respond when the user hovers over or clicks a marker. For example, this listener opens an info window on hover, showing information about the relevant data item:

map.data.addListener('mouseover', function(event) {
  createInfoWindow(event.feature);
  infoWindow.open(map);
});
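
The createInfoWindow helper isn't shown in the snippets above. A minimal sketch of what it might look like, assuming a single shared InfoWindow and the point features added earlier (the markup is illustrative):

// Hypothetical helper: fills a shared info window with a feature's
// properties and anchors it at the feature's location.
var infoWindow = new google.maps.InfoWindow();

function createInfoWindow(feature) {
  infoWindow.setContent(
      '<b>' + feature.getProperty('name') + '</b><br/>' +
      feature.getProperty('description'));
  // For a point feature, getGeometry() returns a Data.Point;
  // its get() method yields the underlying LatLng.
  infoWindow.setPosition(feature.getGeometry().get());
}
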
The Place Autocomplete search

The last piece of the puzzle is to let users search for a specific location on the map, so that they can zoom in and see the events in that location. The location search box on the map is provided by the Place Autocomplete widget from the Google Places API.
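
The search-box wiring isn't shown above; a minimal sketch, assuming a text input with the id location-search (the id and the zoom level are illustrative):

// Attach the Places Autocomplete widget to a text input, then pan and
// zoom the map to whichever place the user picks.
var input = document.getElementById('location-search');
var autocomplete = new google.maps.places.Autocomplete(input);

google.maps.event.addListener(autocomplete, 'place_changed', function() {
  var place = autocomplete.getPlace();
  if (place.geometry) {
    map.panTo(place.geometry.location);
    map.setZoom(8); // illustrative zoom level
  }
});
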
What's next?

Tech Comm on a Map is an ongoing project. We technical communicators are using a map to document our presence in the world!

Would you like to contribute? The code is on GitHub.

Posted by Louis Gray, Googler
Categories: Programming

Using SSD as a Foundation for New Generations of Flash Databases - Nati Shalom

“You just can't have it all” is a phrase that most of us are accustomed to hearing, and one that many still believe to be true when discussing the speed, scale and cost of processing data. High-speed data processing requires more memory, and that drives up cost, because memory is on average considerably more expensive than commodity disk drives. The idea that data systems cannot reliably give you both speed and scale, not to mention at the right cost, has long been debated, though it was cemented by computer scientist Eric Brewer, who introduced us to the CAP theorem.

The CAP Theorem and Limitations for Distributed Computer Systems

Categories: Architecture

A Typical Conversation on Twitter

NOOP.NL - Jurgen Appelo - Wed, 07/09/2014 - 15:58

writer: “I believe that A is true.”

reader: “That’s stupid. Everyone knows that A’ is true.”

writer: “I wasn’t referring to A’. I was talking about A.”

reader: “There’s nothing wrong with A’.”

The post A Typical Conversation on Twitter appeared first on NOOP.NL.

Categories: Project Management