
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

How Digital is Changing Physical Experiences

The business economy is going through massive change, as the old world meets the new world.

The convergence of mobility, analytics, social media, cloud computing, and embedded devices is driving the next wave of digital business transformation, where the physical world meets new online possibilities.

And it’s not limited to high-tech and media companies.

Businesses that master the digital landscape gain a strategic, competitive advantage. They can create new customer experiences, gain better insights into customers, and respond to new opportunities and changing demands in a seamless and agile way.

In the book Leading Digital: Turning Technology into Business Transformation, George Westerman, Didier Bonnet, and Andrew McAfee share some of the ways that businesses are meshing the physical experience with the digital experience to generate new business value.

Provide Customers with an Integrated Experience

Businesses that win find new ways to blend the physical world with the digital world.  To serve customers better, businesses are integrating the experience across physical, phone, mail, social, and mobile channels for their customers.

Via Leading Digital: Turning Technology into Business Transformation:

“Companies with multiple channels to customers--physical, phone, mail, social, mobile, and so on--are experiencing pressure to provide an integrated experience.  Delivering these omni-channel experiences requires envisioning and implementing change across both front-end and operational processes.  Innovation does not come from opposing the old and the new.  But as Burberry has shown,  innovation comes from creatively meshing the digital and the physical to reinvent new and compelling customer experiences and to foster continuous innovation.”

Bridge In-Store Experiences with New Online Possibilities

Starbucks offers a simple example of blending digital experiences with a physical store. To serve customers better, the company delivers premium content to its in-store customers.

Via Leading Digital: Turning Technology into Business Transformation:

“Similarly, the unique Starbucks experience is rooted in connecting with customers in engaging ways.  But Starbucks does not stop with the physical store.  It has digitally enriched the customer experience by bridging its local, in-store experience with attractive new online possibilities.  Delivered via a free Wi-Fi connection, the Starbucks Digital Network offers in-store customers premium digital content, such as the New York Times or The Economist, to enjoy alongside their coffee.  The network also offers access to local content, from free local restaurant reviews from Zagat to check-in via Foursquare.”

An Example of Museums Blending Technology + Art

Museums can create new possibilities by turning walls into digital displays.  With a digital display, the museum can showcase all of their collections and provide rich information, as well as create new backdrops, or tailor information and tours for their customers.

Via Leading Digital: Turning Technology into Business Transformation:

“Combining physical and digital to enhance customer experiences is not limited to just commercial enterprises.  Public services are getting in on the act.  The Cleveland Museum of Art is using technology to enhance the experience and the management of visitors.  'EVERY museum is searching for this holy grail, this blending of technology and art,' said David Franklin, the director of the museum.

 

Forty-foot-wide touch screens display greeting-card-sized images of all three thousand objects and offer information like the location of the actual piece.  By touching an icon on the image, visitors can transfer it from the wall to an iPad (their own, or rented from the museum for $5 a day), creating a personal list of favorites.  From this list, visitors can design a personalized tour, which they can share with others.

 

'There is only so much information you can put on a wall, and no one walks around with catalogs anymore,' Franklin said.  The app can produce a photo of the artwork's original setting--seeing a tapestry in a room filled with tapestries, rather than in a white-walled gallery, is more interesting.  Another feature lets you take the elements of a large tapestry and rearrange them in either comic-book or movie-trailer format.  The experience becomes fun, educational, and engaging.  This reinvention has lured new technology-savvy visitors, but has also made seasoned museum-goers come more often.”

As you figure out the future capability vision for your business, and re-imagine what’s possible, consider how the Nexus of Forces (Cloud, Mobile, Social, and Big Data), along with the Internet-of-Things mega-trend, can help you shape your digital business transformation.

You Might Also Like

Cloud Changes the Game from Deployment to Adoption

Management Innovation is at the Top of the Innovation Stack

McKinsey on Unleashing the Value of Big Data Analytics

Categories: Architecture, Programming

How League of Legends Scaled Chat to 70 million Players - It takes Lots of minions.

How would you build a chat service that needed to handle 7.5 million concurrent players, 27 million daily players, 11K messages per second, and 1 billion events per server, per day?
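As a quick sanity check (my arithmetic, not a figure from the talk), a billion events per server per day is consistent with the quoted per-second message rate:

```python
# One billion events per server per day, averaged over 86,400 seconds,
# lands right around the quoted ~11K messages per second.
events_per_day = 1_000_000_000
seconds_per_day = 24 * 60 * 60  # 86,400
print(round(events_per_day / seconds_per_day))  # 11574
```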

What could generate so much traffic? A game of course. League of Legends. League of Legends is a team based game, a multiplayer online battle arena (MOBA), where two teams of five battle against each other to control a map and achieve objectives.

For teams to succeed communication is crucial. I learned that from Michal Ptaszek, in an interesting talk on Scaling League of Legends Chat to 70 million Players (slides) at the Strange Loop 2014 conference. Michal gave a good example of why multiplayer team games require good communication between players. Imagine a basketball game without the ability to call plays. It wouldn’t work. So that means chat is crucial. Chat is not a Wouldn’t It Be Nice feature.

Michal structures the talk in an interesting way, using as a template the expression: Make it work. Make it right. Make it fast.

Making it work meant starting with XMPP as a base for chat. WhatsApp followed the same strategy. Out of the box you get something that works and scales well...until the user count really jumps. To make it right and fast, like WhatsApp, League of Legends found themselves customizing the Erlang VM. Adding lots of monitoring capabilities and performance optimizations to remove the bottlenecks that kill performance at scale.

Perhaps the most interesting part of their chat architecture is the use of Riak’s CRDTs (commutative replicated data types) to achieve their goal of a shared-nothing architecture with massively linear horizontal scalability. CRDTs are still esoteric, so you may not have heard of them yet, but they are the next cool thing if you can make them work for you. They represent a different way of thinking about handling writes.
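To get a feel for why CRDTs change how you think about writes, here is a minimal sketch (in Python, purely illustrative; Riak's CRDTs are considerably more elaborate) of a grow-only counter, one of the simplest CRDTs:

```python
# A minimal grow-only counter (G-Counter) CRDT sketch.
# Each node tracks only its own count; merge takes the per-node maximum,
# so replicas converge no matter what order merges happen in.

class GCounter:
    def __init__(self, node_id):
        self.node_id = node_id
        self.counts = {}  # node_id -> count

    def increment(self, n=1):
        self.counts[self.node_id] = self.counts.get(self.node_id, 0) + n

    def merge(self, other):
        # Merge is commutative, associative, and idempotent.
        for node, count in other.counts.items():
            self.counts[node] = max(self.counts.get(node, 0), count)

    def value(self):
        return sum(self.counts.values())

# Two replicas accept writes independently...
a, b = GCounter("a"), GCounter("b")
a.increment(3)
b.increment(2)

# ...and converge after exchanging state in either order.
a.merge(b)
b.merge(a)
print(a.value(), b.value())  # 5 5
```

Because merge is commutative, associative, and idempotent, replicas can accept writes without coordinating, which is what enables the shared-nothing scaling described above.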

Let’s learn how League of Legends built their chat system to handle 70 million players...

Stats
Categories: Architecture

Managing Your Project With Dilbert Advice — Not!

Herding Cats - Glen Alleman - Mon, 10/13/2014 - 16:11

Scott Adams provides cartoons of what not to do for most things technical, software and hardware alike. I actually saw him once, when he worked for PacBell in Pleasanton, CA. I was on a job at a major oil company, deploying document management systems for OSHA 1910.119 (process safety management) and integrating CAD systems for control of safety-critical documents.

The most popular use of Dilbert cartoons lately has been by the #NoEstimates community, in support of the notion that estimates are somehow evil, are used to make commitments that can't be met, and generally should be avoided when spending other people's money.

The cartoon below resonated with me for several reasons. What's happening here is a classic case of intentionally ignoring the established process of Reference Class Forecasting and, in typical Dilbert fashion, doing stupid things on purpose.

[Dilbert cartoon]

Reference Class Forecasting is a well-developed estimating process used across a broad range of technical, business, and finance domains. The characters above seem to know nothing about RCF. As a result, they are Doing Stupid Things On Purpose (DSTOP).
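In its simplest form, Reference Class Forecasting means estimating a new project from the distribution of actual outcomes of similar past projects (the "outside view") rather than from an inside-view single-point guess. A toy sketch, with made-up numbers:

```python
# Hypothetical reference class: actual durations (in weeks) of
# comparable past projects. The numbers are invented for illustration.
reference_class = [10, 12, 14, 15, 16, 18, 21, 24, 30, 42]

def percentile(data, p):
    """Nearest-rank percentile of a sample (simple, not interpolated)."""
    data = sorted(data)
    k = min(len(data) - 1, int(p / 100 * len(data)))
    return data[k]

# Quote the estimate as a confidence level drawn from history,
# not a single number pulled from the inside view.
p50 = percentile(reference_class, 50)
p80 = percentile(reference_class, 80)
print(f"50% confident: <= {p50} weeks; 80% confident: <= {p80} weeks")
```

The point is that the estimate carries a stated probability, in units meaningful to the decision makers, instead of being a bare commitment.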

Here's how not to DSTOP: estimate cost and schedule, the associated risks, and the technical risk that the product you're building can't do what it's supposed to do on or before the date it needs to, at or below the cost you need, in order to stay in business.

The approach below may be complete overkill for your domain. So start by asking: what's the Value at Risk? How much of our customer's money are we willing to write off if we don't have a sense of what DONE looks like, in units of measure meaningful to the decision makers? Don't know that? Then it's likely you've already put that money at risk, you're likely late, and you don't really know what capabilities will be produced when you run out of time and money.

Don't end up a cartoon character in a Dilbert strip. Learn how to properly manage your efforts, and the efforts of others, when spending your customer's money.

Managing in the presence of uncertainty from Glen Alleman
Categories: Project Management

Getting Started With Meteor Tutorial (In the Cloud)

Making the Complex Simple - John Sonmez - Mon, 10/13/2014 - 15:00

In this Meteor tutorial, I am going to show you how to quickly get started with the JavaScript Framework, Meteor, to develop a Meteor application completely from the cloud. I’ve been pretty excited about Meteor since I first saw a demo of it, but it can be a little daunting to get started, so I […]

The post Getting Started With Meteor Tutorial (In the Cloud) appeared first on Simple Programmer.

Categories: Programming

Xebia KnowledgeCast Episode 5: Madhur Kathuria and Scrum Day Europe 2014

Xebia Blog - Mon, 10/13/2014 - 10:48

xebia_xkc_podcast
The Xebia KnowledgeCast is a bi-weekly podcast about software architecture, software development, lean/agile, continuous delivery, and big data. Also, we'll have some fun with stickies!

In this 5th episode, we share key insights of Madhur Kathuria, Xebia India’s Director of Agile Consulting and Transformation, as well as some impressions of our Knowledge Exchange and Scrum Day Europe 2014. And of course, Serge Beaumont will have Fun With Stickies!

First, Madhur Kathuria shares his vision on Agile and we interview Guido Schoonheim at Scrum Day Europe 2014.

In this episode's Fun With Stickies Serge Beaumont talks about wide versus deep retrospectives.

Then, we interview Martin Olesen and Patricia Kong at Scrum Day Europe 2014.

Want to subscribe to the Xebia KnowledgeCast? Subscribe via iTunes, or use our direct rss feed.

Your feedback is appreciated. Please leave your comments in the shownotes. Better yet, send in a voice message so we can put you ON the show!

Credits

Intentional Disregard for Good Engineering Practices?

Herding Cats - Glen Alleman - Mon, 10/13/2014 - 03:22

It seems lately there is an intentional disregard of the core principles of business development of software-intensive systems. The #NoEstimates community does this, but other collections of developers do as well.

  • We'd rather be writing code than estimating how much it's going to cost to write code.
  • Estimates are a waste.
  • The more precise the estimate, the more deceptive it is.
  • We can't predict the future, and it's a waste trying to.
  • We can make decisions without estimating.

These notions are, of course, seriously misinformed about how probability and statistics work in the estimating paradigm. I've written about this in the past. But there are a few new books we're putting to work in our Software Intensive Systems (SIS) work that may be of interest to those wanting to learn more.

These are foundation texts for the profession of estimating. The continued disregard (ignoring, possibly) of these principles has become all too clear, not just in the sole-contributor software development domain, but all the way up to multi-billion dollar programs in defense, space, infrastructure, and other high-risk domains.

Which brings me back to a core conjecture: there is no longer any engineering discipline in the software development domain, at least outside embedded systems like flight controls, process control, telecommunications equipment, and the like. There was a conjecture a while back that the Computer Science discipline at the university level should be split into software engineering and coding.

Here's a sample of the Software Intensive System paradigm, where the engineering of the systems is a critical success factor. And Yes Virginia, the Discipline of Agile is applied in the Software Intensive Systems world - emphasis on the term DISCIPLINE.

Categories: Project Management

SPaMCAST 311 – Backlog Grooming, Software Sensei, Containment-Viruses and Software

www.spamcast.net


Listen to SPaMCAST 311 now!

SPaMCAST 311 features our essay on backlog grooming. Backlog grooming is an important technique that can be used in any Agile or Lean methodology. At one point the need for backlog grooming was debated; however, most practitioners now find the practice useful. The simplest definition of backlog grooming is the preparation of user stories or requirements to ensure they are ready to be worked on. The act of grooming and preparation can cover a wide range of specific activities and can be performed at any time.

We also have a new installment of Kim Pries's Software Sensei column. What is the relationship between the containment of diseases and bad software? Kim makes the case that the processes for dealing with both are related.

The Essay begins

Backlog grooming is an important technique that can be used in any Agile or Lean methodology. At one point the need for backlog grooming was debated; however, most practitioners now find the practice useful. The simplest definition of backlog grooming is the preparation of user stories or requirements to ensure they are ready to be worked on. The act of grooming and preparation can cover a wide range of specific activities and can be performed at any time (although some times are better than others).

Listen to the rest now!

Next

SPaMCAST 312 features our interview with Alex Neginsky.  Alex is a real practitioner in a real company that has really applied Agile.  Almost everyone finds their own path with Agile.  Alex has not only found his path but has gotten it right and is willing to share!

Upcoming Events

DCG Webinars:

Agile Risk Management – It Is Still Important! October 24, 2014, 11:30 EDT

Has the adoption of Agile techniques magically erased risk from software projects? Or, have we just changed how we recognize and manage risk?  Or, more frighteningly, by changing the project environment through adopting Agile techniques, have we tricked ourselves into thinking that risk has been abolished?

Upcoming Conferences:

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting, and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management


Kaizen and Urgency

Fire Alarm

Hand Drawn Chart Saturday

Kaizen is a Japanese word meaning good change. Change in a dynamic business environment has become an accepted norm. Organizations must adapt or lose relevancy. The concept of kaizen has been adopted within the information technology industry as part of core management practices. In business terms, kaizen has been defined as continuous incremental change. You need energy to make change occur; in many cases, a sense of urgency is the mechanism used to generate the energy that drives continuous change.

John Kotter, author of Leading Change and creator of the eight-step model of change, suggests that without a sense of urgency people don’t put in the hard work needed to make change happen. Urgency begins by providing a focus that helps people first become aware of the need for change, and then pay attention to that need and the factors causing it by piercing through the noise (see the awareness, attention, action model). The energy a sense of urgency injects into the process is needed to make the step from paying attention to taking action, breaking through complacency and disrupting the status quo.

The need for urgency in the equation for change can lead to problems. The first is the potential for confusing importance with urgency; in a perfect world, we would react only to what is both important and urgent. The second problem area is manufactured or false urgency. Both problematic scenarios sap the organization’s energy in the long term, making it more difficult to recognize and react when real change is needed. If, further, there is an overreliance on a manufactured or false sense of urgency, the focus becomes short-term rather than strategic.

My first job out of university was as a statistical analyst/sales forecaster for a women’s garment manufacturer. We had six “seasons,” or product line offerings, every year. Quotas were set for the sales force (incrementally bigger than last year’s). The sales management team, from regional sales managers to the national sales manager, provided constant “motivation” to the sales force. Motivation always included the dire consequences of missing the quota. There was always a sense of urgency, which drove action, including account prospecting (good behavior) and order padding (bad behavior). The manufactured urgency generated both good and bad change, and when things were going well it was pretty easy to sort those out. However, the business cycle has never been repealed, and when an economic downturn occurred it was difficult to recognize the real urgency. The organization therefore did not make strategic changes quickly enough. An 80-year-old firm with 750 million dollars in sales failed nearly overnight.

Urgency can become a narcotic that makes the need for real change harder to recognize and the change itself harder to generate. Signs of an overreliance on urgency can include:

  • People who are “too busy” to do the right thing,
  • Generating highly crafted PowerPoint pitches for even small changes,
  • Continually chasing a new silver bullet before the last one has been evaluated.

The goal of kaizen is to continually improve the whole organization. The whole organization includes empowering everyone, from development and operational personnel to all layers of management, to recognize and make change. Motivation is needed to evoke good change; however, we need to be careful that motivation does not translate into a false sense of urgency.


Categories: Process Management

Principles Trump Practices

Herding Cats - Glen Alleman - Sat, 10/11/2014 - 15:23

Principles, Practices, and Processes are the basis of successful project management. It is popular in some circles to think that practices come before principles.

The principles of management, whether of projects, software development, or product development, are immutable.

  • What does done look like?
  • What's our plan to reach done?
  • What resources will we need along the way to done?
  • What impediments will we encounter, and how will we overcome them?
  • How are we going to measure our progress toward done in units meaningful to the decision makers?

These are immutable principles. They can be used to test practices and processes by asking what evidence there is that the practice or process enables the principle to be applied, and how we know that the principle is being fulfilled.

 

Categories: Project Management

Lessons from running Neo4j based ‘hackathons’

Mark Needham - Sat, 10/11/2014 - 11:52

Over the last six months my colleagues and I have been running hands-on Neo4j-based sessions every few weeks, and I was recently asked if I could write up the lessons we’ve learned.

So in no particular order here are some of the things that we’ve learnt:

Have a plan but don’t stick to it rigidly

Something we learnt early on is that it’s helpful to have a rough plan of how you’re going to spend the session otherwise it can feel quite chaotic for attendees.

We show people that plan at the beginning of the session so that they know what to expect and can plan their time accordingly if the second part doesn’t interest them as much.

Having said that, we’ve often gone off on a tangent and since people have been very interested in that we’ve just gone with it.

This sometimes means that you don’t cover everything you had in mind but the main thing is that people enjoy themselves so it’s nothing to worry about.

Prepare for people to be unprepared

We try to set expectations in advance of the sessions about what people should prepare or have installed on their machines, but despite that you’ll have people at varying levels of readiness.

Having noticed this trend over a few months we now allot time in the schedule for getting people up and running and if we’re really struggling then we’ll ask people to pair with each other.

There will also be experience level differences so we always factor in some time to go over the basics for those who are new. We also encourage experienced people to help the others out – after all you only really know if you know something when you try to teach someone else!

Don’t try to do too much

Our first ‘hackathon’-esque event involved an attempt to build a Java application based on a British Library dataset.

I thought we’d be able to model the data set, import it and then wire up some queries to an application in a few hours.

This proved to be ever so slightly ambitious!

It took much longer than anticipated to do those first two steps and we didn’t get to build any of the application – teaching people how to model in a graph is probably a session in its own right.

Show the end state

Feedback we got from attendees to the first few versions was that they’d like to see what the end state should have looked like if they’d completed everything.

In our Clojure Hackathon Rohit got the furthest so we shared his code with everyone afterwards.

An even better approach is to have the final solution ready in advance and have it checked in on a different branch that you can point people at afterwards.

Show the intermediate states

Another thing we noticed was that if people got behind in the first part of the session then they’d never be able to catch up.

Nigel therefore came up with the idea of snapshotting intermediate states so that people could reset themselves after each part of the session. This is something that the Polymer tutorial does as well.

We worked out that we have two solid one hour slots before people start getting distracted by their journey home so we came up with two distinct types of tasks for people to do and then created a branch with the solution at the end of those tasks.

No doubt there will be more lessons to come as we run more sessions, but this is where we are at the moment. If you fancy joining in, our next session is Java-based, in a couple of weeks’ time.

Finally, if you want to see a really slick hands-on meetup, head over to the London Clojure Dojo. Bruce Durling has even written up some tips on how to run one yourself.

Categories: Programming

On F# and Object Oriented Guilt

Eric.Weblog() - Eric Sink - Sat, 10/11/2014 - 03:00

I'm writing a key-value database in F#. Because clearly the world needs another key-value database, right?

Actually, I'm doing this to learn F#. I wanted a non-trivial piece of code to write. Something I already know a little bit about.

(My design is a log structured merge tree, conceptually similar to LevelDB or the storage layer of SQLite4. The code is on GitHub.)

As a starting point, I wrote the whole thing in C# first. Then I ported it to F#, in a mostly line-by-line port. My intention was (and still is) to use that port as a starting point, evolving the F# version toward a more functional design.

The final result may not become useful, but I should end up knowing a lot more about F# than I knew when I started.

Learning

Preceding all this code was a lot of reading. I spent plenty of time scouring fsharpforfunandprofit.com and fsharp.org and the F# stuff on MSDN and Stack Overflow.

I learned that F# is closely related to OCaml, about which I knew nothing. But then I learned that OCaml is a descendant of ML, which I vaguely remember from when I took CS 325 as an undergrad with Dr. Campbell. This actually increased my interest, as it tied F# back to a positive experience of mine. (OTOH, if I run across something called L# and find that it has roots in Prolog, my interest shall proceed no further.)

I learned that there is an F# community, and that community is related to (or is a subset of) the functional programming community.

And I picked up on a faint whiff of tension within that community, centered on questions about whether something is purely functional or not. There are purists. And there are pragmatists.

I saw nothing ugly. But I definitely got the impression that I had choices to make, and that I might encounter differing opinions about how I should make those choices.

I decided to just ignore all that and write code. This path has usually served me well.

So I started writing my LSM tree in C#.

Digression: What is a log structured merge tree?

The basic idea of an LSM tree is that a database is implemented as a list of lists. Each list is usually called a "sorted run" or a "segment".

The first segment is kept in memory, and is the only one where inserts, updates, and deletes happen.

When the memory segment gets too large, it is flushed out to disk. Once a segment is on disk, it is immutable.

Searching or walking the database requires checking all of the segments in the proper order: for a given key, newer segments override older segments.

The more segments there are, the trickier and slower this will be. So we can improve things by merging two disk segments together to form a new one.

The memory segment can be implemented with whatever data structure makes sense. The disk segment is usually something like a B+tree, except it doesn't need to support insert/update/delete, so it's much simpler.
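The list-of-lists idea fits in a few lines. Here is a toy sketch (in Python rather than the C#/F# of the actual project, with all names invented) of a database built from one mutable memory segment plus a stack of immutable flushed segments:

```python
# Minimal LSM-tree sketch: one mutable in-memory segment plus a stack
# of immutable segments. Newer segments shadow older ones on reads.
TOMBSTONE = object()  # marks a deleted key

class TinyLSM:
    def __init__(self, flush_threshold=4):
        self.memory = {}   # the only segment that accepts writes
        self.disk = []     # immutable segments, newest first
        self.flush_threshold = flush_threshold

    def put(self, key, value):
        self.memory[key] = value
        if len(self.memory) >= self.flush_threshold:
            self.flush()

    def delete(self, key):
        self.put(key, TOMBSTONE)

    def flush(self):
        # Once flushed, a segment is never modified again.
        self.disk.insert(0, dict(self.memory))
        self.memory = {}

    def get(self, key):
        # Check segments newest-to-oldest; the first hit wins.
        for segment in [self.memory] + self.disk:
            if key in segment:
                v = segment[key]
                return None if v is TOMBSTONE else v
        return None

    def compact(self):
        # Merge all disk segments into one, newest values winning.
        # (A real implementation would also drop tombstones here.)
        merged = {}
        for segment in reversed(self.disk):
            merged.update(segment)
        self.disk = [merged]

db = TinyLSM(flush_threshold=2)
db.put("a", 1)
db.put("b", 2)   # triggers a flush to an immutable segment
db.put("a", 9)   # newer segment shadows the old value of "a"
print(db.get("a"), db.get("b"))  # 9 2
```

A real disk segment would of course be a B+tree on actual storage, not a dict, but the read path (newest segment wins) and the flush/compact life cycle are the same shape.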

The C# version: OO Everywhere

It's all about ICursor.

public interface ICursor
{
    void Seek(byte[] k, SeekOp sop);
    void First();
    void Last();
    void Next();
    void Prev();
    bool IsValid();
    byte[] Key();
    Stream Value();
    int ValueLength();
    int KeyCompare(byte[] k);
}

(Credit to D. Richard Hipp and the SQLite developers. ICursor is more or less what you would get if you perused the API for the SQLite4 storage engine and translated it to C#.)

This interface defines the methods which can be used to search or iterate over one segment. It's an interface. It doesn't care about how that segment is stored. It could be a memory segment. It could be a disk segment. It could be something else, as long as it plays nicely and follows the rules.

I have three objects which implement ICursor.

First, there is MemorySegment, which is little more than a wrapper around a System.Collections.Generic.Dictionary<byte[],Stream>. It has a method called OpenCursor() which returns an ICursor. Nothing too interesting here.

The bulk of my code is in dealing with segments on "disk", which are represented as B+trees. To construct a disk segment, call BTreeSegment.Create. Its parameters are a System.IO.Stream (into which the B+tree will be written) and an ICursor (from which the keys and values will be obtained).

To get the ICursor for that BTreeSegment, call BTreeSegment.OpenCursor().

The object that makes it all work is MultiCursor. This is an ICursor that has one or more subcursors. You can search or iterate over a MultiCursor just like any other. Under the hood, it will deal with all the subcursors and present the illusion that they are one segment.

And mostly that's it.

  • To search the database, open a MultiCursor on all the segments and call Seek().

  • To flush a memory segment to disk, pass its cursor to BTreeSegment.Create.

  • To combine any two (or more) segments into one, create a MultiCursor on them and pass it to BTreeSegment.Create.

As I said above, this is not a complete implementation. If this were "LevelDB in F#", there would need to be, for example, something that owns all these segments and makes smart decisions about when to flush the memory segment and when to merge disk segments together. That piece of code is currently absent here.

Porting to F#

Before I started the F# port, I did some soul searching about the overall design. ICursor and its implementations are very elegant. Would I have to give them up in favor of something more functional?

I decided to just ignore all that and write code. This path has usually served me well.

So I proceeded to do the line-by-line port to F#:

  • I made a copy of the C# code, which I'll refer to as CsFading.

  • Then I started a new F# library project, which we might call FsGrowing.

  • I changed my xUnit test to reference CsFading instead of my actual C# version.

  • I added a reference from CsFading to [the initially empty] FsGrowing.

Then I executed the following loop:

while (!CsFading.IsEmpty())
{
    var piece = ChooseSomethingFromCsFading();
    FsGrowing.AddImplementation(piece);
    CsFading.Remove(piece);
    GetTheTestsPassingAgain();
}

I started with this:

type ICursor =
    abstract member Seek : k:byte[] * sop:SeekOp -> unit
    abstract member First : unit -> unit
    abstract member Last : unit -> unit
    abstract member Next : unit -> unit
    abstract member Prev : unit -> unit
    abstract member IsValid : unit -> bool
    abstract member Key : unit -> byte[]
    abstract member Value : unit -> Stream
    abstract member ValueLength : unit -> int
    abstract member KeyCompare : k:byte[] -> int

And little by little, CsFading got smaller as FsGrowing got bigger.

It was very gratifying when I reached the point where the F# version was passing my test suite.

But I didn't celebrate long. The F# code was a mess. Lots of mutables. Plenty of if-then-else. Nulls. Mutable collections. While loops.

Basically, I had written C# using F# syntax. Even the pseudocode for my approach was imperative.

But there were plenty of positives as well. Porting the C# code to F# actually made the C# code better. It was like a very intensive code review.

And the xUnit tests got very cool when I configured them to run every test four times:

  • Use the C# version
  • Use the F# version
  • Use the C# version to write B+trees and the F# version to read them.
  • Use the F# version to write B+trees and the C# version to read them.

If nothing else, I had convinced myself that full interop between F# and C# was actually quite smooth.
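The cross-checking idea generalizes: each of the four configurations is just a writer implementation paired with a reader implementation. As a hedged sketch (with trivial stand-ins for the real B+tree writer and reader, which serialize to a Stream):

```fsharp
// One round trip: write the (sorted) pairs with one implementation,
// read them back with another, and check nothing was lost. The post's
// suite runs this four ways: C#/C#, F#/F#, C# writes + F# reads,
// and F# writes + C# reads.
let roundTrip write read (data: (string * string) list) =
    let sorted = List.sortBy fst data
    read (write sorted) = sorted

// Trivial stand-ins so the sketch runs; not the real B+tree code.
let fakeWrite (kvs: (string * string) list) : byte[] =
    let text =
        kvs
        |> List.map (fun (k, v) -> sprintf "%s=%s" k v)
        |> String.concat "\n"
    System.Text.Encoding.UTF8.GetBytes(text)

let fakeRead (bytes: byte[]) : (string * string) list =
    let text = System.Text.Encoding.UTF8.GetString(bytes)
    text.Split('\n')
    |> Array.map (fun line ->
        let i = line.IndexOf('=')
        line.Substring(0, i), line.Substring(i + 1))
    |> Array.toList
```

The real suite swaps the C# and F# BTreeSegment implementations into the writer and reader slots.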

Also, I didn't completely punt on functional stuff during the port. There were a few places where I changed things toward a more F#-ish approach. Here's a function that turned out kinda nicely:

let rec searchLeaf k min max sop le ge = 
    if max < min then
        if sop = SeekOp.SEEK_EQ then -1
        else if sop = SeekOp.SEEK_LE then le
        else ge
    else
        let mid = (max + min) / 2
        let kmid = keyInLeaf(mid)
        let cmp = compareKeyInLeaf mid k
        if 0 = cmp then mid
        else if cmp < 0 then searchLeaf k (mid+1) max sop mid ge
        else searchLeaf k min (mid-1) sop le mid

It's a binary search of the sorted key list within a single leaf page of the B+tree. The SeekOp can be used to specify that if the sought key does not exist, the one just before it or after it should be returned.

And yes, searchLeaf would be more idiomatic if it were using pattern matching instead of if-then-else. But at least I got rid of the mutables and the loop and made it recursive! :-)
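For what it's worth, a pattern-matching variant might look something like the following. This is a standalone sketch: the leaf's keys are stubbed here as a sorted array of strings, whereas the real code compares keys inside a B+tree page.

```fsharp
type SeekOp = SEEK_EQ | SEEK_LE | SEEK_GE

// Same binary search, restated with pattern matching.
// keys stands in for the leaf page's sorted key list.
let searchLeaf (keys: string[]) (k: string) (sop: SeekOp) =
    let rec go min max le ge =
        if max < min then
            match sop with
            | SEEK_EQ -> -1
            | SEEK_LE -> le
            | SEEK_GE -> ge
        else
            let mid = (max + min) / 2
            match compare keys.[mid] k with
            | 0 -> mid
            | c when c < 0 -> go (mid + 1) max mid ge
            | _ -> go min (mid - 1) le mid
    go 0 (keys.Length - 1) (-1) (-1)
```

The match on sop replaces the if-then-else chain, and the guarded match on the comparison makes the three-way branch explicit.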

Anyway, I expected to reach this point with a clear understanding of what to do next. And I didn't have that.

Gleaning from the experience of others

In terms of timing, all of this happened just before and during the Xamarin Evolve conference (which, as I write this, ended today). The C# version was mostly written the week before the conference. The F# port was done during the conference.

And that moment where the F# version passed the test suite, leaving me clueless about how to proceed? That happened Wednesday.

On Thursday, Rachel Reese gave a fantastic session on F#. I left with the impression that maybe a little OO in my F# wasn't so bad.

On Friday, Larry O'Brien gave another fantastic session on F#. And I left with the even stronger impression that even though learning cool functional stuff is great, I don't have to be a functional purist to benefit from F#.

I also found a fair amount of "Don't feel guilty about OO in F#" in a document called The F# Component Design Guidelines on fsharp.org.

Anyway, for now, ICursor will remain. Maybe there's a more functional way, but right now I don't know what that would look like.

So I'm going to just ignore all that and write code. This path has usually served me well.

 

Clean Up

Why do death march (gruelingly overworked and therefore high-risk) projects still occur? While poor processes, poor estimation and out-of-control changes are all still factors, I am seeing more projects that reflect poor choices made under business pressure.  How often have you heard someone say that the end justifies the means?  It is easier to say yes to everything, avoid hard discussions about trade-offs, and just admonish the troops to work harder and smarter.

In Agile, we would expect backlogs and release plans to force those sorts of discussions. Unless, of course, you stop doing them.  I recently talked to a group that had identified, during a retrospective, the need to do a better job of breaking accepted backlog items down into tasks during sprint planning, only to fall prey to an admonition to spend less time planning and more time “working,” leading to rework and disappointing sprint results.

As you discover messes, whether in the code you are working on or in the processes you are using to guide your work, you are obligated to clean up after yourself.  If you don’t, sooner or later no one will want to play in your park . . . and probably neither will you.


Categories: Process Management

Stuff The Internet Says On Scalability For October 10th, 2014

Hey, it's HighScalability time:


Social climber: Instagram explorer scales to new heights in New York.

 

  • 11 billion: world population in 2100; 10 petabytes: Size of Netflix data warehouse on S3; $600 Billion: the loss when a trader can't type; 3.2: 0-60 mph time of probably not my next car.
  • Quotable Quotes:
    • @kahrens_atl: Last week #NewRelic Insights wrote 618 billion events and ran 237 trillion queries with 9 millisecond response time #FS14
    • @sustrik: Imagine debugging on a quantum computer: Looking at the value of a variable changes its value. I hope I'll be out of business by then.
    • Arrival of the Fittest: Solving Evolution's Greatest Puzzle: Every cell contains thousands of such nanomachines, each of them dedicated to a different chemical reaction. And all their complex activities take place in a tiny space where the molecular building blocks of life are packed more tightly than a Tokyo subway at rush hour. Amazing.
    • Eric Schmidt: "The simplest outcome is we're going to end up breaking the Internet," said Google's Schmidt. Foreign governments, he said, are "eventually going to say, we want our own Internet in our country because we want it to work our way, and we don't want the NSA and these other people in it."
    • Antirez: Basically it is neither a CP nor an AP system. In other words, Redis Cluster does not achieve the theoretical limits of what is possible with distributed systems, in order to gain certain real world properties.
    • @aliimam: Just so we can fathom the scale of 1B vs 1M: 1,000,000 seconds is 11.5 days. 1,000,000,000 seconds is 31.6 YEARS
    • @kayousterhout: 92% of catastrophic failures in distributed data-intensive systems caused by incorrect error handling https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-yuan.pdf … #osdi14
    • @DrQz: 'The purpose of computing is insight, not numbers.' (Hamming) Sometimes numbers ARE the insight so, make them accessible too. (Me)

  • Robert Scoble on the Gillmor Gang said that because of the crush of signups, ello had to throttle invites. Their single PostgreSQL server couldn't handle it captain.

  • Containers are getting much larger with new composite materials. Not that kind of container. Shipping containers. High oil costs have driven ships carrying 5000 containers to evolve. Now they can carry 18,000 and soon 19,000 containers!

  • If you've wanted to make a network game then this is a great start. Making Fast-Paced Multiplayer Networked Games is Hard: Fast-paced multiplayer games over the Internet are hard, but possible. First understanding your constraints then building within them is essential. I hope I have shed some light on what those constraints are and some of the techniques you can use to build within them. No doubt there are other ways out there and ways yet to be used. Each game is different and has its own set of priorities. Learning from what has been done before could help a great deal.

  • Arrival of the Fittest: Solving Evolution's Greatest Puzzle: Environmental change requires complexity, which begets robustness, which begets genotype networks, which enable innovations, the very kind that allow life to cope with environmental change, increase its complexity, and so on, in an ascending spiral of ever-increasing innovability...is the hidden architecture of life.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

New daily stand up questions

Xebia Blog - Fri, 10/10/2014 - 15:51

This post provides some alternative stand-up questions to make your stand-up forward-looking, goal focused, and team focused.

The questions are:

  1. What have I achieved since our last SUM?
  2. What is my goal for today?
  3. What things keep me from reaching my goal?
  4. What is our team goal for the end of our sprint day?

The daily stand up runs on a few standard questions. The traditional questions are:

  • What did I accomplish yesterday?
  • What will I be doing today?
  • What obstacles are impeding my progress?

A couple of effects I see when using the above list are:

  • A lot of emphasis is placed on past activities rather than getting the most out of the day at hand.
  • Team members tell what they will be busy with, but not what they aim to complete.
  • Impediments are not related to daily goals.
  • There is no summary for the team relating to the sprint goal.

If you are experiencing the same issues, you could try the alternative questions. They worked for me, but any feedback is of course appreciated. Are you using other questions? Let me know your experience. You can use the PDF below to print out the questions for your scrum board.

STAND_EN


 

What Informs My Project Management View

Herding Cats - Glen Alleman - Fri, 10/10/2014 - 15:44

In a recent discussion (of sorts) about estimating - Not Estimating actually - I realized something that should have been obvious. I travel in a world not shared by the staunch advocates of #NoEstimates. They appear to be sole contributors. I came to this after reading Peter Kretzman's 3rd installment, where he re-quoted a statement by Ron Jeffries,

Even with clear requirements — and it seems that they never are — it is still almost impossible to know how long something will take, because we’ve never done it before.

This is a sole contributor or small team paradigm.

So let's pretend we work at PricewaterhouseCoopers (PWC) and we’re playing our roles - Peter as CIO adviser, me as Program Performance Management adviser. We've been asked by our new customer to develop a product from scratch, estimate the cost and schedule, and provide some confidence level that the needed business capabilities will be available on or before a date and at or below a cost. Why, you ask? Because that's in the business plan for this new product, and if they're late or overrun the planned cost, that will be a balance-sheet problem.

What would we do? Well, we'd start with the PWC resource management database – held by HR – and ask for people with “past performance experience” in the business domain and the problem domain. Our new customer did not “invent” a new business domain, so it's likely we'll find people who know what our new customer does for money. We’d look to see whether, among the 195,433 people in the database who work for PWC worldwide, there is someone, somewhere, who knows what the customer does for money and what kinds of business capabilities this new system needs to provide. If there is no one, then we'd look in our database of 10,000 or so partner relationships to find someone.

If we found no one who knows the business and the needed capabilities, we’d no bid.

This notion of “I've been asked to do something that’s never been done before, so how can I possibly estimate it?” really means “I'm doing something I’ve never done before.” And since I’m a sole contributor, the population of experience in doing this new thing for the new customer is ONE - me. So since I don't know how the problem has been solved in the past, I can't possibly know how to estimate the cost, schedule, and needed capabilities. And of course I'm absolutely correct to say that new development with unknown requirements can't be estimated - because those requirements are unknown to me, but may be known to another. In a population of 195,000 other people in our firm, however, I'm no longer alone in my quest to come up with an estimate.

So the answer to the question, “what if we encounter new and unknown needs, how can we estimate?” is actually a core problem for the sole contributor or small team. It'd be rare that the sole contributor or small team would have encountered the broad spectrum of domains and technologies needed to establish the necessary Reference Classes to address this open-ended question. This is not the fault of the sole contributor. It is simply the situation of small numbers versus large numbers.

This is the reason the PWCs of the world exist. They get asked to do things the sole contributors never have an opportunity to see.

Categories: Project Management

The Last Assignment

Phil Trelford's Array - Fri, 10/10/2014 - 08:53

Back in November last year, my eldest son and I popped over to the Insomnia Gaming Festival in Telford to take part in a game jam organised by Global GameCraft. (Today I bumped into the source again on a USB stick.)

The theme for the day was “The Last Assignment”. We decided to go with a text based adventure game loosely based on the Dirty Harry movie.

With just 7 hours on the clock we managed to put together quite a fun adventure game with ambient sound and graphics:

The Last Assignment - Start Screen

and picked up the prize for best storyline!

The Last Assignment - Insomnia Telford

Given the time constraints I decided to build the dialogue as a simple state machine using coroutines. In this scenario C# was my go-to language, as it provides basic iterator block support and a first-class goto statement.

By building the game dialogue as a simple state machine I was able to test it from the start as a console app and later easily integrate it into a graphical environment.

Here’s the state machine for the rookie scene:

public static IEnumerable<State> Rookie()
{
   yield return new State(
         "One way or another this will be your last assignment.\r\n" +
         "Just 2 weeks left on the force before you retire.\r\n" +
         "Back at the police station",
         "You get a black coffee and a donut",
         "A chai latte and a cup cake") { Theme="70s reflective;bullpen"};
   if (Choice.Taken == 2) goto imposter;
   yield return new State(
         "Your new partner introduces himself.",
         "You give him a stern look",
         "Ignore him") { Theme = "70s reflective;bullpen" };
   yield return new State(
         "\"Why do they call ya 'Dirty Harry'?\"",
         "Make up your own mind kid",
         "Turn up your eye brow"
         ) { Theme = "70s reflective;bullpen" };
   yield break;
imposter:
   Game.Ended = true;
   yield return new State(
         "You have been exposed as an imposter.\r\n" +
         "Cops don't chai latte, keep it real!")
         { Theme = "end game mp3;bullpen" };
}

 

which looked like this:

Rookie Scene

If you fancy having a play, the source for the game as a console app is available here:

Have fun!

Categories: Programming

The LGPL on Android

Xebia Blog - Fri, 10/10/2014 - 08:11

My client had me code review an Android app built for them by a third party. As part of my review, I checked the licensing terms of the open source libraries that it used. Most were using Apache 2.0 without a NOTICE file. One was using the GNU Lesser General Public License (LGPL).

My client has commercial reasons to avoid Copyleft-style licenses and so I flagged the library as unusable. The supplier understandably was not thrilled about the rework that implied and asked for an explanation and ideally some way to make it work within the license. Looking into it in more detail, I'm convinced that if you share my client's concerns, then there is no way to use LGPL licensed code on Android. Here's why I believe this to be the case.

The GNU LGPL

When I first encountered the LGPL years ago, it was explained to me as “the GPL, without the requirement to publish your source code”. The actual license terms turn out to be a bit more restrictive. The LGPL is an add-on to the full GPL that weakens (only) the restrictions to how you license and distribute your work. These weaker restrictions are in section 4.

Here's how I read that section:

You may convey a Combined Work under terms of your choice that […] if you also
do each of the following:
  a) [full attribution]
  b) [include a copy of the license]
  c) [if you display any copyright notices, you must mention the licensed Library]
  d) Do one of the following:
    0) [provide means for the user to rebuild or re-link your application against
       a modified version of the Library]
    1) [use runtime linking against a copy already present on the system, and allow
       the user to replace that copy]
  e) [provide clear instructions how to rebuild or re-link your application in light
     of the previous point]

The LGPL on Android

An Android app can use two kinds of libraries: Java libraries and native libraries. Both run into the same problem with the LGPL.

The APK file format for Android apps is a single, digitally signed package. It contains native libraries directly, while Java libraries are packaged along with your own bytecode into the dex file. Android has no means of installing shared libraries into the system outside of your APK, ruling out (d)(1) as an option. That leaves (d)(0). Making the library replaceable is not the issue. It may not be the simplest thing, but I'm sure there is some way to make it work for both kinds of libraries.

That leaves the digital signature, and here's where it breaks down. Any user who replaces the LGPL licensed library in your app will have to digitally sign their modified APK file. You can't publish your code signing key, so they have to sign with a different key. This breaks signature compatibility, which breaks updates and custom permissions and makes shared preferences and expansion files inaccessible. It can therefore be argued that such an APK file is not usable in lieu of the original app, thus violating the license.

In short

The GNU Lesser General Public License ensures that a user has freedom to modify a so licensed library used by your application, even if your application is itself closed source. Android's app packaging and signature requirements are such that I believe it is impossible to comply with the license when using an LGPL licensed library in a closed source Android app.

Team Backlogs

If it isn't on the list, it won't get done.


The team backlog comprises everything a team might get done, because if it is not on the list, it can’t be prioritized and taken into a sprint. Simply put, if it isn’t on the list there is no chance it will get done. The attributes of the team backlog are:

  1. The backlog includes all potential work items of the team. Work items will include user stories, non-functional requirements, technical items, architectural requirements, issues and risks. Note: All of these can use the user story construct for communication and presentation.
  2. Backlog items represent possibilities. Until the team’s product owner prioritizes an item and the team accepts the item into a sprint, it is merely an opportunity that may get done.
  3. Backlog items should be estimated. An estimate reflects the level of effort required to complete the work item based on the definition of done and the work item’s acceptance criteria. Until it is accepted into the sprint, having an estimate is not a sign that the work item has been committed to be delivered.
  4. The product owner owns the backlog. The product owner prioritizes the backlog and therefore controls what the team works on (the product owner is a member of the team). In programs, the product owner helps to channel the program’s priorities.
  5. The backlog is groomed. Grooming includes ensuring that backlog items are:
    • Well formed,
    • Understood,
    • Estimated,
    • Prioritized, and
    • When necessary removed.

Disclaimer: I am a list person. I have lots of them. I have not descended into lists of lists, albeit I do have a folder in Evernote of lists. Backlogs are the queue of work that a team will draw from. Backlogs, like any list, can feel like an end in their own right; almost a form of tyranny. I have heard it said, “What is the reward for completing a user story? Another user story.” The backlog can seem relentless. However, grooming, prioritization and sprint planning ensure that the team works on what is important and valuable. The backlog is merely a tool to make sure nothing is inadvertently forgotten or ignored.


Categories: Process Management

Level One: The Intro Stage

Coding Horror - Jeff Atwood - Thu, 10/09/2014 - 23:21

Way back in 2007, before Stack Overflow was a glint in anyone's eye, I called software development a collaborative game. And perhaps Stack Overflow was the natural outcome of that initial thought – recasting online software development discussion into a collaborative game where the only way to "win" is to learn from each other.

That was before the word gamification existed. But gamification is no longer the cool, hip concept it was back in 2011. Still, whether you call yourself a "gamer" or not, whether you believe in "gamification" or not, five years later you're still playing the world's largest multiplayer game.

In fact, you're playing it right now.

One of the most timeless aspects of games is how egalitarian they are, how easy it is for anyone to get started. Men, women, children — people love games because everyone can play along. You don't have to take classes or go to college or be certified: you just play. And this is, not so incidentally, how many of the programmers I know came to be programmers.

Do you know anyone that bought the video game Halo, or Myst, then proceeded to open the box and read the manual before playing the game? Whoa there guys, we can't play the game yet, we gotta read these instructions first! No, they stopped making manuals for games a long time ago, unless you count the thin sheet of paper that describes how to download / install the game on your device. Because they found out nobody reads the manual.

The project I’m working on is critical, but it has only about 3 to 4 users, most of whom are already familiar the application. One of the users even drives the design. The manual I’m writing, which is nearly 200 pages, is mostly a safety measure for business continuity planning. I don’t expect anyone will ever read it.

It’s a project I managed to procrastinate for months, working on other projects, even outside the scope of my regular assignments. The main deterrent, I believe, was my perception that no one needed the manual. The users seemed to be getting along fine without it.

And so as the year ticked to a close, instead of learning more about Mediawiki and screencasting and After Effects, I spent my time updating a 200-page manual that I don’t think anyone will ever read. It will be printed out, three-hole punched, and placed in a binder to collect dust on a shelf.

I guess that's not surprising for games. Games are supposed to be fun, and reading manuals isn't fun; it's pretty much the opposite of fun. But it is also true for software in general. Reading manuals isn't work, at least, it isn't whatever specific thing you set out to do when you fired up that bit of software on your phone, tablet, or laptop.

Games have another clever trick up their sleeve, though. Have you ever noticed that in most of today's games, the first level is kind of easy. Like… suspiciously easy?

That's because level one, the intro stage, isn't really part of the game. It's the manual.

As MegaMan X illustrates, manuals are pointless when we can learn about the game in the best and most natural way imaginable: by playing the actual game. You learn by doing, provided you have a well designed sandbox that lets you safely experiment as you're starting out in the game.

(The above video does contain some slightly NSFW language, but it is utterly brilliant, applies to every app, software and website anyone has ever built, and I strongly recommend watching it all.)

This same philosophy applies to today's software and websites. Don't bother with all the manuals, video introductions, tutorials, and pop-up help dialogs. Nobody's going to read that stuff, at least, not the people who need it.

Instead, follow the lesson of MegaMan: if you want to teach people about your software, consider how you can build a great intro stage and let them start playing with it immediately.

Categories: Programming