Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Architecture

Micro services architecture principle #1: Each Micro service delivers a single complete business capability

Xebia Blog - Sat, 05/23/2015 - 21:13

Micro services are a hot topic, and because of that a lot of people are saying a lot of things. To help organizations make the best of this new architectural style, Xebia has defined a set of principles that we feel should be applied when implementing a Micro service Architecture. Over the next couple of days we will cover each of these principles in more detail in a series of blog posts.
This blog explains why a Micro service should deliver a complete business capability.

A complete business capability is a process that can be completed from start to finish without interruptions or excursions to other services. This means that a business capability should not depend on other services to complete its work.
If a process in a micro service depends on other micro services, we end up in the dependency hell that ESBs introduced: in order to service a customer request we need many other services, so if one of them fails everything stops. A more robust solution is to define a service around a process that makes sense to a user.

Ordering a book in a web shop is an example. This process starts with the selection of a book and ends with the creation of an order. Actually fulfilling the order is a different process that lives in its own service. The fulfillment process might run right after the order process, but it doesn't have to. If the customer ordered the PDF version of a book, fulfillment may complete right away; if the order was for the print version, all the order service can promise is to ask shipping to send the book. Separating these two processes into different services allows us to make choices about the way each process is completed, making sure that a problem or delay in one service has no impact on the others.

So, building a micro service such that it does a single thing well, without interruptions or waiting time, is the foundation of a robust architecture.
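
To make this concrete, here is a minimal sketch of the decoupling (Python, with hypothetical names; an in-process queue stands in for a durable message broker): the order service completes its capability on its own and only publishes a fact, which the fulfillment service consumes whenever it is able to.

import queue

event_bus = queue.Queue()  # stand-in for a durable message broker

def place_order(customer_id, book_id, book_format):
    """Order service: a complete business capability, no calls to other services."""
    order_id = "order-%s-%s" % (customer_id, book_id)
    # ... persist the order in the order service's own data store ...
    event_bus.put({"type": "OrderPlaced", "order_id": order_id, "format": book_format})
    return order_id  # the customer gets a confirmed order immediately

def fulfillment_worker():
    """Fulfillment service: runs on its own schedule; an outage here never blocks ordering."""
    while not event_bus.empty():
        event = event_bus.get()
        if event["format"] == "pdf":
            print(event["order_id"], "-> deliver download link now")
        else:
            print(event["order_id"], "-> ask shipping to send the book")

order_id = place_order("c42", "b7", "print")
fulfillment_worker()

If the fulfillment worker is down, orders still complete; the events simply wait until it comes back.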

Are You an Integration Specialist?

Some people specialize in a narrow domain.  They are called specialists because they focus on a specific area of expertise, and they build skills in that narrow area.

Rather than focus on breadth, they go for depth.

Others focus on the bigger picture or connecting the dots.  Rather than focus on depth, they go for breadth.

Or do they?

It actually takes a lot of knowledge and depth to be effective at integration and “connecting the dots” in a meaningful way.  It’s like being a skilled entrepreneur or a skilled business developer.   Not just anybody who wants to generalize can be effective.  

True integration specialists are great pattern matchers and have deep skills in putting things together to make a better whole.

I was reading the book Business Development: A Market-Oriented Perspective where Hans Eibe SĂžrensen introduces the concept of an Integrating Generalist and how they make the world go round.

I wrote a post about it on Sources of Insight:

The Integrating Generalist and the Art of Connecting the Dots

Given the description, I’m not sure which is better, the Integration Specialist or the Integrating Generalist.  The value of the Integrating Generalist is that it breathes new life into people who want to generalize so that they can put the bigger puzzle together.  Rather than de-value generalists, this label puts a very special value on people who are able to fit things together.

In fact, the author claims that it’s Integrating Generalists that make the world go round.

Otherwise, there would be a lot of great pieces and parts, but nothing to bring them together into a cohesive whole.

Maybe that’s a good metaphor for the Integrating Generalist.  While you certainly need all the parts of the car, you also need somebody to make sure that all the parts come together.

In my experience, Integrating Generalists are able to help shape the vision, put the functions that matter in place, and make things happen.

I would say the most effective Program Managers I know do exactly that.

They are the Oil and the Glue for the team because they are able to glue everything together, and, at the same time, remove friction in the system and help people bring out their best, towards a cohesive whole.

It’s synergy in action, in more ways than one.

You Might Also Like

Anatomy of a High-Potential

E-Shape People, Not T-Shape

Generalists vs. Specialists

Categories: Architecture, Programming

Stuff The Internet Says On Scalability For May 22nd, 2015

Hey, it's HighScalability time:


Where is the World Brain? San Fernando marshes in Spain (by Cristobal Serrano)
  • 569TB: 500px total data transfer per month; 82% faster: elite athletes' brains; billions and millions: Facebook's graph store read and write load; 1.3 billion: daily Pinterest spam fighting events; 1 trillion: increase in processing power performance over six decades; 5 trillion: Facebook pub-sub messages per day
  • Quotable Quotes:
    • Silicon Valley: “Tell me the truth,” Gavin demands of a staff member. “Is it Windows Vista bad? Zune bad?” “I’m sorry,” the staffer tells Gavin, “but it’s Apple Maps bad!”
    • @garybernhardt: Reminder to people whose "big data" is under a terabyte: servers with 1 TB RAM can be had about $20k. Your data set fits in RAM.
    • @epc: μServices and AWS Lambda are this year’s containers and Docker at #Gluecon
    • orasis: So by this theory the value of a tech startup is the developer's laptops and the value of a yoga studio is the loaner mats.
    • @ajclayton: An average attacker sits on your network for 229 days, collecting information. @StephenCoty #gluecon
    • @mipsytipsy: people don't *cause* problems, they trigger latent conditions that make failures more likely.  @allspaw on post mortems #srecon15europe
    • @pas256: The future of cloud infrastructure is a secure, elastically scalable, highly reliable, and continuously deployed microservices architecture
    • Kevin Marks: The Web is the network
    • @cdixon: We asked for flying cars and all we got was the entire planet communicating instantly via $34 pocket supercomputers 
    • @ajclayton: Uh oh, @pas256 just suggested that something could be called a "nanoservice"...microservices are already old. #gluecon
    • @jamesurquhart: A sign that containers are interim step? Pkging procs better than pkging servers, but not as good as pkging functs? 
    • @markburgess_osl: Let's rename "immutable infrastructure" to "prefab/disposable" infrastructure, to decouple it from the false association with functionalprog
    • @Beaker: Key to startup success: solve a problem that has been solved before but was constrained due to platform tech cost or non-automated ops scale
    • @mooreds: 10M req/month == $45 for lambda.  Cheap. -- @pas256 #gluecon
    • @ajclayton: Microservices "exist on all points of the hype cycle simultaneously" @johnsheehan #gluecon
    • @oztalip: "Treat web server as a library not as a container, start it inside your application, not the other way around!" -@starbuxman #GOTOChgo
    • @sharonclin: If a site doesn't load in 3 sec, 57% abandon, 80% never return.  @krbenedict #m6xchange #Telerik
    • QuirksMode: Tools don’t solve problems any more, they have become the problem.
    • @rzazueta: Was considering taking a shot every time I saw "Microservices" on the #gluecon hashtag. But I've already gone through two livers.
    • @MariaSallis: "If you don't invest in infrastructure, don't invest in microservices" @johnsheehan #gluecon
    • Brian Gallagher: If the world devolved into a single cloud provider, there would be no need for Cloud Foundry.
    • @b6n: startup idea: use technology from the 70s.
    • Stephen Hawking: The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge
    • @aneel: "Monolithic apps have unlimited invisible internal dependencies" -@adrianco #gluecon
    • @windley: microservices don’t reduce complexity, they move it around, from dev to ops. #gluecon
    • @paulsbruce: "When everyone has to be an expert in everything, that doesn't scale." @dberkholz @451research #gluecon
    • @oamike: I didn’t do SOA right, I didn’t do REST right, I’m sure as hell not going to do micro services right. #gluecon @kinlane
    • Urs Hölzle: My biggest worry is that regulation will threaten the pace of innovation.
    • @mccrory: There has been an explosion in managed OpenStack solutions - Platform9, MetaCloud, BlueBox
    • @viktorklang: Remember that you heard it here first, CPU L1 cache is the new disk.

  • This is more a measure of the fecundity of the ecosystem than an indication of disease. By its very nature, the magic creation machine that is Silicon Valley must create both wonder and bewilderment. Silicon Valley Is a Big Fat Lie: That gap between the Silicon Valley that enriches the world and the Silicon Valley that wastes itself on the trivial is widening daily.

  • In a liquidity crisis all those promises mean nothing. RadioShack Sold Your Data to Pay Off Its Debts.

  • YouTube has to work at it too. To Take On HBO And Netflix, YouTube Had To Rewire Itself: All of the things that InnerTube has enabled—faster iteration, improved user testing, mobile user analytics, smarter recommendations, and more robust search—have paid off in a big way. As of early 2015, YouTube was finally becoming a destination: On mobile, 80% of YouTube sessions currently originate from within YouTube itself.

  • If you aren't doing web stuff, do you really need to use HTTP? Do you really know why you prefer REST over RPC? There's no reason for API requests to pass through an HTTP stack.

  • If scaling is specialization and the cloud is the computer then why are we still using TCP/IP between services within a datacenter? Remote Direct Memory Access is fast. FaRM: Fast Remote Memory: FaRM’s per-machine throughput of 6.3 million operations per second is 10x that reported for Tao. FaRM’s average latency at peak throughput was 41µs which is 40–50x lower than reported Tao latencies. 

  • MigratoryData with 10 Million Concurrent Connections on a single commodity server. Lots of details on how the benchmark was run and the various configuration options. CPU usage under 50% (with spikes), memory usage was predictable, network traffic was  0.8 Gbps for 168,000 messages per second, 95th Percentile Latency: 374.90 ms. Next up? C100M.

  • Does anyone have a ProductHunt invite that they would be willing to share with me?

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Database Scaling Redefined: Scaling Demanding Queries, High Velocity Data Modifications and Fast Indexing All At Once for Big Data

This is a guest post by Cihan Biyikoglu, Director of Product Management at Couchbase.

Question: A few million people are out looking for a setup to efficiently live and interact. What is the most optimized architecture they can use?

  1. Build one giant high-rise for everyone,
  2. Build many single-family homes OR
  3. Build something in between?

Schools, libraries, retail stores, corporate HQs and homes are all there to optimize a variety of interactions. Sizes of groups and types of exchange vary drastically… It turns out that what we have chosen to do is build all of the above. To optimize different interactions, different architectures make sense.

While high-rises can be effective for interactions among a high density of people on a small amount of land, it is impractical to build 500-story buildings, and it is hard to add or remove floors as you need them. So high-rises feel awfully like scaling up: a cluster of processors communicating over fast memory to compute quickly, but with a limited scale ceiling and limited elasticity.

As your home, the single-family architecture works great: a nice backyard to play in and private space for family dinners... You may need to get in your car to interact with other families, BUT it is easy to build more single-family houses, so you get easy elasticity and scale. The single-family structure feels awfully like scaling out, doesn't it? A cluster of commodity machines that communicate over slower networks and come with great elasticity.

“How does this all relate to database scalability?” you ask…

Categories: Architecture

Paper: FlashGraph: Processing Billion-Node Graphs on an Array of Commodity SSDs

It's amazing what you can accomplish these days on a single machine using SSDs and smart design. In the paper FlashGraph: Processing Billion-Node Graphs on an Array of Commodity SSDs they:

demonstrate that FlashGraph is able to process graphs with billions of vertices and hundreds of billions of edges on a single commodity machine.

The challenge is SSDs are a lot slower than RAM:

The throughput of SSDs is an order of magnitude less than that of DRAM, and the I/O latency is multiple orders of magnitude higher. Also, I/O performance is extremely non-uniform and needs to be localized. Finally, high-speed I/O consumes many CPU cycles, interfering with graph processing.

Their solution exploits caching, parallelism, smart scheduling and smart placement algorithms:

We build FlashGraph on top of a user-space SSD file system called SAFS [32] to overcome these technical challenges. The set-associative file system (SAFS) refactors I/O scheduling, data placement, and data caching for the extreme parallelism of modern NUMA multiprocessors. The lightweight SAFS cache enables FlashGraph to adapt to graph applications with different cache hit rates. We integrate FlashGraph with the asynchronous user-task I/O interface of SAFS to reduce the overhead of accessing data in the page cache and memory consumption, as well as overlapping computation with I/O.

The result: FlashGraph performs at up to 80% of the speed of its in-memory implementation:

We observe that in many graph applications a large SSD array is capable of delivering enough I/Os to saturate the CPU. This suggests the importance of optimizing for CPU and RAM in such an I/O system. It also suggests that SSDs have been sufficiently fast to be an important extension for RAM when we build a machine for large-scale graph analysis applications.
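
The paper's code is not shown here, but the core idea — keep the CPU busy by processing vertices whose data has already arrived while other SSD reads are still in flight — can be sketched generically. A toy illustration in Python (not FlashGraph's or SAFS's actual API):

from concurrent.futures import ThreadPoolExecutor, as_completed
import time

def read_page(vertex_id):
    time.sleep(0.01)                              # stand-in for an SSD read of an edge list
    return vertex_id, list(range(vertex_id % 5))  # fake adjacency list

def process(vertex_id, edges):
    return vertex_id, sum(edges)                  # stand-in for per-vertex computation

with ThreadPoolExecutor(max_workers=8) as pool:
    pending = [pool.submit(read_page, v) for v in range(100)]  # issue I/O asynchronously
    for completed in as_completed(pending):   # compute as soon as data lands,
        vertex, edges = completed.result()    # while other reads are still in flight
        process(vertex, edges)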

Abstract: 

Categories: Architecture

Do you have a shared vocabulary?

Coding the Architecture - Simon Brown - Mon, 05/18/2015 - 16:57

"This is a component of our system", says one developer, pointing to a box on a diagram labelled "Web Application". Next time you're sitting in an conversation about software design, listen out for how people use terms like "component", "module", "sub-system", etc. We can debate whether UML is a good notation to visually communicate the design of a software system, but we have a more fundamental problem in our industry. That problem is our vocabulary, or rather, the lack of it.

Notation

I've been running my software architecture sketching exercises in various formats for nearly ten years, and thousands of people have participated. The premise is very simple - here are some requirements, design a software solution and draw some pictures to illustrate the design. The range of diagrams I've seen, and still see, is astounding. The percentage of people who choose to use UML is tiny, with most people choosing to use an informal boxes and lines notation instead. With some simple guidance, the notational aspects of the diagrams are easy to clean up. There's a list of tips in my book that can be summarised with this slide from my workshop.

Some tips for effective sketches

Abstractions

What's more important though is the set of abstractions used. What exactly are people supposed to be drawing? How should they think about, describe and communicate the design of their software system? The primary aspect I'm interested in is the static structure. And I'm interested in the static structure from different levels of abstraction. Once this static structure is understood and in place, it's easy to supplement it with other views to illustrate runtime/behavioural characteristics, infrastructure, deployment models, etc.

In order to get to this point though, we need to agree upon some vocabulary. And this is the step that is usually missed during my workshops. Teams charge headlong into the exercise without having a shared understanding of the terms they are using. I've witnessed groups of people having design discussions using terms like "component" where they are clearly not talking about the same thing. Yet everybody in the group is oblivious to this. For me, the answer is simple. Each group needs to agree upon the vocabulary, terminology and abstractions they are going to use. The notation can then evolve.

My Simple Sketches for Diagramming Your Software Architecture article explains why I believe that abstractions are more important than notation. Maps are a great example of this. Two maps of the same location will show the same things, but they will often use different notations. The key to understanding these different maps is exactly that - a key tucked away in the corner of each map somewhere. I teach people my C4 model, based upon a simple set of abstractions (software systems, containers, components and classes), which can be used to describe the static structure of a software system from a number of different levels of abstraction. A common set of abstractions allows you to have better conversations and easily compare solutions. In my workshops, the notation people use to represent this static structure is their choice, with the caveat that it must be self-describing and understandable by other people without explanation.

Next time you have a design discussion, especially if it's focussed around some squiggles on a whiteboard, stop for a second and take a step back to make sure that everybody has a shared understanding of the vocabulary, terminology and abstractions that are being used. If this isn't the case, take some time to agree upon it. You might be surprised with the outcome.
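
To see why agreed abstractions matter more than notation, it helps to write a static structure down in the plainest form possible. Here is a hypothetical C4-style model as simple data (a Python sketch with invented names); any notation that preserves these levels will do:

# Software system -> containers -> components (classes would hang off components).
model = {
    "software_system": "Internet Banking System",
    "containers": [
        {
            "name": "Web Application",
            "technology": "Java and Spring MVC",
            "components": [
                {"name": "AccountsController", "responsibility": "serves account pages"},
                {"name": "StatementGenerator", "responsibility": "builds PDF statements"},
            ],
        },
        {"name": "Database", "technology": "MySQL", "components": []},
    ],
}

# Once the vocabulary is agreed, "component" means exactly one thing:
for container in model["containers"]:
    for component in container["components"]:
        print(model["software_system"], "/", container["name"], "/", component["name"])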

Categories: Architecture

How MySQL is able to scale to 200 Million QPS - MySQL Cluster

This is a guest post by Andrew Morgan, MySQL Principal Product Manager at Oracle.

MySQL Cluster logo

The purpose of this post is to introduce MySQL Cluster - which is the in-memory, real-time, scalable, highly available version of MySQL. Before addressing the incredible claim in the title of 200 Million Queries Per Second it makes sense to go through an introduction of MySQL Cluster and its architecture in order to understand how it can be achieved.
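
Before the deep dive, it is worth remembering what MySQL Cluster looks like from the application side: ordinary SQL sent to one of the cluster's SQL nodes, with the auto-sharding across data nodes hidden behind it. A minimal sketch in Python (hypothetical hostnames and schema, assuming a running cluster and the mysql-connector-python driver):

import mysql.connector

# Connect to one of the cluster's SQL (mysqld) front-end nodes.
conn = mysql.connector.connect(
    host="sql-node.example.com", user="app", password="secret", database="shop",
)
cur = conn.cursor()

# Tables on the NDB storage engine are partitioned across the data nodes.
cur.execute("CREATE TABLE IF NOT EXISTS kv (k INT PRIMARY KEY, v VARCHAR(64)) ENGINE=NDBCLUSTER")
cur.execute("INSERT INTO kv (k, v) VALUES (%s, %s) ON DUPLICATE KEY UPDATE v=VALUES(v)", (1, "hello"))
conn.commit()

# Primary-key reads like this one are the kind of operation the benchmark counts:
cur.execute("SELECT v FROM kv WHERE k = %s", (1,))
print(cur.fetchone())
conn.close()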

Introduction to MySQL Cluster
Categories: Architecture

Agile goes beyond Epic Levels

Xebia Blog - Fri, 05/15/2015 - 17:10

A snapshot from my personal backlog last week:

  • The Agile transformation at ING was front-page news in the Netherlands. This made us realize even more how epic this transformation and assignment actually are.
  • The Agile-built hydrogen race car from TU Delft set an official track record on the NĂŒrburgring. We're proud of our guys in Delft!
  • Hanging out with Boeing’s Agile champs at their facilities in Seattle, exchanging knowledge. Impressive and extremely fruitful!
  • Coaching the State of Washington on their groundbreaking Agile initiatives, together with my friend and fellow consultant from ScrumInc, Joe Justice.

One thing became clear to me after a week like this: something Agile is cookin’.  And it’s BIG!

In this blog I will explain why and how Agile will develop in the near future.

Introduction: what’s happening?

Humankind is currently facing the biggest era change since the 19th century. Our industries, education, technologies and society are simply no longer suited to today’s and tomorrow’s needs.  Some systems, like healthcare and the economy, are so broken that they should actually be reinvented.  Everything has just become too vulnerable and complex. Merely “lubricating the engine”, like quantitative easing of the economy, is no longer a sustainable solution. As Russell Ackoff said, you can only fix a system as a whole, not by fixing its parts separately.

This reinvention will only succeed when we are able to learn and adjust our systems very rapidly.  Agile, Lean and a different way of organizing ourselves can make this a reality.  Lean will provide us with the right tools to do exactly what’s needed, nothing more, nothing less.  But applying Lean only for efficiency purposes will not bring the innovation and creativity we need.  We also need an additional catalyst and engine: Agile. It provides us with the right mindset and tools to innovate, inspect and adapt very fast.  And finally, we must reorganize ourselves around cooperation, not directive command and control. That was useful in the industrial revolution, not in our modern, complex times.

Agile’s for everything

Contrary to what most people think, Agile is not only a software development tool. You can apply it to almost everything. For example, as Xebia consultants we've successfully coached Agile and Lean non-IT initiatives in marketing, innovation, education, automotive, aviation and non-profit organizations. It just simply works, and how: a productivity increase of 500% is no exception. But above all, team members and customers are much happier.

Agile's for everybody

At this moment, a lot of people are still unconsciously addicted to their patterns and unaware of the unlimited possibilities out there.  It’s like taking a walk in the forest: you can bring your own lunch like you always do, but there are delicious fruits out there for free!  Technology like 3D printing offers unlimited possibilities right on your desk, where only a few years ago you needed a complicated, million-dollar machine. The same goes for Agile. It’s open source and out there waiting for you. It will also help you get more out of all these awesome new developments!

The maturity of Agile explained

Until recently, most agile initiatives emerged bottom-up, but stalled on a misfit between Agile and (conventional) organizations.  Loads of software was produced but could not be brought to production, simply because the whole development chain was not Agile yet. Tools like TDD and Continuous Integration improved the situation significantly, but dependencies were still not managed properly most of the time.

Over the last couple of years, good scaled agile frameworks like LeSS and SAFe emerged. They manage dependencies better, but do not directly encourage the Agile mindset and the motivation of people.  In parallel, departments like HR, Control and Finance were struggling with Agile. A scaled agile framework would be implemented, but the hierarchical organization structure was not adjusted, creating a gap between fast-moving Agile teams and departments still hindered by non-Agile procedures, processes and systems.

Therefore, we see conventional organizations moving towards a more Agile, community-based model like Spotify, Google or Zappos.  ING is now transforming towards a similar organization model.

Predictions for the near future

My expectation is that Agile transformations will continue on a much wider scale.  For example, people will develop their own products in an agile fashion while using 3D printing.  Governments will use Agile and Holacracy to solve issues like reshaping the economic system together with society. Or, as I observed last week, the State of Washington government will keep using these techniques successfully to solve the challenges they're facing.

For me, it currently feels like the early nineties again, when the Internet emerged.  Back then I explained to many people that the Internet would be like electricity for them in the near future.  Most people laughed and stated it was just a new way of communicating.  The same now applies to the Agile mindset.   It's not just a hype or a corporate tool. It will reshape the world as we know it today.

Stuff The Internet Says On Scalability For May 15th, 2015

Hey, it's HighScalability time:


Stand atop a volcano and survey the universe.  (By Shane Black & Judy Schmidt)
  • 1 million: Airbnb's room inventory; 2 billion: Telegram messages sent daily; Two billion: photos shared daily on Facebook; 10,000: sensors in every Airbus wing
  • Quotable Quotes:
    • Silicon Valley: “We’re about shaving yoctoseconds off latency for every layer in the stack,” he said. “If we rent from a public cloud, we’re using servers that are, by definition, generic and unpredictable.”
    • @liviutudor: Netflix: approx 250 Cassandra clusters over 7,000+ server instances #cloud
    • @GreylockVC: "More billion-dollar marketplaces will be created in the next five years than in the previous 20." - @simonrothman 
    • CDIXON: Exponential growth curves in the “feels gradual” phase are deceptive. There are many things happening today in technology that feel gradual and disappointing but will soon feel sudden and amazing.
    • @badnima: OH: "The gossip protocol has reached its scaling limits"
    • marcosdumay: People get pretty excited every time physicists talk about information. The bottom line is that information manipulation is just Math, viewed by a different angle.
    • Bill Janeway: There's only one way to hedge against uncertainty in venture capital...cash and control. Enough cash that when something goes wrong you can buy time to figure out what is and assess what you can do about it. 
    • zylo4747's coworker: Where's the step about preparing to have all your plans crushed and rushing shit out the door as fast as possible?
    • Martin Fowler: don't even consider microservices unless you have a system that's too complex to manage as a monolith. 
    • @postwait: Ingesting, querying, & visualizing data isn't a monitoring system. It isn't even sufficient plumbing for such a system. #srecon15europe
    • @techsummitpr: "Up to date weather conditions? It's not a marvel from Google, it's a marvel from the National Weather Service." @timoreilly #techsummitpr
    • @sovereignfund: Verified as legit: The top 25 hedge fund managers earn more than all kindergarten teachers in U.S. combined. 
    • Adrian Colyer: In their evaluation, the authors found that mixing MapReduce and memcached traffic in the same network extended memcached latency in the tail by 85x compared to an interference free network. 
    • @BenedictEvans: US ecommerce revenues 1999: $12bn 2013: $219bn
    • Gregory Hickok: the brain samples the world in rhythmic pulses, perhaps even discrete time chunks, much like the individual frames of a movie. From the brain’s perspective, experience is not continuous but quantized.
    • David Bollier: There is no master inventory of commons. They can arise whenever a community decides it wishes to manage a resource in a collective manner, with a special regard for equitable access, use and sustainability.

  • What’s Next for Moore’s Law?: I predict that Intel's 10nm process technology will use Quantum Well FETs (QWFETs) with a 3D fin geometry, InGaAs for the NFET channel, and strained Germanium for the PFET channel, enabling lower voltage and more energy efficient transistors in 2016, and the rest of the industry will follow suit at the 7nm node.

  • Don't read How to Build a Unicorn From Scratch – and Walk Away with Nothing if you are easily frightened. Years of work down the drain. **chills** To walk safely through the Valley: Focus on terms, not just valuation; Build a waterfall; Don’t do bad business deals just to get investment capital; Understand the motivations of others; Understand your own motivation.

  • How do you build a real-time chat system? Scaling Secret: Real-time Chat. Goal was to handle 50,000 simultaneous conversations. Pusher was used to deliver messages. For a database Secret used Google App Engine’s High-Replication Datastore. Some nice details on the schema and other issues. Good thread on HN where the main point of contention is should an expensive service like Pusher be used to do something so simple? Usual arguments about wasting money vs displaying your hacker plumage. 

  • Under the hood: Facebook’s cold storage system. A top-to-bottom reengineering to save power for infrequently accessed photos. Yes, that's cool. Each cold storage datacenter uses 1/6th the energy of a normal datacenter while storing hundreds of petabytes of data. Erasure coding is used to store the data (a toy sketch of the idea follows below). Data is scanned every 30 days to recreate any lost data.  As capacity is added, data is rebalanced onto the new racks. No file system is used at all. 
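
As promised above, a toy illustration of the erasure-coding idea (Python, using a single XOR parity block — far simpler than the Reed-Solomon codes Facebook actually uses, but it shows why a lost block can be rebuilt instead of kept as a full replica):

from functools import reduce

def xor_blocks(blocks):
    # XOR equal-length blocks together, byte by byte.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"AAAA", b"BBBB", b"CCCC"]   # k = 3 data blocks
parity = xor_blocks(data)            # 33% storage overhead vs 200% for 3x replication

# Lose any single block -- say the second one -- and rebuild it from the survivors:
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data[1]
print("recovered:", rebuilt)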

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

To see the Future of the Apple Watch Just Go to Disneyland


by AreteStock

Removing friction. That’s what the Apple Watch is good at.

Many think watches are a category flop because they don’t have that obvious killer app. Like hot sauce, maybe a watch isn’t something you eat all by itself, but it gives whatever you sprinkle it on a little extra flavor?

Walk into your hotel, the system recognizes you, your room number pops up on your watch, you walk directly to your room and unlock it with your watch.

Walk into an airport, your flight displays on your watch along with directions to your terminal. To get on the plane you just flash your watch. On landing, walk to your rental car and unlock it with your watch.

A notification arrives that it’s time to leave for your meeting, traffic is bad, best get an early start.

While shopping you check with your partner if you need milk by talking directly through your watch. In the future you’ll just know if you need milk, but we’re not there yet.

You can do all these things with a phone. Google Now, for example. What the easy accessibility of the watch does in these scenarios is remove friction. It makes it natural for a complex backend system to talk to you about things it learns from you and your environment. Hiding in a pocket or a purse, a phone is too inconvenient and too general purpose. Your watch becomes a small custom viewport on to a much larger more connected world.

After developing my own watch extension, using other extensions, and listening to a lot of discussion on the subject, it’s clear the form factor of a watch is very limiting and will always be limiting. You’ll never be able to do much UI-wise on a watch. Even the cleverest programmers can only do so much with so little screen real estate and low resource usage requirements. Instagram and Evernote simply aren’t the same on a watch.

But that’s OK. Every device has what it does well. It takes time for users and developers to explore a new device space.

What a watch does well is not so much enable new types of apps, but plug people into much larger and smarter systems. This is where the friction is removed.

Re-enchanting the World Disneyland Style
Categories: Architecture

Sponsored Post: Learninghouse, OpenDNS, MongoDB, Internap, Aerospike, SignalFx, InMemory.Net, Couchbase, VividCortex, MemSQL, Scalyr, AiScaler, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • The Cloud Platform team at OpenDNS is building a PaaS for our engineering teams to build and deliver their applications. This is a well-rounded team covering software, systems, and network engineering; expect your code to cut across all layers, from the network to the application. Learn More

  • At Scalyr, we're analyzing multi-gigabyte server logs in a fraction of a second. That requires serious innovation in every part of the technology stack, from frontend to backend. Help us push the envelope on low-latency browser applications, high-speed data processing, and reliable distributed systems. Help extract meaningful data from live servers and present it to users in meaningful ways. At Scalyr, you’ll learn new things, and invent a few of your own. Learn more and apply.

  • UI Engineer - AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer, Infrastructure & Big Data - AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • 90 Days. 1 Bootcamp. A whole new life. Interested in learning how to code? Concordia St. Paul's Coding Bootcamp is an intensive, fast-paced program where you learn to be a software developer. In this full-time, 12-week on-campus course, you will learn either .NET or Java and acquire the skills needed for entry-level developer positions. For more information, read the Guide to Coding Bootcamp or visit bootcamp.csp.edu.

  • June 2nd – 4th, Santa Clara: Register for the largest NoSQL event of the year, Couchbase Connect 2015, and hear how innovative companies like Cisco, TurboTax, Joyent, PayPal, Nielsen and Ryanair are using our NoSQL technology to solve today’s toughest big data challenges. Register Today.

  • The Art of Cyberwar: Security in the Age of Information. Cybercrime is an increasingly serious issue both in the United States and around the world; the estimated annual cost of global cybercrime has reached $100 billion, with over 1.5 million victims per day affected by data breaches, DDOS attacks, and more. Learn about the current state of cybercrime and the cybersecurity professionals charged with combating it in The Art of Cyberwar: Security in the Age of Information, provided by Russell Sage Online, a division of The Sage Colleges.

  • MongoDB World brings together over 2,000 developers, sysadmins, and DBAs in New York City on June 1-2 to get inspired, share ideas and get the latest insights on using MongoDB. Organizations like Salesforce, Bosch, the Knot, Chico’s, and more are taking advantage of MongoDB for a variety of ground-breaking use cases. Find out more at http://mongodbworld.com/ but hurry! Super Early Bird pricing ends on April 3.
Cool Products and Services
  • Turn chaotic logs and metrics into actionable data. Scalyr replaces all your tools for monitoring and analyzing logs and system metrics. Imagine being able to pinpoint and resolve operations issues without juggling multiple tools and tabs. Get visibility into your production systems: log aggregation, server metrics, monitoring, intelligent alerting, dashboards, and more. Trusted by companies like Codecademy and InsideSales. Learn more and get started with an easy 2-minute setup. Or see how Scalyr is different if you're looking for a Splunk alternative or Loggly alternative.

  • Instructions for implementing Redis functionality in Aerospike. Aerospike Director of Applications Engineering, Peter Milne, discusses how to obtain the semantic equivalent of Redis operations, on simple types, using Aerospike to improve scalability, reliability, and ease of use. Read more.

  • SQL for Big Data: Price-performance Advantages of Bare Metal. When building your big data infrastructure, price-performance is a critical factor to evaluate. Data-intensive workloads with the capacity to rapidly scale to hundreds of servers can escalate costs beyond your expectations. The inevitable growth of the Internet of Things (IoT) and fast big data will only lead to larger datasets, and a high-performance infrastructure and database platform will be essential to extracting business value while keeping costs under control. Read more.

  • SignalFx: just launched an advanced monitoring platform for modern applications that's already processing 10s of billions of data points per day. SignalFx lets you create custom analytics pipelines on metrics data collected from thousands or more sources to create meaningful aggregations--such as percentiles, moving averages and growth rates--within seconds of receiving data. Start a free 30-day trial!

  • InMemory.Net provides a native .NET in-memory database for analysing large amounts of data. It runs natively on .NET, and provides native .NET, COM & ODBC APIs for integration. It also has an easy-to-use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex goes beyond monitoring and measures the system's work on your MySQL and PostgreSQL servers, providing unparalleled insight and query-level analysis. This unique approach ultimately enables your team to work more effectively, ship more often, and delight more customers.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required. http://aiscaler.com

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

How to improve Scrum team performance with Kanban

Xebia Blog - Mon, 05/11/2015 - 21:22
This blog post is a collaborative effort by Jeroen Willemsen and Jasper Sonnevelt.

In this post we will take a look at a real life example of a Scrum team transitioning to Kanban. We will address a few pitfalls and discuss how to circumvent those. With this we provide additional insights to advance your agile way of working.

The example

At the time we joined this project, three Scrum teams were working together to develop a large software product. The teams shared a Product Owner (PO) and backlog and had just released their MVP. Then bug reports started coming in, and the Scrum teams faced several problems:

  • The developers were making a lot of progress per sprint, so the PO had to put a lot of effort into the backlog to keep a sufficiently sized work inventory.
  • New bugs came to light as the product started to be adopted. Filing and fixing these bugs and monitoring the progress took a lot of effort from the PO, and solving each bug only in the next sprint slowed the team's response.
  • Prioritizing the bug fixing and the newly requested features became hard, as the PO had too many stakeholders to manage.

The Product Owner saw Kanban as a possible solution. He convinced the team to implement it in an effort to deal with these problems and to provide a better way of working.

The practices that were implemented included:

  • No more sprint replenishment rhythm (planning & backlog refinement);
  • A Work in Progress (WIP) limit.

As a result, the quality of the backlog started to deteriorate. Less information was written down about each story, and developers did not take the time to ask the PO the right questions. The WIP limit capped the amount of work the team could take on, which made them focus on the current, sometimes unclear and overly complex stories. Because of this, developers would keep working on their current tasks, making assumptions along the way. These assumptions could lead to conflicting insights, as developers collaborated on stories while being influenced by different stakeholders. This resulted in misalignment.

All of this resulted in a lower burn-down rate, less predictability and therefore nervous stakeholders. After a few weeks, the PO decided to stop the migration to Kanban and go back to Scrum.

However, the latter might not have been necessary for the team to be successful. After all, it is about how you implement Kanban, and a few key elements were missing here. In our experience, teams that are successful (would) have also implemented:

  • Forecasting based on historical data and simulations: Scrum teams use Planning Poker and velocity to predict the amount of work they will be able to do in a sprint. When the sprint cadence is let go, this becomes more difficult. Practices in the Kanban Method tell us that by using historical data in simulations, the same and probably better predictability can be achieved (see the sketch after this list).
  • Policies for prioritizing, WIP monitoring and handover criteria: Kanban demands a very high degree of discipline from all participants. Policies help here. This team would have benefited greatly from clearly defined policies about priorities and prioritization. For instance: who can (and is allowed to) prioritize? How can the team see what has the most priority? Are there different classes of service we can identify, and how do we treat those? The same holds for WIP limits and handover criteria. We always use a Definition of Ready and a Definition of Done in Scrum teams; keeping these in a Kanban team is a very good practice.
  • A feedback cycle to review data and process on a regular basis: where Scrum demands a retrospective after every sprint, Kanban does not explicitly define such a thing; it only dictates that you should implement feedback loops. When an organization starts implementing Kanban, it is key that it reviews the implementation, data and process on a regular basis. This is crucial during the first weeks: all participants will have to get used to the new way of working. Policies should be reviewed as well, as they might need adjustments to facilitate the team in having an optimized workflow.
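
To make that first element concrete, here is a minimal sketch of forecasting from historical throughput with a Monte Carlo simulation (Python, with made-up numbers): sample the team's own past weekly throughput many times and read the forecast off the resulting distribution, instead of estimating with poker cards.

import random

history = [4, 7, 3, 5, 6, 2, 5, 8]   # items finished per week, taken from the board
backlog, runs = 30, 10_000           # 30 items left; simulate 10,000 possible futures

def weeks_to_finish():
    done, weeks = 0, 0
    while done < backlog:
        done += random.choice(history)  # draw a plausible "next week" from history
        weeks += 1
    return weeks

samples = sorted(weeks_to_finish() for _ in range(runs))
print("50th percentile:", samples[runs // 2], "weeks")          # typical outcome
print("85th percentile:", samples[int(runs * 0.85)], "weeks")   # safer commitment

The 85th percentile is the kind of number a team can share with nervous stakeholders: in 85% of the simulated futures the work finishes within that many weeks, grounded in the team's own data rather than gut feeling.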

The hard thing about Kanban

What most people think of and see when talking about Kanban is introducing WIP limits on their board, adding columns (other than To Do, In Progress, Done) and no longer using sprints. And herein lies one of the problems. For organizations that are used to working with Scrum, letting go of sprints is a very unnatural thing to do. It feels like letting go of predictability and regular feedback. And instead of making the organization a bit better, people feel like they have just taken a step back.

The hard thing about Kanban is that it doesn't provide you with a clear-cut solution to your problems like Scrum does. Kanban is a method that helps you fix the problems in your organization in a way that best fits your context. For instance: it tells you to visualize a lot of things, but only provides examples of what you could visualize. Typically, teams and organizations visualize their process and work (types).

To summarize:

  • Kanban uses the current process and doesn't enforce a process of its own. It therefore demands a higher degree of discipline, both from the team members and from the rest of the organization. If you are not aware of this, the introduction of Kanban will only lead to chaos.
  • Kanban doesn't say anything about (To Do column) replenishment frequency or demo frequency.

Recommendations:

  • Implement forecasting based on historical data and simulations; policies for prioritizing, WIP limits and handover criteria; and a feedback cycle to review data and process on a regular basis.
  • Define clear policies about how to collaborate. This will help create transparency and predictability.
  • The Scrum ceremonies happen in a rhythm that is very easy for stakeholders to understand and learn. Keep that in mind when designing Kanban policies.

Designing for Scale - Three Principles and Three Practices from Tapad Engineering

This is a guest post by Toby Matejovsky, Director of Engineering at Tapad (@TapadEng).

Here at Tapad, scaling our technology strategically has been crucial to our immense growth. Over the last four years we’ve scaled our real-time bidding system to handle hundreds of thousands of queries per second. We’ve learned a number of lessons about scalability along that journey.

Here are a few concrete principles and practices we’ve distilled from those experiences:

  • Principle 1: Design for Many
  • Principle 2: Service-Oriented Architecture Beats Monolithic Application
  • Principle 3: Monitor Everything
  • Practice 1: Canary Deployments
  • Practice 2: Distributed Clock
  • Practice 3: Automate To Assist, Not To Control
Principle 1: Design for Many
Categories: Architecture

Understanding the 'sender' in segues and using it to pass data to another view controller

Xebia Blog - Fri, 05/08/2015 - 22:59

One of the downsides of using segues in storyboards is that you often still need to write code to pass on data from the source view controller to the destination view controller. The prepareForSegue(_:sender:) method is the right place to do this. Sometimes you need to manually trigger a segue by calling performSegueWithIdentifier(_:sender:), and it's there you usually know what data you need to pass on. How can we avoid adding extra state variables in our source view controller just for passing on data? A simple trick is to use the sender parameter that both methods have.

The sender parameter is normally used by storyboards to indicate the UI element that triggered the segue, for example a UIButton when pressed or a UITableViewCell that triggers the segue by selecting it. This allows you to determine what triggered the segue in prepareForSegue:sender:, and based on that (and of course the segue identifier) take some actions and configure the destination view controller, or even determine that it shouldn't perform the segue at all by returning false in shouldPerformSegueWithIdentifier(_:sender:).

When it's not possible to trigger the segue from a UI element in the Storyboard, you need to use performSegueWithIdentifier(_:sender:) instead to manually trigger it. This might happen when no direct user interaction should trigger the action of some control that was created in code. Maybe you want to execute some additional logic when pressing a button and after that perform the segue. Whatever the situation is, you can use the sender argument to your benefit. You can pass in whatever you may need in prepareForSegue(_:sender:) or shouldPerformSegueWithIdentifier(_:sender:).

Let's have a look at some examples.

(Image: storyboard with the two example view controllers)

Here we have two very simple view controllers. The first has three buttons for different colors. When tapping on any of the buttons, the name of the selected color will be put on a label and it will push the second view controller. The pushed view controller will set its background color to the color represented by the tapped button. To do that, we need to pass on a UIColor object to the target view controller.

Even though this could be handled by creating 3 distinct segues from the buttons directly to the destination view controller, we chose to handle the button taps ourselves and then trigger the segue manually.

You might come up with something like the following code to accomplish this:

import UIKit

class ViewController: UIViewController {

    @IBOutlet weak var label: UILabel!

    // Extra state, kept around only so prepareForSegue(_:sender:) can read it later.
    var tappedColor: UIColor?

    @IBAction func tappedRed(sender: AnyObject) {
        label.text = "Tapped Red"
        tappedColor = UIColor.redColor()
        performSegueWithIdentifier("ShowColor", sender: sender)
    }

    @IBAction func tappedGreen(sender: AnyObject) {
        label.text = "Tapped Green"
        tappedColor = UIColor.greenColor()
        performSegueWithIdentifier("ShowColor", sender: sender)
    }

    @IBAction func tappedBlue(sender: AnyObject) {
        label.text = "Tapped Blue"
        tappedColor = UIColor.blueColor()
        performSegueWithIdentifier("ShowColor", sender: sender)
    }

    // Called for every segue leaving this view controller, so filter on the identifier.
    override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
        if segue.identifier == "ShowColor" {
            if let colorViewController = segue.destinationViewController as? ColorViewController {
                colorViewController.color = tappedColor
            }
        }
    }

}

class ColorViewController: UIViewController {

    var color: UIColor?

    override func viewDidLoad() {
        super.viewDidLoad()

        view.backgroundColor = color
    }

}

We created a state variable called tappedColor to keep track of the color that needs to be passed on. It is set in each of the action methods before calling performSegueWithIdentifier("ShowColor", sender: sender) and then read again in prepareForSegue(_:sender:) so we can pass it on to the destination view controller.

The action methods will have the tapped UIButtons set as the sender argument, and since that's the actual element that initiated the action, it makes sense to set that as the sender when performing the segue. So that's what we do in the above code. But since we don't actually use the sender when preparing the segue, we might as well pass on the color directly instead. Here is a new version of the ViewController that does exactly that:

import UIKit

class ViewController: UIViewController {

    @IBOutlet weak var label: UILabel!

    @IBAction func tappedRed(sender: AnyObject) {
        label.text = "Tapped Red"
        performSegueWithIdentifier("ShowColor", sender: UIColor.redColor())
    }

    @IBAction func tappedGreen(sender: AnyObject) {
        label.text = "Tapped Green"
        performSegueWithIdentifier("ShowColor", sender: UIColor.greenColor())
    }

    @IBAction func tappedBlue(sender: AnyObject) {
        label.text = "Tapped Blue"
        performSegueWithIdentifier("ShowColor", sender: UIColor.blueColor())
    }

    override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
        if segue.identifier == "ShowColor" {
            if let colorViewController = segue.destinationViewController as? ColorViewController {
                colorViewController.color = sender as? UIColor // the sender is the data we passed
            }
        }
    }

}

This allows us to get rid of our extra tappedColor variable.

It might seem to (and perhaps it does) abuse the sender parameter, so use it with care and only where appropriate. Be aware of the consequences: if some other code or some element in a storyboard triggers the same segue (i.e. with the same identifier), the sender might just be a UI element instead of the object you expected, which will lead to unexpected results and perhaps even crashes when you force-cast the sender to something it's not.

You can find the sample code, in the form of an Xcode project, at https://github.com/lammertw/SegueColorSample.

Stuff The Internet Says On Scalability For May 8th, 2015

Hey, it's HighScalability time:


Not spooky at all. A 1,000-robot self-organizing flash mob.
  • 400 ppm: global CO2 concentration; 13.1 billion: distance in light-years of farthest galaxy
  • Quotable Quotes:
    • Pied Piper: It’s built on a universal compression engine that stacks on any file, data, video or image no matter what size.
    • Bokardo: 1 hour of research saves 10 hours of development time
    • @12Knocksinna: Microsoft uses Cassandra open source tech to help manage the 500+ million events generated by Office 365 hourly (along with SQL and Azure)
    • @antirez: Redis had a lot of client libs ASAP. By reusing the Redis protocol, Disque is getting clients even faster, and 2700 Github stars in 9 days!
    • @blueben: AWS Glacier seems like a great DR option until you realize it costs $180,000 to retrieve your 100TB archive in an emergency.
    • Peter Diamandis: The best way to become a billionaire is to solve a billion-person problem.
    • Cordkillers: YouTube visits up 40% from last year
    • @acroll: "It's about economics not innovation, otherwise we'd all be flying Concorde instead of Jumbo Jets." @JulieMarieMeyer #StrataHadoop
    • @DLoesch: Start time delayed because cable systems are overloaded due to PPV buys. Insane. Don't snooze, don't lose! #MayPac
    • grauenwolf: This is where unit test fanboys piss me off. They claim that they can't use integration tests because they are too slow. I claim that they need integration tests to find their slow queries.
    • nuclearqtip: The open source world needs a standardized trust model for binary artifacts. 
    • Greg Ferro: SDN and SNA are about as similar as a Model T Ford and any modern car. For the record, no one drives a Model T Ford to work every day. Stop comparing SDN to SNA. It's pointless.
    • Urs Hölzle: Now the decade of work we put into NoSQL is available to everyone using GCP.  One way it shows that we've been working on this longer than anyone else: 99% read latency is 6ms vs ~300ms for other systems.
    • Swardley: Cloud is not about saving money - never was. It's about doing more stuff with exactly the same amount of money. That can cause a real headache in competition. 
    • Johns Hopkins: scientists have discovered that neurons are risk takers: They use minor "DNA surgeries" to toggle their activity levels all day, every day. 

  • Tesla's Powerwall has already sold out. So will Tesla's next gigafactory be a terafactory or a petafactory?

  • Something to keep in mind when hiring: 21% of [NFL] Hall of Fame players were selected in the 4th round or later.

  • Move along, nothing to see here. Brett Slatkin: I wonder how long it will be before people realize that all of this server orchestration business is a waste of time? Ultimately, what you really want is to never think about systems like Borg that schedule processes to run on machines. That's the wrong level of abstraction. You want something like App Engine, vintage 2008 platform as a service, where you run a single command to deploy your system to production with zero configuration.

  • Can any product withstand Aphyr's Jepsen partition torture test? Aerospike, Elasticsearch, MongoDB, RabbitMQ, Riak, Cassandra, Kafka, NuoDB, Postgres, Redis, all had problems when stress tested under network partitions. Not surprising really, as Aphyr says, "Distributed systems design is really hard." That we find problems in popular, well-regarded products indicates that "We need formal theory, written proofs, computer verification, and experimental demonstration that our systems make the tradeoffs we think they make. As systems engineers, we continually struggle to erase the assumption of safety before that assumption causes data loss or downtime. We need to clearly document system behaviors so that users can make the right choices. We must understand our systems in order to explain them–and distributed systems are hard to understand." gmagnusson has a good sense of things: "I admire the work that Aphyr does - though at the end of the day, I need to build systems that work for the problem I'm trying to solve (and I have to choose from real things that are available). These technologies in general are trying to address really hard problems and design and architecture is the art of balancing tradeoffs. Nothing is going to be perfect. Yet."

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Stockholm and Bucharest

Coding the Architecture - Simon Brown - Fri, 05/08/2015 - 16:41

Following on from CRAFT and SATURN last month, the conference tour continues in May with visits to Stockholm and Bucharest.

First up is DevSum in Stockholm, Sweden, where I'll be speaking about Agility and the essence of software architecture. This session looks at my approach to doing "just enough up front" design and how to introduce these techniques into software development teams.

Later during the same week I'll then be delivering the opening keynote at the I T.A.K.E. Unconference in Bucharest, Romania. My talk here is called Software architecture as code. It's about rethinking the way we describe software architecture and how to create a software architecture model as code for living, up to date, software architecture documentation.

See you in a few weeks.

Categories: Architecture

Task Management for Teams

I’m a fan of monthly plans for meaningful work.

Whether you call it a task list or a To-Do list or a product backlog, it helps to have a good view of the things that you’ll invest your time in.

I’m not a fan of everybody trying to make sense of laundry lists of cells in a spreadsheet.

Time changes what’s important, and it’s hard to see the forest for the trees among rows of tasks that all start to look the same.

One of the most important things I’ve learned to do is to map out work for the month in a more meaningful way.

It works for individuals.  It works for teams.  It works for leaders.

It’s what I’ve used for Agile Results for years on projects small and large, and with distributed teams around the world.  (Agile Results is my productivity method introduced in Getting Results the Agile Way.)

A picture is worth a thousand words, so let’s just look at a sample output and then I’ll walk through it:

[Image: sample output of a monthly plan]

What I’ve found to be the most effective is to focus on a plan for the month – actually take an hour or two the week before the new month.  (In reality, I’ve done this with teams of 10 or more people in 30 minutes or less.  It doesn’t take long if you dump things fast on the board and keep asking people, “What else is on our minds?”)

Dive in at a whiteboard with the right people in the room and just list out all the top-of-mind, important things – be exhaustive, then prioritize and prune.

You then step back and identify the 3 most important outcomes (3 Wins for the Month).

I make sure each work item has a decent name – focused on the noun – so people can refer to it by name (like mini-initiatives that matter).

I list the work in alphabetical order by name, so it’s easy to manage a large list of very different things.

That’s the key.

Most people try to prioritize the list, but the reality is, you can use each week to pick off the high-value items.   (This is really important.  Most people spend a lot of time prioritizing and re-prioritizing lists, and yet people tend to be pretty good at prioritizing when they have a quick list to evaluate, especially if they know the priorities for the month and any pressing events or deadlines.   This is where clarity pays off.)

The real key is listing the work in alphabetical order so that it’s easy to scan, easy to add new items, and easy to spot duplicates.

Plus, it forces you to actually name the work and treat it more like a thing, and less like some fuzzy idea that’s out there.
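To make the information model concrete, here is a minimal, hedged sketch in Python. The work items and wins are invented for illustration; the point is simply named work, an alphabetical list, and Three Wins called out:

    # A minimal sketch of the monthly map: named work items, kept in
    # alphabetical order, with Three Wins called out. All names invented.

    work_items = {
        "Billing Cleanup": "Retire the legacy invoice jobs",
        "Customer Onboarding": "Cut sign-up steps from 7 to 3",
        "Performance Pass": "Get page loads under 2 seconds",
        "Security Review": "Close the open audit findings",
    }

    three_wins = ["Customer Onboarding", "Performance Pass", "Security Review"]

    def print_monthly_map(items, wins):
        """Print the map alphabetically; stars mark the wins for the month."""
        for name in sorted(items):                  # alphabetical: easy to scan,
            marker = "*" if name in wins else " "   # easy to spot duplicates
            print(f"{marker} {name}: {items[name]}")

    print_monthly_map(work_items, three_wins)

Notice that nothing in the sketch prioritizes anything; like the whiteboard, it just makes the work visible so that prioritizing stays easy.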

I could go on and on about the benefits, but here are a few of the things that really matter:

  1. It’s super simple.   By keeping it simple, you can actually do it.   It’s the doing, not just the knowing that matters in the end.
  2. It chops big work down to size.   At the same time, it’s easy to quickly right-size.  Rather than bogging down in micro-management, this simple list makes it easy to capture the work that matters.
  3. It gets everybody in the game.   Everybody gets to look at a whiteboard and plan what a great month will look like.  They get to co-create the journey and dream up what success will look like.   A surprising thing happens when you just identify Three Wins for the Month.

I find a plan for the month is the most useful.   If you plan a month well, the weeks have a better chance of taking care of themselves.   But if you only plan for the week or every two weeks, it’s easy to lose sight of the bigger picture, and the next thing you know, the months go by.  You’re busy, things happen, but the work doesn’t always accrue to something that matters.

This is a simple way to have more meaningful months.

I also can’t say it enough, that it’s less about having a prioritized list, and more about having an easy-to-glance-at map of the work that’s in-flight.   I’m glad the map of the US is not a prioritized list of states.  And I’m glad that the states are well named.  It makes it easy to see the map.  I can then prioritize and make choices on any trip, because I actually have a map to work from, and I can see the big picture all at once, and only zoom in as I need to.

The big idea behind planning tasks and To-Do lists this way is to empower people to make better decisions.

The counter-intuitive part is first exposing a simple view of the map of the work, so it’s easy to see.  That is what enables simpler prioritization when you need it, regardless of which prioritization scheme you use or which workflow management tool you plug into.

And, nothing stops you from putting the stuff into spreadsheets or task management tools afterwards, but the high-value part is the forming and storming and norming around the initial map of the work for the month, so more people can spend their time performing.

May the power of a simple information model help you organize, prioritize, and optimize your outcomes in a more meaningful way.

If you need a deeper dive on this approach, and a basic introduction to Agile Results, here is a good getting started guide for Agile Results in action.

Categories: Architecture, Programming

Minimal Viable UX

Xebia Blog - Wed, 05/06/2015 - 20:38

An approach to incorporating UX into Lean principles.

User Experience is often interpreted as a process where the ‘UX guru’ holds the ultimate truth in designing for an experience. The guru likes to keep control of his design and doesn’t want to feel less valuable when adopting advice from non-designers, for fear of being reduced to a pixel pusher.

When UX is adopted in a Lean way, feedback from team members keeps the team from going down the wrong path. It prevents the guru from perfecting a design whose constraints only become clear over time and drift away from customer needs. Interaction with the team speeds up development by giving early insight.

Design for User Experience

UX has many different definitions; in the end, it enables the user to perform a task with the help of an interface. All disciplines in a software development team should be aware of the user they are designing or developing for, starting in Sprint Zero. UX is not just about producing mockups, wireframes, prototypes and designs; it has to be part of a team culture that every team member can contribute to. We are trying to solve problems, and problems are not solved with design documentation but with efficient, elegant and sophisticated software.

How to get there

Create user awareness

Being aware of the user helps reduce waste and keeps you focused on the things you should care about: functionality that adds value in the eyes of the customer.

First, use a set of personas: put them on a wall and let your team members align those users with the functionality they are building. Developers can check functionality against them, interaction designers can optimize interface elements, and visual designers can align styling with the user.

Second, use a customer journey map. This is a powerful tool: it helps create context, gives an overview of the user experience, and helps to find gaps.

Prototype quickly

Prototyping becomes easier by the day, thanks to the amount and quality of tools out there. Prototypes can be made on paper, as mockups (Balsamiq), or in a web framework such as FramerJS. Pick the type you prefer, one that is suitable for the situation and has the appropriate depth.

[Diagram: the iterative design and critique process. From Todd Zaki Warfel, Prototyping: A Practitioner’s Guide, Rosenfeld Media, 2009.]

Use small portions of prototypes and validate them with a minimal set of users. This helps you deliver faster, which again eliminates waste and improves built-in quality. Iterative design helps you amplify learning. KISS!

Communicate

Involved parties need to be convinced that what you are saying is based on business needs, the product, and the people. You need to befriend and understand all involved parties in order to make it work across the board. Besides that, don’t forget your best friends: the users.

If you don't talk to your customers, how will you know how to talk to your customers? - Will Evans

Varnish Goes Upstack with Varnish Modules and Varnish Configuration Language

This is a guest post by Denis Brækhus and Espen Braastad, developers on the Varnish API Engine from Varnish Software. Varnish has long been used by discriminating backends, so it's interesting to see what they are up to.

Varnish Software has just released Varnish API Engine, a high performance HTTP API Gateway which handles authentication, authorization and throttling, all built on top of Varnish Cache. The Varnish API Engine can easily extend your current set of APIs with a uniform access control layer that has built-in caching abilities for high volume read operations, and it provides real-time metrics.

Varnish API Engine is built using well-known components like memcached, SQLite and, most importantly, Varnish Cache. The management API is written in Python. A core part of the product is written as an application on top of Varnish using VCL (Varnish Configuration Language) and VMODs (Varnish Modules) for extended functionality.

We would like to use this as an opportunity to show how you can create your own flexible yet still high performance applications in VCL with the help of VMODs.
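As a small taste of that, here is a minimal, hedged VCL sketch of the throttling idea. The backend address is invented for illustration, vsthrottle is one of the open-source varnish-modules (not the API Engine's own code), and its exact signature may differ between module versions:

    vcl 4.0;

    import vsthrottle;   # rate-limiting VMOD from the varnish-modules collection

    backend api {
        .host = "127.0.0.1";   # invented address, for illustration only
        .port = "8080";
    }

    sub vcl_recv {
        # Allow each client at most 15 requests per 10 seconds; the
        # vsthrottle signature may differ between module versions.
        if (vsthrottle.is_denied(client.identity, 15, 10s)) {
            return (synth(429, "Too Many Requests"));
        }
    }

Requests under the limit fall through to normal Varnish processing, so cached responses are still served at full speed; only the gatekeeping happens in VCL.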

VMODs (Varnish Modules)
Categories: Architecture

Elements of Scale: Composing and Scaling Data Platforms

This is a guest repost of Ben Stopford's epic post on Elements of Scale: Composing and Scaling Data Platforms. A masterful tour through the evolutionary forces that shape how systems adapt to key challenges.

As software engineers we are inevitably affected by the tools we surround ourselves with. Languages, frameworks, even processes all act to shape the software we build.

Likewise databases, which have trodden a very specific path, inevitably affect the way we treat mutability and share state in our applications.

Over the last decade we’ve explored what the world might look like had we taken a different path. Small open source projects try out different ideas. These grow. They are composed with others. The platforms that result utilise suites of tools, with each component often leveraging some fundamental hardware or systemic efficiency. The result, platforms that solve problems too unwieldy or too specific to work within any single tool.

So today’s data platforms range greatly in complexity, from simple caching layers or polyglot persistence right through to wholly integrated data pipelines. There are many paths. They go to many different places. In some of these places, at least, nice things are found.
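As a concrete illustration of the simplest of those building blocks, a caching layer, here is a minimal, hedged cache-aside sketch in Python; the keys, values and in-memory "database" are invented for illustration:

    import time

    # Toy stand-ins for a real cache (e.g. memcached) and a database.
    # All names and data are invented for illustration.
    cache = {}                                   # key -> (value, expiry time)
    database = {"user:42": {"name": "Ada"}}

    TTL_SECONDS = 60

    def get(key):
        """Cache-aside read: try the cache first, fall back to the database."""
        entry = cache.get(key)
        if entry is not None:
            value, expires_at = entry
            if time.time() < expires_at:
                return value                     # cache hit
            del cache[key]                       # expired entry, drop it
        value = database.get(key)                # cache miss: go to the DB
        if value is not None:
            cache[key] = (value, time.time() + TTL_SECONDS)
        return value

    def put(key, value):
        """Write to the database and invalidate the stale cache entry."""
        database[key] = value
        cache.pop(key, None)

    print(get("user:42"))   # miss: loaded from the database
    print(get("user:42"))   # hit: served from the cache

Even this toy version shows the tradeoff such layers make: reads get faster at the cost of extra machinery to keep the cache and the database in step.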

So the aim for this talk is to explain how and why some of these popular approaches work. We’ll do this by first considering the building blocks from which they are composed. These are the intuitions we’ll need to pull together the bigger stuff later on.

Categories: Architecture