Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!
Software Development Blogs: Programming, Software Testing, Agile Project Management
You could use Salt to build and run Docker containers, but that is not how I use it here. This blog post is about Docker containers that run Salt minions, which is just an experiment. The use case? Suppose you have several containers that run a particular piece of middleware, and this piece of middleware needs a security update, e.g. an OpenSSL hotfix. It is necessary to perform the update immediately.
In order to build a container you have to write down the container description in a file called Dockerfile. Here is the Dockerfile:
#-------
# Standard heading stuff
FROM centos
MAINTAINER No Reply firstname.lastname@example.org

# Do Salt install stuff and squeeze in a master.conf snippet that tells the minion
# to contact the master specified.
RUN rpm -Uvh http://ftp.linux.ncsu.edu/pub/epel/6/i386/epel-release-6-8.noarch.rpm
RUN yum install -y salt-minion --enablerepo=epel-testing
RUN [ ! -d /etc/salt/minion.d ] && mkdir /etc/salt/minion.d
ADD ./master.conf /etc/salt/minion.d/master.conf

# Run the Salt Minion and do not detach from the terminal.
# This is important because the Docker container will exit whenever
# the CMD process exits.
CMD /usr/bin/salt-minion
#-------
Build the image
Time to run the Dockerfile through docker. The command is:
$ docker build --rm=true -t salt-minion .
provided that you run this command in the directory where the files Dockerfile and master.conf reside. Docker creates an image with tag "salt-minion" and throws away all intermediate images after a successful build.
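For reference, the master.conf snippet that the Dockerfile adds can be as small as a single setting; the hostname below is an assumption for illustration:

```yaml
# ./master.conf, copied to /etc/salt/minion.d/master.conf by the Dockerfile.
# Tell the minion which Salt master to contact.
master: salt-master.example.org
```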
Run a container
The command is:
$ docker run -d salt-minion
and Docker returns the id of the new container.

The Salt minion in the container starts and searches for a Salt master to connect to, defined by the configuration setting "master" in the file /etc/salt/minion.d/master.conf. You might want to run the Salt master in "auto_accept" mode so that minion keys are accepted automatically. Docker assigns a container id to the running container. That is the magic key that docker reports as a result of the run command.
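A minimal sketch of the relevant setting in the Salt master's configuration (convenient for throwaway experiments like this one, but unsafe on untrusted networks since any minion can join):

```yaml
# /etc/salt/master (snippet)
# Accept new minion keys automatically.
auto_accept: True
```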
The following command shows the running container:
$ docker ps
CONTAINER ID        IMAGE                COMMAND                CREATED         STATUS          NAMES
273a6b77a8fa        salt-minion:latest   /bin/sh -c /etc/rc.l   3 seconds ago   Up 3 seconds    distracted_lumiere
Apply the hot fix
There you are: the Salt minion is controlled by your Salt master. Provided that you have a state module that contains the OpenSSL hotfix, you can now easily update all Docker nodes to include it:
salt \* state.sls openssl-hotfix
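For illustration, such a state module could be a minimal SLS file like the sketch below (file name, package and version are assumptions, not from the original post):

```yaml
# /srv/salt/openssl-hotfix.sls
# Ensure the patched OpenSSL build is installed on every targeted minion.
openssl-hotfix:
  pkg.installed:
    - name: openssl
    - version: '1.0.1e-16.el6_5.7'  # hypothetical fixed build
```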
That is all there is to it.
"Everything should be made as simple as possible, but not simpler." - Albert Einstein

Simplicity is among the ultimate pursuits. It's one of your most efficient and effective tools in your toolbox. I used simplicity as the basis for my personal results system, Agile Results, and it's served me well for more than a decade.

And yet, simplicity still isn't treated as a first-class citizen.

It's almost always considered as an afterthought. And, by then, it's too little, too late.
In the book, Simple Architectures for Complex Enterprises (Developer Best Practices), Roger Sessions shares his insights on how simplicity is the ultimate enabler to solving the myriad of problems that complexity creates.

Complex Problems Do Not Require Complex Solutions
Simplicity is the only thing that actually works.
"So yes, the problems are complex. But complex problems do not ipso facto require complex solutions. Au contraire! The basic premise of this book is that simple solutions are the only solutions to complex problems that work. The complex solutions are simply too complex."

Simplicity is the Antidote to Complexity

It sounds obvious but it's true. You can't solve a problem with the same complexity that got you there in the first place.

"The antidote to complexity is simplicity. Replace complexity with simplicity and the battle is three-quarters over. Of course, replacing complexity with simplicity is not necessarily simple."

Focus on Simplicity as a Core Value
If you want to achieve simplicity, you first have to explicitly focus on it as a core value.
"The first thing you need to do to achieve simplicity is focus on simplicity as a core value. We all discuss the importance of agility, security, performance, and reliability of IT systems as if they are the most important of all requirements. We need to hold simplicity to as high a standard as we hold these other features. We need to understand what makes architectures simple with as much critical reasoning as we use to understand what makes architectures secure, fast, or reliable. In fact, I argue that simplicity is not merely the equal of these other characteristics; it is superior to all of them. It is, in many ways, the ultimate enabler."

A Security Example
Complex systems work against security.
"Take security for example. Simple systems that lack security can be made secure. Complex systems that appear to be secure usually aren't. And complex systems that aren't secure are virtually impossible to make either simple or secure."

An Agility Example
Complexity works against agility, and agility is the key to lasting solutions.
"Consider agility. Simple systems, with their well-defined and minimal interactions, can be put together in new ways that were never considered when these systems were first created. Complex systems can never be used in an agile way. They are simply too complex. And, of course, retrospectively making them simple is almost impossible."

Nobody Ever Considers Simplicity as a Critical Feature
And thatâs the problem.
"Yet, despite the importance of simplicity as a core system requirement, simplicity is almost never considered in architectural planning, development, or reviews. I recently finished a number of speaking engagements. I spoke to more than 100 enterprise architects, CIOs, and CTOs spanning many organizations and countries. In each presentation, I asked if anybody in the audience had ever considered simplicity as a critical architectural feature for any projects on which they had participated. Not one person had. Ever."

The Quest for Simplicity is Never Over
Simplicity is a quest. And the quest is never over. Simplicity is an ongoing pursuit, and it's a dynamic one. It's not a one-time event, and it's not static.
"The quest for simplicity is never over. Even systems that are designed from the beginning with simplicity in mind (rare systems, indeed!) will find themselves under a never-ending attack. A quick tweak for performance here, a quick tweak for interoperability there, and before you know it, a system that was beautifully simple two years ago has deteriorated into a mass of incomprehensibility."

Simplicity is your ultimate sword for hacking your way through complexity ... in work ... in life ... in systems ... and ecosystems.
Wield it wisely.
“Chance favors the prepared mind.” - Louis Pasteur
Are you feeling lucky?
If you’re an engineer or a developer, you’ll appreciate the idea that you can design for luck, or stack the deck in your favor.
How do you do this?
As Harry Golden said, "The only thing that overcomes hard luck is hard work."
While I believe in hard work, I also believe in working smarter.
Luck is the same game.
It’s a game of skill.
And, success is a numbers game.
You have to stay in long enough to get "lucky" over time. That means finding a sustainable approach and using a sustainable system. It means not going all in before testing your assumptions and reducing the risk. It means taking emotion out of the equation, taking calculated risks, minimizing your exposure, and focusing on skills.
That’s why Agile methods and Lean approaches can help you outpace your failures.
Because they are test-driven and focus on continuous learning.
And because they focus on capacity and capability versus burnout or blowup.
So if you aren’t feeling the type of luck you’d like to see more of in your life, go back to the basics, and design for it.
The funny thing about luck is that the less you depend on it, the more of it you get.
BTW – Agile Results and Getting Results the Agile Way continue to help people “get lucky.” Recently, I heard a story where a social worker shared Getting Results the Agile Way with two girls living off the streets. They are off drugs now, have jobs, and are buying homes. I’m not doing the story justice, but it’s great to hear about people turning their lives around and these kinds of life changing things that a simple system for meaningful results can help achieve.
It’s not luck.
It’s desire, determination, and effective strategies applied in a sustainable way.
The Agile way.
"Each of the practices still has the same weaknesses as before, but what if those weaknesses were now made up for by the strengths of other practices? We might be able to get away with doing things simply." - Kent Beck
Extreme Programming (XP) has been around a while, but not everybody knows "what it looks like."
What does it look like when you step back and take the balcony view and observe the flow of things?
It might look a bit like this ...

I put this view together to help some folks get an idea of what the "system" sort of looks like. It didn't need to be perfect, but they needed at least a baseline idea or skeleton so they could better understand how the various practices fit together.
The beauty is that once you put a simple picture up on the whiteboard, then you can have real discussions with the team about where things can be improved. Once again, a picture is worth 1,000 words.
For reference, here are the 12 Practices of Extreme Programming
The main idea here is to get simple visuals in your mind that you can easily draw on a whiteboard, and know the names of the various activities and artifacts.
If you nail this down, it helps you build a simple vocabulary.
This vocabulary will help you get others on board faster, and it will help you expand your own toolbox at a rapid rate. You'll soon find yourself composing new methods and creating interesting innovations in your process that will help you do things better, faster, and cheaper ... on Cloud time.
On the 5th of June, we (Lammert Westerhoff and Freek Wielstra) presented the State of Web Development 2014 at our yearly conference, XebiCon, on the SS Rotterdam. It was a great success and we got some great feedback. In this blog post, we'd like to share the same presentation with everyone who couldn't make it to the conference.
We started the presentation by showing an overwhelming tag cloud:
When you're new to web development, or when you haven't done any web development for a long time, you're likely to get lost in this jungle of web technologies. Don't worry if you are overwhelmed by it, because that's intentional. This tag cloud contains many languages and web frameworks that modern web applications consist of. But it also contains a number of development tools that are used during development. The goal of this post is to guide you through this jungle by explaining which technologies you can use to build a modern web application and which tools you need during the full development lifecycle.
To get a better understanding of which technologies and tools we should use, we should first look a bit at the evolution of a number of important technologies. Then we will understand why web development is quite complex these days and why we need tooling. Once that's clear, we can have a look at the available tooling.

The evolution of web applications
Of course, these static websites were hard to maintain. A change of content needed a change in the HTML file which then needed to be uploaded to the web server with FTP.
Luckily, there were a couple of technologies that solved this problem for us. Using a technology like PHP, JSP or ASP we could dynamically generate the websites at runtime, which were then sent to the browser. These websites usually were connected to a database. The database then stored the content instead of storing it directly in the HTML files. That way we could make changes to the website by changing the content stored in the database. This allowed us to already build more complex web applications that also stored user created content, like blogs or even entire web stores.
While we had the server side technologies that we needed to generate our websites, we needed new innovations on the client side as well. Especially because we entered the era of mobile devices, we needed new browser capabilities, like location services, browser storage and touch events. Therefore, new HTML and CSS standards were being developed that contain many features that make the web browsers much more powerful and richer, which allows us developers to build richer web applications for both mobile and desktop.
To make this a lot easier for everyone, Twitter released Bootstrap, which handles all the responsiveness automatically. Itâs still being used by many responsive websites today.
With all the new powerful features of CSS3, the style sheets also became more complex and tedious. To overcome that problem, technologies like Sass and Less were developed. These are CSS preprocessors that offer new language features that make some of the more tedious styling easier to do. Both will produce CSS and both solve the same problems. Which one you should choose depends on your circumstances and personal flavour.
AJAX requests are also at the core of single web page applications (SPA).
Single page web applications are becoming more and more popular, since they usually provide a better user experience. They rely heavily on the AJAX requests that we saw earlier to retrieve data from a server. Once the page is initially loaded, the data retrieved by these requests is practically the only communication between the client and the server.
Since with SPAs we move a lot of logic and complexity from the server to the client, it really comes in handy to use a framework that handles most of the complexity. Three of the most popular frameworks are Backbone.js, Ember.js and AngularJS. Which one you should choose depends a lot on your requirements, what you're used to and personal preference. It is clear, though, that AngularJS is the most popular one at this moment, has the most activity and the largest community. People seem to be moving from Backbone to Angular and Ember. We also use Angular ourselves and we're happy with it. The Google Angular team is working on the next major version, which should be a big improvement. That's why we think Angular is a good and safe choice.
From what we've seen in the previous paragraphs, it's obvious that more and more functionality and complexity is moving from the server side to the client side. And by using frameworks and libraries such as Bootstrap, jQuery and Angular, we also increase our client side dependencies. This also means that our development focus and efforts are moving from the server to the client. It's time to take front end development seriously and to do it right. So let's have a look at what this means for our development process.

Web Application Development
Of course we want to use many of the great third party libraries that we discussed earlier. And since those libraries often depend on other libraries, we'll end up with a lot of external libraries.
Now we're at a stage where we could deploy these files to our web server, but that's still not really the way we like to develop our apps. So we're going to do a couple of extra steps. We will minify almost all of our files: this grabs all those files, does some smart tricks, and we end up with much smaller files. This will save the end user some download time, which is nice.
We also want to make sure that all of the files we made don't have any syntax mistakes. That's why we'll run a syntax validator on all of those files.

With all these steps in place we're pretty confident that we have everything we need to make sure that we write quality code. But there is one more thing we need to worry about during development. And that's dependency management. We want automatic downloads of the third party libraries that we use and their dependencies. And we want to specify their version as well, or specify a version range and get automatic updates within our range.

Now we're completely happy with all the steps in our development process. But how do we do all this? Or better said, how do we automate all this? We need a build tool.
Let's quickly recap the things we want our build tool to do. It needs to compile and minify our style sheets and scripts. After that it will take these files and, together with the HTML, build a distribution package that we can deploy to our production server.
And with every change in any of our files, we want to make sure that everything still works as we would expect. So we validate our files and run our tests. These are our quality assurance steps.
Like we just mentioned, we also want all of our dependencies to be managed automatically.
But there are even more things that we can do with modern build tools that we couldn't even think of, and that are great for development. How about live watch and reload of our browser during development? We launch our app in a browser, change some logic or styling, and the web browser refreshes the page automatically to show our changes.

The web development community has been very innovative and has come up with many more cool things we can do with these tools. And you can even write your own build actions.
At this point we have a full understanding of what the web development process looks like. And by seeing this, we can say that front end development has become a first class citizen of software development. Now that we recognise the complexity of front end development and that we need tooling to manage this complexity, it's time to have a deeper look at some of the most popular tools of this moment.

Modern tools
Most of the web development tools that are built these days are part of the Node.js ecosystem.

The Node Package Manager (NPM) makes it easy to install any registered Node.js tool or application with just a single command. It keeps an online registry of all available packages, and anyone can publish their own. The Node.js and NPM ecosystem has developed into an active and healthy developer community.

One of the most popular tools written in Node.js is Grunt, the standard build tool for front-end web projects. It's a task-based build tool, which means you can define a set of tasks that need to run for a production build, development build or test run. That sounds like pretty much what we need for our development process.
Since itâs plugin based, it has plugins for pretty much any task you want to run. That includes plugins for our compilation, minification, validation and testing steps, as well as dependency management. That makes it perfect for us.
Once you start grunt in development mode, it will launch its built-in webserver and host your web application. And with the right plugins, we get our live reload. The Grunt plugin will watch for any changes in our resources, perform necessary tasks and refresh the open browser page for us, so we can see the results of our work right away - drastically shortening the feedback loop compared to traditional web development.
Since Grunt is the standard and most popular build tool for web projects at this moment, itâs widely adopted and has a large and mature developer ecosystem.
But Grunt also has its drawbacks. Let's have a look at Grunt's configuration file, the Gruntfile.js. This file contains the configuration for our tasks and the plugins that perform them.

With a lot of tasks, the Gruntfile really becomes a lot of configuration. The example below is taken from our own project, and it's only about half of our configuration. No need to read what it says exactly; it's just to give you an idea of the amount.
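To give a flavour of that configuration style, here is a generic sketch of a Gruntfile (not the configuration from our project; task names, plugin choices and paths are illustrative) that only concatenates and minifies scripts:

```javascript
// Gruntfile.js -- illustrative sketch, not a drop-in configuration.
module.exports = function (grunt) {
  grunt.initConfig({
    // Concatenate all application scripts into one temporary file.
    concat: {
      dist: {
        src: ['app/scripts/**/*.js'],
        dest: '.tmp/concat/app.js'
      }
    },
    // Minify the concatenated file into the distribution directory.
    uglify: {
      dist: {
        src: '.tmp/concat/app.js',
        dest: 'dist/scripts/app.min.js'
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');

  grunt.registerTask('build', ['concat', 'uglify']);
};
```

Every plugin gets its own configuration section like this, which is how a real-world Gruntfile grows to hundreds of lines.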
But the amount of configuration is not the only downside of Grunt. The other problem is that it works with temporary files and directories. For each task (depending on the task, of course), it will read a couple of files, transform or convert them into something else and write the output back to disk. The next task will then use these new files as its input. This creates a chain of temporary files and directories. It's not a huge problem, but it certainly is not the fastest way of doing things.
That's why Gulp.js, the streaming build system, was made. It's a newer, alternative build tool that's quickly gaining popularity. Gulp can do pretty much the same things as Grunt, so we don't need to worry too much about losing essential features of our development process. However, since Gulp is newer than Grunt, it doesn't have as many plugins yet. But since Gulp does the same as Grunt, only better, many developers are making the switch, and more and more plugins are being written for Gulp as well.
So what's the difference? Instead of requiring a lot of configuration for its tasks, Gulp uses "code over configuration". That means you just tell the tool to do something in a certain way instead of configuring both what it should do and how it should do it. Much easier and much more direct. And it results in far less code than Grunt's configuration for doing the same.
It also solves Grunt's problem of temporary files and directories, since it's stream-based. Tasks stream their output directly to the next task, which results in much faster build times.
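As a sketch of the same concatenate-and-minify build expressed in Gulp (plugin names gulp-concat and gulp-uglify are the commonly used ones; paths are illustrative):

```javascript
// gulpfile.js -- illustrative sketch of "code over configuration".
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');

gulp.task('build', function () {
  // Files are streamed from step to step in memory; nothing is
  // written to disk until the final dest() call.
  return gulp.src('app/scripts/**/*.js')
    .pipe(concat('app.js'))
    .pipe(uglify())
    .pipe(gulp.dest('dist/scripts'));
});
```

Compare this with the Grunt configuration for the same steps: the pipeline reads as code, and there are no intermediate files.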
It seems that Gulp solves exactly the two problems we had with Grunt, so it's to be expected that it will gain popularity and eventually pass Grunt. It's just a matter of time until it reaches the same maturity that Grunt currently has.
We talked earlier about dependency management for our third party libraries. While Grunt and Gulp use the Node Package Manager directly to download their plugins and dependencies, we still need to manage the dependencies of our app. For that we use Bower, a package manager for the web, by Twitter. Bower works similarly to NPM: you specify all your dependencies in a single configuration file, and Bower downloads these libraries from the Internet. Since Grunt and Gulp have plugins for Bower, you don't even need to worry about this: a properly configured build will download the files when you build your project and fetch updates when needed.
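As an illustration, a bower.json could look like this (package versions are made up for the example):

```json
{
  "name": "my-web-app",
  "dependencies": {
    "angular": "~1.2.16",
    "bootstrap": "~3.1.1",
    "jquery": "~2.1.0"
  }
}
```

Running `bower install` then downloads these libraries, and their dependencies, into the project; the `~` version ranges allow automatic patch-level updates.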
So many tools and frameworks! Isn't it too much work to set up and configure?

Yes; setting up a full web app development stack and configuring each plugin for your project is a lot of development work, which can take days. Luckily there is a tool that solves that problem: Yeoman.

Yeoman is a project scaffolding tool that generates a basis for your project, using templates called Generators. Some of those generators will set up a web application project with a preconfigured build setup, containing everything we need for compilation, minification, validation, testing, dependency management and much more. It will also ask us if we want to use frameworks such as Bootstrap and AngularJS. This makes it an ideal way to set up a new project and an automated development process.
More recent generators will set up Gulp as the build tool, providing an alternative to Grunt. This should help developers who aren't familiar with Gulp yet to get up to speed quickly.
Let's recap what we talked about in this post in a summary.
The full presentation slides will soon be available from the XebiCon website. Feel free to comment below with your feedback or reach out to us on Twitter.
Lammert Westerhoff @lwesterhoff
Freek Wielstra @frwielstra
"Becoming limitless involves mental agility; the ability to quickly grasp and incorporate new ideas and concepts with confidence." -- Lorii Myers
I was asked to give an Intro to Agile talk to a group in Microsoft, in addition to a talk on Getting Results the Agile Way.
It worked out well.
The idea was to do a level set and get everybody on the same page in terms of what Agile is.
I thought it was a great chance to build a shared vocabulary and establish some shared mental models. I believe that when people have a shared vocabulary and common mental models, they can take a look from the balcony. And, it makes it a lot easier to move up the chain and take things further, faster.
Anyway, here is how I summarized what Agile is:
That said, I need to find something a bit more pithy and precise, yet insightful.
If I had to put it in a simple sentence, I'd say Agile is empowerment through flexibility.

One thing I've noticed over the years is that some people struggle when they try to go Agile.

They struggle because they can't seem to "flip a switch." And if they don't flip the switch, they don't change their mindset.

And, if they don't change their mindset, Agile remains just beyond their grasp.
Agile is like happiness: grow it right under your feet.
The introduction of cloud technologies is not a simple evolution of existing ones, but a real revolution. Like all revolutions, it changes points of view and redefines all the meanings. Nothing is as before. This post analyzes some key words and concepts, usually used in traditional architectures, redefining them according to the standpoint of the cloud. Understanding the meaning of new words is crucial to grasp the essence of a pure cloud architecture.

"There is no greater impediment to the advancement of knowledge than the ambiguity of words." - Thomas Reid, Essays on the Intellectual Powers of Man
Nowadays, it is necessary to challenge the limits of traditional architectures and go beyond the normal concepts of scalability: support millions of users (WhatsApp's 500 million), billions of transactions per day (Salesforce's 1.3 billion), five 9s of availability (AOL's 99.999). I wish all of you the success of the examples cited above, but do not think that it is completely impossible to reach mind-boggling numbers. Using cloud technology, everyone can create a service with a small investment and immediately have a world stage. If successful, the architecture must be able to scale appropriately.
Using the same design criteria or moving the current configuration to the cloud simply does not work, and it could reveal unpleasant surprises.
Infrastructure - commodity HW instead of high-end HW
I thought I had written about "Why Agile" before, but I don't see anything crisp enough.

Anyway, here's my latest rundown on Why Agile.

Remember that nature favors the flexible, and agility is the key to success.
Google's Jeffrey Dean and Sanjay Ghemawat filed the patent request and published the map/reduce paper 10 years ago (2004). According to Wikipedia, Doug Cutting and Mike Cafarella created Hadoop, with its own implementation of Map/Reduce, one year later at Yahoo – both these implementations were done for the same purpose: batch indexing of the web.
Back then, the web began its "web 2.0" transition, pages became more dynamic, and people began to create more content – so an efficient way to reprocess and rebuild the web index was needed, and map/reduce was it. Web indexing was a great fit for map/reduce, since the initial processing of each source (web page) is completely independent from any other – i.e. a very convenient map phase – and you need to combine the results to build the reverse index. That said, even the core Google algorithm – the famous PageRank – is iterative (so less appropriate for map/reduce), not to mention that as the internet got bigger and the updates became more and more frequent, map/reduce wasn't enough. Again Google (who seem to be consistently a few years ahead of the industry) began coming up with alternatives like Google Percolator or Google Dremel (both papers were published in 2010; Percolator was introduced in that year, and Dremel had been used in Google since 2006).
So now it is 2014, and it is time for the rest of us to catch up with Google and get over Map/Reduce, for multiple reasons:
In my opinion, Map/Reduce is an idea whose time has come and gone – it won't die in a day or a year; there are still a lot of working systems that use it, and the alternatives are still maturing. I do think, however, that if you need to write or implement something new that would build on map/reduce, you should use other options, or at the very least carefully consider them.
So how is this change going to happen? Luckily, Hadoop has recently adopted YARN (you can see my presentation on it here), which opens up the possibility to go beyond map/reduce without changing everything … even though, in effect, a lot will change. Note that some of the new options have migration paths, and we also retain access to all that "big data" we have in Hadoop, as well as the extended reuse of some of the ecosystem.
The first type of effort to replace map/reduce is to actually subsume it by offering more flexible batch processing. After all, saying Map/Reduce is not relevant doesn't mean that batch processing is not relevant; it does mean that there's a need for more complex processes. There are two main candidates here, Tez and Spark. Tez offers a nice migration path, as it is replacing map/reduce as the execution engine for both Pig and Hive, while Spark has a compelling offer by combining batch and stream processing (more on this later) in a single engine.
The second type of effort, or processing capability, that will help kill map/reduce is MPP databases on Hadoop. Like the "flexible batch" approach mentioned above, this is replacing a functionality that map/reduce was used for – unleashing the data already processed and stored in Hadoop. The idea here is twofold.
Efforts in this arena include Impala from Cloudera, Hawq from Pivotal (which is essentially Greenplum over HDFS), startups like Hadapt, and even Actian trying to leverage their ParAccel acquisition with the recently announced Actian Vector. Hive is somewhere in the middle, relying on Tez on one hand and using vectorization and the columnar ORC format on the other.
The third type of processing that will help dethrone Map/Reduce is stream processing. Unlike the two previous types of effort, this covers ground that map/reduce can't cover, even inefficiently. Stream processing is about handling a continuous flow of new data (e.g. events) and processing it (enriching, aggregating, etc.) in seconds or less. The two major contenders in the Hadoop arena seem to be Spark Streaming and Storm, though, of course, there are several other commercial and open source platforms that handle this type of processing as well.
In summary – Map/Reduce is great. It has served us (as an industry) for a decade but it is now time to move on and bring the richer processing capabilities we have elsewhere to solve our big data problems as well.
Last note – I focused on Hadoop in this post even though there are several other platforms and tools around. I think that, regardless of whether Hadoop is the best platform, it is the one becoming the de-facto standard for big data (remember Betamax vs. VHS?).
One really, really last note – if you read up to here, and you are a developer living in Israel, and you happen to be looking for a job – I am looking for another developer to join my Technology Research team @ Amdocs. If you're interested, drop me a note: arnon.rotemgaloz at amdocs dot com or via my twitter/linkedin profiles.
* esp. in regard to analytical queries – operational SQL on Hadoop efforts like Phoenix, IBM's BigSQL or Splice Machine are also happening, but that's another story
Illustration idea found in James Mickens's talk at Monitorama 2014 (which is, by the way, a really funny presentation – go watch it). Oh yeah… and Pulp Fiction :)
Okay, this is the separate blog post that I referred to in Software architecture vs code. What exactly do we mean by an "architecturally-evident coding style"? I built a simple content aggregator for the local tech community here in Jersey called techtribes.je, which is basically made up of a web server, a couple of databases and a standalone Java application that is responsible for actually aggregating the content displayed on the website. You can read a little more about the software architecture at techtribes.je - containers. The following diagram is a zoom-in of the standalone content updater application, showing how it's been decomposed.
This diagram says that the content updater application is made up of a number of core components (which are shown on a separate diagram for brevity) and an additional four components - a scheduled content updater, a Twitter connector, a GitHub connector and a news feed connector. This diagram shows a really nice, simple architecture view of how my standalone content updater application has been decomposed into a small number of components. "Component" is a hugely overloaded term in the software development industry, but essentially all I'm referring to is a collection of related behaviour sitting behind a nice clean interface.
Back to the "architecturally-evident coding style": the basic premise is that the code should reflect the architecture. In other words, if I look at the code, I should be able to clearly identify each of the components that I've shown on the diagram. Since the code for techtribes.je is open source and on GitHub, you can go and take a look for yourself as to whether this is the case. And it is ... there's a je.techtribes.component package that contains sub-packages for each of the components shown on the diagram. From a technical perspective, each of these is simply a Spring Bean with a public interface and a package-protected implementation. That's it; the code reflects the architecture as illustrated on the diagram.
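As a sketch, one of those components boils down to something like the following. The names here are illustrative, not the actual techtribes.je code, and in the real codebase the implementation would be wired up as a Spring bean; plain construction keeps the sketch self-contained.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative component shape: a public interface plus a
// package-protected implementation, so code in other packages can only
// depend on the interface.
interface NewsFeedConnector {
    List<String> latestEntries();
}

// Package-protected: invisible outside the component's own package.
class DefaultNewsFeedConnector implements NewsFeedConnector {
    public List<String> latestEntries() {
        // A real implementation would fetch and parse RSS/Atom feeds.
        return Arrays.asList("entry-1", "entry-2");
    }
}

public class ComponentSketch {
    public static void main(String[] args) {
        NewsFeedConnector connector = new DefaultNewsFeedConnector();
        System.out.println(connector.latestEntries()); // prints [entry-1, entry-2]
    }
}
```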
So what about those core components then? Well, here's a diagram showing those.
Again, this diagram shows a nice simple decomposition of the core of my techtribes.je system into coarse-grained components. And again, browsing the source code will reveal the same one-to-one mapping between boxes on the diagram and packages in the code. This requires conscious effort to do, but I like the simple and explicit nature of the relationship between the architecture and the code.

When architecture and code don't match
The interesting part of this story is that while I'd always viewed my system as a collection of "components", the code didn't actually look like that. To take an example, there's a tweet component on the core components diagram, which basically provides CRUD access to tweets in a MongoDB database. The diagram suggests that it's a single black box component, but my initial implementation was very different. The following diagram illustrates why.
My initial implementation of the tweet component looked like the picture on the left - I'd taken a "package by layer" approach and broken my tweet component down into a separate service and data access object. This is your stereotypical layered architecture that many (most?) books and tutorials present as a way to build (e.g.) web applications. It's also pretty much how I've built most software in the past, and I'm sure you've seen the same, especially in systems that use a dependency injection framework where we create a bunch of things in layers and wire them all together. Layered architectures have a number of benefits, but they aren't a silver bullet.
This is a great example of where the code doesn't quite reflect the architecture - the tweet component is a single box on an architecture diagram but implemented as a collection of classes across a layered architecture when you look at the code. Imagine having a large, complex codebase where the architecture diagrams tell a different story from the code. The easy way to fix this is to simply redraw the core components diagram to show that it's really a layered architecture made up of services collaborating with data access objects. The result is a much more complex diagram but it also feels like that diagram is starting to show too much detail.
The other option is to change the code to match my architectural vision. And that's what I did. I reorganised the code to be packaged by component rather than packaged by layer. In essence, I merged the services and data access objects together into a single package so that I was left with a public interface and a package-protected implementation. Here's the tweet component on GitHub.

But what about...
Again, there's a clean simple mapping from the diagram into the code and the code cleanly reflects the architecture. It does raise a number of interesting questions though.
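A minimal sketch of the resulting shape (with illustrative names, not the actual techtribes.je code): the data access object is package-protected, so nothing outside the component's package can see it, let alone bypass the component interface to reach it.

```java
// Sketch of a package-by-component layout: service logic and data access
// live in one package, and only the interface escapes it.
interface TweetComponent {
    long tweetCount();
}

// Package-protected DAO: an internal detail that code outside the
// component's package can neither see nor bypass.
class MongoDbTweetDao {
    long count() { return 3L; } // a real implementation would query MongoDB
}

// Package-protected implementation wiring the DAO internally.
class DefaultTweetComponent implements TweetComponent {
    private final MongoDbTweetDao dao = new MongoDbTweetDao();
    public long tweetCount() { return dao.count(); }
}

public class PackageByComponentSketch {
    public static void main(String[] args) {
        // Consumers see only the TweetComponent interface.
        TweetComponent tweets = new DefaultTweetComponent();
        System.out.println("tweet count: " + tweets.tweetCount()); // prints tweet count: 3
    }
}
```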
This is still a layered architecture, it's just that the layers are now a component implementation detail rather than being first-class architectural building blocks. And that's nice, because I can think about my components as being my architecturally significant structural elements, and it's these building blocks that are defined in my dependency injection framework. Something I often see in layered architectures is code bypassing a services layer to directly access a DAO or repository. These sorts of shortcuts are exactly why layered architectures often become corrupted and turn into big balls of mud. In my codebase, if any consumer wants access to tweets, they are forced to use the tweet component in its entirety because the DAO is an internal implementation detail. And because I have layers inside my component, I can still switch out my tweet data storage from MongoDB to something else. That change is still isolated.

Component testing vs unit testing
Ah, unit testing. Bundling up my tweet service and DAO into a single component makes the resulting tweet component harder to unit test because everything is package-protected. Sure, it's not impossible to provide a mock implementation of the MongoDBTweetDao, but I need to jump through some hoops. The other approach is to simply not do unit testing and instead test my tweet component through its public interface. DHH recently published a blog post called Test-induced design damage and I agree with the overall message; perhaps we are breaking up our systems unnecessarily just in order to unit test them. There's very little to be gained from unit testing the various sub-parts of my tweet component in isolation, so in this case I've opted to do automated component testing instead, where I test the component as a black box through its component interface. MongoDB is lightweight and fast, with the resulting component tests running acceptably quickly for me, even on my ageing MacBook Air. I'm not saying that you should never unit test code in isolation, and indeed there are some situations where component testing isn't feasible. For example, if you're using asynchronous and/or third party services, you probably do want the ability to provide a mock implementation for unit testing. The point is that we shouldn't blindly create designs where everything can be mocked out and unit tested in isolation.

Food for thought
The purpose of this blog post was to provide some more detail around how to ensure that code reflects architecture and to illustrate an approach to do this. I like the structure imposed by forcing my codebase to reflect the architecture. It requires some discipline and thinking about how to neatly carve up the responsibilities across the codebase, but I think the effort is rewarded. It's also a nice stepping stone towards micro-services. My techtribes.je system is constructed from a number of in-process components that I treat as my architectural building blocks. The thinking behind creating a micro-services architecture is essentially the same, albeit the components (services) are running out-of-process. This isn't a silver bullet by any means, but I hope it's provided some food for thought around designing software and structuring a codebase with an architecturally-evident coding style.
"Learning and innovation go hand in hand. The arrogance of success is to think that what you did yesterday will be sufficient for tomorrow." -- William Pollard
The Internet of Things is hot. But it's more than a trend. It's a new way of life (and business).
It's transformational in every sense of the word (and world).
A colleague shared some of their most interesting finds with me, and one of them is:
Here are my key takeaways:
It's a fast read, with nice and tight insight … my kind of style.
Enjoy.
I've shared a Scrum Flow at a Glance before, but it was not visual.
I think it's helpful to know how to whiteboard a simple view of an approach so that everybody can quickly get on the same page.
Here is a simple visual of Scrum:
There are a lot of interesting tools and concepts in Scrum. The definitive guide on the roles, events, artifacts, and rules is The Scrum Guide, by Jeff Sutherland and Ken Schwaber.
I like to think of Scrum as an effective Agile project management framework for shipping incremental value. It works by splitting big teams into smaller teams, big work into smaller work, and big time blocks into smaller time blocks.
I try to keep whiteboard visuals pretty simple so that they are easy to do on the fly, and so they are easy to modify or adjust as appropriate.
I find the visual above is pretty helpful for getting people on the same page pretty fast, to the point where they can go deeper and ask more detailed questions about Scrum, now that they have the map in mind.
I presented two talks last week with the title "Software architecture vs code" - first as the opening keynote for the inaugural Software Design and Development conference, and also the next day as a regular conference session at GOTO Chicago. Videos from both should be available at some point and the slides are available now. The talk itself seems to polarise people, with responses ranging from "Without a doubt, Simon delivered one of the best keynotes I have seen. I got a lot from it, with plenty 'food for thought' moments." through to "hmmm, meh".

Separating software architecture from code
The basic premise of the talk is that the architecture and code of a software system never quite match up. The traditional way to communicate the architecture of a software system is with diagrams based upon a number of views ... a logical view, a functional view, a module view, a physical view, etc, etc. Philippe Kruchten's 4+1 model is an example often cited as a starting point for such approaches. I've followed these approaches in the past myself and, although I can get my head around them, I don't find them an optimal way to describe a software system. The "why?" has taken me a while to figure out, but the thing I dislike is the way in which you get an artificial separation between the architecture-related views (logical, module, functional, etc) and the code-related views (implementation, design, etc). I don't like treating the architecture and the code as two separate things, but this seems to be the starting point for many of the ways in which software systems are communicated/documented. If you want a good example of this, take a look at the first chapter of "Software Architecture in Practice" where it describes the relationship between modules, components, and component instances. It makes my head hurt.

The model-code gap
This difference between the architecture and code views is also exaggerated by what George Fairbanks calls the "model-code gap" in his book titled "Just Enough Software Architecture" (highly recommended reading, by the way). George basically says that your architecture models will include abstract concepts (e.g. components, services, modules, etc) but the code usually doesn't reflect this. This matches my own experience of helping people communicate their software systems ... people will usually draw components or services, but the actual implementation is a bunch of classes sitting inside a traditional layered architecture. Actually, if I'm being honest, this matches my own experience of building software myself because I've done the same thing! :-)

The intersection of software architecture and code
My approach to all of this is to ensure that the architecture and code views of a software system are one and the same thing, albeit from different levels of abstraction. In other words, my primary focus when describing a software system is the static structure, which ranges from code (classes) right up through components and containers. I model this with my C4 approach, which recognises that software developers are the primary stakeholders in software architecture. Other views of the software system (deployment, infrastructure, etc) slot into place really easily when you understand the static structure.
To put this all very simply, your code should reflect the architecture diagrams that you draw. If your diagrams include abstract concepts such as components, your code should reflect this. If the diagrams and code don't line up, you have to question the value of the diagrams because they're creating a fantasy and there's little point in referring to them.

Challenging the traditional layered approach
This deserves a separate blog post, but something I also mentioned during the talk was that teams should challenge the traditional layered architecture and the way that we structure our codebase. One way to achieve a nice mapping between architecture and code is to ensure that your code reflects the abstract concepts shown on your architecture diagrams, which can be achieved by writing components rather than classes in layers. Another side-effect of changing the organisation of the code is less test-induced design damage. The key question to ask here is whether layers are architecturally significant building blocks or merely an implementation detail, which should be wrapped up inside of (e.g.) components. As I said, this needs a separate blog post.

Thoughts?
As I said, the slides are here. Aligning the architecture and the code raises a whole bunch of interesting questions but provides some enormous benefits for a software development team. A clean mapping between diagrams and code makes a software system easy to explain, the impact of change becomes easier to understand and architectural refactorings can seem much less daunting if you know what you have and where you want to get to. I'm interested in your thoughts on things like the following:
Convincing people to structure the code underlying their monolithic systems as a bunch of collaborating components seems to be a hard pill to swallow, yet micro-service architectures are going to push people to reconsider how they structure a software system, so I think this discussion is worth having. Thoughts?
I did a short overview of Hadoop YARN for our big data development team. The presentation covers the motivation for YARN, how it works, and its major weaknesses.
You can watch/download on slideshare
As I help more people go Agile, I try to simplify the most important concepts.
For me, one of the most important changes in Agile is what it means to the product development cycle.
I think a picture is worth a thousand words. I've put together a couple of simple visuals to show what it means to go from a Waterfall development approach to an Agile development approach.
Contrast the Waterfall Model with the Agile Model:
With these visuals, I attempted to show a couple of key ideas:
If you need to keep up with the pace of change, deal with changing requirements, and meet user demands while shipping value faster, Agile might be what you're looking for.
You might have seen the announcement I made that I just joined Gartner. You might be wondering what this means for my blog, right? Well… there will be some changes, but I think ultimately they will be good ones.
Just like with most things, there is good news and the not so good news.
How about we get the bad news out of the way first... So the not-so-good news is that this will be my last blog post, well at least for the foreseeable future. I will still keep it alive out here but will not be able to update it.
That leads to the good news: I will still be able to share my insights with all of you. I will continue to express myself through Gartner research notes, technology profiles, hype cycles and conferences. Who knows, I may even show up on Gartner blogs as well. While I do love what I have done with "Mike The Architect", blogging on any one platform/persona was never my goal. It was simply a vehicle, or a means to an end, to communicate my thoughts to all of you.
As you might imagine, this post is a bittersweet one for me. This closes one chapter and opens another for my public writing to all of you. I have really enjoyed blogging all these years about my observations, experiences and my wild-haired crazy ideas. Can you believe it's been 8 years of Enterprise Architecture blogging? I can't. Goes by fast.
I just want to say thank you to everyone that has subscribed to my blog, provided comments and believed in my guidance.
Well, not exactly fishin', but I'll be on a month-long vacation starting today.
I won't be posting new content, so we'll all have a break. Disappointing, I know.
If you've ever wanted to write an article for HighScalability this would be a great time :-) I'd be very interested in your experiences with containers vs VMs if you have some thoughts on the subject.
So if the spirit moves you, please write something.
See you on down the road...