Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!
Software Development Blogs: Programming, Software Testing, Agile Project Management
On April 27, something very cool is happening. A bunch of Windows Azure MVPs and community activists are organizing a Global Windows Azure Bootcamp. This is a completely free, one-day training event for Windows Azure, organized entirely by the community and presented in person all over the world.
I'm not sure if this is the largest community event ever - it is very cool to see how many places this event is happening. Below is the location map as it stands today – and new locations are being added daily. Right now there are almost 100 locations and several thousand attendees already registered to take part. Browse the location listings to find a location near you.
If you are interested in learning about Windows Azure or want more info, check out the Global Windows Azure Bootcamp website to learn more about the bootcamps. Then find a location near you, sign up to attend the event for free, and get involved with the Windows Azure community near you!
Hope this helps,
P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
Hey, it's HighScalability time:
Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge...
Gartner just released a report entitled "EA Practitioners Have Significant Influence on $1.1 Trillion in Enterprise IT Spend" that ties strongly to their Business Outcome Driven Enterprise Architecture. This is an interesting article because it shows the latest thinking from real EA practitioners, with some good statistics on where we stand as an industry.
What is also clear is that EA is now positioned to do what we have wanted to do for years: drive business results, not just technology decisions. This is a big opportunity for us, and it is now ours to lose.
EA Practitioners Have Significant Influence on $1.1 Trillion in Enterprise IT Spend
Fifty percent of enterprise architecture (EA) practitioners have a significant impact on enterprise IT budget activities and decisions, according to a recent survey by Gartner, Inc. A July 2012 Gartner survey of EA practitioners found that half of EA practitioners have an influence over their organization's IT budget allocation that is either "final decision maker" or "great deal of influence."
Based on the EA survey results from Gartner events in North America and Europe, analysts estimate that EA practitioners have a "final decision-making" influence on $331 billion in worldwide enterprise IT spend and a "great deal of influence" on $774 billion in worldwide enterprise IT spending. Overall, EA practitioners have an influence that is either "final decision maker" or "great deal of influence" on $1.1 trillion in worldwide enterprise IT spending.
"Overwhelmingly we find EA practitioners focused on delivering on business value and strategic transformation," said Philip Allega, managing vice president at Gartner. "Gone are the days of just 'doing EA' with little value or impact. Sixty-eight percent of organizations surveyed stated that they are focusing their EA program on aligning business and IT strategies, delivering strategic business and IT value, or enabling major business transformation."
Gartner is leading the way in defining and mastering a radical new approach to EA, which is business outcome-driven EA. Leading EA practitioners are focused on creating diagnostic deliverables to help business and IT leaders respond to business and technology disruptions.
"This new generation of EA practitioners offers technology and service providers (TSPs) with an opportunity as well as a threat," said Mr. Allega. "Technology and service providers should develop targeted marketing to this new generation of EA practitioner as they have a significant influence on their organization buying decisions. Those that fail to understand the priorities, strategic focus and impact of EA practitioners will jeopardize their ability to sell into an organization."
Gartner has identified the impact of EA trends on IT purchasing decisions, and has the following advice and recommendations to help TSPs target this audience more effectively:
In organizations supporting EA as strategic, and as collaborative between business leaders and IT, TSPs will increasingly find EA practitioners influencing IT spend.
EA practitioners have a high degree of influence over emerging technology purchases, with 52 percent of the EA practitioners surveyed reporting directly to a CIO or CTO. They are also "very involved" in integration consulting services (64 percent) and business applications (52 percent). As EA practitioners continue to focus on integrating and aligning with business priorities and actively working with business leaders, their degree of influence on business intelligence tools, workplace tools and business applications will likely increase as well.
Organizations starting, restarting or renewing their EA efforts present an opportunity for providers to market to and influence a new generation of EA practitioners.
The survey revealed that 77 percent of respondents were either restarting or renewing EA efforts (18 percent), initiating EA for the first time (34 percent) or taking EA efforts to the next level (25 percent). In organizations starting EA for the first time, EA practitioners have a significant influence on IT budget decisions, but significantly fewer have decision-making authority. These new and restarting organizations present an opportunity for TSPs to target a new generation of EA practitioners.
As organizations become more mature in supporting EA, they will have a greater degree of influence on IT budget allocations to products and services.
Many organizations begin their EA journey by focusing inside the IT organization on system consolidation, standardization and cost management. As they mature, this evolves into looking more closely at the "alignment" between the business strategy and IT strategy. From here the EA program evolves further to become "business outcome-oriented," such that in a mature EA program, other areas of decision making are guided and influenced by business outcome-driven EA.
Additional information is available in the Gartner report, "EA Practitioners Have Significant Influence on $1.1 Trillion in Enterprise IT Spend". The report is available on Gartner's website at http://www.gartner.com/resId=2286216.
In Don’t panic! Here’s how to quickly scale your mobile apps Mike Maelzer paints a wonderful picture of how Avocado, a mobile app for connecting couples, evolved to handle 30x traffic within a few weeks. If you are just getting started then this is a great example to learn from.
What I liked: it's well written, packing a lot of useful information in a little space; it's failure driven, showing the process of incremental change driven by purposeful testing and production experience; it shows awareness of what's important, in their case, user signup; a replica setup was used for testing, a nice cloud benefit.
Their biggest lesson learned is a good one: "It would have been great to start the scaling process much earlier. Due to time pressure we had to make compromises – like dropping four of our media resizer boxes. While throwing more hardware at some scaling problems does work, it's less than ideal."
Here's my gloss on the article:
Evolution One - Make it Work
Two days ago, I read the book Toyota Kata by Mike Rother. Like most management books, it hammers its central message home through repetition. Some people, like me, may find that a bit annoying. That does not make this book any less of a must-read, though. If you're interested in making Lean/Agile really work in your organization without the risk of organizational gravity eroding all your hard efforts over time, this book has the answer on how to do that. I'll be incorporating the concepts of the Toyota Kata in my consulting from now on. Empower yourself. Read this book now! Or at least check out my summary of it.
"Managers help people see themselves as they are; leaders help people to see themselves better than they are." – Jim Rohn
Actually, it's leadership development in a book. The book is, Intelligent Leadership: What You Need to Know to Unlock Your Full Potential, by John Mattone.
Intelligent Leadership is seriously a breakthrough book.
You should be able to tell from my book review that it's one of the best books on leadership development.
It works the inner and outer you – in a very deep and skillful way. It's among the best self-paced leadership development books available (and ultimately leadership is powerful personal development in action, as you learn to groom and grow your capabilities, and the capabilities of others, with skill).
It's the real deal, and the book includes significant leadership tools for helping you make the most of what you've got. Mattone is an executive leadership coach and it shows. His book is deep and his leadership tools are powerful. The beauty is just how much he's packed into an actual book, so the tools are right there at your fingertips.
I like the fact that Mattone organizes leadership styles into a set of 9 leadership types:
He says we're a mix of all of them, and that's where our power comes from -- if we know how to harness it. To harness our personal power and to mature our leadership capabilities, we need to learn how to sharpen our strengths, and address our weaknesses. We can use these leadership types to see ourselves and to see others, and to better integrate our strengths when we interact with others.
Another powerful aspect of the book is how Mattone connects your inner you with your outer you and shows the flow and relationships:
Self-Concept and Character (values, beliefs, and references) -> Thoughts -> Emotions -> Behavioral Tendencies -> Tactical/Strategic Competencies -> Self-Concept and Character.
This is a book that can be your short-cut for getting ahead in today's super competitive world. You don't have to hope for somebody to identify you as high potential. You can own this. You can take your leadership abilities to the next level, using Mattone's prescriptive guidance and leadership tools. You can immediately use his leadership tools to assess where you are, and to identify a very specific leadership development plan.
I would put this book up there with Stephen Covey's The 7 Habits of Highly Effective People and Tony Robbins' Unlimited Power. It's more than a book. It's a framework. It's a playbook for building your personal leadership dojo.
When you read the book, John Mattone's 30+ years of experience, and his insight as a leadership coach, will quickly become apparent.
For a "movie trailer" style review of the book, and some of my favorite parts, check out my book review of Intelligent Leadership.
Edward de Bono wrote that scientists and engineers had proven that man-powered flight was impossible because a human couldn't generate enough horsepower to raise a plane off the ground. Then Paul MacCready did it successfully because he didn't know it was impossible.
What would you do if you didn't know it was "impossible"?
As Walt Disney said, "It's kind of fun to do the impossible." Disney was a dreamer and a doer. He was a man of action. He said, "The way to get started is to quit talking and begin doing." Walt Disney turned dreams into reality. According to Walt Disney, the secret of turning dreams into reality is the four C's: Curiosity, Confidence, Courage, and Constancy. (See Walt Disney Quotes for a more comprehensive list of Walt Disney's mantras and thoughts.)
When you study success, "action" is the active ingredient. Edward de Bono agrees that there are "describers" and "doers", where describers are happy enough just to describe or explain something in detail, while "doers" use action to test their ideas and get feedback.
Edward de Bono is a fan of combining thought with action: "… there is a continuous synergy between thought and action. The suggestion is that you cannot smell a flower at a distance – you have to get up close to it."
Walt Disney was a fan of combining imagination with action. As he said, "I dream, I test my dreams against my beliefs, I dare to take risks, and I execute my vision to make those dreams come true."
Even the best-laid schemes of mice and men often go awry when the rubber meets the road. You fall down, you get back up, you learn, you change your approach, and you try again.
And that's where Curiosity, Confidence, Courage, and Constancy come into play.
It's how you make your dreams happen.
In the words of Walt Disney, "If you can dream it, you can do it."
This is a repost of the blog entry written by NuoDB's Tommy Reilly.
We at NuoDB were recently given the opportunity to kick the tires on the Google Compute Engine by our friends over at Google. You can watch the entire Google Developer Live Session by clicking here. In order to assess the capabilities of GCE, we decided to run the same YCSB-based benchmark we ran at our General Availability launch back in January. For those of you who missed it, we demonstrated running the YCSB benchmark on a 24-machine cluster running on our private cloud in the NuoDB datacenter. The salient results were 1.7 million transactions per second with sub-millisecond latencies...
Today we released some great enhancements to Windows Azure. These new capabilities include:
All of these improvements are now available to start using immediately (note: some services are still in preview). Below are more details on them:
Active Directory: Announcing the General Availability release
I'm excited to announce the General Availability (GA) release of Windows Azure Active Directory! This means it is ready for production use.
All Windows Azure customers can now easily create and use a Windows Azure Active Directory to manage identities and security for their apps and organizations. Best of all, this support is available for free (there is no charge to create a directory, populate it with users, or write apps against it).
Creating a New Active Directory
All Windows Azure customers (including those that manage their Windows Azure accounts using Microsoft ID) can now create a new directory by clicking the "Active Directory" tab on the left hand side of the Windows Azure Management Portal, and then by clicking the "Create your directory" link within it:
Clicking the "Create Your Directory" link above will prompt you to specify a few directory settings – including a temporary domain name to use for your directory (you can then later DNS-map any custom domain you want to it – for example: mycompanyname.com):
When you click the "Ok" button, Windows Azure will provision a new Active Directory for you in the cloud. Within a few seconds you'll then have a cloud-hosted directory deployed that you can use to manage identities and security permissions for your apps and organization:
Managing Users within the Directory
Once a directory is created, you can drill into it to manage and populate new users:
You can choose to maintain a "cloud only" directory that lives and is managed entirely within Windows Azure. Alternatively, if you already have a Windows Server Active Directory deployment in your on-premises environment, you can set it up to federate or directory-sync with a Windows Azure Active Directory you are hosting in the cloud. Once you do this, anytime you add or remove a user within your on-premises Active Directory deployment, the change is immediately reflected in the cloud as well. This is really great for enterprises and organizations that want to have a single place to manage user security.
Clicking the "Directory Integration" tab within the Windows Azure Management Portal provides instructions and steps on how to enable this:
Starting with today's release, we are also greatly simplifying the workflow involved in granting and revoking directory access permissions to applications. This makes it much easier to build secure web or mobile applications that are deployed in the cloud and that support single sign-on (SSO) with your enterprise Active Directory.
You can enable an app to have SSO and/or richer directory permissions by clicking the new "Integrated Apps" tab within a directory you manage:
Clicking the "Add an App" link will then walk you through a quick wizard that you can use to enable SSO and/or grant directory permissions to an app:
Windows Azure Active Directory supports several of the most widely used authentication and authorization protocols. You can find more details about the protocols we support here.
Today's general availability release includes production support for SAML 2.0 – which can be used to enable Single Sign-On/Sign-out support from any web or mobile application to Windows Azure Active Directory. SAML is particularly popular with enterprise applications and is an open standard supported by all languages + operating systems + frameworks.
Today's release of Windows Azure Active Directory also includes production support for the Windows Azure Active Directory Graph – which provides programmatic access to a directory using REST API endpoints. You can learn more about how to use the Windows Azure Active Directory Graph here.
In the next few days we are also going to enable a preview of OAuth 2.0/OpenID support which will also enable Single Sign-On/Sign-out support from any web or mobile application to Windows Azure Active Directory.
For a more detailed discussion of the new Active Directory support released today, read Alex Simons' post on the Active Directory blog. Also review the Windows Azure Active Directory documentation on MSDN and the following tutorials on the windowsazure.com website.
Windows Azure Backup: Enables secure offsite backups of Windows Servers in the cloud
Today's Windows Azure update also includes the preview of some great new services that make it really easy to enable backup and recovery protection with Windows Server.
With the new Windows Azure Backup service, we are adding support for offsite backup protection to Windows Azure for Windows Server 2008 R2 SP1, Windows Server 2012, Windows Server 2012 Essentials, and System Center Data Protection Manager 2012 SP1. You can manage cloud backups using the familiar backup tools that administrators already use on these servers - and these tools now provide similar experiences for configuring, monitoring, and recovering backups, be it to local disk or Windows Azure Storage. After data is backed up to the cloud, authorized users can easily recover backups to any server. And because incremental backups are supported, only changes to files are transferred to the cloud. This helps ensure efficient use of storage, reduced bandwidth consumption, and point-in-time recovery of multiple versions of the data. Configurable data retention policies, data compression, encryption and data transfer throttling also offer you added flexibility and help boost efficiency.
Managing your Backups in the Cloud
To get started, you first need to sign up for the Windows Azure Backup preview.
Then login to the Windows Azure Management Portal, click the New button, choose the Recovery Services category and then create a Backup Vault:
Once the backup vault is created you'll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it:
Once the servers are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. Tutorials are available for each of these:
Within the Windows Azure Management Portal, you can drill into a backup vault and click the SERVERS tab to see which Windows Servers have been configured to use it. You can also click the PROTECTED ITEMS tab to view the items that have been backed up from the servers.
Web Sites: Monitoring and Diagnostics Improvements
Today's Windows Azure update also includes a bunch of new monitoring and diagnostic capabilities for Windows Azure Web Sites. This includes the ability to easily turn tracing on/off and store trace + log information in log files that can be easily retrieved via FTP or streamed to developer machines (enabling developers to see it in real time – which can be super useful when you are trying to debug an issue and the app is deployed remotely). The streaming support allows you to monitor the "tail" of your log files – so that you only retrieve content appended to them – which makes it especially useful when you quickly want to check something out without having to download the full set of logs.
The new tracing support integrates very nicely with .NET's System.Diagnostics library as well as ASP.NET's built-in tracing functionality. It also works with other languages and frameworks. The real-time streaming tools are cross-platform and work with Windows, Mac and Linux dev machines.
Read Scott Hanselman's awesome tutorial and blog post that covers how to take advantage of this new functionality. It is very, very slick.
Other Cool Things
In addition to the features above, there are several other really nice improvements added with todayâ€™s release. These include:
The above features are now available to start using immediately (note: some of the services are still in preview). If you don't already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about how to build apps with it!
Hope this helps,
P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
One of the things I'm currently doing with a number of software teams is teaching them how to draw pictures. As an industry we've got really good at visualising the way that we work using things like Kanban boards and story walls, but we've forgotten how to visualise the software that we're building. In a nutshell, many teams are trying to move fast but they struggle to create a shared vision that the whole team can work from, which ultimately slows them down. And few people use UML nowadays, which just exacerbates the problem. I've written an article about this and it's due for publication soon (I'll come back and add a link), plus it's covered in my Software Architecture for Developers ebook and in a number of talks that I'm doing around Europe (ITARC, IASA UK, Mix-IT) and the US (SATURN) during April. Here are the slides from Agile software architecture sketches - NoUML! that I presented a few weeks ago in Dublin.
The TL;DR version
The TL;DR version of this post is simply this ... if you're building monolithic software systems but think of them as being made up of a number of smaller components, ensure that your codebase reflects this. Consider organising your code by component (rather than by layer or feature) to make the mapping between software architecture and code explicit. If it's hard to explain the structure of your software system, change it.
Decomposition into components
For the purpose of this post, let's assume visualising a software system isn't a problem and that you're sketching some ideas related to the software architecture for a new system you've been tasked to build. An important aspect of "just enough" software architecture is to understand how the significant elements of a software system fit together. For me, this means going down to the level of components, services or modules. It's worth stressing this isn't about understanding low-level implementation details, it's about performing an initial level of decomposition. The Wikipedia page for Component based development has a good summary, but essentially a component might be something like a risk calculator, audit logger, report generator, data importer, etc. The simplest way to think about a component is that it's a set of related behaviours behind an interface, which may be implemented using one or more collaborating classes. Good components share a number of characteristics with good classes. They should have high cohesion, low coupling, a well-defined public interface, good encapsulation, etc.
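As a minimal sketch of that idea (the names and the calculation are illustrative, not taken from any real risk system), a "risk calculator" component in Java is just related behaviour behind an interface, with the implementing class kept out of the component's public face:

```java
// Hypothetical component: a set of related behaviours behind an interface.
interface RiskCalculator {
    double expectedLoss(double exposure, double probabilityOfDefault);
}

// The implementation (and any collaborating helper classes) stays hidden
// behind the interface, giving the component good encapsulation.
class SimpleRiskCalculator implements RiskCalculator {
    @Override
    public double expectedLoss(double exposure, double probabilityOfDefault) {
        // Expected loss = exposure x probability of default.
        return exposure * probabilityOfDefault;
    }
}
```

Callers depend only on `RiskCalculator`, so the implementation can be swapped or refactored without rippling through the rest of the system.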
There are a number of benefits to thinking about a software system in terms of components, but essentially it allows us to think and talk about the software as a small number of high-level abstractions rather than the hundreds and thousands of individual classes that make up most enterprise systems. The photo below shows a typical component diagram produced during the training classes we run. Groups are asked to design a simple financial risk system that needs to pull in some data, perform some calculations and generate an Excel report as the output.
This sketch includes the major components you would expect to see for a system that is importing data, performing risk calculations and generating a report. These components provide us with a framework for partitioning the behaviour within the boundary of our system and it should be relatively easy to trace the major use cases/user stories across them. This is a really useful starting point for the software development process and can help to create a shared vision that the team can work towards. But it's also very dangerous at the same time. Without technology choices (or options), this diagram looks like the sort of thing an ivory tower architect might produce and it can seem very "conceptual" for many people with a technical background.
Talk about components, write classes
People generally understand the benefit of thinking about software as higher level building blocks and you'll often hear people talking in terms of components when they're having architecture discussions. This often isn't reflected in the codebase though. Take a look at your own codebase. Can you clearly see components or does your codebase reflect some other structure? When you open up a codebase, it will often reflect some other structure due to the organisation of the code. Mark Needham has a great post called Coding: Packaging by vertical slice that talks about one approach to code organisation and a Google search for "package by feature vs package by layer" will throw up lots of other discussions on the same topic. The mapping between the architectural view of a software system and the code is often very different. This is sometimes why you'll see people ignore architecture diagrams (or documentation) and say "the code is the only single point of truth".
Auto-generating architecture diagrams
To change tack slightly, I was in Dublin a few weeks ago and I met Chris Chedgey, who is part of the inspiration behind this post. Chris is the co-founder of a company called Headway Software and they have a product called Structure101. You should take a look if you've not seen it before, they have some cool stuff in the pipeline. I won't do their product any justice by trying to summarise what it does, but one of its many features is to visualise and understand an existing codebase.
When I teach people how to visualise their software systems, we create a number of simple NoUML sketches at different levels of abstraction. These are the context, containers and components diagrams. This context, containers and components approach is basically just a tree structure. A system is made up of containers (e.g. a web server, application server, database, etc), each of which is further made up of components. You can see some example diagrams on Flickr and in my book.
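Since the context > containers > components view really is just a tree, it can be sketched with a tiny data structure (a hypothetical model for illustration, not the API of any real tool):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal tree model for the context -> containers -> components view.
class Element {
    final String name;
    final List<Element> children = new ArrayList<>();

    Element(String name) {
        this.name = name;
    }

    // Adds a child element and returns it, so trees can be built fluently:
    // a system contains containers, which in turn contain components.
    Element add(String childName) {
        Element child = new Element(childName);
        children.add(child);
        return child;
    }
}
```

A financial risk system could then be modelled as `new Element("Risk System")`, with containers added via `add("Application Server")` and components nested one level further down.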
Given this is really just a tree structure, it should be fairly straightforward to auto-generate these diagrams from an existing codebase. And perhaps there is a tool out there that can do this, but I've never seen one that has worked really well. Microsoft Visual Studio can generate some layer diagrams but I've never met anybody that really raves about the architecture diagram support. Most tools generate diagrams showing dependencies between packages or classes but they don't tend to show components. And what's a component anyway? Is any class that implements an interface a component? If you're using inversion of control, perhaps everything that you inject is a component?
There are a number of reasons why auto-generating such diagrams is tricky but, once we start coding, much of the semantics associated with "containers" (runtime environments, process boundaries, etc) and "components" becomes lost in the sea of classes that make up the typical codebase. Many developers break their systems up into a number of projects within their IDEs to represent reusable libraries and deployable units, but external tools often don't have access to this information if they are solely working from a bunch of JAR files or DLLs (for example). In essence, the information related to the abstract structural elements isn't adequately represented within a codebase. If you take a look at most codebases, I'm fairly sure that you could come up with a set of rules as to what defines a component, but perhaps it would be easier to simply make these concepts explicit. Some techniques already exist to do this (e.g. the Architecture Description Language) but I've never seen them used in the corporate world.
Packaging by component
To bring this discussion back to code, the organisation of the codebase can really help or hinder here. Organising a codebase by layer makes it easy to see the overall structure of the software but there are trade-offs. For example, you need to delve inside multiple layers (e.g. packages, namespaces, etc) in order to make a change to a feature or user story. Also, many codebases end up looking eerily similar given the fairly standard approach to layering within enterprise systems. Uncle Bob Martin says that if you're looking at a codebase, it should scream something about the business domain. Organising your code by feature rather than by layer gives you this, but again there are trade-offs. A variation I've been experimenting with is organising code explicitly by component. The following screenshot shows an example of this in the codebase for my techtribes.je website (a content aggregator and portal for Jersey's digital sector). This screenshot only shows the core components; there's a separate Spring MVC project and the controllers use the components illustrated here.
This is similar to packaging by feature, but it's more akin to the "micro services" that Mark Needham talks about in his blog post. Each sub-package of je.techtribes.component houses a separate component, complete with its own internal layering and Spring configuration. As far as possible, all of the internals are package scoped. You could potentially pull each component out and put it in its own project or source code repository to be versioned separately. This approach will likely seem familiar to you if you're building something that has a very explicit loosely coupled architecture, such as a distributed messaging system made up of loosely coupled components. I'm fairly confident that most people are still building something more monolithic in nature though, despite thinking about their system in terms of components. I've certainly packaged *parts* of monolithic codebases using a similar approach in the past, but it's tended to be fairly ad hoc. Let's be honest, organising code into packages isn't something that gets a lot of brain-time, particularly given the refactoring tools that we have at our disposal. Organising code by component lets you explicitly reflect the concept of "a component" from the architecture into the codebase. If your software architecture diagram screams something about your business domain (and it should), this will be reflected in your codebase too.
The structural elements of software
We could create a convention here to say that all sub-packages of je.techtribes.component are components, but it would be much easier to explicitly mark components using metadata. In Java, we could use annotations to do this, attributes in .NET, and so on. If we used the same approach for other structural elements of software (e.g. services, layers, containers, etc), tool vendors could use this metadata to generate meaningful and *simple* architecture diagrams automatically. Plus, they could also use this structural information to generate dependency diagrams that focus on components rather than classes. I've started experimenting with annotations as a way to do this, and I've created a GitHub repo to store whatever I come up with.
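To make this concrete, here is a minimal sketch of what such a component-marking annotation might look like in Java. The annotation name, its retention policy, and the example component are my own assumptions for illustration, not taken from the techtribes.je codebase:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical marker annotation identifying a class as the public face
// of an architectural component. RUNTIME retention means tooling could
// discover marked components via reflection and draw component-level diagrams.
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@interface Component {
    String description() default "";
}

// Usage: the entry point of a component package is marked explicitly,
// so the architectural concept of "a component" is visible in the code.
@Component(description = "Aggregates tweets for Jersey's digital sector")
class TwitterComponent {
}
```

The design choice here is that metadata travels with the code, so diagrams generated from it can't drift out of date the way hand-drawn ones do.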
The major caveat to all of this is that designing a software system based around components isn't "the only way". It's a nice way to think about software systems that are more monolithic in nature, and it's a great stepping stone to designing loosely coupled architectures. But it isn't a silver bullet. Regardless of how you design software, I do hope this post has got you thinking about the mapping between software architecture and how it's reflected in the code.
Software architecture and coding are often seen as mutually exclusive disciplines, and there's often very little mapping from the architecture into the code and back again. Effectively and efficiently visualising a software architecture can help to create a good shared vision within the team, which can help the team go faster. Having a simple and explicit mapping from the architecture to the code can help even further, particularly when you start looking at collaborative design and collective code ownership. Furthermore, it helps bring software architecture firmly back into the domain of the development team, which is ultimately where it belongs.
I read a lot. I read fast. I go through a lot of books each month. Books help give me new ideas and ways to do things better, faster, and cheaper. Books are one of the best ways I get the edge in work and life.
Here are 5 of the best books I've read recently, along with links to my reviews:
When Can You Start?, as the name implies, is all about turning interviews into job offers. It's a quick read and it tackles many of the common pitfalls you can run into during the interview process. Best of all, it provides a methodical approach for preparing for your interviews, by using your resume as a platform for telling your story in a relevant way. If you're trying to find a job, this is a great book for helping you get your head in the game, and stand out from the crowd, during the interview process.
Advice is for Winners is a cornucopia of insights and actions for creating an effective board of advisors to help you in work and life. I thought it would be a fluff book, but it was actually a very technical guide. It's written by an engineer, so the advice is very specific, and very data-driven. It includes a lot of lists, such as 6 benefits of getting advice, 22 questions for scoring a scenario, and 28 reasons why people resist advice. Mentors are short-cuts, and getting better advice is how you get ahead.
The Power of Starting Something Stupid is all about how to crush fear, make dreams happen, and live without regret. In the foreword, Stephen Covey wrote: "It reminds each of us that all things are possible, that life is short, and to take action now."
Stories that Move Mountains introduces the CAST system for creating visual stories. It's a powerful book about how to improve your presentation skills using storytelling and visuals. I ended up using some of the ideas in a recent presentation to senior leadership, and it helped me prioritize and sequence my slides in a far more effective way.
It's Already Inside directly addresses the question, "Are leaders born or made?" The book is a really great synthesis of the leadership habits and practices that will make you a more productive and more effective leader.
Each of these books has something for you in it. Of course, the challenge for you is to dive inside, find the gems that ring true for you, and apply them.
How do you layer a programmable Internet of smart things on top of the web? That's the question addressed by Dominique Guinard in his ambitious dissertation: A Web of Things Application Architecture - Integrating the Real-World (slides). With the continued siloing of content, perhaps we can keep our things open and talking to each other?
In the architecture, things are modeled using REST: they will be findable via search, they will be social via a social access controller, and they will be mashupable. Here's a great graphical overview of the entire system:
Believe it or not, the post that draws the most traffic to this blog is the one about C# static interfaces!
In October 2009, I simply tried to imagine where the idea of C# static interfaces could lead us, and, since then, that post has received more page views (> 15%) than my home page!
And since then, nothing has moved in this area of the C# language, and I don't expect it to happen soon.
But something else happened...
The thing is called Statically Resolved Type Parameters (an F# feature), and it is closer to C++ templates than to C# generics.
The trick is that you can define an inline function with statically resolved types, denoted by a ^ prefix. The usage of the methods defined on the type is not given here by an interface, but by a constraint on the resolved type:
let inline count (counter: ^T) =
    let value = (^T: (member Count : int) counter)
    value
Here, the count function takes a counter of type ^T (statically resolved).
The second line expresses that ^T should have a member Count of type int, and that it will be called on counter to get the result value!
Now, we can call count on various types that have a Count member property, like:
type FakeCounter() =
    member this.Count = 42

type ImmutableCounter(count: int) =
    member this.Count = count
    member this.Next() = ImmutableCounter(count + 1)

type MutableCounter(count: int) =
    let mutable count = count
    member this.Count = count
    member this.Next() = count <- count + 1
without needing an interface!
For instance:
let c = count (new FakeCounter())
True, this is compile-time duck typing!
And it works with methods:
let inline quack (duck: ^T) =
    let value = (^T: (member Quack : int -> string) (duck, 3))
    value
This will call a Quack method, taking an int and returning a string, with the value 3 on any object passed to it that has a method matching the constraint.
And, magically enough, you can do it with static methods:
let inline nextThenStaticCount (counter: ^T) =
    (^T: (member Next : unit -> unit) counter)
    let value = (^T: (static member Count : int) ())
    value
This function calls an instance method called Next, then gets the value of a static property called Count and returns it!
It also works with operators:
let inline mac acc x y = acc + x * y
Notice the signature of this function:
acc: ^a -> x: ^c -> y: ^d -> ^e
    when (^a or ^b) : (static member ( + ) : ^a * ^b -> ^e)
    and  (^c or ^d) : (static member ( * ) : ^c * ^d -> ^b)
It accepts any types as long as they provide expected + and * operators.
The only thing is that a specific implementation of the function will be compiled for each type on which it's called. That's why it is called statically resolved.
You can use this kind of method from F# code but not from C#.
No need for static interfaces in C#: use F#!
Sources of Insight is ready for action. It's my blog focused on proven practices for personal effectiveness for work and life. I started it a while back to help you sharpen your skills and to grow your personal capabilities.
The big idea is to help people "stand on the shoulders of giants", drawing from great books, great people, and great quotes.
The big change is the user experience. I upgraded the theme to a modern and responsive design, so now you should be able to read it more easily, even from your phone. The other big change is that it's easier to explore the knowledge base. With the new menu, I got the chance to better surface key topics for you, such as Emotional Intelligence, Personal Effectiveness, Leadership, Personal Development, and Productivity. It's also easier to dive right into the articles or browse by key topics, and to explore key resources like Checklists or How Tos.
A Garden of Greatness for YOU
Great Books, Great People, and Great Quotes are also front and center. With Great Books, you can easily browse the best business books, the best leadership books, or the best time management books. With Great People, you can browse lessons learned from Tony Robbins, John Wooden, Stephen Covey, and more. With Great Quotes, you can browse timeless wisdom from folks like Confucius, Buddha, Gandhi, and more.
It's life wisdom at your fingertips.
Sources of Insight is meant to be a "Garden of Greatness" where you can find specific tools and techniques to help you get the edge in work and life. It also features guest posts from best-selling authors and experts from around the world, who share what they do best.
It's a work in progress. Your feedback is always welcome to help shape it into something more useful, relevant, insightful, and actionable. I'll be focusing on sharing a lot more principles, patterns, and practices for key topics in the near future.
Subscribe to Sources of Insight
I'll add more social features in the future; meanwhile, I'm still exploring the best ways to create an effective platform that's simple and scales. It's up to 210,000 monthly readers now, so I'm trying to focus on slow growth with a strong platform.
While the overall site is focused on personal effectiveness, and especially topics like emotional intelligence, personal development, leadership, and productivity, be sure to let me know if there are scenarios or topics you'd like me to address.
Deciding to use a managed NoSQL datastore is a great step toward running a fast, scalable and resilient application without needing to be an expert in highly available architecture. But how do you know which technology is the best for your application? How do you know whether the provider's performance claims are true? You are putting your application on someone else's infrastructure, and that requires some hard answers about their claims.
To determine the suitability of a provider, your first port of call is to benchmark. Choosing a service provider is often done in a number of stages. The first is to shortlist providers based on capabilities and claimed performance, ruling out those that do not meet your application requirements. The second is to look for benchmarks conducted by third parties, if any exist. The final stage is to benchmark the service yourself.
In this article we will show you how to run some preliminary benchmarks against two managed NoSQL systems. For this test we will compare Instaclustr and Amazon DynamoDB using the Yahoo Cloud Serving Benchmark (YCSB). Instaclustr provides managed Apache Cassandra hosting, and DynamoDB is Amazon's own managed key-value store solution...
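YCSB drives real client libraries against a live cluster, but the core measurement idea can be sketched in a few lines of Java. This is not YCSB itself: the in-memory map below is just a stand-in for a real datastore client, and the class and method names are my own. The point is the methodology, namely timing each operation individually and reporting a percentile rather than an average:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of a YCSB-style measurement loop. A HashMap stands in for
// the datastore client; in a real benchmark each put would be a network call.
class MiniBenchmark {
    static Map<String, String> store = new HashMap<>();

    // Time a single write operation in nanoseconds.
    static long timedPut(String key, String value) {
        long start = System.nanoTime();
        store.put(key, value);
        return System.nanoTime() - start;
    }

    // Return the p-th percentile (0 < p <= 1) of the recorded latencies.
    static double percentile(List<Long> latencies, double p) {
        List<Long> sorted = new ArrayList<>(latencies);
        Collections.sort(sorted);
        int idx = (int) Math.ceil(p * sorted.size()) - 1;
        return sorted.get(Math.max(idx, 0));
    }

    // Run a simple load phase and report the 95th-percentile write latency.
    static double runLoad(int ops) {
        List<Long> latencies = new ArrayList<>();
        for (int i = 0; i < ops; i++) {
            latencies.add(timedPut("user" + i, "payload" + i));
        }
        return percentile(latencies, 0.95);
    }
}
```

Percentiles matter because managed services often have good average latency but long tails; comparing p95 or p99 between providers tells you far more than a mean.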
Continuing on with our demystification series, I will talk about the comments I hear from people with regards to the TOGAF certification itself and the process. When I hear comments about this topic, they usually gravitate from one extreme to the other; I rarely hear a middle ground. For obvious reasons, this is an area of extreme passion, and rightfully so. After all, we are talking about your career credentials, your time invested in learning, and the stressful certification process that architects have to go through.
In this post, I will talk about the specific myth that TOGAF certification is weak. I suppose that the term "weak" is a matter of perspective. As we walk through the post, we can explore whether TOGAF certification is weak and add a bit more qualification around that term.
With a total of 21,390 certified TOGAF practitioners worldwide, the TOGAF certification has proved to be a market leader in the industry. With all those practicing TOGAF architects and the amount of focus on TOGAF, there are bound to be opinions and perceptions around what it takes to become certified, along with the level of quality in the process.
Taking a step back, TOGAF certification is based on The Open Group's extensive experience certifying UNIX implementations. The Open Group believed that the certification process needed to be demonstrably objective, that is, the same results would be achieved regardless of who executed the process. So, in addition to the publication of the TOGAF framework, The Open Group membership defined a policy for certifying TOGAF products (specifically tools and training), services (consulting), and individuals (practitioners). The requirements for certifying TOGAF tools, training courses, professional services, and individual architects are defined by four TOGAF product standards. TOGAF-certified training courses and TOGAF-certified professional services must be delivered by TOGAF-certified architects.
There are two ways an architect can become TOGAF certified: by taking TOGAF-certified training, or by passing a TOGAF-certified examination. The training must address, and the examination will test, knowledge and awareness of TOGAF, and a thorough and complete knowledge of the elements of TOGAF listed in the TOGAF 9.1 framework specification.
Is the Certification Weak?
Let's look at the areas where I have heard scrutiny of the TOGAF certification:
Achieving the Certification
As I was sitting in on a training session for TOGAF a few months back, this topic came up. It's a small misconception, but I still wanted to talk about it because I think it's an important point to understand. The question at the training session was, "When do we take the test to become certified?" Most people I talk to believe it is simple to get TOGAF certified. The common view (at least among the folks I talk to) is that all you need to do is attend TOGAF training and then take a test at the end of the 4 days.
This simply isn't true. To preserve the integrity of the certification process, The Open Group uses a third party called Prometric to administer the defined process. This ensures that training providers or other educators cannot alter the process to make it easier (or harder, for that matter) for candidates.
It's subtle, but there is an important point here: the TOGAF certification isn't something you can get simply by attending the training and collecting an award at the end of the class. There is a lot more to it.
How does the Pass Mark Compare to Other Certifications
The TOGAF Level 2 certification, which most people pursue, has a pass mark of 60%. I think this is a little low, but acceptable. Let's face it, there is a lot of material to learn inside TOGAF. I'm not convinced that raising the pass mark would yield significantly better results. I believe that there are other factors at play to increase its benefits to practitioners.
Let's compare TOGAF to other certifications. Below are industry leading certifications with their pass marks:
As you can see from this sampling of certifications in other disciplines, TOGAF's pass mark is not that dissimilar from the others. Again, slightly lower, but still in the same ballpark.
Hopefully I was able to dispel some of the myths about TOGAF certification with some hard numbers and comparisons. As I have said a few times in this post, I think the pass mark is on the low side, but it's still in the right range and isn't drastically different from the rest of the industry.
So, love TOGAF or hate TOGAF, it is the market leader in Enterprise Architecture certifications. The number of TOGAF-certified practitioners continues to increase year after year, and we see continued support from organizations worldwide that recognize it as the de facto standard. I see the evidence from two primary areas:
Last week I gave a big 3-hour presentation (oops, I forgot the pause...) at the DevoxxFr conference.
It was the second edition of this big Java-oriented event in Paris, and a big success: 1,400 attendees and 180 speakers for 160 presentations.
I saw a lot of interesting talks there and met nice people! I can sincerely recommend it to anyone, even those not fluent in Java.
I posted the slides of my talk on SlideShare (French only).
I'll post a link to the recorded video as soon as it's available.
The slides don't contain the details of the F# live coding of a Uno game, but it was quite similar to my SimpleCQRS F# implementation on GitHub.
Someone noticed that it was an F# presentation at a JVM conference, while there was no F# at the last TechEd...