
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Architecture

Cloud Architecture Revolution

The introduction of cloud technologies is not a simple evolution of existing ones, but a real revolution. Like all revolutions, it changes points of view and redefines all the meanings. Nothing is as before. This post analyzes some key words and concepts usually used in traditional architectures, redefining them from the standpoint of the cloud. Understanding the new meaning of these words is crucial to grasp the essence of a pure cloud architecture.

"There is no greater impediment to the advancement of knowledge than the ambiguity of words." (THOMAS REID, Essays on the Intellectual Powers of Man)

Nowadays, architectures are required to go beyond the normal concepts of scalability: to support millions of users (WhatsApp: 500 million), billions of transactions per day (Salesforce: 1.3 billion) and five 9s of availability (AOL: 99.999%). I wish all of you the success of the examples cited above, and you should not think that such mind-boggling numbers are completely impossible to reach. Using cloud technology, everyone can create a service with a small investment and immediately have a world stage. If successful, the architecture must be able to scale appropriately.

Using the same design criteria, or simply moving the current configuration to the cloud, does not work and could reveal unpleasant surprises.

Infrastructure - commodity HW instead of high-end HW

Categories: Architecture

Why Agile?

I thought I had written about “Why Agile” before, but I don’t see anything crisp enough.

Anyway, here’s my latest rundown on Why Agile?

  1. Increase customer involvement which can build empathy and avoid rework
  2. Learn faster which means you can adapt to change
  3. Improve quality through focus
  4. Reduce risk through shorter feedback loops and customer interaction
  5. Simplify by getting rid of overhead and waste
  6. Reduce cycle time through timeboxing and parallel development
  7. Improve operational awareness through transparency
  8. Drive process improvement through continuous improvement
  9. Empower people through less mechanics and more interaction, continuous learning, and adaptation
  10. Flow more value through more frequent releases and less “big bang”

Remember that nature favors the flexible and agility is the key to success.

You Might Also Like

Agile vs. Waterfall

Agile Life-Cycle Frame

Methodologies at a Glance

Roles on Agile Teams

The Art of the Agile Retrospective

Categories: Architecture, Programming

Is there a future for Map/Reduce?


Google’s Jeffrey Dean and Sanjay Ghemawat filed the patent request and published the map/reduce paper 10 years ago (2004). According to Wikipedia, Doug Cutting and Mike Cafarella created Hadoop, with its own implementation of Map/Reduce, one year later at Yahoo – both these implementations were done for the same purpose: batch indexing of the web.

Back then, the web began its “web 2.0″ transition: pages became more dynamic and people began to create more content, so an efficient way to reprocess and build the web index was needed, and map/reduce was it. Web indexing was a great fit for map/reduce since the initial processing of each source (web page) is completely independent from any other – i.e. a very convenient map phase – and you then need to combine the results to build the reverse index. That said, even the core Google algorithm – the famous PageRank – is iterative (so less appropriate for map/reduce), not to mention that as the internet got bigger and updates became more and more frequent, map/reduce wasn’t enough. Again Google (who seem to be consistently a few years ahead of the industry) began coming up with alternatives like Google Percolator or Google Dremel (both papers were published in 2010; Percolator was introduced that year, and Dremel had been used inside Google since 2006).
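
To make the web-indexing example concrete, here is a toy sketch of the map/reduce shape of the problem. It is plain Java rather than actual Hadoop code, and the data is made up; the point is that the map phase handles each page completely independently, while the reduce phase groups the emitted (word, page) pairs into the reverse index.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ToyInvertedIndex {

    public static void main(String[] args) {
        Map<String, String> pages = new LinkedHashMap<>();
        pages.put("page1.html", "big data and map reduce");
        pages.put("page2.html", "big clusters reduce cost");

        // Map phase: each page is processed independently of all others,
        // emitting (word, page) pairs - this is why indexing fits so well.
        List<String[]> pairs = new ArrayList<>();
        for (Map.Entry<String, String> page : pages.entrySet()) {
            for (String word : page.getValue().split("\\s+")) {
                pairs.add(new String[]{word, page.getKey()});
            }
        }

        // Reduce phase: group all pairs with the same word to build the
        // reverse index: word -> pages containing that word.
        Map<String, List<String>> index = new HashMap<>();
        for (String[] pair : pairs) {
            index.computeIfAbsent(pair[0], k -> new ArrayList<>()).add(pair[1]);
        }

        System.out.println(index);
        // e.g. {big=[page1.html, page2.html], reduce=[page1.html, page2.html], ...}
    }
}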

So now it is 2014, and it is time for the rest of us to catch up with Google and get over Map/Reduce, for multiple reasons:

  • end users’ expectations (they hear “big data” but interpret that as “fast data”)
  • iterative problems like graph algorithms, which are inefficient as you need to load and reload the data on each iteration
  • continuous ingestion of data (increments coming in as small batches or streams of events), where joining to existing data can be expensive
  • real-time problems – both queries and processing

In my opinion, Map/Reduce is an idea whose time has come and gone – it won’t die in a day or a year; there are still a lot of working systems that use it and the alternatives are still maturing. I do think, however, that if you need to write or implement something new that would build on map/reduce, you should use other options or at the very least carefully consider them.

So how is this change going to happen? Luckily, Hadoop has recently adopted YARN (you can see my presentation on it here), which opens up the possibility to go beyond map/reduce without changing everything … even though, in effect, a lot will change. Note that some of the new options have migration paths, and we still retain access to all that “big data” we have in Hadoop, as well as extended reuse of some of the ecosystem.

The first type of effort to replace map/reduce is to actually subsume it by offering more flexible batch processing. After all, saying map/reduce is not relevant doesn’t mean that batch processing is not relevant; it does mean that there’s a need for more complex processing. There are two main candidates here: Tez and Spark. Tez offers a nice migration path, as it is replacing map/reduce as the execution engine for both Pig and Hive, while Spark has a compelling offer by combining batch and stream processing (more on this later) in a single engine.

The second type of effort, or processing capability, that will help kill map/reduce is MPP databases on Hadoop. Like the “flexible batch” approach mentioned above, this replaces a functionality that map/reduce was used for – unleashing the data already processed and stored in Hadoop. The idea here is twofold:

  • To provide fast query capabilities* – by using specialized columnar data format and database engines deployed as daemons on the cluster
  • To provide rich query capabilities – by supporting more and more of the SQL standard and enriching it with analytics capabilities (e.g. via MADlib)

Efforts in this arena include Impala from Cloudera, HAWQ from Pivotal (which is essentially Greenplum over HDFS), startups like Hadapt, or even Actian trying to leverage their ParAccel acquisition with the recently announced Actian Vector. Hive is somewhere in the middle, relying on Tez on one hand and using vectorization and a columnar format (ORC) on the other.

The third type of processing that will help dethrone Map/Reduce is stream processing. Unlike the two previous types of effort, this covers ground that map/reduce can’t cover, even inefficiently. Stream processing is about handling a continuous flow of new data (e.g. events) and processing it (enriching, aggregating, etc.) in seconds or less. The two major contenders in the Hadoop arena seem to be Spark Streaming and Storm though, of course, there are several other commercial and open source platforms that handle this type of processing as well.
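
As a conceptual illustration only (plain JDK code, not the Spark Streaming or Storm APIs), the essence of stream processing is maintaining aggregates over a continuous flow of events instead of re-running a batch job over all the data:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ToyStreamAggregator {

    private final Map<String, Integer> windowCounts = new ConcurrentHashMap<>();

    // Called for every incoming event, within milliseconds of its arrival,
    // so aggregates are updated on the fly rather than recomputed in batch.
    public void onEvent(String eventType) {
        windowCounts.merge(eventType, 1, Integer::sum);
    }

    // Called by a timer every few seconds: emit the current window's
    // aggregates downstream and start a new window.
    public Map<String, Integer> closeWindow() {
        Map<String, Integer> snapshot = new HashMap<>(windowCounts);
        windowCounts.clear();
        return snapshot;
    }
}

Real engines add partitioning, fault tolerance and event-time handling on top of this basic idea.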

In summary – Map/Reduce is great. It has served us (as an industry) for a decade, but it is now time to move on and bring the richer processing capabilities we have elsewhere to solve our big data problems as well.

Last note – I focused on Hadoop in this post even though there are several other platforms and tools around. I think that regardless of whether Hadoop is the best platform, it is the one becoming the de-facto standard for big data (remember Betamax vs. VHS?).

One really, really last note – if you read up to here, and you are a developer living in Israel, and you happen to be looking for a job –  I am looking for another developer to join my Technology Research team @ Amdocs. If you’re interested drop me a note: arnon.rotemgaloz at amdocs dot com or via my twitter/linkedin profiles

*esp. in regard to analytical queries – operational SQL on Hadoop, with efforts like Phoenix, IBM’s BigSQL or Splice Machine, is also happening, but that’s another story

Illustration idea found in James Mickens’s talk at Monitorama 2014 (which is, by the way, a really funny presentation – go watch it) … oh yeah, and Pulp Fiction :)

Categories: Architecture

An architecturally-evident coding style

Coding the Architecture - Simon Brown - Sun, 06/01/2014 - 12:51

Okay, this is the separate blog post that I referred to in Software architecture vs code. What exactly do we mean by an "architecturally-evident coding style"? I built a simple content aggregator for the local tech community here in Jersey called techtribes.je, which is basically made up of a web server, a couple of databases and a standalone Java application that is responsible for actually aggregating the content displayed on the website. You can read a little more about the software architecture at techtribes.je - containers. The following diagram is a zoom-in of the standalone content updater application, showing how it's been decomposed.

techtribes.je content updater - component diagram

This diagram says that the content updater application is made up of a number of core components (which are shown on a separate diagram for brevity) and an additional four components - a scheduled content updater, a Twitter connector, a GitHub connector and a news feed connector. This diagram shows a really nice, simple architecture view of how my standalone content updater application has been decomposed into a small number of components. "Component" is a hugely overloaded term in the software development industry, but essentially all I'm referring to is a collection of related behaviour sitting behind a nice clean interface.

Back to the "architecturally-evident coding style" and the basic premise is that the code should reflect the architecture. In other words, if I look at the code, I should be able to clearly identify each of the components that I've shown on the diagram. Since the code for techtribes.je is open source and on GitHub, you can go and take a look for yourself as to whether this is the case. And it is ... there's a je.techtribes.component package that contains sub-packages for each of the components shown on the diagram. From a technical perspective, each of these are simply Spring Beans with a public interface and a package-protected implementation. That's it; the code reflects the architecture as illustrated on the diagram.
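
As a sketch of what this looks like in code (the names below are illustrative rather than copied from the actual techtribes.je codebase), the component's contract is a public interface while the implementation class is package-protected:

package je.techtribes.component.tweet; // illustrative, following the je.techtribes.component convention

import java.util.Collections;
import java.util.List;

// The component's contract is the only public type in the package.
public interface TweetComponent {
    void storeTweet(Tweet tweet);
    List<Tweet> getRecentTweets(int page, int pageSize);
}

// The implementation has default (package-protected) visibility, so consumers
// outside the package only ever see the interface; it is wired up as a Spring bean.
@org.springframework.stereotype.Component
class MongoDbTweetComponent implements TweetComponent {

    @Override
    public void storeTweet(Tweet tweet) {
        // ... insert the tweet as a document into MongoDB ...
    }

    @Override
    public List<Tweet> getRecentTweets(int page, int pageSize) {
        // ... query MongoDB and map the documents back to Tweet objects ...
        return Collections.emptyList();
    }
}

// Minimal domain class so the sketch is self-contained.
class Tweet {
    final String user;
    final String text;

    Tweet(String user, String text) {
        this.user = user;
        this.text = text;
    }
}

Consumers in other packages can only depend on TweetComponent; the dependency injection framework is the only thing that instantiates the implementation.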

So what about those core components then? Well, here's a diagram showing those.

techtribes.je core components

Again, this diagram shows a nice simple decomposition of the core of my techtribes.je system into coarse-grained components. And again, browsing the source code will reveal the same one-to-one mapping between boxes on the diagram and packages in the code. This requires conscious effort to do but I like the simple and explicit nature of the relationship between the architecture and the code.

When architecture and code don't match

The interesting part of this story is that while I'd always viewed my system as a collection of "components", the code didn't actually look like that. To take an example, there's a tweet component on the core components diagram, which basically provides CRUD access to tweets in a MongoDB database. The diagram suggests that it's a single black box component, but my initial implementation was very different. The following diagram illustrates why.

techtribes.je tweet component

My initial implementation of the tweet component looked like the picture on the left - I'd taken a "package by layer" approach and broken my tweet component down into a separate service and data access object. This is your stereotypical layered architecture that many (most?) books and tutorials present as a way to build (e.g.) web applications. It's also pretty much how I've built most software in the past too and I'm sure you've seen the same, especially in systems that use a dependency injection framework where we create a bunch of things in layers and wire them all together. Layered architectures have a number of benefits but they aren't a silver bullet.

This is a great example of where the code doesn't quite reflect the architecture - the tweet component is a single box on an architecture diagram but implemented as a collection of classes across a layered architecture when you look at the code. Imagine having a large, complex codebase where the architecture diagrams tell a different story from the code. The easy way to fix this is to simply redraw the core components diagram to show that it's really a layered architecture made up of services collaborating with data access objects. The result is a much more complex diagram but it also feels like that diagram is starting to show too much detail.

The other option is to change the code to match my architectural vision. And that's what I did. I reorganised the code to be packaged by component rather than packaged by layer. In essence, I merged the services and data access objects together into a single package so that I was left with a public interface and a package protected implementation. Here's the tweet component on GitHub.

But what about...

Again, there's a clean simple mapping from the diagram into the code and the code cleanly reflects the architecture. It does raise a number of interesting questions though.

  • Why aren't you using a layered architecture?
  • Where did the TweetDao interface go?
  • How do you mock out your DAO implementation to do unit testing?
  • What happens if I want to call the DAO directly?
  • What happens if you want to change the way that you store tweets?
Layers are now an implementation detail

This is still a layered architecture, it's just that the layers are now a component implementation detail rather than being first-class architectural building blocks. And that's nice, because I can think about my components as being my architecturally significant structural elements and it's these building blocks that are defined in my dependency injection framework. Something I often see in layered architectures is code bypassing a services layer to directly access a DAO or repository. These sorts of shortcuts are exactly why layered architectures often become corrupted and turn into big balls of mud. In my codebase, if any consumer wants access to tweets, they are forced to use the tweet component in its entirety because the DAO is an internal implementation detail. And because I have layers inside my component, I can still switch out my tweet data storage from MongoDB to something else. That change is still isolated.

Component testing vs unit testing

Ah, unit testing. Bundling up my tweet service and DAO into a single component makes the resulting tweet component harder to unit test because everything is package protected. Sure, it's not impossible to provide a mock implementation of the MongoDBTweetDao but I need to jump through some hoops. The other approach is to simply not do unit testing and instead test my tweet component through its public interface. DHH recently published a blog post called Test-induced design damage and I agree with the overall message; perhaps we are breaking up our systems unnecessarily just in order to unit test them. There's very little to be gained from unit testing the various sub-parts of my tweet component in isolation, so in this case I've opted to do automated component testing instead where I test the component as a black-box through its component interface. MongoDB is lightweight and fast, with the resulting component tests running acceptably quickly for me, even on my ageing MacBook Air. I'm not saying that you should never unit test code in isolation, and indeed there are some situations where component testing isn't feasible. For example, if you're using asynchronous and/or third party services, you probably do want the ability to provide a mock implementation for unit testing. The point is that we shouldn't blindly create designs where everything can be mocked out and unit tested in isolation.
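
A black-box component test along those lines might look like the following sketch (JUnit 4, reusing the hypothetical TweetComponent sketched earlier; the factory method is also assumed glue code, not the actual techtribes.je code):

import static org.junit.Assert.assertFalse;

import org.junit.Before;
import org.junit.Test;

public class TweetComponentTests {

    private TweetComponent tweetComponent;

    @Before
    public void setUp() {
        // Wire up the real implementation against a local MongoDB instance;
        // no mocks - the DAO stays an internal implementation detail.
        tweetComponent = TweetComponentFactory.create("mongodb://localhost");
    }

    @Test
    public void storedTweetsCanBeReadBackThroughThePublicInterface() {
        tweetComponent.storeTweet(new Tweet("simonbrown", "component testing"));

        // Only the public component interface is exercised.
        assertFalse(tweetComponent.getRecentTweets(1, 10).isEmpty());
    }
}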

Food for thought

The purpose of this blog post was to provide some more detail around how to ensure that code reflects architecture and to illustrate an approach to do this. I like the structure imposed by forcing my codebase to reflect the architecture. It requires some discipline and thinking about how to neatly carve-up the responsibilities across the codebase, but I think the effort is rewarded. It's also a nice stepping stone towards micro-services. My techtribes.je system is constructed from a number of in-process components that I treat as my architectural building blocks. The thinking behind creating a micro-services architecture is essentially the same, albeit the components (services) are running out-of-process. This isn't a silver bullet by any means, but I hope it's provided some food for thought around designing software and structuring a codebase with an architecturally-evident coding style.

Categories: Architecture

Capitalizing on the Internet of Things: How To Succeed in a Connected World

“Learning and innovation go hand in hand. The arrogance of success is to think that what you did yesterday will be sufficient for tomorrow.” -- William Pollard

The Internet of Things is hot.  But it’s more than a trend.  It’s a new way of life (and business.)

It’s transformational in every sense of the word (and world.)

A colleague shared some of their most interesting finds with me, and one of them is:

Capitalizing on the Internet of Things: How To Succeed in a Connected World

Here are my key takeaways:

  1. The Fourth Industrial Revolution:  The Internet of Things
  2. “For many companies, the mere prospect of remaking traditional products into smart and connected ones is daunting.  But embedding them into the digital world using services-based business models is much more fundamentally challenging.  The new business models impact core processes such as product management, operations, and production, as well as sales and channel management.”
  3. “According to the research database of the analyst firm Machina Research, there will be approx. 14 billion connected devices by 2022 – ranging from IP-enabled cars to heating systems, security cameras, sensors, and production machines.”
  4. “Managers need to envision the valuable new opportunities that become possible when the physical world is merged with the virtual world.”
  5. “The five key markets are connected buildings, automotive, utilities, smart cities, and manufacturing.”
  6. “In order to provide for the IoT’s multifaceted challenges, the most important thing to do is develop business ecosystems comparable to a coral reef, where we can find diversity of species, symbiosis, and shared development.”
  7. “IoT technologies create new ways for companies to enrich their services, gain customer insights, increase efficiency, and create differentiation opportunities.”
  8. “From what we have seen, IoT entrepreneurs also need to follow exploratory approaches as they face limited predictability and want to minimize risks, preferably in units that are small, agile, and independent.”

It’s a fast read, with nice and tight insight … my kind of style.

Enjoy.

You Might Also Like

4 Stages of Market Maturity

E-Shaped People, Not T-Shaped

Trends for 2014

Categories: Architecture, Programming

Scrum at a Glance (Visual)

I’ve shared a Scrum Flow at a Glance before, but it was not visual.

I think it’s helpful to know how to whiteboard a simple view of an approach so that everybody can quickly get on the same page. 

Here is a simple visual of Scrum:

(visual: a simple whiteboard view of Scrum)

There are a lot of interesting tools and concepts in Scrum.  The definitive guide on the roles, events, artifacts, and rules is The Scrum Guide, by Jeff Sutherland and Ken Schwaber.

I like to think of Scrum as an effective Agile project management framework for shipping incremental value.  It works by splitting big teams into smaller teams, big work into smaller work, and big time blocks into smaller time blocks.

I try to keep whiteboard visuals pretty simple so that they are easy to do on the fly, and so they are easy to modify or adjust as appropriate.

I find the visual above is pretty helpful for getting people on the same page pretty fast, to the point where they can go deeper and ask more detailed questions about Scrum, now that they have the map in mind.

You Might Also Like

Agile vs. Waterfall

Agile Life-Cycle Frame

Don’t Push Agile, Pull It

Scrum Flow at a Glance

The Art of the Agile Retrospective

Categories: Architecture, Programming

Software architecture vs code

Coding the Architecture - Simon Brown - Thu, 05/29/2014 - 13:01

I presented two talks last week with the title "Software architecture vs code" - first as the opening keynote for the inaugural Software Design and Development conference and also the next day as a regular conference session at GOTO Chicago. Videos from both should be available at some point and the slides are available now. The talk itself seems to polarise people, with responses ranging from "Without a doubt, Simon delivered one of the best keynotes I have seen. I got a lot from it, with plenty of 'food for thought' moments." through to "hmmm, meh".

Separating software architecture from code

The basic premise of the talk is that the architecture and code of a software system never quite match up. The traditional way to communicate the architecture of a software system is with diagrams based upon a number of views ... a logical view, a functional view, a module view, a physical view, etc, etc. Philippe Kruchten's 4+1 model is an example often cited as a starting point for such approaches. I've followed these approaches in the past myself and, although I can get my head around them, I don't find them an optimal way to describe a software system. The "why?" has taken me a while to figure out, but the thing I dislike is the way in which you get an artificial separation between the architecture-related views (logical, module, functional, etc) and the code-related views (implementation, design, etc). I don't like treating the architecture and the code as two separate things, but this seems to be the starting point for many of the ways in which software systems are communicated/documented. If you want a good example of this, take a look at the first chapter of "Software Architecture in Practice" where it describes the relationship between modules, components, and component instances. It makes my head hurt.

The model-code gap

This difference between the architecture and code views is also exaggerated by what George Fairbanks calls the "model-code gap" in his book titled "Just Enough Software Architecture" (highly recommended reading, by the way). George basically says that your architecture models will include abstract concepts (e.g. components, services, modules, etc) but the code usually doesn't reflect this. This matches my own experience of helping people communicate their software systems ... people will usually draw components or services, but the actual implementation is a bunch of classes sitting inside a traditional layered architecture. Actually, if I'm being honest, this matches my own experience of building software myself because I've done the same thing! :-)

The intersection of software architecture and code

My approach to all of this is to ensure that the architecture and code views of a software system are one and the same thing, albeit from different levels of abstraction. In other words, my primary focus when describing a software system is the static structure, which ranges from code (classes) right up through components and containers. I model this with my C4 approach, which recognises that software developers are the primary stakeholders in software architecture. Other views of the software system (deployment, infrastructure, etc) slot into place really easily when you understand the static structure.

To put this all very simply, your code should reflect the architecture diagrams that you draw. If your diagrams include abstract concepts such as components, your code should reflect this. If the diagrams and code don't line up, you have to question the value of the diagrams because they're creating a fantasy and there's little point in referring to them.

Challenging the traditional layered approach

This deserves a separate blog post, but something I also mentioned during the talk was that teams should challenge the traditional layered architecture and the way that we structure our codebase. One way to achieve a nice mapping between architecture and code is to ensure that your code reflects the abstract concepts shown on your architecture diagrams, which can be achieved by writing components rather than classes in layers. Another side-effect of changing the organisation of the code is less test-induced design damage. The key question to ask here is whether layers are architecturally significant building blocks or merely an implementation detail, which should be wrapped up inside of (e.g.) components. As I said, this needs a separate blog post.

Thoughts?

As I said, the slides are here. Aligning the architecture and the code raises a whole bunch of interesting questions but provides some enormous benefits for a software development team. A clean mapping between diagrams and code makes a software system easy to explain, the impact of change becomes easier to understand and architectural refactorings can seem much less daunting if you know what you have and where you want to get to. I'm interested in your thoughts on things like the following:

  • Aligning the architecture and the code - is this something you do? If so, how? If not, why not?
  • Is your codebase more than just a bunch of classes in layers? Do you follow what George Fairbanks calls an "architecturally-evident coding style"? If yes, how? If not, why not?
  • If you have any architecture documentation (e.g. diagrams), is it useful? If not, why not?

Convincing people to structure the code underlying their monolithic systems as a bunch of collaborating components seems to be a hard pill to swallow, yet micro-service architectures are going to push people to reconsider how they structure a software system, so I think this discussion is worth having. Thoughts?

Categories: Architecture

Hadoop YARN overview

I did a short overview of Hadoop YARN for our big data development team. The presentation covers the motivation for YARN, how it works and its major weaknesses.

You can watch/download on slideshare

Categories: Architecture

The Future of the Mike The Architect Blog

Mike Walker's Blog - Fri, 05/23/2014 - 20:50

You might have seen the announcement I made that I just joined Gartner. You might be wondering what this means for my blog, right? Well… there will be some changes, but I think ultimately they will be good ones. 

Just like with most things, there is good news and the not so good news.

How about we get the bad news out of the way first? The not-so-good news is that this will be my last blog post, at least for the foreseeable future. I will still keep the blog alive out here, but I will not be able to update it.

That leads to the good news: I will still be able to share my insights with all of you. I will continue to express myself through Gartner research notes, technology profiles, hype cycles and conferences. Who knows, I may even show up on Gartner blogs as well. While I do love what I have done with "Mike The Architect", blogging on any one platform or persona was never my goal. It was simply a vehicle, a means to an end, to communicate my thoughts to all of you.

As you might imagine, this post is a bittersweet one for me. This closes one chapter and opens another for my public writing to all of you. I have really enjoyed blogging all these years about my observations, experiences and my wild-haired crazy ideas. Can you believe it’s been 8 years of Enterprise Architecture blogging? I can't. It goes by fast.

I just want to say thank you to everyone that has subscribed to my blog, provided comments and believed in my guidance. 

Related articles: Mike Walker has joined Gartner
Categories: Architecture

Mike Walker has joined Gartner

Mike Walker's Blog - Thu, 05/22/2014 - 15:42

I’ve got some very exciting news to share with all of you. I have accepted the position of Research Director within the Enterprise Architecture practice at Gartner!

As many of you that read my blog know, I often comment on the analyst community and more specifically on the leader in that community, Gartner. I have a great deal of respect not only for the research but also for the Gartner EA team. I will be joining a stellar team of luminaries that have been providing enterprise architecture guidance for a very long time. It is very humbling to be part of this already high-octane team.

You might be wondering, why did I decide to join Gartner? It was a bit of an interesting discovery for me. I spent my career primarily in two worlds: first in the practitioner space as an enterprise architect or chief architect, and second at technology vendors in advisory and chief architect roles. Each of these roles and organizations provided great experiences in their own right and gave me a great deal of experience and enjoyment.

However, when I looked at Gartner as a possible career choice, it offered a very different value proposition. As a practitioner working for a single company, my role and scope of influence was only with that one company, with an occasional speaking engagement or blog posting. But even when I did speak publicly, many factors limited my ability to provide the candid guidance that I would have preferred to give. This was primarily due to intellectual-property or competitive factors.

With large technology firms I was able to get that broad and pervasive megaphone that allowed me to amplify my message across many companies and maximize the impact that I could have. However, there is one major drawback: working for a technology firm, no matter what, you still have some level of accountability for the company's bottom line, or, to state it another way, for enabling the sale of technology. While I personally have avoided giving "big evil vendor" pitches, there is still a very legitimate perception of a technology bias.

So I asked myself a question: is it more important to sell technology or to sell enterprise architecture? The answer was very clear to me. It had been for many years, but it's just like trying to remember something that's on the tip of your tongue: you know it's there but you can't quite put your finger on it. Once I realized that enterprise architecture was the true passion for me, everything else fell into place.

Moving to Gartner is the most logical choice for me given my true passion for advancing the Enterprise Architecture profession, communicating its value and ultimately sharing proven practices. If I want to advance the enterprise architecture profession, Gartner provides the platform, the breadth of clients, the credibility, and none of the technical shackles that you would find at a large mega-vendor.

Not only do I think it's a good move for me, but I also think I would be good at being an analyst. After all, many of my roles have included an advisory component to customers, writing white papers and speaking at conferences.

As for my existing customers, many of you are Gartner customers. If you still want to continue to engage with me I would really like that!

Related articles: Five Take-Aways from Gartner Symposium 2013 · Recapping the Gartner Enterprise Architecture Summit 2013
Categories: Architecture

Using Dropwizard in combination with Elasticsearch

Gridshore - Thu, 05/15/2014 - 21:09

Dropwizard logo

How often do you start creating a new application? How often have you thought about configuring an application: where to locate a config file, how to load the file, what format to use? Another thing you regularly do is add timers to track execution time, management tools to do thread analysis, etc. From a more functional perspective, you may want a rich client-side application using AngularJS, so you need a REST backend to deliver JSON documents. Does this sound like something you need regularly? Then this blog post is for you. If you never need this, please keep on reading anyway, you might like it.

In this blog post I will create an application that shows you all the available indexes in your elasticsearch cluster. Not very sexy, but I am going to use AngularJS, Dropwizard and elasticsearch. That should be enough to get a lot of you interested.


What is Dropwizard

Dropwizard is a framework that combines a lot of other frameworks that have become the de facto standard in their own domain: Jersey for the REST interface, Jetty as a lightweight container, Jackson for JSON parsing, Freemarker for front-end templates, Metrics for metrics and SLF4J for logging. Dropwizard has some utilities to combine these frameworks and enable you as a developer to be very productive in constructing your application. It provides building blocks like lifecycle management, resources, views, loading of bundles, configuration and initialization.

Time to jump in and start creating an application.

Structure of the application

The application is set up as a maven project. To start off we only need one dependency:

<dependency>
    <groupId>io.dropwizard</groupId>
    <artifactId>dropwizard-core</artifactId>
    <version>${dropwizard.version}</version>
</dependency>

If you want to follow along, you can check my github repository:


https://github.com/jettro/dropwizard-elastic

Configure your application

Every application needs configuration. In our case we need to configure how to connect to elasticsearch. In Dropwizard you extend the Configuration class and create a POJO. Using Jackson and Hibernate Validator annotations we configure validation and serialization. In our case the configuration object looks like this:

public class DWESConfiguration extends Configuration {
    @NotEmpty
    private String elasticsearchHost = "localhost:9200";

    @NotEmpty
    private String clusterName = "elasticsearch";

    @JsonProperty
    public String getElasticsearchHost() {
        return elasticsearchHost;
    }

    @JsonProperty
    public void setElasticsearchHost(String elasticsearchHost) {
        this.elasticsearchHost = elasticsearchHost;
    }

    @JsonProperty
    public String getClusterName() {
        return clusterName;
    }

    @JsonProperty
    public void setClusterName(String clusterName) {
        this.clusterName = clusterName;
    }
}

Then you need to create a yml file containing the properties in the configuration as well as some nice values. In my case it looks like this:

elasticsearchHost: localhost:9300
clusterName: jc-play

How often did you start a project by creating the configuration mechanism? Usually I start with Maven and quickly move to Tomcat. Not this time: we did Maven, and now we have done configuration. Next up is the runner for the application.

Add the runner

This is the class we run to start the application; internally, Jetty is started. We extend the Application class and use the configuration class as a generic type. This is the class that initializes the complete application: used bundles are initialized, classes are created and passed to other classes.

public class DWESApplication extends Application<DWESConfiguration> {
    private static final Logger logger = LoggerFactory.getLogger(DWESApplication.class);

    public static void main(String[] args) throws Exception {
        new DWESApplication().run(args);
    }

    @Override
    public String getName() {
        return "dropwizard-elastic";
    }

    @Override
    public void initialize(Bootstrap<DWESConfiguration> dwesConfigurationBootstrap) {
    }

    @Override
    public void run(DWESConfiguration config, Environment environment) throws Exception {
        logger.info("Running the application");
    }
}

When starting this application, we have no success. A big error appears, because we did not register any resources:

ERROR [2014-05-14 16:58:34,174] com.sun.jersey.server.impl.application.RootResourceUriRules: 
	The ResourceConfig instance does not contain any root resource classes.
Nothing happens; we just need to add a resource.

Before we can return something, we need to have something to return. We create a POJO called Index that contains one property called name. For now we just return this object as a JSON object. The following code shows the IndexResource that handles the requests related to indexes.

@Path("/indexes")
@Produces(MediaType.APPLICATION_JSON)
public class IndexResource {

    @GET
    @Timed
    public Index showIndexes() {
        Index index = new Index();
        index.setName("A Dummy Index");

        return index;
    }
}

The @GET, @Path and @Produces annotations are from the Jersey REST library; @Timed is from the Metrics library. Before starting the application we need to register our index resource with Jersey.

    @Override
    public void run(DWESConfiguration config, Environment environment) throws Exception {
        logger.info("Running the application");
        final IndexResource indexResource = new IndexResource();
        environment.jersey().register(indexResource);
    }

Now we can start the application using the following runner from IntelliJ. Later on we will create the executable jar.

Running the app from IntelliJ

Run the application again; this time it works. You can browse to http://localhost:8080/indexes and see our dummy index as a nice JSON document. There is something in the logs though. I love this message; this is what you get when running the application without health checks.

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!    THIS APPLICATION HAS NO HEALTHCHECKS. THIS MEANS YOU WILL NEVER KNOW      !
!     IF IT DIES IN PRODUCTION, WHICH MEANS YOU WILL NEVER KNOW IF YOU'RE      !
!    LETTING YOUR USERS DOWN. YOU SHOULD ADD A HEALTHCHECK FOR EACH OF YOUR    !
!         APPLICATION'S DEPENDENCIES WHICH FULLY (BUT LIGHTLY) TESTS IT.       !
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Creating a health check

We add a health check: since we are creating an application interacting with elasticsearch, we create a health check for elasticsearch. Don’t think too much about how we connect to elasticsearch yet, we will get there later on.

public class ESHealthCheck extends HealthCheck {

    private ESClientManager clientManager;

    public ESHealthCheck(ESClientManager clientManager) {
        this.clientManager = clientManager;
    }

    @Override
    protected Result check() throws Exception {
        ClusterHealthResponse clusterIndexHealths = clientManager.obtainClient().admin().cluster().health(new ClusterHealthRequest())
                .actionGet();
        switch (clusterIndexHealths.getStatus()) {
            case GREEN:
                return HealthCheck.Result.healthy();
            case YELLOW:
                return HealthCheck.Result.unhealthy("Cluster state is yellow, maybe replication not done? New Nodes?");
            case RED:
            default:
                return HealthCheck.Result.unhealthy("Something is very wrong with the cluster", clusterIndexHealths);

        }
    }
}

Just like with the resource handler, we need to register the health check. Next to the standard http port for normal users, another port is exposed for administration. There you can find reports like Metrics, Ping, Threads and Healthcheck.

    @Override
    public void run(DWESConfiguration config, Environment environment) throws Exception {
        // Both the resource and the health check obtain the client through
        // the ESClientManager (introduced below), matching their constructors.
        ESClientManager clientManager = new ESClientManager(config.getElasticsearchHost(), config.getClusterName());

        logger.info("Running the application");
        final IndexResource indexResource = new IndexResource(clientManager);
        environment.jersey().register(indexResource);

        final ESHealthCheck esHealthCheck = new ESHealthCheck(clientManager);
        environment.healthChecks().register("elasticsearch", esHealthCheck);
    }

You as a reader now have an assignment: start the application and check the admin pages yourself at http://localhost:8081. We are going to connect to elasticsearch in the meantime.

Connecting to elasticsearch

We connect to elasticsearch using the transport client. This is taken care of by the ESClientManager. We make use of Dropwizard’s managed classes: the lifecycle of these classes is managed by Dropwizard. From the configuration object we take the host(s) and the cluster name. Now we can obtain a client in the start method and pass this client to the classes that need it. The first class that needs it is the health check, but we already had a look at that one. Using the ESClientManager, other classes have access to the client. The Managed interface mandates a start as well as a stop method.

    @Override
    public void start() throws Exception {
        Settings settings = ImmutableSettings.settingsBuilder().put("cluster.name", clusterName).build();

        logger.debug("Settings used for connection to elasticsearch : {}", settings.toDelimitedString('#'));

        TransportAddress[] addresses = getTransportAddresses(host);

        logger.debug("Hosts used for transport client : {}", (Object) addresses);

        this.client = new TransportClient(settings).addTransportAddresses(addresses);
    }

    @Override
    public void stop() throws Exception {
        this.client.close();
    }
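
The getTransportAddresses helper isn’t shown in the original post. Assuming a comma-separated list of host:port values and the elasticsearch 1.x client API, a minimal version of it could look like this:

    private TransportAddress[] getTransportAddresses(String hosts) {
        // Accept a comma-separated list such as "localhost:9300,otherhost:9300".
        String[] hostParts = hosts.split(",");
        TransportAddress[] addresses = new TransportAddress[hostParts.length];
        for (int i = 0; i < hostParts.length; i++) {
            String[] hostAndPort = hostParts[i].trim().split(":");
            addresses[i] = new InetSocketTransportAddress(hostAndPort[0], Integer.parseInt(hostAndPort[1]));
        }
        return addresses;
    }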

We need to register our managed class with the lifecycle of the environment in the runner class.

    @Override
    public void run(DWESConfiguration config, Environment environment) throws Exception {
        ESClientManager esClientManager = new ESClientManager(config.getElasticsearchHost(), config.getClusterName());
        environment.lifecycle().manage(esClientManager);
    }	

Next we want to change the IndexResource to use the elasticsearch client to list all indexes.

    public List<Index> showIndexes() {
        IndicesStatusResponse indices = clientManager.obtainClient().admin().indices().prepareStatus().get();

        List<Index> result = new ArrayList<>();
        for (String key : indices.getIndices().keySet()) {
            Index index = new Index();
            index.setName(key);
            result.add(index);
        }
        return result;
    }

Now we can browse to http://localhost:8080/indexes and we get back a nice json object. In my case I got this:

[
	{"name":"logstash-tomcat-2014.05.02"},
	{"name":"mymusicnested"},
	{"name":"kibana-int"},
	{"name":"playwithip"},
	{"name":"logstash-tomcat-2014.05.08"},
	{"name":"mymusic"}
]
Creating a better view

Having this REST-based interface with JSON documents is nice, but not if you are human like me (well, kind of). So let us add some AngularJS magic to create a slightly better view. The following page could of course also be created with simpler view technologies, but I want to demonstrate what you can do with Dropwizard.

First we make it possible to use Freemarker as a template engine. To make this work we need two additional dependencies: dropwizard-views and dropwizard-views-freemarker. The first step is a view class that knows which Freemarker template to load and provides the fields that your template can read. In our case we want to expose the cluster name.

public class HomeView extends View {
    private final String clusterName;

    protected HomeView(String clusterName) {
        super("home.ftl");
        this.clusterName = clusterName;
    }

    public String getClusterName() {
        return clusterName;
    }
}

Then we have to create the Freemarker template. This looks like the following code block:

<#-- @ftlvariable name="" type="nl.gridshore.dwes.HomeView" -->
<html ng-app="myApp">
<head>
    <title>DWAS</title>
</head>
<body ng-controller="IndexCtrl">
<p>Underneath a list of indexes in the cluster <strong>${clusterName?html}</strong></p>

<div ng-init="initIndexes()">
    <ul>
        <li ng-repeat="index in indexes">{{index.name}}</li>
    </ul>
</div>

<script src="/assets/js/angular-1.2.16.min.js"></script>
<script src="/assets/js/app.js"></script>
</body>
</html>

By default you put these templates in the resources folder, using the same sub-folders as your view class has for its package. If you look closely you see some AngularJS code; more on this later. First we need to map a url to the view. This is done with a resource class. The following code block shows the HomeResource class that maps “/” to the HomeView.

@Path("/")
@Produces(MediaType.TEXT_HTML)
public class HomeResource {
    private String clusterName;

    public HomeResource(String clusterName) {
        this.clusterName = clusterName;
    }

    @GET
    public HomeView goHome() {
        return new HomeView(clusterName);
    }
}

Notice we now configure it to return text/html. The goHome method is annotated with @GET, so each GET request to the path “/” is mapped to the HomeView class. Now we need to tell Jersey about this mapping. That is done in the runner class.

final HomeResource homeResource = new HomeResource(config.getClusterName());
environment.jersey().register(homeResource);
Using assets

The final part I want to show is how to use the assets bundle from Dropwizard to map a folder “/assets” to a part of the url. To use this bundle you have to add the following dependency in maven: dropwizard-assets. Then we can easily map the assets folder in our resources folder to the web assets folder.

    @Override
    public void initialize(Bootstrap<DWESConfiguration> dwesConfigurationBootstrap) {
        dwesConfigurationBootstrap.addBundle(new ViewBundle());
        dwesConfigurationBootstrap.addBundle(new AssetsBundle("/assets/", "/assets/"));
    }

That is it; now you can load the AngularJS javascript file. My very basic sample has one Angular controller. This controller uses the $http service to call our /indexes url. The result is used to show the indexes in a list view.

myApp.controller('IndexCtrl', function ($scope, $http) {
    $scope.indexes = [];

    $scope.initIndexes = function () {
        $http.get('/indexes').success(function (data) {
            $scope.indexes = data;
        });
    };
});

And the result

the very basic screen showing the indexes

Concluding

This was my first go at using Dropwizard, and I must admit I like what I have seen so far. I am not sure if I would create a big application with it; on the other hand, it is really structured. Before moving on I would need to read a bit more about the library and check all of its options. There is a lot more possible than what I have shown you here.


The post Using Dropwizard in combination with Elasticsearch appeared first on Gridshore.

Categories: Architecture, Programming

Just Released: World-Class EA: Business Reference Model

Mike Walker's Blog - Fri, 05/09/2014 - 17:47

Check out this new whitepaper from The Open Group in the area of Business Architecture. What are your thoughts on the latest edition in their World-Class EA series? Does it get The Open Group or TOGAF closer and deeper into the business architecture world? Is the material useful? I'd love to hear your thoughts.

The whitepaper is entitled Business Reference Model and is part of the World-Class EA series.

See the whitepaper description below:

Business architecture is being used to design, plan, execute, and govern change initiatives throughout public and private sector entities. An architectural approach can systematically highlight the most effective state for a given environment, and then define how change can be effected within acceptable benefit, cost, and risk parameters. A key challenge to this approach is the consistent definition of the organization and where it needs to be, and in response this White Paper introduces a comprehensive reference model for business. The Business Reference Model (BRM) can be applied to both private and public sector organizations alike, and gives complex organizations a common way to view themselves in order to plan and execute effective transformational change.

It is envisaged that the introduction of a BRM into a transformation planning exercise will increase collaboration across the business, increase awareness of organizational opportunity and risk, and facilitate more holistic business investment; all of which culminates in an improved and more sustainable working environment leading to a better working world.

 

Find the whitepaper here: http://bit.ly/1sagaSK

Categories: Architecture

Texas AEA Summit Recap

Mike Walker's Blog - Sun, 04/27/2014 - 20:07


Wow, what a great event.  It’s been a month since the event and we are still getting a great deal of feedback on its value. Thank you to all of you who attended. 

With close to 125 attendees for this first-time summit of the Texas Association of Enterprise Architects, we couldn't have asked for a better turnout. We had attendees from multiple sectors, multiple Texas cities (Dallas, San Antonio and Houston) and from other architecture associations that decided to show up. As an example, we've had the San Antonio Enterprise Architecture Association come to our meetings along with IASA. At this event we had the good fortune of having the IASA Austin chapter president and an IASA marketing person show support for what we are doing at this summit. What I want people to take away from this is that the Texas AEA is open to all architecture professionals, regardless of affiliation or level of experience. All are welcome.

 


 

The theme for the summit was centered around real-world practitioner stories: Keeping EA Real. I think we delivered on that with our excellent speakers and their real-world stories from the trenches.

However, don’t just take my word for it. Check out all the great social media activity on the event at hashtag #TexasAEA


 

All of the presentations from the day are available on the Global Association of Enterprise Architects portal under the Texas Chapter. All you need to do is log in and the presentations are freely available to you. We also recorded the sessions and are looking for a way of sharing those with you as well.

This summit may have been very different from the traditional conferences you are used to. The day was split into two major sections. It started out with a traditional conference format, with keynote speakers and session slots for more of a one-way dialogue. We had Jeanne Ross and John Zachman as the headliners for the summit, followed by 20-minute TED-style customer case studies. These were rapid-succession, get-to-the-point sessions without any of the fluff. I think this was extremely valuable.

The second half of the summit, after lunch, was built to be more interactive, similar to what you would find at an unconference. We kicked off the second half with a panel drawn from the TED-style presentations from earlier in the morning. This was built to be interactive, serving the questions of the audience. From there we went into open-space/unconference-style roundtables. At the beginning of the conference attendees voted on which topics they wanted to discuss at the roundtables; those with the highest votes became the roundtables. After the roundtables we recapped the entire day and went directly to the social event afterwards, again to facilitate meaningful conversation amongst enterprise architecture peers.

Below is a further breakdown of the conference. I don't cover every point that was made, but rather the salient points that resonated with me and that I would like to share with all of you.

Morning Sessions

Welcome address

The day started off with myself kicking things off in the welcome address. I talked about our vision and charter for the Texas chapter. This included some things we'd already discussed in our first meeting, but for the sake of the new attendees we wanted to go through the full vision of the Texas AEA.

There are also some highlights that we discussed, namely the current state of the chapter: month over month, our chapter doubles in size.


The chapter has only been around for three months. What this tells me is that there is an enormous amount of demand for what we're doing here for the EA profession, at least here in Texas. With both members and non-members of the AEA, the attendance has shown us an overwhelming amount of support for what we're doing. As an example, we usually find that about one fourth of the typical audience is non-members, and month over month we see those very non-members transition into members.

 

Another important announcement is around professional development. The AEA will be supporting the ability to issue credits for activities you do through the AEA. So if you go to a summit, present at a monthly meeting or collaborate through the portal, these all generate credits for you to demonstrate all the great things you've been doing in the EA profession. Keep in mind this only applies to Open CA and not skills-based certifications like TOGAF. This takes certification to the next level of maturity and gives people credit where credit is due.

 

Along with professional development credits, a related announcement is around actual certification. Here in the Texas AEA we have several Open CA certified enterprise architects. With that, we are planning on creating a mentorship program to help those that want to achieve their Open CA certification. We have also been granted the ability to hold certification boards, and we will be the first AEA chapter to do this. Very exciting news!

 

Jeanne Ross: Enterprise Architecture State of the Union


Jeanne Ross opened up the conference with her keynote. Unfortunately she was in Paris this week, but she delivered the next best thing: a prerecorded message just for us!

Jeanne continued to describe the evolution of her research at MIT Sloan. The message that sticks out most in my mind is centered around a change in mentality that enterprise architects need to adopt. Jeanne covered this, and I wholeheartedly agree: enterprise architects should stop trying to have their customers understand exactly what they do. Rather, we should be focused on what our customers want, not on having our customers understand exactly what and how we execute the end deliverable.

I keep it as simple as this: if I hire a plumber to unclog a drain at my home, I don't want to understand how to be a plumber, I just want my drain unclogged. But in this scenario, with enterprise architects as the plumber, we're trying to present a schematic of the drain system and discuss optimization in tooling over a set of blueprints that I as a consumer don't fully understand and, as such, can't have any informed opinion on. It's just a waste of time and energy. I think EAs have a big opportunity here to change their mindset.

 

John Zachman: Enterprise physics 101


Next up was John Zachman. Just being in Zachman's presence is extremely humbling; after all, we wouldn't be here if it wasn't for him. And he seems to be the perfect EA. Not just because he started this whole thing and I'm a bit starstruck, but because he has the demeanor and personality traits of the ideal enterprise architect. He has the ability to greatly influence a room while also checking his ego at the door. I was pleasantly surprised by how humble he really was.

John took us through his latest thinking on the Zachman framework. He discussed how over the past couple of years he has learned a great deal about enterprise architecture, through a colleague of his based in India who was building a set of EA consultancy services around the Zachman framework. Exercising the framework broadly like this exposed quite a few things that hadn't been considered before.

He explained to the group the philosophy behind the Zachman framework. Essentially it's about ensuring that you're asking all the right questions, to make sure that you have a complete understanding of what is to be architected. So John borrowed the six interrogatives that fully complete a story: who, what, when, where, why and how. With this, he explained that his framework really isn't an EA framework but rather an ontology. Personally, I couldn't agree more. You can see my post on this topic here.

Another important point that John made was in a similar vein to Jeanne Ross's. He chose to reflect on the EA profession and how we have been conducting ourselves. While Jeanne focused on the interpersonal or soft skills, Mr. Zachman looked at it from the perspective of what we do as enterprise architects. The analogy he used was comparing what we do to either a manufacturer or an engineer. John's point was that we call ourselves engineers, and contrast what we do with engineering, but in reality we conduct ourselves more as manufacturers. Meaning there isn't much that is truly engineered and thought through with great detail and rigor; rather, we are supply-line manufacturers cranking out widgets.

This is a very interesting analogy, and not one I think is immediately easy to grasp; however, I get the intent and agree with it. If you've heard me talk, you know that I talk about architecture versus implementation. This is essentially what Mr. Zachman is talking about here. Architecture is all about planning, designing and engineering. The things we do after architecture are all about executing, meaning we go build or, in John's terms, manufacture.

After this fundamental framing of how we conduct ourselves in this profession, and a bit of stage setting for what was next, John went through his ontology, or what is commonly referred to as the Zachman framework. He referred to it as the periodic table of elements for enterprise architecture: it has all of the fundamental elements of what we need to do in enterprise architecture. The way I refer to it, it's a measure of completeness. But it's up to you to figure out the right questions to ask and how to apply the tool. It's not a given that every box gets checked off or every question gets answered; that's where your judgment comes in.

When John talks about the usage of this periodic table for enterprise architecture, he talks a great deal about how to compose and implement it, and he draws analogies from the chemistry world to do so. He challenges us to think about using the foundational elements versus what he refers to as composites: prefabricated or combined foundational elements of the Zachman framework used to make business decisions. His assertion, and I agree, is that when we do that we inherit a whole set of constraints or objectives that we may not even know we're signing up for. So taking a step back and looking at all the foundational elements might be a really good thing. That said, having composites isn't entirely a bad thing in my opinion; you just want to make sure you understand all of their characteristics.

The final message John delivered was around misconceptions of the framework. Mr. Zachman made it very clear that his framework was never intended to operate on its own. It is merely an ontology.

Again, thank you to the attendees.

Categories: Architecture

Yet More Change for the Capitals

DevHawk - Harry Pierson - Sat, 04/26/2014 - 21:13

Six years ago, I was pretty excited about the future for the Washington Capitals. They had just lost their first round match up with the Flyers – which was a bummer – but they had made the playoffs for the first time in 3 seasons. I wrote at the time:

Furthermore, even though they lost, these playoffs are a promise of future success. I tell my kids all the time that the only way to get good at something is to work hard while you’re bad at it. Playoff hockey is no different. Most of the Caps had little or no playoff experience going into this series and it really showed thru the first three games. But they kept at it and played much better over the last four games of the series. They went 2-2 in those games, but the two losses went to overtime. A little more luck (or better officiating) and the Caps are headed to Pittsburgh instead of the golf course.

What a difference six seasons makes. Sure, they won the President’s Trophy in 2010. But the promise of future playoff success has been broken, badly. The Caps have been on a pretty steep decline after getting beat by the eighth-seeded Canadiens in the first round of the playoffs in 2010. Since then, they’ve switched systems three times and head coaches twice. This year, they missed the playoffs entirely even with Alex Ovechkin racking up a league-leading 51 goals.

Today, the word came down that both the coach and general manager have been let go. As a Caps fan, I’m really torn about this. I mean, I totally agree that the coach and GM had to go – frankly, I was surprised it didn’t happen 7-10 days earlier. But now what do you do? The draft is two months and one day away, free agency starts two days after that. The search for a GM is going to have to be fast. Then the GM will have to make some really important decisions about players at the draft, free agency and compliance buyouts with limited knowledge of the players in our system. Plus, he’ll need to hire a new head coach – preferably before the draft as well.

The one positive note is that the salary cap situation for the Capitals looks pretty good for next year. The Capitals currently have the second largest amount of cap space per open roster slot in the league. (The Islanders are first with $14.5 million per open roster slot. The Caps have just over $7 million per open roster slot.) They have only a handful of unrestricted free agents to re-sign – with arguably only one “must sign” (Mikhail Grabovski) in the bunch. Of course, this could also be a bug rather than a feature – having that many players under contract may make it harder for the new GM to shape the team in his image.

Whoever the Capitals hire as GM and coach, I’m not expecting a promising start. It feels like next season is already a wash, and we’re not even finished with the first round of this year’s playoffs yet.

I guess it could be worse.

I could be a Toronto Leafs fan.

Categories: Architecture, Programming

Brokered WinRT Components Step Three

DevHawk - Harry Pierson - Fri, 04/25/2014 - 16:45

So far, we’ve created two projects, written all of about two lines of code, and we have both our brokered component and its proxy/stub ready to go. Now it’s time to build the Windows Runtime app that uses the component. Things have been pretty easy up to this point – the only really tricky and/or manual step has been registering the proxy/stub, and that’s only tricky if you don’t want to run VS as admin. Unfortunately, tying this all together in the app requires a few more manual steps.

But before we get to the manual steps, let’s create the WinRT client app. Again, we’re going to create a new project, but this time we’re going to select “Blank App (Windows)” from the Visual C# -> Store Apps -> Windows App node of the Add New Project dialog. Note, I’m not using “Blank App (Universal)” or “Blank App (Windows Phone)” because the brokered WinRT component feature is not supported on Windows Phone. Call the client app project whatever you like; I’m calling mine “HelloWorldBRT.Client”.

Before we start writing code, we need to reference the brokered component. We can’t reference the brokered component directly or it will load in the sandboxed app process. Instead, the app needs to reference a reference assembly version of the .winmd that gets generated automatically by the proxy/stub project. Remember in the last step when I said Kieran Mockford is an MSBuild wizard? The proxy/stub template project includes a custom target that automatically publishes the reference assembly winmd file used by the client app. When he showed me that, I was stunned – as I said, the man is a wizard. This means all you need to do is right-click the References node of the WinRT client app project and select Add Reference. In the Reference Manager dialog, add a reference to the proxy/stub project you created in step two.

Now I can add the following code to the top of my App.OnLaunched function. Since this is a simple Hello World walkthru, I’m not going to bother to build any UI; I’m just going to inspect variables in the debugger. Believe me, the less UI I write, the better for everyone involved. Note, I’ve also added the P/Invoke signatures for GetCurrentProcessId and GetCurrentThreadId to the client app, like I did in the brokered component in step one. This way, I can get the process and thread IDs for both the app and broker process and compare them.

// process and thread IDs of the sandboxed app process
var pid = GetCurrentProcessId();
var tid = GetCurrentThreadId();

// construct the brokered component and read the broker process's IDs
var c = new HelloWorldBRT.Class();
var bpid = c.CurrentProcessId;
var btid = c.CurrentThreadId;

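For reference, here’s a minimal sketch of those P/Invoke declarations as they’d appear in the client app – they’re the same interop signatures shown in step one, and they require a using System.Runtime.InteropServices; directive:

[DllImport("kernel32.dll")]
static extern uint GetCurrentProcessId();

[DllImport("kernel32.dll")]
static extern uint GetCurrentThreadId();
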
At this point the app will compile, but if I run it the app will throw a TypeLoadException when it tries to create an instance of HelloWorldBRT.Class. The type can’t be loaded because we’re using the reference assembly .winmd published by the proxy/stub project – it has no implementation details, so it can’t load. In order to be able to load the type, we need to declare HelloWorldBRT.Class as a brokered component in the app’s package.appxmanifest file. For non-brokered components, Visual Studio does this for you automatically; for brokered components, unfortunately, we have to do it manually. Every activatable class (i.e. class you can construct via “new”) needs to be registered in the appx manifest this way.

To register HelloWorldBRT.Class, right-click the Package.appxmanifest file in the client project, select “Open With” from the context menu and then select “XML (Text) editor” from the Open With dialog. Then you need to insert an inProcessServer extension that includes an ActivatableClass element for each class you can activate (i.e. each class with a public constructor). Each ActivatableClass element contains an ActivatableClassAttribute element that points to the folder where the brokered component is installed. Here’s what I added to the Package.appxmanifest of my HelloWorldBRT.Client app.

<Extensions>
  <Extension Category="windows.activatableClass.inProcessServer">
    <InProcessServer>
      <Path>clrhost.dll</Path>
      <ActivatableClass ActivatableClassId="HelloWorldBRT.Class" 
                        ThreadingModel="both">
        <ActivatableClassAttribute 
             Name="DesktopApplicationPath" 
             Type="string" 
             Value="D:\dev\HelloWorldBRT\Debug\HelloWorldBRT.PS"/>
      </ActivatableClass>
    </InProcessServer>
  </Extension>
</Extensions>

The key thing here is the addition of the DesktopApplicationPath ActivatableClassAttribute. This tells the WinRT activation logic that HelloWorldBRT.Class is a brokered component and where the managed .winmd file with the implementation details is located on the device. Note, you can use multiple brokered components in your side-loaded app, but they must all share the same DesktopApplicationPath.

Speaking of DesktopApplicationPath, the path I’m using here is the final output location of the proxy/stub components generated by the compiler. Frankly, this isn’t a good choice for a production deployment, but for the purposes of this walkthrough it’ll be fine.


Now when we run the app, we can load a HelloWorldBRT.Class instance and access its properties. We’re definitely seeing different process IDs when comparing the result of calling GetCurrentProcessId directly in App.OnLaunched vs. the result of calling GetCurrentProcessId in the brokered component. Of course, each run of the app will have different ID values, but this proves that we are loading our brokered component into a different process from where our app code is running.

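If you’d rather not squint at the watch window, here’s a minimal sketch of the same check written to the debug output instead – the variable names are mine, and it assumes the P/Invoke declarations and properties shown above:

// the two process IDs should differ if the component is truly brokered
var appPid = GetCurrentProcessId();
var broker = new HelloWorldBRT.Class();
System.Diagnostics.Debug.WriteLine(
    "App process: {0}, broker process: {1}", appPid, broker.CurrentProcessId);
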
Now you’re ready to go build your own brokered components! Here’s hoping you’ll find more interesting uses for them than comparing the process IDs of the app and broker processes in the debugger! :)

Categories: Architecture, Programming

Brokered WinRT Components Step Two

DevHawk - Harry Pierson - Fri, 04/25/2014 - 16:43

Now that we have built the brokered component, we have to build a proxy/stub for it. Proxies and stubs are how WinRT method calls are marshalled across process boundaries. If you want to know more – or you have insomnia – feel free to read all the gory details up on MSDN.

Proxies and stubs look like they might be scary, but they’re actually trivial (at least in the brokered component scenario) because 100% of the code is generated for you. It couldn’t be much easier.

Right click the solution node and select Add -> New Project. Alternatively, you can select File -> New -> Project in the Visual Studio main menu, but if you do that make sure you change the default solution from “Create new Solution” to “Add to Solution”. Regardless of how you launch the new project wizard, search for “broker” again, but this time select the “Brokered Windows Runtime ProxyStub” template. Give the project a name – I chose “HelloWorldBRT.PS”.

Once you’ve created the proxy/stub project, you need to set a reference to the brokered component you created in step one. Since proxies and stubs are native, this is a VC++ project, and adding a reference in a VC++ project is not as straightforward as it is in C# projects. Right-click the proxy/stub project, select “Properties” and then select Common Properties -> References from the tree on the left. Press the “Add New Reference…” button to bring up the same Add Reference dialog you’ve seen in managed code projects. Select the brokered component project and press OK.

Remember when I said that 100% of the code for the proxy/stub is generated? I wasn’t kidding – creating the project from the template and setting a reference to the brokered component project is literally all you need to do. Want proof? Go ahead and build now. If you watch the output window, you’ll see a bunch of output go by referencing IDL files and MIDLRT among other stuff. The proxy/stub template has custom MSBuild tasks that generate the proxy/stub code using winmdidl and midlrt. The process is similar to what is described here. BTW, if you get a chance, check out the proxy/stub project file – it is a work of art. Major props to Kieran Mockford for his MSBuild wizardry.

Unfortunately, it’s not enough just to build the proxy/stub – you also have to register it. The brokered component proxy/stub needs to be registered globally on the machine, which means you have to be running as an admin to do it. VS can register the proxy/stub for you automatically, but that means you have to run VS as an administrator. That always makes me nervous, but if you’re OK with running as admin you can enable proxy/stub registration by right-clicking the proxy/stub project file, selecting Properties, navigating to Configuration Properties -> Linker -> General in the tree of the project properties page, and then changing Register Output to “Yes”.

If you don’t like running VS as admin, you can manually register the proxy/stub by running “regsvr32 &lt;proxystub dll&gt;” from an elevated command prompt. Note, you do have to re-register every time the public surface area of your brokered component changes, so letting VS handle registration is definitely the easier route to go.

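For example, assuming the project name from this walkthrough and its default output name (a hypothetical path – substitute your own build output), the manual registration would look something like this from an elevated command prompt:

regsvr32 HelloWorldBRT.PS.dll
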
In the third and final step, we’ll build a client app that accesses our brokered component.

Categories: Architecture, Programming

Brokered WinRT Components Step One

DevHawk - Harry Pierson - Fri, 04/25/2014 - 16:41

In this step, we’ll build the brokered component itself. Frankly, the only thing that makes a brokered component different from a normal WinRT component is some small tweaks to the project file to enable access to the full .NET Runtime and Base Class Library. The brokered component whitepaper describes these tweaks in detail, but the new brokered component template takes care of them for you.

Start by selecting File -> New -> Project in Visual Studio. With the sheer number of templates to choose from these days, I find it’s easier to just search for the one I want. Type “broker” in the search box in the upper left and you’ll end up with two choices – the brokered WinRT component and the brokered WinRT proxy/stub. For now, choose the brokered component; we’ll be adding a brokered proxy/stub in step two. Name the project whatever you want. I named mine “HelloWorldBRT”.

This is probably the easiest step of the three as there’s nothing really special you have to do – just write managed code like you always do. In my keynote demo, this is where I wrote the code that wrapped the existing ADO.NET based data access library. For the purposes of this walkthrough, let’s do something simpler. We’ll use P/Invoke to retrieve the current process and thread IDs. These Win32 APIs are supported for developing WinRT apps and will make it obvious that the component is running in a separate process than the app. Here’s the simple code to retrieve those IDs (hat tip to pinvoke.net for the interop signatures):

using System.Runtime.InteropServices;

public sealed class Class
{
    // P/Invoke signatures for the Win32 process and thread ID APIs
    [DllImport("kernel32.dll")]
    static extern uint GetCurrentThreadId();

    [DllImport("kernel32.dll")]
    static extern uint GetCurrentProcessId();

    // WinRT-visible properties that report the IDs of the process
    // hosting this component - the broker process, once deployed
    public uint CurrentThreadId
    {
        get { return GetCurrentThreadId(); }
    }

    public uint CurrentProcessId
    {
        get { return GetCurrentProcessId(); }
    }
}

That’s it! I didn’t even bother to change the class name for this simple sample.

Now, to be clear, there’s no reason why this code needs to run in a broker process. As I pointed out, the Win32 functions I’m wrapping here are supported for use in Windows Store apps. For this walkthrough, I’m trying to keep the code simple in order to focus on the specifics of building brokered components. If you want to see an example that actually leverages the fact that it’s running outside of the App Container, check out the NorthwindRT sample.

In the next step, we’ll add the proxy/stub that enables this component to communicate across a process boundary.

Categories: Architecture, Programming

Brokered WinRT Components Step-by-Step

DevHawk - Harry Pierson - Fri, 04/25/2014 - 16:40

Based on the feedback I’ve gotten since my keynote appearance @ Build – both in person and via email & twitter – there are a lot of folks who are excited about the Brokered WinRT Component feature. However, I’ve been advising folks to hold off a bit until the new VS templates were ready. Frankly, the developer experience for this feature is a bit rough and the VS template makes the experience much better. Well, hold off no longer! My old team has published the Brokered WinRT Component Project Templates up on the Visual Studio Gallery!

Now that the template is available, I’ve written a step-by-step guide demonstrating how to build a “Hello World” style brokered component. Hopefully, this will help folks in the community take advantage of this cool new feature in Windows 8.1 Update.

To keep it readable, I’ve broken it into three separate posts:

Brokered WinRT Components Step One – building the brokered component itself
Brokered WinRT Components Step Two – building and registering the proxy/stub
Brokered WinRT Components Step Three – building the client app

Note, this walkthrough assumes you’re running Windows 8.1 Update, Visual Studio 2013 with Update 2 RC (or later) and the Brokered WinRT Component Project Templates installed.

I hope this series helps you take advantage of brokered WinRT components. If you have any further questions, feel free to drop me an email or hit me up on Twitter.

Categories: Architecture, Programming

Updated Speaker Lineup for the Texas Association of Enterprise Architects Summit

Mike Walker's Blog - Sat, 03/15/2014 - 18:28


Space Limited -- Register Now!

As we come closer to the event on March 20th, seats are becoming more limited. To be sure you have a spot in the audience to hear luminaries like John Zachman and Jeanne Ross, along with local Enterprise Architects from the Texas area providing thought leadership you shouldn’t miss, make sure to RSVP soon. Space is limited and admission is first come, first served.

Come join us at the Austin Renaissance in the Arboretum on March 20th, 2014 from 8:00am to 5:00pm, with an evening social from approximately 4:30pm – 7:00pm. The event is highly subsidized by our sponsors, for a low cost of $50.00 for AEA members and $100.00 for non-members.

RSVP for the Texas Association of Enterprise Architects Summit

 

Updated Agenda for the Day

We have an action-packed agenda!


Venue and Lodging Details

The summit will be held at:

RENAISSANCE AUSTIN | 9721 Arboretum Blvd | Austin, TX 78759

www.renaissancehotels.com

Making Reservations

We have negotiated a discounted room rate of $189 per room per night, plus applicable tax and service fee. The discount expires Friday, March 14th, so please make your arrangements before then.

Booking Website:
https://resweb.passkey.com/Resweb.do?mode=welcome_ei_new&eventID=11150319

Questions?

info@TexasAEA.org

Categories: Architecture

Register Now for the Texas Association of Enterprise Architects | Summit

Mike Walker's Blog - Sat, 03/01/2014 - 19:28


1st Texas Association of Enterprise Architects Summit!

Designed by Enterprise Architects for Enterprise Architects, this summit provides an opportunity to discover the latest approaches and innovative ideas in Strategy, Enterprise Architecture and Business Architecture.

This full-day event is different than most, shifting the focus from a speaker-only event to a mix of presentations and collaborative sessions, with highly relevant, proven practices that are applicable to the issues Enterprise Architects face today.

Featured Speakers

Jeanne Ross and John Zachman

 

Register

Come join us at the Austin Renaissance in the Arboretum on March 20th, 2014 from 8:00am to 5:00pm, with an evening social from 5:00pm – 8:00pm. The event is highly subsidized by our sponsors, for a low cost of $50.00 for AEA members and $100.00 for non-members.

RSVP

 

Agenda

We have an action-packed agenda!


Venue and Lodging Details

The summit will be held at:

RENAISSANCE AUSTIN | 9721 Arboretum Blvd | Austin, TX 78759

www.renaissancehotels.com

We have negotiated a discounted room rate of $189 per room per night, plus applicable tax and service fee.

Questions?

info@TexasAEA.org

Categories: Architecture