
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Feed aggregator

You Never Really Learn Something Until You Teach It

Making the Complex Simple - John Sonmez - Mon, 05/26/2014 - 16:00

As software developers we spend a large amount of time learning. There is always a new framework or technology to learn. It can seem impossible to keep up with everything when there is something new every day. So, it should be no surprise to you that learning quickly and gaining a deeper understanding of what […]

The post You Never Really Learn Something Until You Teach It appeared first on Simple Programmer.

Categories: Programming

The Future of the Mike The Architect Blog

Mike Walker's Blog - Fri, 05/23/2014 - 20:50

You might have seen the announcement that I just joined Gartner. You might be wondering what this means for my blog, right? Well… there will be some changes, but I think ultimately they will be good ones.

Just like with most things, there is good news and the not so good news.

How about we get the bad news out of the way first? The not-so-good news is that this will be my last blog post, at least for the foreseeable future. I will keep the blog alive out here, but I will not be able to update it.

That leads to the good news: I will still be able to share my insights with all of you. I will continue to express myself through Gartner research notes, technology profiles, hype cycles and conferences. Who knows, I may even show up on the Gartner blogs as well. While I do love what I have done with "Mike The Architect", blogging on any one platform or persona was never my goal. It was simply a vehicle, a means to an end, to communicate my thoughts to all of you.

As you might imagine this post is a bittersweet one for me. This closes one chapter and opens another for my public writing to all of you. I have really enjoyed blogging all these years about my observations, experiences and my wild haired crazy ideas. Can you believe it’s been 8 years of Enterprise Architecture blogging? I can't. Goes by fast.

I just want to say thank you to everyone that has subscribed to my blog, provided comments and believed in my guidance. 

Related articles: Mike Walker has joined Gartner
Categories: Architecture

First NuGet package for SQLitePCL.raw

Eric.Weblog() - Eric Sink - Fri, 05/23/2014 - 19:00
What?

I have pushed up the first NuGet package for SQLitePCL.raw.

Is this ready for use on mission-critical applications?

Hardly.

I gave this a version number of "0.1.0-alpha".

By including the -alpha part, I signal to NuGet that this is a pre-release package.

By using 0.1.0 as the version number, I signal to human beings that if you use this package, everything in your life will go wrong. Your dog will leave you for someone else. Your favorite TV show will get canceled. A really dumb VC will send you a term sheet.

This is the first NuGet package, not the last. I gotta start somewhere.

So this package doesn't work at all?

Well, actually, no, it shouldn't be that bad. Underpromise and overdeliver.

I have run my test suite against this package for all of the following environments:

  • Xamarin.iOS (simulator)

  • Xamarin.Android (emulator, API level 15)

  • WinRT 8.1 (x86, on Windows 8.1)

  • Windows Phone 8.0 (in the emulator)

  • Windows Phone 8.1 (RT flavor, in the emulator)

  • Windows Phone 8.1 (Silverlight flavor, in the emulator)

  • .NET 4.5 (on Windows 8.1)

On all the Windows platforms, the tests all pass. For iOS and Android, the only failures are the expected ones.

Why do some of your tests fail on iOS and Android?

Because the version of SQLite which is preinstalled by Apple/Google is a bit old, and some of my tests are explicitly exercising newer SQLite features.

So is this NuGet package ready for testing?

Yes, please.

Eric, I am much smarter than you. Can I give you some constructive feedback?

Yes, please.

Why is the word "basic" in the name of this package?

I think it likely that I may end up with more than one NuGet package for SQLitePCL.raw. Different packages for different use cases. So I need them to have different names.

This one is "basic" in the sense that it tries to be the package that most people will want to use. All of the other contemplated packages would have some sort of less-appealing word in the name, designed to scare people away. The label for the next one might be "broccoli".

The main tradeoff is the issue of where your app is getting SQLite itself. For [much] more detail on this, see my recent blog entry on The Multiple SQLite Problem.

Anyway, for this "basic" package, the iOS and Android assemblies use the SQLite library which is part of the mobile OS, and all of the Windows assemblies bundle a sqlite3.dll.

Which version of SQLite is bundled on the Windows platforms?

3.8.4.3

How was the bundled SQLite library compiled?

With:

  • SQLITE_ENABLE_FTS4

  • SQLITE_ENABLE_FTS3_PARENTHESIS

  • SQLITE_ENABLE_COLUMN_METADATA

Can I rely on all future versions of this package having the SQLite library compiled with exactly those options?

No.

How do I find this package?

Direct link: https://www.nuget.org/packages/SQLitePCL.raw_basic

And, it comes up for me if I just search for "SQLitePCL" on NuGet.

Hey, there are TWO results of that search? What's the deal?

Mine is the one called SQLitePCL.raw. The other one is by MSOpenTech.

And actually, my work is a [hopefully friendly] fork of theirs. Thanks to those folks (whoever they are) for a solid starting point. I am available to collaborate with them if there is interest.

Why did you fork the other one?

See the README on GitHub for some info on this.

Any caveats when trying to use this on iOS?

AFAIK, no. For me, this Just Works.

Any caveats when trying to use this on Android?

AFAIK, no. For me, this Just Works.

Any caveats when trying to use this on .NET 4.5?

You need to compile for a specific CPU (x86, x64), not "Any CPU". In Visual Studio, right-click on your solution and choose Configuration Manager.

Any caveats when trying to use this on Windows Phone 8.0?

Not really. Just make sure you are building for x86 for the emulator or ARM for an actual device. In Visual Studio, right-click on your solution and choose Configuration Manager.

Any caveats when trying to use this on Windows RT or Windows Phone 8.1?

Two issues:

  • Build failure: You need to compile for a specific CPU (x86, x64, ARM), not "Any CPU". In Visual Studio, right-click on your solution and choose Configuration Manager.

  • Runtime failure, file not found: You need to add a reference to the Visual C++ 2013 Runtime. Hopefully a future version of this package will automatically add this reference for you.

What other forms of this package are you planning?

I'm considering one that doesn't bundle any SQLite instances at all. For use with cases where somebody wants to have their own build of SQLite. Or for people who want to use the SQLite vsix SDK builds on visualstudiogallery.msdn.microsoft.com.

Why are some of the platform assemblies in the build directory instead of the lib directory?

Android and iOS are in lib.

All the Windowsy ones are in build, because they're all CPU-specific, so they need more help than lib can provide. An MSBuild .targets file is used to inject the appropriate reference.

 

Let’s test at Let’s Test

James Bach’s Blog - Fri, 05/23/2014 - 07:55

I’ve been telling people that the best conference I know for thinking testers is Let’s Test (followed closely by CAST, which I will also be at, this year, in New York). Let’s Test was created by people who experienced CAST and wanted to be even more dedicated to Context-Driven testing principles.

Now, I’m here in Stockholm once again to be with the most interesting testers in Europe. I’m not done with my presentations, yet. But I still have a couple of days.

(I will be presenting a new model of what it means to be an excellent observer, together with one or two observation challenges for participants. And Pradeep Soundararajan and I will be presenting a tutorial on reviewing a specification by testing it.)

Let’s Test is not for the faint of heart. Events go on day and night. I suffer from terrible jet lag, so I probably won’t be seen after dinner. But for you crazy kids, it’s a great place to try a testing exercise, or present one.

(Note: I’m being paid to teach at Let’s Test. I don’t get a percentage of the gate, though– I get paid the same whether anyone shows up or not.)

Australia Let’s Test

I will also be in Australia for the first Let’s Test happening down there, in September. There are some interesting testers in Oz. I’m sure they will all be there. It will be the first great party of ambitious intellectual testers that I know of in the history of Australian testing.

Anne-Marie Charrett and I will be doing our Coaching Testers tutorial, which is the only time this year we will teach it together.

“Intellectual” testers?

Why do I keep saying that? Because the state of the practice in testing is for testers NOT to read about their craft, NOT to study social science or know anything about the proper use of statistics or the meaning of the word “heuristic”, and NOT to challenge the now 40 year stale ideas about making testing into factory work that lead directly to mass outsourcing of testing to lowest bidder instead of the most able tester.

Intellectual testers are not the most common type of tester.

The ISTQB and similar programs require your stupidity and your fear in order to survive. And their business model is working. They don’t debate us for the same reason that HP made billions of dollars selling bad test tools by pitching them to non-testers who had more money than wisdom. Debating us would spoil their racket.

So, don’t be like that. Be smart.

I’ll see you at Let’s Test.

Categories: Testing & QA

Mike Walker has joined Gartner

Mike Walker's Blog - Thu, 05/22/2014 - 15:42

I’ve got some very exciting news to share with all of you. I have accepted the position of Research Director within the Enterprise Architecture practice at Gartner!

As many of you who read my blog know, I often comment on the analyst community and, more specifically, on the leader in that community, Gartner. I have a great deal of respect not only for the research but also for the Gartner EA team. I will be joining a stellar team of luminaries that has been providing enterprise architecture guidance for a very long time. It is very humbling to be part of this already high-octane team.

You might be wondering: why did I decide to join Gartner? It was a bit of an interesting discovery for me. I have spent my career primarily in two worlds: first in the practitioner space, as an enterprise architect or chief architect, and second at technology vendors, in advisory and chief architect roles. Each of these roles and organizations provided great experiences in their own right and gave me a great deal of experience and enjoyment.

However, when I looked at Gartner as a possible career choice, it offered a very different value proposition. As a practitioner working for a single company, my role and scope of influence were limited to that one company, with an occasional speaking engagement or blog post. And even when I did speak publicly, many factors limited my ability to provide the candid guidance that I would have preferred to give, primarily intellectual-property and competitive considerations.

At large technology firms I was able to get that broad and pervasive megaphone, which allowed me to amplify my message across many companies and maximize the impact that I could have. However, there is one major drawback: when you work for a technology firm, no matter what, you still have some level of accountability for the company's bottom line, or, put another way, for enabling the sale of technology. While I personally have avoided "big evil vendor" pitches, there is still a very legitimate perception of technology bias.

So I asked myself a question: is it more important to sell technology or to sell enterprise architecture? The answer was very clear to me. It had been for many years, but it was like trying to remember something that's on the tip of your tongue: you know it's there, but you can't quite put your finger on it. Once I realized that enterprise architecture was my true passion, everything else fell into place.

Moving to Gartner is the most logical choice for me given my true passion for advancing the Enterprise Architecture profession, communicating its value and ultimately sharing proven practices. If I want to advance the enterprise architecture profession, Gartner provides the platform, the breadth of clients, the credibility and none of the technology shackles that you would find at a large mega-vendor.

Not only do I think it's a good move for me, but I also think I would be good at being an analyst. After all, many of my roles have included an advisory component to customers, writing white papers and speaking at conferences.

As for my existing customers, many of you are Gartner customers. If you want to continue to engage with me, I would really like that!

Related articles: Five Take-Aways from Gartner Symposium 2013, Recapping the Gartner Enterprise Architecture Summit 2013
Categories: Architecture

Ever have a day like this one?

Eric.Weblog() - Eric Sink - Tue, 05/20/2014 - 19:00
  • Check email and notice a message from somebody having trouble using SQLitePCL.raw on Windows Phone 8.1. Realize that I haven't run the test suite since I started working on the new build scripts. Assume that I broke something.

  • Hook up the automated test project to the output of the new build system. Sure enough, the tests fail.

  • Notice that the error message is different from the one in the user's email.

  • Realize that the user is actually using the old build system, not the new one. Wonder how that could have broken.

  • Bring up the old build system, run the tests. Yep, they fail here too. Must be something in the actual code.

  • Dig around for a while and try to find what changed.

  • Use git to go back to the last commit before I started the new build system stuff. Rebuild all. Run the tests. They pass. Good. Now I just have to diff and figure out which change caused the breakage.

  • git my working directory back to the current version of the code. Rebuild all and run the tests again to watch them fail again. BUT NOW THEY PASS.

  • Wonder if perhaps Visual Studio is less frustrating for people who drink Scotch in the mornings.

  • Decide that maybe something was flaky in my machine. The tests are passing again, so there's no problem.

  • Realize that the user wasn't actually running the test suite. He was trying to reference it from his own project. And he had to do that manually, because I haven't published the nuget package yet. Maybe he just screwed up the reference or didn't copy all the necessary pieces.

  • Run the tests in the new build system to watch them pass there as well. But here they STILL FAIL.

  • Decide to take the build system out of the equation and just finish getting things working right with nuget. Build the unit test package separately in its own solution. Add a reference to the nuget package and start working out the issues.

  • Run the tests. Everything throws because the reference got added to the "bait" version of the PCL instead of to the WP81 platform assembly. Oh well. This is what I need to be fixing anyway.

  • Notice that the .targets file didn't get properly imported into the test project when the package was installed. Wonder why. But that's gotta be why the platform assembly didn't get referenced.

  • Realize that the bait assembly somehow got referenced. Wonder why.

  • What is Scotch anyway? Go read several articles about single malt whiskey.

  • Decide to take nuget out of the equation and focus on why the new build system is producing dlls that won't load.

  • Google the error message "Package failed updates, dependency or conflict validation". I need to know exactly what was the cause of the failure.

  • Realize that the default search engine of IE is Bing. Do the same search in Google. Get different results.

  • Become annoyed when co-worker interrupts me to tell me that there is a new trailer for Guardians of the Galaxy.

  • Read a web page on the Microsoft website which explains how to get the actual details of that error message. Spend time wandering around Event Viewer until I see the right stuff.

  • Realize that the web page is actually talking about WinRT on the desktop, not Windows Phone.

  • Try to find a way to get developer-grade error messages in the Windows Phone emulator. Fail.

  • Notice that below the error message, Visual Studio's suggested resolution is to instead use a unit test project that is targeted for Windows Phone, even though IT ALREADY IS.

  • Blame Steve Ballmer FOR EVERYTHING.

  • Wonder if WP81 is the only thing that broke. Run the tests for WinRT. They fail as well.

  • Get annoyed because the only way Visual Studio can run the unit tests for just one project is to unload all the others.

  • Get upset because the Visual Studio Reload Project command doesn't work the way it did a week or two ago. Now it reloads all the projects instead of just the one I wanted. Did the installation of the Xamarin Visual Studio integration break it?

  • Go back to the very basics. Run the unit tests for plain old .NET 4.5. They pass.

  • Re-run the unit tests for WinRT to watch them fail again. NOW THEY PASS.

  • Realize the co-worker is absolutely right. The most important thing is to watch the Guardians of the Galaxy trailer.

  • Get annoyed because the sound on my MBP isn't working. Watch the whole trailer anyway, without sound.

  • Review all my project settings in the Visual Studio dialogs, just to see if I notice anything odd.

  • Go back to my web browser. Realize that the world of Scotch whiskey might actually be more complicated than Visual Studio.

  • Go home. Discover that the annual spring invasion of ants in our kitchen is proceeding nicely.

  • Fight some more with Visual Studio. Give up. Go to bed.

  • Wake up the next morning. Discover that the teenager's contribution to our war against the ants was to leave unrinsed plates by the sink. Thousands of ants feasting on cheesecake debris and syrup.

  • Open the laptop. Run diff to compare the csproj and vcxproj files from the old build system against the new one. See that there are no differences that should make any difference.

  • Change them all anyway. Update every setting to exactly match the old build system. One at a time. Run the test suite after each tweak so I can figure out exactly which of the seemingly-harmless changes caused the breakage.

  • Wait. My kid had cheesecake and waffles FOR DINNER?

  • Become seriously annoyed that Visual Studio changes the Output pane from "Tests" to "Build" EVERY SINGLE TIME I run the tests.

  • Finish getting all the settings to match. The tests still don't pass.

  • Try to remember if I have ever done anything successfully. Anything at all. Distinctly recall that when I was mowing the lawn this weekend, the grass got shorter. Focus on that accomplishment. Build on that success.

  • Realize that the old build system works and the new one doesn't. There has to be a difference that I'm missing. I just have to find it.

  • Go back to the old build system. Rebuild all. Run the tests so I can watch them pass and start over from there. BUT NOW THEY'RE FAILING AGAIN.

  • Go do something else.

 

Using Dropwizard in combination with Elasticsearch

Gridshore - Thu, 05/15/2014 - 21:09


How often do you start creating a new application? How often have you thought about configuring an application: where to locate a config file, how to load the file, what format to use? Another thing you regularly do is add timers to track execution time, management tools for thread analysis, etc. From a more functional perspective, you want a rich client-side application using AngularJS, so you need a REST backend to deliver JSON documents. Does this sound like something you need regularly? Then this blog post is for you. If you never need this, please keep on reading; you might like it.

In this blog post I will create an application that shows you all the available indexes in your elasticsearch cluster. Not very sexy, but I am going to use AngularJS, Dropwizard and elasticsearch. That should be enough to get a lot of you interested.


What is Dropwizard

Dropwizard is a framework that combines a lot of other frameworks that have become the de facto standard in their own domain: Jersey for the REST interface, Jetty as a lightweight container, Jackson for JSON parsing, FreeMarker for front-end templates, Metrics for metrics and SLF4J for logging. Dropwizard has some utilities to combine these frameworks and enable you as a developer to be very productive in constructing your application. It provides building blocks like lifecycle management, resources, views, loading of bundles, configuration and initialization.

Time to jump in and start creating an application.

Structure of the application

The application is set up as a Maven project. To start off we only need one dependency:

<dependency>
    <groupId>io.dropwizard</groupId>
    <artifactId>dropwizard-core</artifactId>
    <version>${dropwizard.version}</version>
</dependency>

If you want to follow along, you can check my github repository:


https://github.com/jettro/dropwizard-elastic

Configure your application

Every application needs configuration. In our case we need to configure how to connect to elasticsearch. In Dropwizard you extend the Configuration class and create a POJO. Using Jackson and Hibernate Validator annotations we configure validation and serialization. In our case the configuration object looks like this:

public class DWESConfiguration extends Configuration {
    @NotEmpty
    private String elasticsearchHost = "localhost:9200";

    @NotEmpty
    private String clusterName = "elasticsearch";

    @JsonProperty
    public String getElasticsearchHost() {
        return elasticsearchHost;
    }

    @JsonProperty
    public void setElasticsearchHost(String elasticsearchHost) {
        this.elasticsearchHost = elasticsearchHost;
    }

    @JsonProperty
    public String getClusterName() {
        return clusterName;
    }

    @JsonProperty
    public void setClusterName(String clusterName) {
        this.clusterName = clusterName;
    }
}

Then you need to create a yml file containing the properties of the configuration class as well as sensible values. In my case it looks like this:

elasticsearchHost: localhost:9300
clusterName: jc-play

How often have you started a project by creating the configuration mechanism? Usually I start with Maven and quickly move on to Tomcat. Not this time: we did Maven, now we did configuration. Next up is the runner for the application.

Add the runner

This is the class we run to start the application; internally, Jetty is started. We extend the Application class and use the configuration class as its generic type parameter. This is the class that initializes the complete application: bundles are initialized, classes are created and passed to other classes.

public class DWESApplication extends Application<DWESConfiguration> {
    private static final Logger logger = LoggerFactory.getLogger(DWESApplication.class);

    public static void main(String[] args) throws Exception {
        new DWESApplication().run(args);
    }

    @Override
    public String getName() {
        return "dropwizard-elastic";
    }

    @Override
    public void initialize(Bootstrap<DWESConfiguration> dwesConfigurationBootstrap) {
    }

    @Override
    public void run(DWESConfiguration config, Environment environment) throws Exception {
        logger.info("Running the application");
    }
}

When starting this application, we have no success: a big error, because we did not register any resources.

ERROR [2014-05-14 16:58:34,174] com.sun.jersey.server.impl.application.RootResourceUriRules: 
	The ResourceConfig instance does not contain any root resource classes.

Nothing happens, we just need a resource.

Before we can return something, we need to have something to return. We create a POJO called Index that contains one property called name. For now we just return this object as a JSON object. The following code shows the IndexResource that handles the requests related to the indexes.

@Path("/indexes")
@Produces(MediaType.APPLICATION_JSON)
public class IndexResource {

    @GET
    @Timed
    public Index showIndexes() {
        Index index = new Index();
        index.setName("A Dummy Index");

        return index;
    }
}
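
For completeness, the Index POJO used above is nothing more than a name property; a minimal sketch (the getter/setter pair is assumed, Jackson needs accessors to serialize the object to JSON):

public class Index {

    private String name;

    // Jackson serializes this property, e.g. {"name":"A Dummy Index"}
    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}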

Back in the IndexResource, the @GET, @Path and @Produces annotations come from the Jersey REST library; @Timed comes from the Metrics library. Before starting the application we need to register our index resource with Jersey.

    @Override
    public void run(DWESConfiguration config, Environment environment) throws Exception {
        logger.info("Running the application");
        final IndexResource indexResource = new IndexResource();
        environment.jersey().register(indexResource);
    }

Now we can start the application using the following runner from IntelliJ. Later on we will create the executable jar.

Running the app from IntelliJ

Run the application again; this time it works. You can browse to http://localhost:8080/indexes and see our dummy index as a nice JSON document. There is something in the logs though. I love this message; this is what you get when running the application without health checks.

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!    THIS APPLICATION HAS NO HEALTHCHECKS. THIS MEANS YOU WILL NEVER KNOW      !
!     IF IT DIES IN PRODUCTION, WHICH MEANS YOU WILL NEVER KNOW IF YOU'RE      !
!    LETTING YOUR USERS DOWN. YOU SHOULD ADD A HEALTHCHECK FOR EACH OF YOUR    !
!         APPLICATION'S DEPENDENCIES WHICH FULLY (BUT LIGHTLY) TESTS IT.       !
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Creating a health check

We add a health check. Since we are creating an application that interacts with elasticsearch, we create a health check for elasticsearch. Don't think too much about how we connect to elasticsearch yet; we get there later on.

public class ESHealthCheck extends HealthCheck {

    private ESClientManager clientManager;

    public ESHealthCheck(ESClientManager clientManager) {
        this.clientManager = clientManager;
    }

    @Override
    protected Result check() throws Exception {
        ClusterHealthResponse clusterIndexHealths = clientManager.obtainClient().admin().cluster().health(new ClusterHealthRequest())
                .actionGet();
        switch (clusterIndexHealths.getStatus()) {
            case GREEN:
                return HealthCheck.Result.healthy();
            case YELLOW:
                return HealthCheck.Result.unhealthy("Cluster state is yellow, maybe replication not done? New Nodes?");
            case RED:
            default:
                return HealthCheck.Result.unhealthy("Something is very wrong with the cluster", clusterIndexHealths);

        }
    }
}

Just like with the resource handler, we need to register the health check. Alongside the standard HTTP port for normal users, another port is exposed for administration. There you can find reports like Metrics, Ping, Threads and Healthcheck.

    @Override
    public void run(DWESConfiguration config, Environment environment) throws Exception {
        ESClientManager clientManager = new ESClientManager(config.getElasticsearchHost(), config.getClusterName());

        logger.info("Running the application");
        final IndexResource indexResource = new IndexResource(clientManager);
        environment.jersey().register(indexResource);

        final ESHealthCheck esHealthCheck = new ESHealthCheck(clientManager);
        environment.healthChecks().register("elasticsearch", esHealthCheck);
    }

You as a reader now have an assignment: start the application and check the admin pages yourself at http://localhost:8081. We are going to connect to elasticsearch in the meantime.

Connecting to elasticsearch

We connect to elasticsearch using the transport client. This is taken care of by the ESClientManager, which makes use of Dropwizard's managed classes; the lifecycle of these classes is managed by Dropwizard. From the configuration object we take the host(s) and the cluster name. Now we can create a client in the start method and pass this client to the classes that need it. The first class that needs it is the health check, but we already had a look at that one. Through the ESClientManager, other classes have access to the client. The Managed interface mandates a start as well as a stop method.

    @Override
    public void start() throws Exception {
        Settings settings = ImmutableSettings.settingsBuilder().put("cluster.name", clusterName).build();

        logger.debug("Settings used for connection to elasticsearch : {}", settings.toDelimitedString('#'));

        TransportAddress[] addresses = getTransportAddresses(host);

        logger.debug("Hosts used for transport client : {}", (Object) addresses);

        this.client = new TransportClient(settings).addTransportAddresses(addresses);
    }

    @Override
    public void stop() throws Exception {
        this.client.close();
    }
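
For context, these start and stop methods live in a class that implements Dropwizard's Managed interface. A minimal sketch of what the surrounding ESClientManager might look like is shown below; the constructor arguments and obtainClient() follow the usage elsewhere in this post, while the getTransportAddresses helper is an assumption that parses comma-separated host:port values:

// sketch only: uses io.dropwizard.lifecycle.Managed and the elasticsearch 1.x transport client classes
public class ESClientManager implements Managed {

    private final String host;
    private final String clusterName;
    private Client client;

    public ESClientManager(String host, String clusterName) {
        this.host = host;
        this.clusterName = clusterName;
    }

    // gives resources and health checks access to the client created in start()
    public Client obtainClient() {
        return client;
    }

    @Override
    public void start() throws Exception {
        Settings settings = ImmutableSettings.settingsBuilder().put("cluster.name", clusterName).build();
        this.client = new TransportClient(settings).addTransportAddresses(getTransportAddresses(host));
    }

    @Override
    public void stop() throws Exception {
        this.client.close();
    }

    // assumed helper: splits "host1:port1,host2:port2" into transport addresses
    private TransportAddress[] getTransportAddresses(String hosts) {
        String[] parts = hosts.split(",");
        TransportAddress[] addresses = new TransportAddress[parts.length];
        for (int i = 0; i < parts.length; i++) {
            String[] hostAndPort = parts[i].trim().split(":");
            addresses[i] = new InetSocketTransportAddress(hostAndPort[0], Integer.parseInt(hostAndPort[1]));
        }
        return addresses;
    }
}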

We need to register our managed class with the lifecycle of the environment in the runner class.

    @Override
    public void run(DWESConfiguration config, Environment environment) throws Exception {
        ESClientManager esClientManager = new ESClientManager(config.getElasticsearchHost(), config.getClusterName());
        environment.lifecycle().manage(esClientManager);
    }	

Next we want to change the IndexResource to use the elasticsearch client to list all indexes.

    public List<Index> showIndexes() {
        IndicesStatusResponse indices = clientManager.obtainClient().admin().indices().prepareStatus().get();

        List<Index> result = new ArrayList<>();
        for (String key : indices.getIndices().keySet()) {
            Index index = new Index();
            index.setName(key);
            result.add(index);
        }
        return result;
    }
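
Since the resource now obtains the client through the ESClientManager, its constructor changes as well; a small sketch of the updated class outline (constructor injection assumed, matching how the resource is created in the run method):

@Path("/indexes")
@Produces(MediaType.APPLICATION_JSON)
public class IndexResource {

    private final ESClientManager clientManager;

    // the client manager is passed in when the resource is registered in run()
    public IndexResource(ESClientManager clientManager) {
        this.clientManager = clientManager;
    }

    // showIndexes() as listed above
}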

Now we can browse to http://localhost:8080/indexes and we get back a nice json object. In my case I got this:

[
	{"name":"logstash-tomcat-2014.05.02"},
	{"name":"mymusicnested"},
	{"name":"kibana-int"},
	{"name":"playwithip"},
	{"name":"logstash-tomcat-2014.05.08"},
	{"name":"mymusic"}
]

Creating a better view

Having this REST-based interface with JSON documents is nice, but not if you are a human like me (well, kind of). So let us add some AngularJS magic to create a slightly better view. The following page can of course also be created with simpler view technologies, but I want to demonstrate what you can do with Dropwizard.

First we make it possible to use FreeMarker as a template engine. To make this work we need two additional dependencies: dropwizard-views and dropwizard-views-freemarker. The first step is a view class that knows which FreeMarker template to load and provides the fields that your template can read. In our case we want to expose the cluster name.

public class HomeView extends View {
    private final String clusterName;

    protected HomeView(String clusterName) {
        super("home.ftl");
        this.clusterName = clusterName;
    }

    public String getClusterName() {
        return clusterName;
    }
}

Then we have to create the FreeMarker template. It looks like the following code block:

<#-- @ftlvariable name="" type="nl.gridshore.dwes.HomeView" -->
<html ng-app="myApp">
<head>
    <title>DWAS</title>
</head>
<body ng-controller="IndexCtrl">
<p>Underneath a list of indexes in the cluster <strong>${clusterName?html}</strong></p>

<div ng-init="initIndexes()">
    <ul>
        <li ng-repeat="index in indexes">{{index.name}}</li>
    </ul>
</div>

<script src="/assets/js/angular-1.2.16.min.js"></script>
<script src="/assets/js/app.js"></script>
</body>
</html>

By default you put these templates in the resources folder, using the same sub-folders as the package of your view class. If you look closely you see some AngularJS code; more on this later. First we need to map a URL to the view. This is done with a resource class. The following code block shows the HomeResource class that maps "/" to the HomeView.

@Path("/")
@Produces(MediaType.TEXT_HTML)
public class HomeResource {
    private String clusterName;

    public HomeResource(String clusterName) {
        this.clusterName = clusterName;
    }

    @GET
    public HomeView goHome() {
        return new HomeView(clusterName);
    }
}

Notice that we now configure it to return text/html. The goHome method is annotated with @GET, so each GET request to the path "/" is mapped to the HomeView. Now we need to tell Jersey about this mapping. That is done in the runner class.

final HomeResource homeResource = new HomeResource(config.getClusterName());
environment.jersey().register(homeResource);

Using assets

The final part I want to show is how to use the assets bundle from Dropwizard to map a folder "/assets" to a part of the URL. To use this bundle you have to add the following dependency in Maven: dropwizard-assets. Then we can easily map the assets folder in our resources folder to the web assets folder:

    @Override
    public void initialize(Bootstrap<DWESConfiguration> dwesConfigurationBootstrap) {
        dwesConfigurationBootstrap.addBundle(new ViewBundle());
        dwesConfigurationBootstrap.addBundle(new AssetsBundle("/assets/", "/assets/"));
    }

That is it; now you can load the Angular JavaScript file. My very basic sample has one Angular controller. This controller uses the $http service to call our /indexes URL. The result is used to show the indexes in a list view.

// app.js: create the Angular module referenced by ng-app="myApp" in the template
var myApp = angular.module('myApp', []);

myApp.controller('IndexCtrl', function ($scope, $http) {
    $scope.indexes = [];

    $scope.initIndexes = function () {
        $http.get('/indexes').success(function (data) {
            $scope.indexes = data;
        });
    };
});

And the result

the very basic screen showing the indexes

Concluding

This was my first go at using Dropwizard, and I must admit I like what I have seen so far. I am not sure if I would create a big application with it; on the other hand, it is really structured. Before moving on I would need to read a bit more about the library and check all of its options. There is a lot more possible than what I have shown you here.


The post Using Dropwizard in combination with Elasticsearch appeared first on Gridshore.

Categories: Architecture, Programming

Just Released: World-Class EA: Business Reference Model

Mike Walker's Blog - Fri, 05/09/2014 - 17:47

Check out this new whitepaper from The Open Group in the area of Business Architecture. What are your thoughts on the latest edition in their World-Class EA series? Does it get The Open Group or TOGAF closer and deeper into the business architecture world? Is the material useful? I'd love to hear your thoughts.

The whitepaper is entitled Business Reference Model and is part of the World-Class EA series.

See the whitepaper description below:

Business architecture is being used to design, plan, execute, and govern change initiatives throughout public and private sector entities. An architectural approach can systematically highlight the most effective state for a given environment, and then define how change can be effected within acceptable benefit, cost, and risk parameters. A key challenge to this approach is the consistent definition of the organization and where it needs to be, and in response this White Paper introduces a comprehensive reference model for business. The Business Reference Model (BRM) can be applied to both private and public sector organizations alike, and gives complex organizations a common way to view themselves in order to plan and execute effective transformational change.

It is envisaged that the introduction of a BRM into a transformation planning exercise will increase collaboration across the business, increase awareness of organizational opportunity and risk, and facilitate more holistic business investment; all of which culminates in an improved and more sustainable working environment leading to a better working world.

 

Find the whitepaper here: http://bit.ly/1sagaSK

Categories: Architecture

Texas AEA Summit Recap

Mike Walker's Blog - Sun, 04/27/2014 - 20:07


Wow, what a great event. It's been a month since the event and we are still getting a great deal of feedback on its value. Thank you to all of you who attended.

With close to 125 attendees for this first-time summit of the Texas Association of Enterprise Architects, we couldn't have asked for a better turnout. We had attendees from multiple sectors, multiple Texas cities (Dallas, San Antonio and Houston) and from other architecture associations that decided to show up. As an example, we've had the San Antonio Enterprise Architecture Association come to our meetings along with IASA. At this event we had the good fortune of having the IASA Austin chapter president and an IASA marketing person show support for what we are doing at this summit. What I want people to take away from this is that the Texas AEA is open to all architecture professionals regardless of affiliation or level of experience. All are welcome.

 


 

The theme for the summit centered around real-world practitioner stories: Keeping EA Real. I think we delivered on that with our excellent speakers and their real-world stories from the trenches.

However, don’t just take my word for it. Check out all the great social media activity on the event at hashtag #TexasAEA


 

All of the presentations from the day are available on the Global Association of Enterprise Architects portal under the Texas Chapter. All you need to do is log in and the presentations are freely available to you. We also recorded the sessions and are looking for a way to share those with you as well.

This summit may have been very different from the traditional conferences you are used to. The day was split into two major sections. It started out with a traditional conference format, with keynote speakers and sessions geared toward more of a one-way dialogue. We had Jeanne Ross and John Zachman as the headliners for the summit, followed by 20-minute TED-style customer case studies. These were rapid-succession, get-to-the-point sessions without any of the fluff. I think this was extremely valuable.

After lunch, the second half of the summit was built to be more interactive, similar to what you would find at an unconference. We kicked off the second half with a panel drawn from the TED-style presentations from earlier in the morning, built to be interactive and to serve the questions of the audience. From there we went into open-space, unconference-style roundtables. At the beginning of the conference attendees voted on which topics they wanted to discuss at the roundtables, and the topics with the highest votes became the roundtables people went to. After the roundtables we recapped the entire day and went directly to the social event afterwards, again to facilitate meaningful conversation amongst enterprise architecture peers.

Below is a further breakdown of the conference. I don't cover every point that was made, but rather the salient points that resonated with me and that I would like to share with all of you.

Morning Sessions

Welcome address

The day started off with me giving the welcome address. I talked about our vision and charter for the Texas chapter. This included some things we had already discussed in our first meeting, but for the sake of the new attendees we wanted to go through the full vision of the Texas AEA.

There were also some highlights that we discussed, namely the current state of the chapter: month over month, our chapter doubles in size.


The chapter has only been around for three months. What this tells me is that there is an enormous amount of demand for what we're doing here for the EA profession, at least here in Texas. With both members and non-members of the AEA attending, the turnout has shown us an overwhelming amount of support for what we're doing. As an example, we usually find that about one fourth of the typical audience is non-members, and month over month we see those very non-members transition into members.

 

Another important announcement is around professional development. The AEA will support the ability to issue credits for activities you do through the AEA. So if you go to a summit, present at a monthly meeting or collaborate through the portal, these all generate credits that demonstrate all the great things you've been doing in the profession. Keep in mind this only applies to Open CA and not skills-based certifications like TOGAF. This takes certification to the next level of maturity and gives people credit where credit is due.

 

Along with professional development credits, a related announcement is around actual certification. Here in the Texas AEA we have several Open CA certified enterprise architects. With that, we are planning on creating a mentorship program to help those who want to achieve their Open CA certification. We have also been granted the ability to hold certification boards, and we will be the first AEA chapter to do this. Very exciting news!

 

Jeanne Ross: Enterprise Architecture State of the Union


Jeanne Ross opened up the conference with her keynote. Unfortunately she was in Paris this week but delivered the next best thing, a prerecorded message just for us!

Jeanne continued by describing the evolution of her research at MIT Sloan. The message that sticks out most in my mind centered around a change in mentality that enterprise architects need to adopt. Jeanne covered this, and I wholeheartedly agree: enterprise architects should stop trying to have their customers understand exactly what they do. Rather, we should focus on what our customers want, instead of on having our customers understand exactly what and how we execute the end deliverable for the end customer.

I keep it as simple as this: if I hire a plumber to unclog a drain at my home, I don't want to understand how to be a plumber, I just want my drain unclogged. But in this scenario, with enterprise architects as the plumber, we're trying to give the customer a schematic of the drain system and discuss optimization and tooling over a set of blueprints that I, as a consumer, don't fully understand and as such can't have any informed opinion on. It's just a waste of time and energy. I think EAs have a big opportunity here to change their mindset.

 

John Zachman: Enterprise physics 101


Next up was John Zachman. Just being in Zachman's presence is extremely humbling; after all, we wouldn't be here if it wasn't for him. And he seems to be the perfect EA. Not because he started this whole thing and I'm a bit starstruck, but because he has the demeanor and personality traits of the ideal Enterprise Architect. He has the ability to greatly influence a room while also checking his ego at the door. I was pleasantly surprised by how humble he really was.

John took us through his latest thinking on the Zachman framework. He discussed how, over the past couple of years, he has learned a great deal about enterprise architecture. This was through a colleague of his based in India who was building a set of EA consultancy services around the Zachman framework. Exercising the framework broadly like this exposed quite a few things that hadn't been considered before.

He explained to the group the philosophy behind the Zachman framework. Essentially it is about ensuring that you ask all the right questions, to make sure that you have a complete understanding of what is to be architected. So John borrowed from the six interrogatives that fully complete a story: who, what, when, where, why and how. With this, he explained that his framework really isn't an EA framework but rather an ontology. Personally, I couldn't agree more. You can see my post on this topic here.

Another important point that John made was in a similar vein to Jeanne Ross's. He chose to look at the EA profession and how we have been conducting ourselves. While Jeanne focused on the interpersonal, or soft, skills, Mr. Zachman looked at it from the perspective of what we do as enterprise architects. The analogy he used was comparing what we do to either a manufacturer or an engineer. John's point was that we call ourselves engineers, or compare what we do with engineering, but in reality we conduct ourselves more as manufacturers. Meaning that there isn't much that is truly engineered and thought through with great detail and rigor; rather, we are more supply-line manufacturers churning out widgets.

This is a very interesting analogy, and not one I think is easily understood, but I get the intent and agree with it. If you've heard me talk, you know that I talk about architecture versus implementation. This is essentially what Mr. Zachman is talking about here. Architecture is all about planning, designing and engineering. The things we do after architecture are all about executing, meaning we go and build or, in John's terms, manufacture.

After this fundamental framing of how we conduct ourselves in this profession, and a bit of stage setting for what's next, John went through his ontology, or what is commonly referred to as the Zachman framework. He referred to it as the periodic table of elements for enterprise architecture: it has all of the fundamental elements of what we need to do in enterprise architecture. The way I refer to it, it's a measure of completeness. But it's up to you to figure out the right questions to ask and how to apply this tool. It's not a given that every box gets checked off or every question gets answered; that's where your judgment comes in.

When John talks about the usage of this periodic table for enterprise architecture, he talks a great deal about how to compose and implement it, and he draws analogies from the chemistry world. He challenges us to think about using the foundational elements versus what he refers to as composites: already prefabricated or combined foundational elements of the Zachman framework used to make business decisions. His assertion, and I agree, is that when we do that we inherit a whole set of constraints or objectives that we may not even know we are signing up for. So taking a step back and looking at all the foundational elements might be a really good thing. Keep in mind, though, that having composites isn't entirely a bad thing in my opinion; you just want to make sure that you understand all of their characteristics.

The last and final message that John delivered was around misconceptions of the framework. Mr. Zachman made it very clear that his framework was never intended to operate on its own. It is merely an ontology.

Again, thank you to the attendees.

Categories: Architecture

Yet More Change for the Capitals

DevHawk - Harry Pierson - Sat, 04/26/2014 - 21:13

Six years ago, I was pretty excited about the future for the Washington Capitals. They had just lost their first round match up with the Flyers – which was a bummer – but they had made the playoffs for the first time in 3 seasons. I wrote at the time:

Furthermore, even though they lost, these playoffs are a promise of future success. I tell my kids all the time that the only way to get good at something is to work hard while you’re bad at it. Playoff hockey is no different. Most of the Caps had little or no playoff experience going into this series and it really showed thru the first three games. But they kept at it and played much better over the last four games of the series. They went 2-2 in those games, but the two losses went to overtime. A little more luck (or better officiating) and the Caps are headed to Pittsburgh instead of the golf course.

What a difference six seasons makes. Sure, they won the Presidents' Trophy in 2010. But the promise of future playoff success has been broken, badly. The Caps have been on a pretty steep decline after getting beaten by the eighth-seeded Canadiens in the first round of the playoffs in 2010. Since then, they've switched systems three times and head coaches twice. This year, they missed the playoffs entirely even with Alex Ovechkin racking up a league-leading 51 goals.

Today, the word came down that both the coach and general manager have been let go. As a Caps fan, I’m really torn about this. I mean, I totally agree that the coach and GM had to go – frankly, I was surprised it didn’t happen 7-10 days earlier. But now what do you do? The draft is two months and one day away, free agency starts two days after that. The search for a GM is going to have to be fast. Then the GM will have to make some really important decisions about players at the draft, free agency and compliance buyouts with limited knowledge of the players in our system. Plus, he’ll need to hire a new head coach – preferably before the draft as well.

The one positive note is that the salary cap for the Capitals looks pretty good for next year. The Capitals currently have the second largest amount of cap space / open roster slot in the league. (The Islanders are first with $14.5 million / open roster slot. The Caps have just over $7 million / open roster slot.) They have only a handful of unrestricted free agents to resign – with arguably only one “must sign” (Mikhail Grabovski) in the bunch. Of course, this could also be a bug rather than a feature – having that many players under contract may make it harder for the new GM to shape the team in his image.

Whoever the Capitals hire as GM and coach, I'm not expecting a promising start. It feels like next season is already a wash, and we're not even finished with the first round of this year's playoffs yet.

I guess it could be worse.

I could be a Toronto Leafs fan.

Categories: Architecture, Programming

Brokered WinRT Components Step Three

DevHawk - Harry Pierson - Fri, 04/25/2014 - 16:45

So far, we’ve created two projects, written all of about two lines of code and we have both our brokered component and its proxy/stub ready to go. Now it’s time to build the Windows Runtime app that uses the component. So far, things have been pretty easy – the only really tricky and/or manual step so far has been registering the proxy/stub, and that’s only tricky if you don’t want to run VS as admin. Unfortunately, tying this all together in the app requires a few more manual steps.

But before we get to the manual steps, let's create the WinRT client app. Again, we're going to create a new project, but this time we're going to select "Blank App (Windows)" from the Visual C# -> Store Apps -> Windows App node of the Add New Project dialog. Note, I'm not using "Blank App (Universal)" or "Blank App (Windows Phone)" because the brokered WinRT component feature is not supported on Windows Phone. Call the client app project whatever you like; I'm calling mine "HelloWorldBRT.Client".

Before we start writing code, we need to reference the brokered component. We can't reference the brokered component directly or it will load in the sandboxed app process. Instead, the app needs to reference a reference-assembly version of the .winmd that gets generated automatically by the proxy/stub project. Remember in the last step when I said Kieran Mockford is an MSBuild wizard? The proxy/stub template project includes a custom target that automatically publishes the reference assembly winmd file used by the client app. When he showed me that, I was stunned – as I said, the man is a wizard. This means all you need to do is right click on the References node of the WinRT client app project and select Add Reference. In the Reference Manager dialog, add a reference to the proxy/stub project you created in step two.

Now I can add the following code to the top of my App.OnLaunched function. Since this is a simple Hello World walkthru, I'm not going to bother to build any UI. I'm just going to inspect variables in the debugger. Believe me, the less UI I write, the better for everyone involved. Note, I've also added the P/Invoke signatures for GetCurrentProcessId/GetCurrentThreadId to the client app, like I did in the brokered component in step one. This way, I can get the process and thread IDs for both the app and broker process and compare them.

var pid = GetCurrentProcessId();
var tid = GetCurrentThreadId();

var c = new HelloWorldBRT.Class();
var bpid = c.CurrentProcessId;
var btid = c.CurrentThreadId;

At this point the app will compile, but if I run it the app will throw a TypeLoadException when it tries to create an instance of HelloWorldBRT.Class. The type can't be loaded because we're using the reference assembly .winmd published by the proxy/stub project – it has no implementation details, so it can't load. In order to be able to load the type, we need to declare HelloWorldBRT.Class as a brokered component in the app's package.appxmanifest file. For non-brokered components, Visual Studio does this for you automatically; for brokered components, unfortunately, we have to do it manually. Every activatable class (i.e. every class you can construct via "new") needs to be registered in the appx manifest this way.

To register HelloWorldBRT.Class, right click the Package.appxmanifest file in the client project, select "Open With" from the context menu and then select "XML (Text) editor" from the Open With dialog. Then you need to insert an inProcessServer extension that includes an ActivatableClass element for each class you can activate (aka each class with a public constructor). Each ActivatableClass element contains an ActivatableClassAttribute element that points to the folder where the brokered component is installed. Here's what I added to the Package.appxmanifest of my HelloWorldBRT.Client app.

<Extensions>
  <Extension Category="windows.activatableClass.inProcessServer">
    <InProcessServer>
      <Path>clrhost.dll</Path>
      <ActivatableClass ActivatableClassId="HelloWorldBRT.Class" 
                        ThreadingModel="both">
        <ActivatableClassAttribute 
             Name="DesktopApplicationPath" 
             Type="string" 
             Value="D:\dev\HelloWorldBRT\Debug\HelloWorldBRT.PS"/>
      </ActivatableClass>
    </InProcessServer>
  </Extension>
</Extensions>

The key thing here is the addition of the DesktopApplicationPath ActivatableClassAttribute. This tells the WinRT activation logic that HelloWorldBRT.Class is a brokered component and where the managed .winmd file with the implementation details is located on the device. Note, you can use multiple brokered components in your side loaded app, but they all have the same DesktopApplicationPath.

Speaking of DesktopApplicationPath, the path I'm using here is the final output location of the proxy/stub components generated by the compiler. Frankly, this isn't a good choice for a production deployment, but for the purposes of this walkthrough it'll be fine.


Now when we run the app, we can load a HelloWorldBRT.Class instance and access its properties. We're definitely seeing different process IDs when comparing the result of calling GetCurrentProcessId directly in App.OnLaunched vs. the result of calling GetCurrentProcessId in the brokered component. Of course, each run of the app will have different ID values, but this proves that we are loading our brokered component into a different process from where our app code is running.

Now you’re ready to go build your own brokered components! Here’s hoping you’ll find more interesting uses for them than comparing the process IDs of the app and broker processes in the debugger! :)

Categories: Architecture, Programming

Brokered WinRT Components Step Two

DevHawk - Harry Pierson - Fri, 04/25/2014 - 16:43

Now that we have built the brokered component, we have to build a proxy/stub for it. Proxies and stubs are how WinRT method calls are marshalled across process boundaries. If you want to know more – or you have insomnia – feel free to read all the gory details up on MSDN.

Proxies and stubs look like they might be scary, but they’re actually trivial (at least in the brokered component scenario) because 100% of the code is generated for you. It couldn’t be much easier.

Right click the solution node and select Add -> New Project. Alternatively, you can select File -> New -> Project in the Visual Studio main menu, but if you do that make sure you change the default solution from “Create new Solution” to “Add to Solution”. Regardless of how you launch the new project wizard, search for “broker” again, but this time select the “Brokered Windows Runtime ProxyStub” template. Give the project a name – I chose “HelloWorldBRT.PS”.

Once you’ve created the proxy/stub project, you need to set a reference to the brokered component you created in step 1. Since proxies and stubs are native, this is a VC++ project. Adding a reference in a VC++ project is not as straightforward as it is in C# projects. Right click the proxy/stub project, select “Properties” and then select Common Properties -> References from the tree on the left. Press the “Add New Reference…” button to bring up the same Add Reference dialog you’ve seen in managed code projects. Select the brokered component project and press OK.

Remember when I said that 100% of the code for the proxy/stub is generated? I wasn’t kidding – creating the project from the template and referencing the brokered component project is literally all you need to do. Want proof? Go ahead and build now. If you watch the output window, you’ll see a bunch of output go by referencing IDL files and MIDLRT among other things. The proxy/stub template has some custom MSBuild tasks that generate the proxy/stub code using winmdidl and midlrt. The process is similar to what is described here. BTW, if you get a chance, check out the proxy/stub project file – it is a work of art. Major props to Kieran Mockford for his msbuild wizardry.

Unfortunately, it’s not enough just to build the proxy/stub – you also have to register it. The brokered component proxy/stub needs to be registered globally on the machine, which means you have to be running as an admin to do it. VS can register the proxy/stub for you automatically, but that means you have to run VS as an administrator. That always makes me nervous, but if you’re OK with running as admin you can enable proxy/stub registration by right clicking the proxy/stub project, selecting Properties, navigating to Configuration Properties -> Linker -> General in the tree of the project properties page, and then changing Register Output to “Yes”.

If you don’t like running VS as admin, you can manually register the proxy/stub by running “regsvr32 <proxystub dll>” from an elevated command prompt. Note, you do have to re-register every time the public surface area of your brokered component changes, so letting VS handle registration is definitely the easier route to go.

In the third and final step, we’ll build a client app that accesses our brokered component.

Categories: Architecture, Programming

Brokered WinRT Components Step One

DevHawk - Harry Pierson - Fri, 04/25/2014 - 16:41

In this step, we’ll build the brokered component itself. Frankly, the only thing that makes a brokered component different from a normal WinRT component is some small tweaks to the project file that enable access to the full .NET Runtime and Base Class Library. The brokered component whitepaper describes these tweaks in detail, but the new brokered component template takes care of them for you.

Start by selecting File -> New -> Project in Visual Studio. With the sheer number of templates to choose from these days, I find it’s easier to just search for the one I want. Type “broker” in the search box and you’ll end up with two choices – the brokered WinRT component and the brokered WinRT proxy/stub. For now, choose the brokered component. We’ll be adding a brokered proxy/stub in step two. Name the project whatever you want. I named mine “HelloWorldBRT”.

This is probably the easiest step of the three, as there’s nothing really special you have to do – just write managed code like you always do. In my keynote demo, this is where I wrote the code that wrapped the existing ADO.NET based data access library. For the purposes of this walkthrough, let’s do something simpler. We’ll use P/Invoke to retrieve the current process and thread IDs. These Win32 APIs are supported for developing WinRT apps and will make it obvious that the component is running in a separate process from the app. Here’s the simple code to retrieve those IDs (hat tip to pinvoke.net for the interop signatures):

using System.Runtime.InteropServices;

public sealed class Class
{
    // P/Invoke declarations for the Win32 process/thread ID functions
    // (interop signatures courtesy of pinvoke.net).
    [DllImport("kernel32.dll")]
    static extern uint GetCurrentThreadId();

    [DllImport("kernel32.dll")]
    static extern uint GetCurrentProcessId();

    // Because this class runs in the broker process, these values will differ
    // from the IDs observed inside the app's own process.
    public uint CurrentThreadId
    {
        get { return GetCurrentThreadId(); }
    }

    public uint CurrentProcessId
    {
        get { return GetCurrentProcessId(); }
    }
}

That’s it! I didn’t even bother to change the class name for this simple sample.

Now, to be clear, there’s no reason why this code needs to run in a broker process. As I pointed out, the Win32 functions I’m wrapping here are supported for use in Windows Store apps. For this walkthrough, I’m trying to keep the code simple in order to focus on the specifics of building brokered components. If you want to see an example that actually leverages the fact that it’s running outside of the App Container, check out the NorthwindRT sample.
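To give a flavor of what a brokered component that genuinely needs the full .NET Framework might look like, here’s a minimal sketch that wraps ADO.NET, along the lines of the data access library I mentioned wrapping in my keynote demo. The CustomerReader class, the query and the connection string are all hypothetical – point them at whatever desktop database you actually have – but the pattern of exposing a sealed WinRT class over System.Data code is the interesting part.

using System.Collections.Generic;
using System.Data.SqlClient;

namespace HelloWorldBRT
{
    // ADO.NET isn't available inside the App Container, but a brokered
    // component runs in a desktop broker process with the full BCL, so
    // System.Data.SqlClient is fair game here.
    public sealed class CustomerReader
    {
        // Illustrative connection string only - substitute your own.
        const string ConnectionString =
            @"Data Source=(localdb)\v11.0;Initial Catalog=Northwind;Integrated Security=true";

        public IList<string> GetCustomerNames()
        {
            var names = new List<string>();
            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand("SELECT ContactName FROM Customers", connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        names.Add(reader.GetString(0));
                    }
                }
            }
            return names;
        }
    }
}

Everything else in this walkthrough – the proxy/stub, the manifest registration – applies to a component like this unchanged.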

In the next step, we’ll add the proxy/stub that enables this component to communicate across a process boundary.

Categories: Architecture, Programming

Brokered WinRT Components Step-by-Step

DevHawk - Harry Pierson - Fri, 04/25/2014 - 16:40

Based on the feedback I’ve gotten since my keynote appearance @ Build – both in person and via email & twitter – there are a lot of folks who are excited about the Brokered WinRT Component feature. However, I’ve been advising folks to hold off a bit until the new VS templates were ready. Frankly, the developer experience for this feature is a bit rough and the VS template makes the experience much better. Well, hold off no longer! My old team has published the Brokered WinRT Component Project Templates up on the Visual Studio Gallery!

Now that the template is available, I’ve written a step-by-step guide demonstrating how to build a “Hello World” style brokered component. Hopefully, this will help folks in the community take advantage of this cool new feature in Windows 8.1 Update.

To keep it readable, I’ve broken it into three separate posts:

  1. Step One – building the brokered component itself
  2. Step Two – building the proxy/stub and registering it
  3. Step Three – building the client app that consumes the brokered component

Note, this walkthrough assumes you’re running Windows 8.1 Update, Visual Studio 2013 with Update 2 RC (or later) and the Brokered WinRT Component Project Templates installed.

I hope this series helps you take advantage of brokered WinRT components. If you have any further questions, feel free to drop me an email or hit me up on Twitter.

Categories: Architecture, Programming

Updated Speaker Lineup for The Texas Association of Enterprise Architects Summit

Mike Walker's Blog - Sat, 03/15/2014 - 18:28


Space Limited -- Register Now!

As we get closer to the event on March 20th, seats are becoming more limited. To be sure you have a spot in the audience to hear luminaries like John Zachman and Jeanne Ross, along with local Enterprise Architects from across Texas providing thought leadership you shouldn’t miss, make sure to RSVP soon. Space is limited and admission is first come, first served.

Come join us at the Austin Renaissance in the Arboretum on March 20th, 2014 from 8:00am to 5:00pm, with an evening social from approximately 4:30pm – 7:00pm. The event is highly subsidized by our sponsors, so the cost is just $50.00 for members of the AEA and $100.00 for non-members.


 

Updated Agenda for the Day

We have an action packed agenda!

[Image: summit agenda]

Venue and Lodging Details

The summit will be held at:

RENAISSANCE AUSTIN | 9721 Arboretum Blvd | Austin, TX 78759

www.renaissancehotels.com

Making Reservations

We have negotiated discounted room rates at: $189 per room per night plus applicable tax and service fee. You have until Friday, March 14th before the discount expires. Please make your arrangements before then.

Booking Website:
https://resweb.passkey.com/Resweb.do?mode=welcome_ei_new&eventID=11150319

Questions?

info@TexasAEA.org

Categories: Architecture

Register Now for the Texas Association of Enterprise Architects | Summit

Mike Walker's Blog - Sat, 03/01/2014 - 19:28


1st Texas Association of Enterprise Architects Summit!

Designed by Enterprise Architects, for Enterprise Architects, this summit provides an opportunity to discover the latest approaches and innovative ideas in Strategy, Enterprise Architecture and Business Architecture.

This full-day event is different from most, shifting the focus from a speaker-only format to a mix of presentations and collaborative sessions with highly relevant, proven practices that are applicable to the issues Enterprise Architects face today.

Featured Speakers


Jeanne Ross          John Zachman

 

Register

Come join us at the Austin Renaissance in the Arboretum on March 20th, 2014 from 8:00am to 5:00pm, with an evening social from 5:00pm – 8:00pm. The event is highly subsidized by our sponsors, so the cost is just $50.00 for members of the AEA and $100.00 for non-members.


 

Agenda

We have an action packed agenda!

[Image: summit agenda]

Venue and Lodging Details

The summit will be held at:

RENAISSANCE AUSTIN | 9721 Arboretum Blvd | Austin, TX 78759

www.renaissancehotels.com

We have negotiated discounted room rates at: $189 per room per night plus applicable tax and service fee.

Questions?

info@TexasAEA.org

Categories: Architecture

Open Group Enterprise Architecture San Francisco Conference 2014 Recap

Mike Walker's Blog - Wed, 02/26/2014 - 23:01


The other week, The Open Group kicked off their signature Enterprise Architecture conference in San Francisco. I always enjoy these conferences as they allow me to connect with seasoned, high-caliber peers in the Enterprise Architecture community. However, these events aren’t just fun and games for me; they’re an opportunity to make an impact on the profession. I had an action packed week that included the following:

  • Presentations. I had the privilege of co-presenting the morning keynote [abstract here] with one of my clients and a thought leader in the area of Business Architecture. Wrapping up the morning, the AM keynote presenters got on a panel to discuss future trends and technologies.
  • Standards. I spent a half day on the overall next version of TOGAF and then another two full days leading the Business Architecture workshops to advance Business Architecture within TOGAF.
  • Lots and lots of conversations. Great dialogs with EA’s. I have to say, I learn something new every time I go to one of these conferences. 

Like others in the recent past, the Open Group has taken on an industry focus for these quarterly conferences. The goal here is to provide a more tailored experience to EA’s in those specific industries. This time around it was Healthcare. Like most of these conferences, there was broad international representation from nations such as the UK, US, Colombia, Philippines, Australia, Japan, Netherlands, Germany, South Africa and many others.

The Open Group has posted two summaries as well, and I would suggest taking a look at them. I’m not going to duplicate much of what they covered since they did such a good job. See below:

Conference Announcements

Even though there was a vertical focus, the Open Group did cover additional areas around the profession of EA, forward-looking views on the industry, and architecture topics like big data and cloud.

Included in that were a series of announcements:

 

My Presentations


I had two presentations, both of which were recorded and are available for on-demand viewing.

  1. Business Architecture the Key to Enterprise Transformation
  2. Future Technologies Panel

Business Architecture the Key to Enterprise Transformation

With enterprises being bombarded with emerging and disruptive technologies such as cloud, mobile, social and information, it can be difficult for organizations to really understand how to leverage the new opportunities that present themselves. To do this, Business Architecture is the key to setting the right strategy for these new opportunities. By leveraging this discipline within Enterprise Architecture we can better quantify and qualify these opportunities to ensure we are maximizing value for our companies. In this session, we will explore the current landscape along with proven and leading practices in Business Architecture. The session will conclude with how P&G leverages Business Architecture in their Enterprise Architecture practice.

 


On Demand Video


Presentation Available

 

Useful Links

Categories: Architecture

Texas Association of Enterprise Architects February Meeting

Mike Walker's Blog - Fri, 02/14/2014 - 16:52


SAVE THE DATE: Next Meeting is Thursday February 27th 5:30 – 8:30 at Abel’s North

Our First Event Was a Hit, Thank You!


Wow... I am humbled and inspired by the amazing turnout from all of you. This is truly amazing. Through talking with most of you, I was inspired by your passion for building out a professional organization. What furthered that commitment even more was that a fourth, yes a fourth, of you came from other cities in Texas. What commitment to our local EA community!


For more photos of the event see our Flickr site here: http://www.flickr.com/photos/116214383@N04/sets/72157640513193715/

 

February 27th: Monthly Meeting

The meeting will take place on Thursday, February 27th, 5:30 – 8:00 at Abel’s North. This meeting will be a great opportunity for you to get plugged into the local architecture community along with an introduction to the Texas AEA Chapter.

Agenda

  1. Welcome
  2. Updates and Announcements – Mike Walker
  3. Open Group Conference Recap – Mike Walker
  4. Special Interest Groups - Dave Gibson and Venkat Nambiyur
  5. Social

To join RSVP here.


Abel’s North Austin

4001 West Parmer Lane

Austin, Texas 78727

(512) 835-0010

Note: Abel’s occupies the space previously occupied by Cool River. It sits behind other buildings, so look for the It’s a Grind coffee shop and turn into that small shopping plaza to find Abel’s.

Categories: Architecture

So Many Different Views, So Much Business Architecture Confusion

Mike Walker's Blog - Wed, 02/12/2014 - 18:13

There certainly is no shortage of opinions in the industry around business architecture. They span from what the definition of Business Architecture is, to how one would implement it, to the skills and competencies needed to successfully execute.

It can be overwhelming for anyone who is new to this space. Wrapping your mind around the fact and fiction takes time and is sometimes downright frustrating. This is mainly due to where the Business Architecture discipline is at. I discuss this further in my post, “Business Architecture Ready For Prime Time”.

The second largest contributor to this is all the different views and opinions coming from the marketplace at large. From vendors to consultancies to analysts, and even people like myself who blog about such topics, all of these positions come at the problem from a different angle.

Can it be more difficult?  Hopefully I can help.

How Do I Better Understand Business Architecture

Before we get into the weeds, it’s important to understand a few things.

There isn’t necessarily a who’s-right-and-who’s-wrong answer here. I think of the problem of understanding Business Architecture positioning as a matter of perspective. Let’s assume everyone has the best intentions but carries a certain set of biases, constraints, incentives and motivations. While there are more drivers, these are the major ones.

So to better deal with these dynamics, I suggest using a mental frame for thinking about these contrasting and/or conflicting opinions. When I look at this market, I segment where the opinions or positions are coming from. I then look to understand the motivations behind each opinion and start to form a classification mechanism so it’s easier to understand the different perspectives.

[Figure: mental framework of Business Architecture perspectives – analysts, standards bodies, vendors and consultancies]

The figure above shows the result of such an activity: how I slice and dice the major voices in the business architecture world. I carve out four major perspectives:

  • Analysts
  • Standards Bodies
  • Vendors
  • Consultancies

There is a problem with the model…

You will notice a very key source missing: the actual end organizations accountable and responsible for this effort. While there are pockets where this perspective is represented, this voice is largely missing. I would assume this is because they are too busy adding value in their organizations rather than just talking about it at a conference or on a blog.

 

Walking the Model

The image above is a high-level representation of how to split the different perspectives you may encounter, using the model that I’ve created.

 

Analysts

This perspective is interesting as it has some very unique qualities to it. It’s important to really understand the drivers behind the positions asserted here. Analysts go through a very rigorous vetting process and they are very good at what they do: analyzing the market space. They are not practitioners (anymore) and have very little “skin in the game” when it comes to actual results with end customers. However, they have one very strong attribute that trumps other perspectives: the rich data and the hypotheses or predictions they make with that data. This data is a vital input to the decision making process.

  • Time Horizon of Guidance: Past 1 year to 5 years in the future
  • Perspective: Broad Industry Thought Leading 
  • Context of Guidance: Prediction Based
  • Can be used for: evidence of a position, understanding the market landscape, understanding your peers in the industry and general advisory

 

Standards Bodies

Often times when looking at the perspective of a standards body, you are looking at the one which is the most grounded in day-to-day reality. The goal of most standards bodies is to articulate practices that are proven in the industry. This is what I refer to as the “safety net”. As an example, The Open Group currently has 465 company memberships that represent HQs in 38 countries! Each one of these companies has a say and a vote on what gets ratified as a standard. That is one great vetting process.

  • Time Horizon of Guidance: Past 5 years to 1 year future
  • Perspective: Broad Industry Proven Practice
  • Context of Guidance: Evidence Based
  • Can be used for: practical implementation, provides a safety net or "insurance policy" for making decision on what has worked for customers world-wide, faster time to market on standards and reference architectures

 

Vendors

Another unique perspective here is that of our vendors. While it isn’t always true, most times vendors base their opinions on the ability to drive services and products. After all, that’s their business. I find that this perspective isn’t bad or wrong; it’s actually a very good one, as it can connect the broader, more ethereal perspectives back down to reality with implementable tooling.

  • Time Horizon of Guidance: Past 1 year to 3-5 years in the future
  • Perspective: Thought Leading based on enabling a capability. Usually technical in nature.
  • Context of Guidance: Immediate and Near Future Market Needs
  • Can be used for: advice and tools for automation, deep coverage of a functional or capability area

 

Consultancies

Consultancies are similar to vendors in many regards, with one small tweak: I usually find that services companies don’t share their opinions in detail. This is largely due to their business model. They monetize their knowledge (intellectual property) and services. So if you do hire a services/consultancy firm, I personally find the material to be extremely high quality, but you have to purchase it to get a hold of it.

  • Time Horizon of Guidance: Past 5 years to 2 years in the future
  • Perspective: Proven practice and some leading practice (largely depends on the type of consultancy)
  • Context of Guidance: Evidence based in the context of a specific offering by the firm
  • Can be used for: evidence of a position, understanding the market landscape, understanding your peers in the industry and general advisory

 

Conclusion

Hopefully this mental frame helps reduce the confusion in the marketplace. You probably noticed that I didn’t give you my personal opinions on specific perspective areas or sources; I don’t think that is productive, as each of those sources has its own set of drivers for making those assertions. Rather than doing that, I would rather arm you with the tools I use to form my opinions so that you can do the same.

I find that it helps me in the following ways:

  1. Keeps me Calibrated. Gives me a sanity check when I’m reading material so that I can properly consume the information.
  2. Go Deeper. I know what questions I need to ask myself, or of the perspective that is sending messages my way. That allows me to ask the right questions to form my own opinion.
  3. Ability to Rationalize. I can better classify information and thus use it more effectively.
  4. Effective Communication. I can more effectively use the information in my communication with my peers, customers or partners.
  5. More Effective Decision Making. By understanding these perspectives I can leverage the information in a fit-for-purpose way, thus reducing the risk of mistakes and mishaps.
Categories: Architecture

Business Architecture Ready For Prime Time

Mike Walker's Blog - Mon, 02/10/2014 - 22:30


It’s probably no surprise to any of you that there has been a significant amount of talk about Business Architecture in recent years. Having just come back from the Open Group Conference in San Francisco, I can say it was one of the key topics for practitioners. However, with all the buzz, is Business Architecture really ready for prime time? This is a real and very legitimate question.

Separating fact, fiction and pure buzz is an important exercise for Enterprise Architects. We’ve all learned our lesson from similar buzzworthy topics like SOA and Cloud. So needless to say, diverting energy into unproven spaces or trends is a very risky business. EA’s must continually add value back to the company and must be very judicious with their time. Most EA departments, if not all that I talk to, just don’t have time to experiment on trends or fads.

 

So what does this mean for Business Architecture?

I believe that Business Architecture is one of those topics that has always been here but has gotten very little attention until now. Business Architecture seems like a new discipline, but it isn’t. In a previous post titled Defining Business Architecture, I talked not only about what business architecture is but also some of the history around it. Business Architecture has actually been here for quite some time; you can find evidence of it in the beginnings of the EA frameworks. While “true” EA was in its infancy, so was the Business Architecture component of it. During that time, most things that occurred in the technology space were mostly just that: technology focused. I believe that for many reasons that was the correct thing to do, based on where we were at in our industry, the limited maturity of our discipline, the capabilities that we could offer, and the rudimentary profile of the technology landscape. Simply put…

Crawl, Walk and Run.

 

Enterprise Architecture Evolution… How do we get to Running?

Times have certainly changed, and so has IT, and along with it EA. This industry has matured, and along with that maturity comes more sophistication. Up-leveling what we do has the goal of bringing more value to our customers. What we have found is that delivering context-less technology widgets is just not delivering the right level of value to the capabilities of our businesses.

With all this said, I believe that Business Architecture is still at the beginning of its journey. I do think that we have come a long way in establishing the need and the value, but there is still a great deal of work to be done to get Business Architecture formed as a fully standard practice. We see this by just looking back: from largely ignoring it in the 2000’s to shifting in the 2010’s to addressing it as a key focal point of EA.

So if we look at some common mental frames for calibrating where things are at in the industry, we could use Geoffrey Moore’s Crossing the Chasm as a way to gauge where we are. And if I look at that model, I would say we are still in the chasm; however, we are quickly coming out of it.

 

[Figure: Crossing the Chasm technology adoption curve]

 

So what does that mean? It means that we are seeing evidence of organizations beyond the early adopters and innovators actually using Business Architecture to solve real-world problems: the next class of organizations, the early majority. We’re starting to see a lot of this in the industry.

But let’s also take a look beyond my anecdotal data points with customers and see what the analyst community reports. We see strong evidence there as well. As an example, Gartner conducted a double-blind worldwide survey in 2011 and a 2012 survey of EA summit attendees in the US and Europe; Gartner finds that the vast majority of organizations are focusing their EA efforts on how they can drive business value (including IT), not just on driving IT decisions.

[Chart: Gartner survey of EA effort status – starting, restarting and renewing]

Source: Gartner (2012): Gartner Hype Cycle 2012

The chart above shows that Gartner finds 67% of organizations are either starting (39%), restarting (7%) or renewing (21%) their EA efforts. By the way, Gartner also notes that many of the organizations that state they are “starting EA for the first time” are actually “restarting”, because Gartner has talked to them in the past – it is just that the current EA leaders don’t know about the previous efforts.

The analysts aren’t the only ones reporting on this activity. We have independent bodies of knowledge that have sprung up and are continuing to try to crack the Business Architecture nut. A couple of the most popular ones include:

  • BABoK
  • BizBoK

 

And of course, behind that come vendor practices and boutique consulting practices.

With this flurry of activity from real customers, vendors, analysts and standards bodies alike, Business Architecture is very real and is a discipline within Enterprise Architecture that needs some serious focus.

Categories: Architecture