
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools



Docker and Microsoft: Integrating Docker with Windows Server and Microsoft Azure

ScottGu's Blog - Scott Guthrie - Wed, 10/15/2014 - 14:30

I’m excited to announce today that Microsoft is partnering with Docker, Inc to enable great container-based development experiences on Linux, Windows Server and Microsoft Azure.

Docker is an open platform that enables developers and administrators to build, ship, and run distributed applications. Consisting of Docker Engine, a lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows, Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments.

Earlier this year, Microsoft released support for Docker containers with Linux on Azure. This support integrates with the Azure VM agent extensibility model and Azure command-line tools, and makes it easy to deploy the latest and greatest Docker Engine in Azure VMs and then deploy Docker based images within them.

Docker Support for Windows Server + Docker Hub integration with Microsoft Azure

Today, I’m excited to announce that we are working with Docker, Inc to extend our support for Docker much further.  Specifically, I’m excited to announce that:

1) Microsoft and Docker are integrating the open-source Docker Engine with the next release of Windows Server.  This release of Windows Server will include new container isolation technology, and support running both .NET and other application types (Node.js, Java, C++, etc) within these containers.  Developers and organizations will be able to use Docker to create distributed, container-based applications for Windows Server that leverage the Docker ecosystem of users, applications and tools.  It will also enable a new class of distributed applications built with Docker that use Linux and Windows Server images together.


2) We will support the Docker client natively on Windows.  Developers and administrators running Windows will be able to use the same standard Docker client and interface to deploy and manage Docker based solutions with both Linux and Windows Server environments.
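As a rough illustration of what this means in practice, the snippet below (host names invented for the example) shows how a single Docker client can be pointed at different engines simply by switching the DOCKER_HOST environment variable:

# Target a Linux-based Docker Engine and list its containers
export DOCKER_HOST=tcp://linux-host.example.com:2375
docker ps

# Point the same client at another engine, e.g. a future Windows Server host
export DOCKER_HOST=tcp://winsrv-host.example.com:2375
docker ps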


3) Docker for Windows Server container images will be available in the Docker Hub alongside the Docker for Linux container images available today.  This will enable developers and administrators to easily share and automate application workflows using both Windows Server and Linux Docker images.

4) We will integrate Docker Hub with the Microsoft Azure Gallery and Azure Management Portal.  This will make it trivially easy to deploy and run both Linux and Windows Server based Docker images in Microsoft Azure.

5) Microsoft is contributing code to Docker’s Open Orchestration APIs.  These APIs provide a portable way to create multi-container Docker applications that can be deployed into any datacenter or cloud provider environment. This support will allow a developer or administrator using the Docker command line client to launch either Linux or Windows Server based Docker applications directly into Microsoft Azure from his or her development machine.

Exciting Opportunities Ahead

At Microsoft we continue to be inspired by technologies that can dramatically improve how quickly teams can bring new solutions to market. The partnership we are announcing with Docker today will enable developers and administrators to use the best container tools available for both Linux and Windows Server based applications, and to run all of these solutions within Microsoft Azure.  We are looking forward to seeing the great applications you build with them.

You can learn more about today’s announcements here and here.

Hope this helps,

Scott

Categories: Architecture, Programming

Sponsored Post: Apple, Hypertable, VSCO, Gannett, Sprout Social, Scalyr, FoundationDB, AiScaler, Aerospike, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Apple has multiple openings. Changing the world is all in a day's work at Apple. Imagine what you could do here. 
    • Senior Engineer: Mobile Services. The Emerging Technologies/Mobile Services team is looking for a proactive and hardworking software engineer to join our team. The team is responsible for a variety of high quality and high performing mobile services and applications for internal use. We seek an accomplished server-side engineer capable of delivering an extraordinary portfolio of features and services based on emerging technologies to our internal customers. Please apply here.
    • Apple Pay Automation Engineer. The Apple Pay group within iOS Systems is looking for an outstanding automation engineer with strong experience in building client and server test automation. We work in an agile software development environment and are building infrastructure to move towards continuous delivery, where every code change is thoroughly tested at the push of a button and is considered ready to be deployed if we so choose. Please apply here.
    • Site Reliability Engineer. As a member of the Apple Pay SRE team, you’re expected to not just find the issues, but to write code and fix them. You’ll be involved in all phases and layers of the application, and you’ll have a direct impact on the experience of millions of customers. Please apply here.
    • Software Engineering Manager. In this role, you will be communicating extensively with business teams across different organizations, development teams, support teams, infrastructure teams and management. You will also be responsible for working with cross-functional teams to deliver large initiatives. Please apply here.

  • VSCO. Do you want to: ship the best digital tools and services for modern creatives at VSCO? Build next-generation operations with Ansible, Consul, Docker, and Vagrant? Autoscale AWS infrastructure to multiple Regions? Unify metrics, monitoring, and scaling? Build self-service tools for engineering teams? Contact me (Zo, zo@vs.co) and let’s talk about working together. vs.co/careers.

  • Gannett Digital is looking for talented Front-end developers with strong Python/Django experience to join their Development & Integrations team. The team focuses on video, user generated content, API integrations and cross-site features for Gannett Digital’s platform that powers sites such as http://www.usatoday.com, http://www.wbir.com or http://www.democratandchronicle.com. Please apply here.

  • Platform Software Engineer, Sprout Social, builds world-class social media management software designed and built for performance, scale, reliability and product agility. We pick the right tool for the job while being pragmatic and scrappy. Services are built in Python and Java using technologies like Cassandra and Hadoop, HBase and Redis, Storm and Finagle. At the moment we’re staring down a rapidly growing 20TB Hadoop cluster and about the same amount stored in MySQL and Cassandra. We have a lot of data and we want people hungry to work at scale. Apply here.

  • UI Engineer. AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data. AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of the software that manages application architectures. Apply here.
Fun and Informative Events
  • Sign Up for New Aerospike Training Courses. Aerospike now offers two certified training courses, Aerospike for Developers and Aerospike for Administrators & Operators, to help you get the most out of your deployment. Find a training course near you. http://www.aerospike.com/aerospike-training/

  • November TokuMX Meetups for Those Interested in MongoDB. Join us in one of the following cities in November to learn more about TokuMX and hear TokuMX use cases. 11/5 - London; 11/11 - San Jose; 11/12 - San Francisco. Not able to get to these cities? Check out our website for other upcoming Tokutek events in your area - www.tokutek.com/events.
Cool Products and Services
  • Hypertable Inc. Announces New UpTime Support Subscription Packages. The developer of Hypertable, an open-source, high-performance, massively scalable database, announces three new UpTime support subscription packages: Premium 24/7, Enterprise 24/7 and Basic. 24/7/365 support packages start at just $1995 per month for a ten node cluster -- $49.95 per machine, per month thereafter. For more information visit us on the Web at http://www.hypertable.com/. Connect with Hypertable: @hypertable - Blog.

  • FoundationDB launches SQL Layer. SQL Layer is an ANSI SQL engine that stores its data in the FoundationDB Key-Value Store, inheriting its exceptional properties like automatic fault tolerance and scalability. It is best suited for operational (OLTP) applications with high concurrency. Users of the Key-Value Store will have free access to SQL Layer. SQL Layer is also open source; you can get started with it on GitHub as well.

  • Diagnose server issues from a single tab. Scalyr replaces all your monitoring and log management services with one, so you can pinpoint and resolve issues without juggling multiple tools and tabs. Engineers say it's powerful and easy to use. Customer support teams use it to troubleshoot user issues. CTOs consider it a smart alternative to Splunk, with enterprise-grade functionality, sane pricing, and human support. Trusted by in-the-know companies like Codecademy – learn more!

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Cloud deployable. Free instant trial, no sign-up required.  http://aiscaler.com/

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.


Categories: Architecture

AngularJS Training Week

Xebia Blog - Tue, 10/14/2014 - 07:00

Just a few more weeks and it's the AngularJS Training Week at Xebia in Hilversum (The Netherlands): four days full of AngularJS content, from 17 to 20 October, 2014. Over these days we will cover the AngularJS basics, advanced AngularJS topics, Tooling & Scaffolding, and Testing with Jasmine, Karma and Protractor.

If you already have some experience or if you are only interested in one or two of the topics, then you can sign up for just the days that are of interest to you.

Visit www.angular-training.com for a full overview of the days and topics or sign up on the Xebia Training website using the links below.

Fast and Easy integration testing with Docker and Overcast

Xebia Blog - Mon, 10/13/2014 - 18:40
Challenges with integration testing

Suppose that you are writing a MongoDB driver for Java. To verify that all the implemented functionality works correctly, you ideally want to test it against a real MongoDB server. This brings a couple of challenges:

  • Mongo is not written in Java, so we cannot easily embed it in our Java application.
  • We need to install and configure MongoDB somewhere and maintain the installation, or write scripts to set it up as part of our test run.
  • Every test we run against the Mongo server will change its state, and tests might influence each other. We want to isolate our tests as much as possible.
  • We want to test our driver against multiple versions of MongoDB.
  • We want to run the tests as fast as possible. If we want to run tests in parallel, we need multiple servers. How do we manage them?

Let's try to address these challenges.

First of all, we do not really want to implement our own MongoDB driver. Many implementations already exist, so we will reuse the mongo Java driver and focus on how one would write the integration test code.

Overcast and Docker

We are going to use Docker and Overcast. You probably already know Docker: it's a technology for running applications inside software containers. Overcast is the library we will use to manage Docker for us. Overcast is an open source Java library developed by XebiaLabs to help you write tests that connect to cloud hosts. Overcast has support for various cloud platforms, including EC2, VirtualBox, Vagrant and Libvirt (KVM). I recently added Docker support in Overcast version 2.4.0.

Overcast helps you decouple your test code from the cloud host setup. You can define a cloud host, with all its configuration, separately from your tests. In your test code you only refer to a specific Overcast configuration, and Overcast takes care of creating, starting and provisioning that host. When the tests are finished, it also tears the host down. In your tests you use Overcast to get the hostname and ports of the cloud host so you can connect to it, because these are usually determined dynamically.

We will use Overcast to create Docker containers running a MongoDB server, and Overcast will help us retrieve the port dynamically exposed by the Docker host. The host in our case will always be the Docker host, which in our setup runs on an external Linux machine; Overcast will use a TCP connection to communicate with Docker. We map the internal ports to ports on the Docker host to make them externally available: MongoDB will internally run on port 27017, but Docker will map this port to a local port in the range 49153 to 65535 (defined by Docker).
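As a quick illustration with the plain Docker CLI (container ID invented for the example), this is the mapping we will later ask Overcast to look up for us:

$ docker run -d -P mongo:2.7
f3a1c2d4e5b6
$ docker port f3a1c2d4e5b6 27017
0.0.0.0:49155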

Setting up our tests

Let's get started. First, we need a Docker image with MongoDB installed. Thanks to the Docker community, this is as easy as reusing one of the already existing images from the Docker Hub. All the hard work of creating such an image is already done for us, and thanks to containers we can run it on any host capable of running Docker containers. How do we configure Overcast to run the MongoDB container? This is the minimal configuration we put in a file called overcast.conf:

mongodb {
    dockerHost="http://localhost:2375"
    dockerImage="mongo:2.7"
    exposeAllPorts=true
    remove=true
    command=["mongod", "--smallfiles"]
}

That's all! The dockerHost is configured to be localhost with the default port; since this is the default value, you can omit it. The Docker image called mongo, version 2.7, will be automatically pulled from the central Docker registry. We set exposeAllPorts to true to tell Docker to dynamically map all ports exposed by the image. We set remove to true to make sure the container is automatically removed when stopped. Notice that we override the default container startup command by passing in an extra parameter, "--smallfiles", to improve testing performance. For our setup this is all we need, but Overcast also supports defining static port mappings, setting environment variables, and more. Have a look at the Overcast documentation for more details.

How do we use this Overcast host in our test code? Let's have a look at the test code that sets up the Overcast host and instantiates the MongoDB client that is used by every test. The code uses the TestNG @BeforeMethod and @AfterMethod annotations.

// SLF4J logger used in the snippets below
private static final Logger logger = LoggerFactory.getLogger(MongoTest.class);

private CloudHost itestHost;
private Mongo mongoClient;

@BeforeMethod
public void before() throws UnknownHostException {
    itestHost = CloudHostFactory.getCloudHost("mongodb");
    itestHost.setup();

    String host = itestHost.getHostName();
    int port = itestHost.getPort(27017);

    MongoClientOptions options = MongoClientOptions.builder()
        .connectTimeout(300 * 1000)
        .build();

    mongoClient = new MongoClient(new ServerAddress(host, port), options);
    logger.info("Mongo connection: " + mongoClient.toString());
}

@AfterMethod
public void after(){
    mongoClient.close();
    itestHost.teardown();
}

It is important to understand that the mongoClient is the object under test. As mentioned before, we borrowed this library to demonstrate how one would integration test such a library. The itestHost is the Overcast CloudHost. In before(), we instantiate the cloud host using the CloudHostFactory. The setup() will pull the required images from the Docker registry, create a Docker container, and start this container. We get the host and port from the itestHost and use them to build our Mongo client. Notice that we put a high connection timeout on the connection options, to make sure the MongoDB server is started in time. Especially on the first run, pulling the images can take some time. You can of course always pull the images beforehand. In the @AfterMethod, we simply close the connection with MongoDB and tear down the Docker container.
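For instance, pre-pulling the image once on the Docker host keeps even the first test run fast:

docker pull mongo:2.7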

Writing a test

The before and after are executed for every test, so we get a completely clean MongoDB server for every test, running on a different port. This completely isolates our test cases so that no tests can influence each other. You are free to choose your own testing strategy; sharing a cloud host between multiple tests is also possible. Let's have a look at one of the tests we wrote for the Mongo client:

@Test
public void shouldCountDocuments() throws DockerException, InterruptedException, UnknownHostException {

    DB db = mongoClient.getDB("mydb");
    DBCollection coll = db.getCollection("testCollection");
    BasicDBObject doc = new BasicDBObject("name", "MongoDB");

    for (int i=0; i < 100; i++) {
        WriteResult writeResult = coll.insert(new BasicDBObject("i", i));
        logger.info("writing document " + writeResult);
    }

    int count = (int) coll.getCount();
    assertThat(count, equalTo(100));
}

Even without knowledge of MongoDB, this test should not be hard to understand: it creates a database and a new collection, and inserts 100 documents. Finally, the test asserts that the getCount method returns the correct number of documents in the collection. Many more aspects of the MongoDB client can be tested in additional tests this way. In our example setup, we have implemented two more tests to demonstrate this, so our example project contains 3 tests. When you run the 3 example tests sequentially (assuming the mongo Docker image has already been pulled), you will see that it takes only a few seconds to run them all. That is extremely fast.

Testing against multiple MongoDB versions

We also want to run all our integration tests against different versions of the MongoDB server to ensure there are no regressions. Overcast allows you to define multiple configurations. Let's add configurations for two more versions of MongoDB:

defaultConfig {
    dockerHost="http://localhost:2375"
    exposeAllPorts=true
    remove=true
    command=["mongod", "--smallfiles"]
}

mongodb27=${defaultConfig}
mongodb27.dockerImage="mongo:2.7"

mongodb26=${defaultConfig}
mongodb26.dockerImage="mongo:2.6"

mongodb24=${defaultConfig}
mongodb24.dockerImage="mongo:2.4"

The defaultConfig contains the configuration we have already seen. The other three configurations extend from the defaultConfig and each define a specific MongoDB image version. Let's also change our test code a little, so that the Overcast configuration used in the test setup depends on a parameter:

@Parameters("overcastConfig")
@BeforeMethod
public void before(String overcastConfig) throws UnknownHostException {
    itestHost = CloudHostFactory.getCloudHost(overcastConfig);

Here we use the parameterized tests feature of TestNG. We can now define a TestNG suite that defines our test cases and passes in the different Overcast configurations. Let's have a look at our TestNG suite definition:

<suite name="MongoSuite" verbose="1">
    <test name="MongoDB27tests">
        <parameter name="overcastConfig" value="mongodb27"/>
        <classes>
            <class name="mongo.MongoTest" />
        </classes>
    </test>
    <test name="MongoDB26tests">
        <parameter name="overcastConfig" value="mongodb26"/>
        <classes>
            <class name="mongo.MongoTest" />
        </classes>
    </test>
    <test name="MongoDB24tests">
        <parameter name="overcastConfig" value="mongodb24"/>
        <classes>
            <class name="mongo.MongoTest" />
        </classes>
    </test>
</suite>

With this test suite definition we define 3 test cases that each pass a different Overcast configuration to the tests. The Overcast configuration plus the TestNG configuration enables us to configure externally which MongoDB versions we want to run our test cases against.

Parallel test execution

Until this point, all tests are executed sequentially. Due to the dynamic nature of cloud hosts and Docker, nothing prevents us from running multiple containers at once. Let's change the TestNG configuration a little to enable parallel testing:

<suite name="MongoSuite" verbose="1" parallel="tests" thread-count="3">

This configuration will cause all 3 test cases from our test suite definition to run in parallel (in other words, our 3 Overcast configurations with different MongoDB versions). Let's run the tests from IntelliJ and see if all tests pass:

[Screenshot: TestNG test results in IntelliJ, all tests green]

We see 9 executed tests, because we have 3 tests and 3 configurations. All 9 tests passed, and the total execution time turned out to be under 9 seconds. That's pretty impressive!

During test execution we can see Docker starting up multiple containers (see the next screenshot). As expected, it shows 3 containers with different image versions running simultaneously. It also shows the dynamic port mappings in the "PORTS" column:

[Screenshot: docker ps output showing the three running mongo containers and their port mappings]

That's it!

Summary

To summarise, the advantages of using Docker with Overcast for integration testing are:

  1. Minimal setup. Only a Docker-capable host is required to run the tests.
  2. Time saved. Thanks to the Docker community, a minimal amount of configuration and infrastructure setup is required to run the integration tests.
  3. Isolation. Every test runs in its own isolated environment, so tests cannot affect each other.
  4. Flexibility. Use multiple Overcast configurations and parameterized tests to test against multiple versions.
  5. Speed. Docker containers start up very quickly, and Overcast and TestNG even let you parallelize testing by running multiple containers at once.

The example code for our integration test project is available here. You can use Boot2Docker to set up a Docker host on Mac OS X or Windows.
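If you go the Boot2Docker route, the bootstrap looks roughly like this (the IP address shown is the Boot2Docker default and may differ on your machine; newer Boot2Docker versions serve TLS on port 2376 instead):

boot2docker init
boot2docker up
export DOCKER_HOST=tcp://192.168.59.103:2375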

Happy testing!

Paul van der Ende 

Note: due to a bug in the Gradle parallel test runner, you might run into this random failure when you run the example test code yourself. The workaround is to disable parallelism or to use a different test runner, such as IntelliJ or Maven.

 

Watch the open files limit when running Riak

Agile Testing - Grig Gheorghiu - Mon, 10/13/2014 - 17:53
I was close to expressing my unbridled joy at how little hand-holding our Riak cluster needs when we started to see strange latency increases on calls to the cluster that should have been very fast. At the same time, the health of the Riak nodes seemed fine in terms of CPU, memory and disk. As usual, our good old friend the error log file pointed us towards the solution. We saw entries like this in /var/log/riak/error.log:

2014-10-11 03:22:40.565 UTC [error] <0.12830.4607> CRASH REPORT Process <0.12830.4607> with 0 neighbours exited with reason: {error,accept_failed} in mochiweb_acceptor:init/3 line 34
2014-10-11 03:22:40.619 UTC [error] <0.168.0> {mochiweb_socket_server,310,{acceptor_error,{error,accept_failed}}}
2014-10-11 03:22:40.619 UTC [error] <0.12831.4607> application: mochiweb, "Accept failed error", "{error,emfile}"
A Google search revealed that a possible cause of these errors is the dreaded open file descriptor limit, which defaults to 1024 on Ubuntu.

To be perfectly honest, we had done almost no tuning on our Riak cluster, because it had been running so smoothly. But recently we started to throw more traffic at it, so issues with open file descriptors made sense. To fix it, we followed the advice in this Riak doc and created /etc/default/riak with the contents:
ulimit -n 65536
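To verify that the new limit actually applies to the running Riak process, you can inspect its limits via /proc (the PID here is invented for the example; beam.smp is the Erlang VM that Riak runs in):

$ pgrep -f beam.smp
1234
$ grep 'open files' /proc/1234/limits
Max open files            65536                65536                files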
We also took the opportunity to apply the networking-related kernel tuning recommendations from this other Riak tuning doc and added these lines to /etc/sysctl.conf:
net.ipv4.tcp_max_syn_backlog = 40000
net.core.somaxconn = 4000
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_tw_reuse = 1
Then we ran sysctl -p to load the new values into the kernel. Finally, we restarted our Riak nodes one at a time.
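On each node, a rolling restart might look roughly like this (the node name is illustrative; riak-admin wait-for-service confirms riak_kv is back up before you move on to the next node):

sudo sysctl -p
sudo riak stop
sudo riak start
riak-admin wait-for-service riak_kv riak@10.0.0.1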
I am happy to report that ever since, we've had absolutely no issues with our Riak cluster. I should also say we are running Riak 1.3, and I understand that Riak 2.0 has better checks in place for avoiding this issue.

I do want to give kudos to Basho for an amazingly robust piece of technology, whose only fault is that it gets you into the habit of ignoring it because it just works!

How Digital is Changing Physical Experiences

The business economy is going through massive change, as the old world meets the new world.

The convergence of mobility, analytics, social media, cloud computing, and embedded devices is driving the next wave of digital business transformation, where the physical world meets new online possibilities.

And it’s not limited to high-tech and media companies.

Businesses that master the digital landscape are able to gain strategic, competitive advantage.   They are able to create new customer experiences, they are able to gain better insights into customers, and they are able to respond to new opportunities and changing demands in a seamless and agile way.

In the book Leading Digital: Turning Technology into Business Transformation, George Westerman, Didier Bonnet, and Andrew McAfee share some of the ways that businesses are meshing the physical experience with the digital experience to generate new business value.

Provide Customers with an Integrated Experience

Businesses that win find new ways to blend the physical world with the digital world.  To serve customers better, businesses are integrating the experience across physical, phone, mail, social, and mobile channels for their customers.

Via Leading Digital: Turning Technology into Business Transformation:

“Companies with multiple channels to customers--physical, phone, mail, social, mobile, and so on--are experiencing pressure to provide an integrated experience.  Delivering these omni-channel experiences requires envisioning and implementing change across both front-end and operational processes.  Innovation does not come from opposing the old and the new.  But as Burberry has shown,  innovation comes from creatively meshing the digital and the physical to reinvent new and compelling customer experiences and to foster continuous innovation.”

Bridge In-Store Experiences with New Online Possibilities

Starbucks is a simple example of blending digital experiences with their physical store.   To serve customers better, they deliver premium content to their in-store customers.

Via Leading Digital: Turning Technology into Business Transformation:

“Similarly, the unique Starbucks experience is rooted in connecting with customers in engaging ways.  But Starbucks does not stop with the physical store.  It has digitally enriched the customer experience by bridging its local, in-store experience with attractive new online possibilities.  Delivered via a free Wi-Fi connection, the Starbucks Digital Network offers in-store customers premium digital content, such as the New York Times or The Economist, to enjoy alongside their coffee.  The network also offers access to local content, from free local restaurant reviews from Zagat to check-in via Foursquare.”

An Example of Museums Blending Technology + Art

Museums can create new possibilities by turning walls into digital displays.  With a digital display, the museum can showcase all of their collections and provide rich information, as well as create new backdrops, or tailor information and tours for their customers.

Via Leading Digital: Turning Technology into Business Transformation:

“Combining physical and digital to enhance customer experiences is not limited to just commercial enterprises.  Public services are getting on the act.  The Cleveland Museum of Art is using technology to enhance the experience and the management of visitors.  'EVERY museum is searching for this holy grail, this blending of technology and art,' said David Franklin, the director of the museum.

 

Forty-foot-wide touch screens display greeting-card-sized images of all three thousand objects, and offer information like the location of the actual piece.  By touching an icon on the image, visitors can transfer it from the wall to an iPad (their own, or rented from the museum for $5 a day), creating a personal list of favorites.  From this list, visitors can design a personalized tour, which they can share with others.

 

'There is only so much information you can put on a wall, and no one walks around with catalogs anymore,' Franklin said.  The app can produce a photo of the artwork's original setting--seeing a tapestry in a room filled with tapestries, rather than in a white-walled gallery, is more interesting.  Another feature lets you take the elements of a large tapestry and rearrange them in either comic-book or movie-trailer format.  The experience becomes fun, educational, and engaging.  This reinvention has lured new technology-savvy visitors, but has also made seasoned museum-goers come more often.”

As you figure out the future capability vision for your business, and re-imagine what’s possible, consider how the Nexus of Forces (Cloud, Mobile, Social, and Big Data), along with the mega-trend of the Internet of Things, can help you shape your digital business transformation.

You Might Also Like

Cloud Changes the Game from Deployment to Adoption

Management Innovation is at the Top of the Innovation Stack

McKinsey on Unleashing the Value of Big Data Analytics

Categories: Architecture, Programming

How League of Legends Scaled Chat to 70 Million Players - It Takes Lots of Minions

How would you build a chat service that needed to handle 7.5 million concurrent players, 27 million daily players, 11K messages per second, and 1 billion events per server, per day?

What could generate so much traffic? A game of course. League of Legends. League of Legends is a team based game, a multiplayer online battle arena (MOBA), where two teams of five battle against each other to control a map and achieve objectives.

For teams to succeed, communication is crucial. I learned that from Michal Ptaszek, in an interesting talk on Scaling League of Legends Chat to 70 million Players (slides) at the Strange Loop 2014 conference. Michal gave a good example of why multiplayer team games require good communication between players: imagine a basketball game without the ability to call plays. It wouldn’t work. So that means chat is crucial. Chat is not a Wouldn’t It Be Nice feature.

Michal structures the talk in an interesting way, using as a template the expression: Make it work. Make it right. Make it fast.

Making it work meant starting with XMPP as a base for chat. WhatsApp followed the same strategy. Out of the box you get something that works and scales well...until the user count really jumps. To make it right and fast, like WhatsApp, League of Legends found themselves customizing the Erlang VM. Adding lots of monitoring capabilities and performance optimizations to remove the bottlenecks that kill performance at scale.

Perhaps the most interesting part of their chat architecture is the use of Riak’s CRDTs (commutative replicated data types) to achieve their goal of shared-nothing, massively linear horizontal scalability. CRDTs are still esoteric, so you may not have heard of them yet, but they are the next cool thing if you can make them work for you. It’s a different way of thinking about handling writes.
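To get a feel for the idea, here is a toy grow-only counter (G-Counter), one of the simplest CRDTs, sketched in Java. This is purely illustrative and not Riak's implementation: each replica increments only its own slot, and merge takes the per-replica maximum, so merges are commutative, associative and idempotent, and replicas converge without coordinating writes.

import java.util.HashMap;
import java.util.Map;

class GCounter {
    private final String replicaId;
    private final Map<String, Long> counts = new HashMap<>();

    GCounter(String replicaId) { this.replicaId = replicaId; }

    // Each replica only ever increments its own entry.
    void increment() { counts.merge(replicaId, 1L, Long::sum); }

    // The counter's value is the sum over all replicas.
    long value() { return counts.values().stream().mapToLong(Long::longValue).sum(); }

    // Merging takes the elementwise max, so merges can arrive in any order, any number of times.
    void merge(GCounter other) {
        other.counts.forEach((id, n) -> counts.merge(id, n, Math::max));
    }
}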

Let’s learn how League of Legends built their chat system to handle 70 million players...

Stats
Categories: Architecture

Xebia KnowledgeCast Episode 5: Madhur Kathuria and Scrum Day Europe 2014

Xebia Blog - Mon, 10/13/2014 - 10:48

xebia_xkc_podcast
The Xebia KnowledgeCast is a bi-weekly podcast about software architecture, software development, lean/agile, continuous delivery, and big data. Also, we'll have some fun with stickies!

In this 5th episode, we share key insights of Madhur Kathuria, Xebia India’s Director of Agile Consulting and Transformation, as well as some impressions of our Knowledge Exchange and Scrum Day Europe 2014. And of course, Serge Beaumont will have Fun With Stickies!

First, Madhur Kathuria shares his vision on Agile and we interview Guido Schoonheim at Scrum Day Europe 2014.

In this episode's Fun With Stickies Serge Beaumont talks about wide versus deep retrospectives.

Then, we interview Martin Olesen and Patricia Kong at Scrum Day Europe 2014.

Want to subscribe to the Xebia KnowledgeCast? Subscribe via iTunes, or use our direct rss feed.

Your feedback is appreciated. Please leave your comments in the shownotes. Better yet, send in a voice message so we can put you ON the show!

Credits

Stuff The Internet Says On Scalability For October 10th, 2014

Hey, it's HighScalability time:


Social climber: Instagram explorer scales to new heights in New York.

 

  • 11 billion: world population in 2100; 10 petabytes: Size of Netflix data warehouse on S3; $600 Billion: the loss when a trader can't type; 3.2: 0-60 mph time of probably not my next car.
  • Quotable Quotes:
    • @kahrens_atl: Last week #NewRelic Insights wrote 618 billion events and ran 237 trillion queries with 9 millisecond response time #FS14
    • @sustrik: Imagine debugging on a quantum computer: Looking at the value of a variable changes its value. I hope I'll be out of business by then.
    • Arrival of the Fittest: Solving Evolution's Greatest Puzzle: Every cell contains thousands of such nanomachines, each of them dedicated to a different chemical reaction. And all their complex activities take place in a tiny space where the molecular building blocks of life are packed more tightly than a Tokyo subway at rush hour. Amazing.
    • Eric Schmidt: The simplest outcome is we're going to end up breaking the Internet," said Google's Schmidt. Foreign governments, he said, are "eventually going to say, we want our own Internet in our country because we want it to work our way, and we don't want the NSA and these other people in it.
    • Antirez: Basically it is neither a CP nor an AP system. In other words, Redis Cluster does not achieve the theoretical limits of what is possible with distributed systems, in order to gain certain real world properties.
    • @aliimam: Just so we can fathom the scale of 1B vs 1M: 1,000,000 seconds is 11.5 days. 1,000,000,000 seconds is 31.6 YEARS
    • @kayousterhout: 92% of catastrophic failures in distributed data-intensive systems caused by incorrect error handling https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-yuan.pdf … #osdi14
    • @DrQz: 'The purpose of computing is insight, not numbers.' (Hamming) Sometimes numbers ARE the insight, so make them accessible too. (Me)

  • Robert Scoble on the Gillmor Gang said that because of the crush of signups, ello had to throttle invites. Their single PostgreSQL server couldn't handle it, captain.

  • Containers are getting much larger with new composite materials. Not that kind of container. Shipping containers. High oil costs have driven ships carrying 5000 containers to evolve. Now they can carry 18,000 and soon 19,000 containers!

  • If you've wanted to make a network game then this is a great start. Making Fast-Paced Multiplayer Networked Games is Hard: Fast-paced multiplayer games over the Internet are hard, but possible. First understanding your constraints then building within them is essential. I hope I have shed some light on what those constraints are and some of the techniques you can use to build within them. No doubt there are other ways out there and ways yet to be used. Each game is different and has its own set of priorities. Learning from what has been done before could help a great deal.

  • Arrival of the Fittest: Solving Evolution's Greatest Puzzle: Environmental change requires complexity, which begets robustness, which begets genotype networks, which enable innovations, the very kind that allow life to cope with environmental change, increase its complexity, and so on, in an ascending spiral of ever-increasing innovability...is the hidden architecture of life.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

New daily stand up questions

Xebia Blog - Fri, 10/10/2014 - 15:51

This post provides some alternative stand-up questions to make your stand-up forward-looking, goal focused, and team focused.

The questions are:

  1. What have I achieved since our last SUM?
  2. What is my goal for today?
  3. What things keep me from reaching my goal?
  4. What is our team goal for the end of our sprint day?

The daily stand up runs on a few standard questions. The traditional questions are:

  • What did I accomplish yesterday?
  • What will I be doing today?
  • What obstacles are impeding my progress?

A couple of effects I see when using the above list are:

  • A lot of emphasis is placed on past activities rather than getting the most out of the day at hand.
  • Team members tell what they will be busy with, but not what they aim to complete.
  • Impediments are not related to daily goals.
  • There is no summary for the team relating to the sprint goal.

If you are experiencing the same issues you could try the alternate questions. They worked for me, but any feedback is appreciated of course. Are you using other questions? Let me know your experience. You could use the PDF below to print out the questions for your scrum board.

STAND_EN

 

The LGPL on Android

Xebia Blog - Fri, 10/10/2014 - 08:11

My client had me code review an Android app built for them by a third party. As part of my review, I checked the licensing terms of the open source libraries that it used. Most were using Apache 2.0 without a NOTICE file. One was using the GNU Lesser General Public License (LGPL).

My client has commercial reasons to avoid Copyleft-style licenses and so I flagged the library as unusable. The supplier understandably was not thrilled about the rework that implied and asked for an explanation and ideally some way to make it work within the license. Looking into it in more detail, I'm convinced that if you share my client's concerns, then there is no way to use LGPL licensed code on Android. Here's why I believe this to be the case.

The GNU LGPL

When I first encountered the LGPL years ago, it was explained to me as “the GPL, without the requirement to publish your source code”. The actual license terms turn out to be a bit more restrictive. The LGPL is an add-on to the full GPL that weakens (only) the restrictions to how you license and distribute your work. These weaker restrictions are in section 4.

Here's how I read that section:

You may convey a Combined Work under terms of your choice that […] if you also
do each of the following:
  a) [full attribution]
  b) [include a copy of the license]
  c) [if you display any copyright notices, you must mention the licensed Library]
  d) Do one of the following:
    0) [provide means for the user to rebuild or re-link your application against
       a modified version of the Library]
    1) [use runtime linking against a copy already present on the system, and allow
       the user to replace that copy]
  e) [provide clear instructions how to rebuild or re-link your application in light
     of the previous point]

The LGPL on Android

An Android app can use two kinds of libraries: Java libraries and native libraries. Both run into the same problem with the LGPL.

The APK file format for Android apps is a single, digitally signed package. It contains native libraries directly, while Java libraries are packaged along with your own bytecode into the dex file. Android has no means of installing shared libraries into the system outside of your APK, ruling out (d)(1) as an option. That leaves (d)(0). Making the library replaceable is not the issue. It may not be the simplest thing, but I'm sure there is some way to make it work for both kinds of libraries.

That leaves the digital signature, and here's where it breaks down. Any user who replaces the LGPL licensed library in your app will have to digitally sign their modified APK file. You can't publish your code signing key, so they have to sign with a different key. This breaks signature compatibility, which breaks updates and custom permissions and makes shared preferences and expansion files inaccessible. It can therefore be argued that such an APK file is not usable in lieu of the original app, thus violating the license.

In short

The GNU Lesser General Public License ensures that a user has freedom to modify a so licensed library used by your application, even if your application is itself closed source. Android's app packaging and signature requirements are such that I believe it is impossible to comply with the license when using an LGPL licensed library in a closed source Android app.

Function references in Swift and retain cycles

Xebia Blog - Thu, 10/09/2014 - 14:49

The Swift programming language comes with some nice features. One of those features is closures, which are similar to blocks in Objective-C. As mentioned in the Apple guides, functions are special types of closures, and they too can be passed around to other functions and set as property values. In this post I will go through some sample uses and especially explain the dangers of the retain cycles that you can quickly run into when retaining function references.

Let's first have a look at a fairly simple Objective-C sample before we write something similar in Swift.

Objective-C

We will create a button that executes a block statement when tapped.

In the header file we define a property for the block:

@interface BlockButton : UIButton

@property (nonatomic, strong) void (^action)();

@end

Keep in mind that this is a strong reference: the block, and anything referenced within the block, will be retained.

And then the implementation will execute the block when tapped:

#import "BlockButton.h"

@implementation BlockButton

-(void)setAction:(void (^)())action
{
    _action = action;
    [self addTarget:self action:@selector(performAction) forControlEvents:UIControlEventTouchUpInside];
}

-(void)performAction {
    self.action();
}

@end

We can now use this button in one of our view controllers as follows:

self.button.action = ^{
    NSLog(@"Button Tapped");
};

We will now see the message "Button Tapped" logged to the console each time we tap the button. And since we don't reference self within our block, we won't get into trouble with retain cycles.

In many cases, however, it's likely that you will reference self, because you might want to call a function that you also need to call from other places. Let's look at such an example:

-(void)viewDidLoad {
    self.button.action = ^{
        [self buttonTapped];
    };
}

-(void)buttonTapped {
    NSLog(@"Button Tapped");
}

Because our view controller (or its view) retains our button, and the button retains the block, we're creating a retain cycle here: the block creates a strong reference to self. That means our view controller will never be deallocated and we'll have a memory leak.

This can easily be solved by using a weak reference to self:

__weak typeof(self) weakSelf = self;
self.button.action = ^{
    [weakSelf buttonTapped];
};

Nothing new so far, so let's continue with creating something similar in Swift.

Swift

In Swift we can create a similar Button that executes a closure instead of a block:

class ClosureButton: UIButton {

    var action: (() -> ())? {
        didSet {
            addTarget(self, action: "callClosure", forControlEvents: .TouchUpInside)
        }
    }

    func callClosure() {
        if let action = action {
            action()
        }
    }
}

It does the same as the Objective-C version (and in fact you could use it from Objective-C with the same block as before). We can assign it an action from our view controller as follows:

button.action = {
    println("Button Tapped")
}

Since this closure doesn't capture self, we won't be running into problems with retain cycles here.

As mentioned earlier, functions are just a special type of closure. Which is pretty nice, because it lets us reference functions directly, like this:

override func viewDidLoad() {
    button.action = buttonTapped
}

func buttonTapped() {
    println("Button Tapped")
}

Nice and easy syntax, and good for functional programming. If only it didn't give us problems. Without it being immediately obvious, the above sample creates a retain cycle. Why? We're not referencing self anywhere, or are we? The problem is that the buttonTapped function is part of our view controller instance. So when button.action references that function, it creates a strong reference to the view controller as well. In this case we could fix it by making buttonTapped a class function. But since in most cases you'll want to do something with self in such a function, for example accessing variables, this is not an option.

The only thing we can do to fix this is to make sure that the button doesn't get a strong reference to the view controller. Just like in our last Objective-C sample, we need to create a weak reference to self. Unfortunately there is no easy way to get a weak reference to our function, so we need a workaround.

Workaround 1: wrapping in a closure

We can create a weak reference by wrapping the function in a closure:

button.action = { [weak self] in
    self!.buttonTapped()
}

Here we first create a weak reference to self. In Swift, weak references are always optional; that means self within this closure is now an optional and we need to unwrap it first, which is what the exclamation mark is for. Since we know this code cannot be called when self is deallocated, we can safely use ! instead of ?.

A lot less elegant than referencing our function directly.

In theory, using an unowned reference to self should also work as following:

button.action = { [unowned self] in
    self.buttonTapped()
}

Unfortunately (for reasons unknown to me) this crashes with an EXC_BAD_ACCESS upon deallocation of the ClosureButton. Probably a bug.

Workaround 2: method pointer function

Thanks to a question on StackOverflow about this same problem and an answer provided by Rob Napier, there is a way to make the code a bit more elegant again. We can define a function that does the wrapping in a closure for us:

func methodPointer<T: AnyObject>(obj: T, method: (T) -> () -> Void) -> (() -> Void) {
    return { [weak obj] in
        method(obj!)()
    }
}

Now we can get a weak reference to our function a bit easier.

button.action = methodPointer(self, ViewController.buttonTapped)

The reason this works is that you can get a reference to any instance function by calling it as a class function with the instance (in this case self) as the argument. For example, the following all do the same thing:

// normal call
self.buttonTapped()

// get reference through class
let myFunction = MyViewController.buttonTapped(self)
myFunction()

// directly through class
MyViewController.buttonTapped(self)()

However, the downside of this is that it only works with functions that take no arguments and return Void, i.e. methods with a () -> () signature, like our buttonTapped.

For each signature we would have to create a separate function. For example, for a function that takes a String parameter and returns an Int:

func methodPointer<T: AnyObject>(obj: T, method: (T) -> (String) -> Int) -> ((String) -> Int) {
    return { [weak obj] string in
        method(obj!)(string)
    }
}

We can then use it the same way:

func someFunction() {
    let myFunction = methodPointer(self, MyViewController.stringToInt)
    let myInt = myFunction("123")
}

func stringToInt(string: String) -> Int {
    return string.toInt()
}

Retain cycles within a single class instance

Retain cycles do not only happen when strong references are made between two instances of a class. It's also possible, and probably less obvious, to create a strong reference within the same instance. Let's look at an example:

var print: ((String) -> ())?

override func viewDidLoad() {
    print = printToConsole
}

func printToConsole(message: String) {
    println(message)
}

Here we do pretty much the same as in our button examples. We define an optional closure variable and then assign a function reference to it. This creates a strong reference from the print variable to self, and thus creates a retain cycle. We need to solve it using the same tricks we used earlier.

Another example is when we define a lazy variable. Since lazy variables are assigned after initialisation, they are allowed to reference self directly. That means we can set them to a function reference as follows:

lazy var print: ((String) -> ()) = self.printToConsole

Of course this also creates a retain cycle.
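One way to break this particular cycle (a hedged sketch in the same spirit as workaround 1) is to wrap the call in a closure that captures self weakly inside the lazy initializer:

lazy var print: ((String) -> ()) = { [weak self] message in
    self?.printToConsole(message)
}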

Conclusion

To avoid creating retain cycles in Swift, you should always remember that a reference to an instance function means that you're referencing the instance as well. Thus, when assigning it to a variable, you're creating a strong reference. Always make sure to wrap such references in a closure with a weak reference to the instance, or make sure to manually set the variables to nil once you're done with them.

Unfortunately Swift does not support weak closure variables, which is something that would solve the problem. Hopefully they will support it in the future or come up with a way to create a weak reference to a function much like we can use [weak self] now in closures.

That's Not My Problem - I'm Renting Them

Scott Hanselman gives a hilarious and insightful talk in Virtual Machines, JavaScript and Assembler, a keynote at Velocity Santa Clara 2014. The topic of his talk is an intuitive understanding of the cloud and why it's the best thing ever. 

At about 6:30 into the video, Scott is at his standup-comic best when he recounts a story from a talk Adrian Cockcroft gave on Netflix’s move to SSDs. An audience member energetically questioned the move to SSDs, saying they had high failure rates and that moving to SSDs was a stupid idea.

To which Mr. Cockcroft replied:

That's not my problem, I'm renting them.

Scott selected the ideal illustration of the high level of abstraction the cloud provides. If you are new to the cloud that's a very hard idea to grasp. "That's not my problem, I'm renting them" is the perfect mantra when you find yourself worried about things you don't need to be worried about anymore.

Categories: Architecture

Emotional Intelligence is a Key Leadership Skill

You probably already know that emotional intelligence, or “EQ”, is a key to success in work and life.

Emotional intelligence is the ability to identify, assess, and control the emotions of yourself, others, and groups.

It’s the key to helping you respond vs. react.  When we react, it’s our lizard brain in action.  When we respond, we are aware of our emotions, but they are input, and they don’t rule our actions.  Instead, emotions inform our actions.

Emotional intelligence is how you avoid letting other people push your buttons.  And, at the same time, you can push your own buttons, because of your self-awareness.  

Emotional intelligence takes empathy.  Empathy, simply put, is the ability to understand and share the feelings of others. 

When somebody is intelligent, and has a high IQ, you would think that they would be successful.

But, if there is a lack of EQ (emotional intelligence), then their relationships suffer.

As a result, their effectiveness, their influence, and their impact are marginalized.

That’s what makes emotional intelligence such an important and powerful leadership skill.

And, it’s emotional intelligence that often sets leaders apart.

Truly exceptional leaders not only demonstrate emotional intelligence, but within emotional intelligence they stand out.

Outstanding leaders shine in the following 7 emotional intelligence competencies: Self-reliance, Assertiveness, Optimism, Self-Actualization, Self-Confidence, Relationship Skills, and Empathy.

I’ve summarized 10 Big Ideas from Emotional Capitalists: The Ultimate Guide to Developing Emotional Intelligence for Leaders. It’s an insightful book by Martyn Newman, and it’s one of the best books I’ve read on the art and science of emotional intelligence. What sets this book apart is that Newman focuses on turning emotional intelligence into a skill you can practice, with measurable results (he has a scoring system).

If there’s one takeaway, it’s this: the leaders who get the best results know how to get employees and customers emotionally invested in the business.

Without emotional investment, people don’t bring out their best and you end up with a brand that’s blah.

You Might Also Like

10 Emotional Intelligence Articles for Effectiveness in Work and Life

Emotional Intelligence Quotes

Positive Intelligence at Microsoft

Categories: Architecture, Programming

Azure: Redis Cache, Disaster Recovery to Azure, Tagging Support, Elastic Scale for SQLDB, DocDB

ScottGu's Blog - Scott Guthrie - Tue, 10/07/2014 - 06:02

Over the last few days we’ve released a number of great enhancements to Microsoft Azure.  These include:

  • Redis Cache: General Availability of Redis Cache Service
  • Site Recovery: General Availability of Disaster Recovery to Azure using Azure Site Recovery
  • Management: Tags support in the Azure Preview Portal
  • SQL DB: Public preview of Elastic Scale for Azure SQL Database (available through .NET lib, Azure service templates)
  • DocumentDB: Support for Document Explorer, Collection management and new metrics
  • Notification Hub: Support for Baidu Push Notification Service
  • Virtual Network: Support for static private IP support in the Azure Preview Portal
  • Automation updates: Active Directory authentication, PowerShell script converter, runbook gallery, hourly scheduling support

All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them:

Redis Cache: General Availability of Redis Cache Service

I’m excited to announce the General Availability of the Azure Redis Cache. The Azure Redis Cache service provides the ability for you to use a secure/dedicated Redis cache, managed as a service by Microsoft. The Azure Redis Cache is now the recommended distributed cache solution we advocate for Azure applications.

Unlike traditional caches, which deal only with key-value pairs, Redis is popular for its support of high-performance data types, on which you can perform atomic operations such as appending to a string, incrementing the value in a hash, pushing to a list, computing set intersection, union and difference, or getting the member with the highest ranking in a sorted set. Other features include support for transactions, pub/sub, Lua scripting, keys with a limited time-to-live, and configuration settings to make Redis behave more like a traditional cache.
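For a taste of what those data types look like from .NET, here is a minimal sketch using the StackExchange.Redis client discussed below (key names invented for the example; cache is an IDatabase handle obtained as shown later in this post):

cache.StringAppend("greeting", ", world");           // append to a string
cache.HashIncrement("user:1", "visits");             // increment a field in a hash
cache.ListLeftPush("recent", "item-42");             // push onto a list
cache.SortedSetAdd("leaderboard", "alice", 1250);    // add a member with a score
var top = cache.SortedSetRangeByRankWithScores("leaderboard", 0, 0, Order.Descending);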

Finally, Redis has a healthy, vibrant open source ecosystem built around it. This is reflected in the diverse set of Redis clients available across multiple languages. This allows it to be used by nearly any application, running on either Windows or Linux, that you host inside of Azure.

Redis Cache Sizes and Editions

The Azure Redis Cache Service is today offered in the following sizes:  250 MB, 1 GB, 2.8 GB, 6 GB, 13 GB, 26 GB, 53 GB.  We plan to support even higher-memory options in the future.

Each Redis cache size option is also offered in two editions:

  • Basic – A single cache node, without a formal SLA, recommended for use in dev/test or non-critical workloads.
  • Standard – A multi-node, replicated cache configured in a two-node Master/Replica configuration for high availability, and backed by an enterprise SLA.

With the Standard edition, we manage replication between the two nodes and perform an automatic failover if the Master node fails (because of either an unplanned server failure or planned patching maintenance). This helps ensure the availability of the cache and the data stored within it.

Details on Azure Redis Cache pricing can be found on the Azure Cache pricing page.  Prices start as low as $17 a month.

Create a New Redis Cache and Connect to It

You can create a new instance of a Redis Cache using the Azure Preview Portal.  Simply select the New->Redis Cache item to create a new instance. 

You can then use a wide variety of programming languages and corresponding client packages to connect to the Redis Cache you’ve provisioned.  You use the same Redis client packages to connect to an Azure Redis Cache that you’d use to connect to your own Redis instance; the APIs and libraries are exactly the same.

Below we’ll use a .NET Redis client called StackExchange.Redis to connect to our Azure Redis Cache instance. First, open any Visual Studio project and add the StackExchange.Redis NuGet package to it using the NuGet package manager.  Then obtain the cache endpoint and key from the Properties blade and the Keys blade, respectively, for your cache instance within the Azure Preview Portal.

Once you’ve retrieved these, create a connection instance to the cache with the code below:

var connection = StackExchange.Redis.ConnectionMultiplexer.Connect("contoso5.redis.cache.windows.net,ssl=true,password=...");

Once the connection is established, retrieve a reference to the Redis cache database by calling the ConnectionMultiplexer.GetDatabase method.

IDatabase cache = connection.GetDatabase();

Items can be stored in and retrieved from a cache by using the StringSet and StringGet methods (or their async counterparts – StringSetAsync and StringGetAsync).

cache.StringSet("Key1", "HelloWorld");

string value = cache.StringGet("Key1"); // returns "HelloWorld"
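Since the async counterparts mentioned above work the same way, here is a minimal sketch (it assumes the surrounding method is declared async):

// Same operations, without blocking the calling thread.
await cache.StringSetAsync("Key1", "HelloWorld");
string cached = await cache.StringGetAsync("Key1");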

You have now stored and retrieved a “Hello World” string from a Redis cache instance running on Azure. For an example of an end-to-end application using Azure Redis Cache, please check out the MVC Movie Application blog post.

Using Redis for ASP.NET Session State and Output Caching

You can also take advantage of Redis to store out-of-process ASP.NET Session State as well as to share Output Cached content across web server instances. 

For more details on using Redis for Session State, check out this blog post: ASP.NET Session State for Redis

For details on using Redis for Output Caching, check out this MSDN post: ASP.NET Output Cache for Redis

Monitoring and Alerting

Every Azure Redis cache instance has built-in monitoring support enabled by default. Currently you can track Cache Hits, Cache Misses, Get/Set Commands, Total Operations, Evicted Keys, Expired Keys, Used Memory, Used Bandwidth and Used CPU.  You can easily visualize these metrics in the Azure Preview Portal.

You can also create alerts on metrics or events (just click the “Add Alert” button in the portal). For example, you could create an alert rule to notify the cache administrator when the cache is seeing evictions, which in turn might signal that the cache is running hot and needs to be scaled up with more memory.

Learn more

For more information about the Azure Redis Cache, please visit the Azure Redis Cache product page and documentation.

Site Recovery: Announcing the General Availability of Disaster Recovery to Azure

I’m excited to announce the general availability of the Azure Site Recovery Service’s new Disaster Recovery to Azure functionality.  The Disaster Recovery to Azure capability enables consistent replication, protection, and recovery of on-premises VMs to Microsoft Azure. With support for both Disaster Recovery and Migration to Azure, the Azure Site Recovery service now provides a simple, reliable, and cost-effective DR solution for enabling Virtual Machine replication and recovery between on-premises private clouds across different enterprise locations, or directly to the cloud with Azure.

This month’s release builds upon our recent InMage acquisition, and the integration of InMage Scout with Azure Site Recovery enables us to provide hybrid cloud business continuity solutions for any customer IT environment – regardless of whether it is Windows or Linux, running on physical servers or virtualized servers using Hyper-V, VMware or other virtualization solutions. Microsoft Azure is now the ideal destination for disaster recovery for virtually every enterprise server in the world.

In addition to enabling replication to and disaster recovery in Azure, the Azure Site Recovery service also enables automated protection of VMs, remote health monitoring of them, no-impact disaster recovery plan testing, and single-click orchestrated recovery, all backed by an enterprise-grade SLA. A new addition with this GA release is the ability to also invoke Azure Automation runbooks from within Azure Site Recovery Plans, enabling you to further automate your solutions.

Learn More about Azure Site Recovery

For more information on Azure Site Recovery, check out the recording of the Azure Site Recovery session at TechEd 2014 where we discussed the preview.  You can also visit the Azure Site Recovery forum on MSDN for additional information and to engage with the engineering team or other customers.

Once you’re ready to get started with Azure Site Recovery, check out additional pricing or product information, and sign up for a free Azure trial.

Beginning this month, Azure Backup and Azure Site Recovery will also be available in a convenient and economical promotional offer available for purchase via a Microsoft Enterprise Agreement.  Each unit of the Azure Backup & Site Recovery annual subscription offer covers protection of a single instance to Azure with Site Recovery, as well as backup of data with Azure Backup.  You can contact your Microsoft Reseller or Microsoft representative for more information.

Management: Tag Support with Resources

I’m excited to announce support for tags in the Azure management platform and in the Azure Preview Portal.

Tags provide an easy way to organize your Azure resources and resource groups, by allowing you to tag resources with name/value pairs to further categorize and view them across resource groups and across subscriptions.  For example, you could use tags to identify which of your resources are used for “production” versus “dev/test” – enabling easy filtering/searching of resources based on the tag you are interested in, regardless of which application or resource group they are in.

Using Tags

To get started with the new Tag support, browse to any resource or resource group in the Azure Preview Portal and click on the Tags tile on the resource.

On the Tags blade that appears, you'll see a list of any tags you've already applied. To add a new tag, simply specify a name and value and press enter. After you've added a few tags, you'll notice autocomplete options based on pre-existing tag names and values to better ensure a consistent taxonomy across your resources and to avoid common mistakes, like misspellings.

You can also tag resources with our command-line tools. For example, the Azure PowerShell module makes it easy to script tagging all of the resources in your Azure subscription.

Once you've tagged your resources and resource groups, you can view the full list of tags across all of your subscriptions using the Browse hub.

You can also “pin” tags to your Startboard for quick access.  This provides a really easy way to quickly jump to any resource with a tag you’ve pinned.

SQL Databases: Public Preview of Elastic Scale Support

I am excited to announce the public preview of Elastic Scale for Azure SQL Database. Elastic Scale enables the data tier of an application to scale out via industry-standard sharding practices, while significantly streamlining the development and management of your sharded cloud applications. The new capabilities are provided through .NET libraries and Azure service templates that are hosted in your own Azure subscription to manage your highly scalable applications. Elastic Scale implements the infrastructure aspects of sharding, allowing you to focus instead on the business logic of your application.

Elastic Scale allows developers to establish a “contract” that defines where different slices of data reside across a collection of database instances.  This enables applications to easily and automatically direct transactions to the appropriate database (shard) and perform queries that cross many or all shards using simple extensions to the ADO.NET programming model. Elastic Scale also enables coordinated data movement between shards to split or merge ranges of data among different databases and satisfy common scenarios such as pulling a busy tenant into its own shard. 
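As a rough sketch of how this contract is consumed in code, here is the shape of the shard-map APIs in the Elastic Scale .NET client library (the map name and connection strings below are illustrative assumptions, not taken from the post):

// Namespace: Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement
// Connect to the shard map manager database that stores the sharding "contract".
ShardMapManager smm = ShardMapManagerFactory.GetSqlShardMapManager(
    shardMapManagerConnectionString, ShardMapManagerLoadPolicy.Lazy);

// A range shard map that assigns ranges of customer IDs to shards.
RangeShardMap<int> shardMap = smm.GetRangeShardMap<int>("CustomerIDShardMap");

// Open a connection routed to whichever shard holds customer 42.
using (SqlConnection conn = shardMap.OpenConnectionForKey(
    42, userConnectionString, ConnectionOptions.Validate))
{
    // Regular ADO.NET commands here run against the correct shard.
}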

We are also announcing the Federation Migration Utility which is available as part of the preview. This utility will help current SQL Database Federations customers migrate their Federations application to Elastic Scale without having to perform any data movement.

Get Started with the Elastic Scale preview today, and watch our Channel 9 video to learn more.

DocumentDB: Document Explorer, Collection management and new metrics

Last week we released a bunch of updates to the Azure DocumentDB service experience in the Azure Preview Portal. We continue to improve the developer and management experiences so you can be more productive and build great applications on DocumentDB. These improvements include:

  • Document Explorer: View and access JSON documents in your database account
  • Collection management: Easily add and delete collections
  • Database performance metrics and storage information: View performance metrics and storage consumed at a Database level
  • Collection performance metrics and storage information: View performance metrics and storage consumed at a Collection level
  • Support for Azure tags: Apply custom tags to DocumentDB Accounts

Document Explorer

Near the bottom of the DocumentDB Account, Database, and Collection blades, you’ll now find a new Developer Tools lens with a Document Explorer part.

This part provides you with a read-only document explorer experience. Select a database and collection within the Document Explorer and view documents within that collection.

Note that the Document Explorer will load up to the first 100 documents in the selected Collection. You can load additional documents (in batches of 100) by selecting the “Load more” option at the bottom of the Document Explorer blade. Future updates will expand Document Explorer functionality to enable document CRUD operations as well as the ability to filter documents.

Collection Management

The DocumentDB Database blade now allows you to quickly create a new Collection through the Add Collection command found on the top left of the Database blade.

Health Metrics

We’ve added a new Collection blade which exposes Collection level performance metrics and storage information. You can access this new blade by selecting a Collection from the list of Collections on the Database blade.

The Database and Collection level metrics are available via the Database and Collection blades.

As always, we’d love to hear from you about the DocumentDB features and experiences you would find most valuable within the Azure portal. You can submit your suggestions on the Microsoft Azure DocumentDB feedback forum.

Notification Hubs: support for Baidu Cloud Push

Azure Notification Hubs enable cross-platform mobile push notifications for Android, iOS, Windows, Windows Phone, and Kindle devices. Thousands of customers now use Notification Hubs for instant cross-platform broadcast, personalized notifications to dynamic segments of their mobile audience, or simply to reach individual customers of their mobile apps regardless of which device they use.  Today I am excited to announce support for another mobile notifications platform, Baidu Cloud Push, which will help Notification Hubs customers reach the diverse family of Android devices in China.

Delivering push notifications to Android devices in China is no easy task, due to a diverse set of app stores and push services. Pushing notifications to an Android device via Google Cloud Messaging Service (GCM) does not work, as most Android devices in China are not configured to use GCM.  To help app developers reach every Android device independent of which app store they’re configured with, Azure Notification Hubs now supports sending push notifications via the Baidu Cloud Push service.

To use Baidu from your Notification Hub, register your app with Baidu, and obtain the appropriate identifiers (UserId and ChannelId) for your application.

Then configure your Notification Hub within the Azure Management Portal with these identifiers.
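Once configured, sending a Baidu notification looks much like any other Notification Hubs send. Below is a minimal sketch using the .NET NotificationHubClient; the connection string, hub name, and payload fields are illustrative assumptions, so check the SDK documentation for the exact Baidu payload format:

// Create a client for your hub (placeholders, not real values).
NotificationHubClient hub = NotificationHubClient
    .CreateClientFromConnectionString("<connection string>", "<hub name>");

// Baidu notifications take a JSON payload; the fields shown are illustrative.
string payload = "{\"title\":\"Hello\",\"description\":\"Hello from Azure!\"}";
await hub.SendBaiduNativeNotificationAsync(payload);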

For more details, follow the tutorial in English & Chinese. You can learn more about Push Notifications using Azure at the Notification Hubs dev center.

Virtual Machines: Instance-Level Public IPs generally available

Azure now supports the ability for you to assign public IP addresses to VMs and web or worker roles so they become directly addressable on the Internet, without having to map a virtual IP endpoint for access. With Instance-Level Public IPs, you can enable scenarios like running FTP servers in Azure and monitoring VMs directly using their IPs.

For more information, please visit the Instance-Level Public IP Addresses webpage.

Automation: Updates

Earlier this year, we introduced preview availability of Azure Automation, a service that allows you to automate the deployment, monitoring, and maintenance of your Azure resources. I am excited to announce several new features in Azure Automation:

  • Active Directory Authentication
  • PowerShell Script Converter
  • Runbook Gallery
  • Hourly Scheduling

Active Directory Authentication

We now offer an easier alternative to using certificates to authenticate from the Azure Automation service to your Azure environment. You can now authenticate to Azure using an Azure Active Directory organization identity which provides simple, credential-based authentication.

If you do not have an Active Directory user set up already, simply create a new user and provide the user with access to manage your Azure subscription. Once you have done this, create an Automation credential asset with the user’s credentials and reference that credential in your runbook. You need to do this setup only once and can then use the stored credential going forward, greatly reducing the number of steps needed to start automating. You can read this blog post to learn more about getting set up with Active Directory authentication.

PowerShell Script Converter

Azure Automation now supports importing PowerShell scripts as runbooks. When a PowerShell script is imported that does not contain a single PowerShell Workflow, Automation will attempt to convert it from PowerShell script to PowerShell Workflow, and then create a runbook from the result. This allows the vast amount of PowerShell content and knowledge that exists today to be more easily leveraged in Azure Automation, despite the fact that Automation executes PowerShell Workflow and not PowerShell.

Runbook Gallery

The Runbook Gallery allows you to quickly discover Automation sample, utility, and scenario runbooks from within the Azure management portal. The Runbook Gallery consists of runbooks that can be used as is or with minor modification, and runbooks that can serve as examples of how to create your own runbooks. The Runbook Gallery features content not only by Microsoft, but also by active members of the Azure community. If you have created a runbook that you think other users may benefit from, you can share it with the community on Script Center and it will show up in the Gallery. If you are interested in learning more about the Runbook Gallery, this TechNet article describes how the Gallery works in more detail and provides information on how you can contribute.

You can access the Gallery from +New by selecting App Services > Automation > Runbook > From Gallery.

In the Gallery wizard, you can browse for runbooks by selecting a category in the left-hand pane and then view the description of the selected runbook in the right pane. You can then preview the code and finally import the runbook into your personal space.

We will be adding the ability to expand the Gallery to include PowerShell scripts in the near future. These scripts will be converted to Workflows when they are imported to your Automation Account using the new PowerShell Script Converter. This means that you will have more content to choose from and a tool to help you get your PowerShell scripts running in Azure.

Hourly Scheduling

By popular request from our users, hourly scheduling is now available in Azure Automation. This feature allows you to schedule your runbook to run hourly or every X hours, making it that much easier to start runbooks at a regular frequency smaller than a day.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

Why 'Why' Is Everything

Xebia Blog - Mon, 10/06/2014 - 20:46

The 'Why' part is perhaps the most important aspect of a user story. It links to the sprint goal, which ultimately links to the product vision and the organisation's vision.

Lately, I was reminded of just how true this statement is. My youngest son is part of a soccer team and they have training every week. Part of the training consists of exercises that use a so-called speedladder.

While driving home after the training, I asked him what he especially liked about the training and what he wanted to do differently next time. This time he answered that he didn't like the training at all. So I asked him what part he disliked: "The speedladder. It is such a stupid thing to do." Although I realised it was a poor man's answer, I told him that some parts are not that nice and he needs to accept that: practising is not always fun. I wasn't happy with the answer but couldn't think of a better one.

Some time later, I overheard the trainers explaining to each other that the speedladder is for improving 'footwork', coordination, and sensory development. Then I got an idea!
I knew that his ambition is to become as good as Messi :-), so once home I explained this to my son and told him that the speedladder would help him improve his feints and make moves nobody could match. I noticed his twinkling eyes and he enthusiastically replied: "Dad, can we buy a speedladder so I can practise at home?".  Of course I did buy one! Since then the speedladder is the most favourite part of the soccer training!

Summary

The goal, the purpose, the 'Why' is the most important thing for people and teams. Communicating it clearly to the team is one of the most important things a product owner and organisation need to do in order to get high-performing teams.

How Clay.io Built their 10x Architecture Using AWS, Docker, HAProxy, and Lots More

This is a guest repost by Zoli Kahan from Clay.io. 

This is the first post in my new series 10x, where I share my experiences and how we do things at Clay.io to develop at scale with a small team. If you find these things interesting, we're hiring - zoli@clay.io.

The Cloud

CloudFlare

CloudFlare handles all of our DNS, and acts as a distributed caching proxy with some additional DDOS protection features. It also handles SSL.

Amazon EC2 + VPC + NAT server
Categories: Architecture

How to create a Value Stream Map

Xebia Blog - Mon, 10/06/2014 - 08:05

Value Stream Mapping (VSM) is a -very- useful tool to gain insight into the workflow of a process and can be used to identify both Value Adding Activities and Non Value Adding Activities in a process stream, while providing handles for optimizing the process chain. The results of a VSM can be used for many purposes: from writing out a business case, to defining a prioritized list of process optimizations within your organization, to pinpointing bottlenecks in your existing processes and gaining a common understanding of process-related issues.

When creating a VSM of your current software delivery process, you will quite possibly be amazed by the amount of waste, and therefore the room for improvement, you find. I challenge you to try this out within your own organization. It will leave you with a very powerful tool to explain to your management the steps that need to change, as it will leave you with facts.

To quickly get you started, I wrote out some handles on how to write out a proper Value Stream Map.

In many organizations there is a tendency to 'solely' perform local optimizations on steps in the process (i.e. per Business Unit), while in reality the largest process optimizations can be gained by optimizing the areas in between the process steps, which do not add any value to the customer at all: the Non Value Adding activities. Value Stream Mapping is a great tool for optimizing the complete process chain, not just the local steps.

The Example - Mapping a Software Delivery Process
Many example value streams found on the internet focus on selling a mortgage, packaging objects in a factory or some logistics process. The example I will be using focuses on a typical Software Delivery Process as we still see them today: the 'traditional' Software Delivery Process containing many manual steps.

You first need to map the 'as-is' process, as you need this to form the baseline. This baseline provides the insight required to remove steps from the process that do not add any value to your customer and can therefore be seen as pure waste to your organization.

It is important to write out the Value Stream as a group process (a workshop), where group members represent the people who are part of the value chain as it is today*. This is the only way to spot (hidden) activities, and it provides a common understanding of the situation today. Apart from that, failure to execute the Value Stream Mapping activity as a group process will very likely reduce the acceptance rate at the end of the day. Never write out a VSM in isolation.

Value Stream Mapping is 'a paper and pencil tool' where you should ask participants to write out the stickies and help you form the map. You yourself will not write on stickies (okay, okay, maybe sometimes … but be careful not to do the work for the group). Writing out a process should take about 4 to 6 hours, including discussions and the coffee breaks of course. So, now for the steps!

* Note that the example value stream is a simplified and fictional process based on the experience at several customers.

Step 0 Prepare
Make sure you have all materials available.

Here is a list:
- Two 4-meter strips of brown paper
- Plastic tape to attach the paper to the wall
- Square stickies in multiple colors
- Rectangular stickies in multiple colors
- Small stickies in one or two colors
- Lots of sharpies (people need to be able to pick up the pens)
- Colored 'dot' stickies

What do you need? (the helpful colleague not depicted)

Step 1 & 2 Define objectives and process steps
Make sure to work on one process at a time and start off by defining the customer objectives (the Voice of Customer). A common understanding of the VoC is important because at a later stage you will determine with the team which activities really add to this VoC and which do not. Quite often these objectives are defined in terms of Time, Cost and Quality. For our example, let's say the customer would like to be able to deliver a new feature every hour, at a maximum cost of $1000 per feature, and with zero defects.

First, write down the Voice of the Customer in the top right corner. Now, together with the group, determine all the actors (organizations / persons / teams) that are part of the current process and glue these actors as orange stickies onto the brown paper.

Defining Voice of Customer and Process Steps

Step 3 Define activities performed within each process step
With the group, determine per process step the activities that take place. Underneath the orange stickies, add green stickies that describe the activities performed in a given step.

Defining activities performed in each step

Step 4 Define Work in Progress (WiP)
Now, add pink stickies in between the steps, describing the number of features / requirements / objects / activities currently in progress between actors. This is referred to as WiP - Work in Progress. Wherever there is high WiP between steps, you have identified a bottleneck causing the process 'flow' to stop.

On top of the pink stickies with particularly high WiP levels, add a small sticky indicating what the group thinks is causing the high WiP. For instance, a document has to be distributed via internal mail, a wait is introduced for a bi-weekly meeting, or travel to another location is required. This information can later be used to optimize the process.

Note that in the workshop you should also take some time to find WiP within the activities themselves (this is not depicted in this example). Spend time on finding the causes of high WiP and add these as stickies to each activity.

Define work in process

Step 5 Identify rework
Rework is waste. Still, many times you'll see that a deliverable has to be returned to a previous step for reprocessing. Together with the group, determine where this happens and what causes it. A nice addition is to also write out first-time-right levels.

Identify rework

Step 6 Add additional information
Spend some time adding additional comments for the activities on the green stickies. Some activities might, for instance, not be optimized, not be easy to handle, or be considered obsolete from the group's perspective. Mark these comments with blue stickies next to the activity at hand.

Add additional information

Step 7 Add Process time, Wait time and Lead time and determine Process Cycle Efficiency

Now, as we have the process more or less complete, we can start adding information related to timing. In this step you would like to determine the following information:

  • Process time: the real amount of time required to perform a task without interruptions
  • Lead time: the actual time it takes for the activity to be completed (also known as elapsed time)
  • Wait time: time when no processing is done at all, for example when waiting on an 'event' like a bi-weekly meeting

(Not in picture): for every activity on a green sticky, write a small sticky with two numbers vertically aligned. The top number reflects the process time (i.e. 40 hours). The bottom number reflects the lead time (i.e. 120 hours).

(In picture): add a block diagram underneath the process, where the timing information in the upper section represents the total processing time for all activities and the timing information in the lower section represents the total lead time for all activities (just add up the timing information for the individual activities described in the previous paragraph). Also add noticeable wait time in between process steps. As a final step, to the right of this block diagram, add the totals.

Now that you have all information on the paper, the following can be calculated:

  • Total Process Time - the total time required to actually work on activities if one could focus on the activity at hand
  • Total Lead Time - the total time the process actually needs
  • Process Cycle Efficiency (PCE) - Total Process Time / Total Lead Time × 100%

Add this information to the lower right corner of your brown paper. The numbers for this example are:

Total Process Time: add all numbers in the top section of the stickies: 424 hours
Total Lead Time: add all numbers in the lower section of the stickies plus the wait time in between steps: 1740 hours
Process Cycle Efficiency (PCE): 424 / 1740 × 100% ≈ 24%
Note that 24% is -very- high, which is caused by using an example. Usually you'll see a PCE of about 4 - 8% for a traditional process.

Add process, wait and lead times

Step 8 Identify Customer Value Add and Non Value Add activities
Now, categorize the tasks into two types: tasks that add value for the customer (Customer Value Add, CVA) and tasks that do not (Non Value Add, NVA). The NVA tasks you can again split into two categories: tasks that add business value (Business Value Add, BVA) and 'Waste'. When optimizing a process, waste is to be eliminated completely, as it adds value neither to the customer nor to the business as a whole. But also for the activities categorized as BVA, you have to ask yourself whether they really add to the chain.

Mark CVA tasks with a green dot, BVA tasks with a blue dot and Waste with a red dot. Put the legend on the map for later reference.

When identifying CVA, NVA and BVA, force yourself to refer back to the Voice of Customer you jotted down in step 1 and think about who your customer is here. In this example, the customer is not the end user of the system, but the business. And it was the business that wanted Faster, Cheaper & Better. So when you start to tag each individual task, give yourself some time to figure out which tasks actually contribute to these goals.

Determine Customer Value Add & Non Value Add

To give you some guidance on how to approach tagging each task, I'll elaborate a bit on how I tagged the activities. Note again, this is just an example; within the workshop your team might tag differently.

Items I tagged as CVA: coding, testing (unit, static, acceptance), execution of tests and configuration of monitoring all add value for the customer (the business). Why? Because all these items relate to a faster, better (high quality through testing + monitoring) and cheaper (fewer errors through higher code quality) delivery of code.

Items I tagged as BVA: documentation, configuration of environments, deployment of VMs and installation of middleware are required to be able to deliver to the customer when using this (typical waterfall) Software Delivery Process. (Note: I do not necessarily concur with this process.) :)

Items I tagged as pure Waste, not adding any value for the customer: items like getting approval, the process of getting funding (although probably required), discussing details and documenting results for later reference, or waiting for the quarterly release cycle. None of these items are required to deliver faster, cheaper or better, so in that respect they can be considered waste.

That's it (and step 9) - you've mapped your current process
So, that's about it! The Value Stream Map is now more or less complete and contains all the relevant information required to optimize the process in a next step. Step 9 would be: take some time to write out the items/bottlenecks that are most important or easiest to address, and discuss possible solutions internally with your team. Focus on items you tagged as either BVA or pure waste and think of alternatives to eliminate these steps. Put your customer central, not your process! Just dropping an activity as a whole may seem somewhat radical, but sometimes good ideas just are! Note, by the way, that when you address one bottleneck, another will pop up. There will always be a bottleneck somewhere in the process, and therefore process optimization must be seen as a continuous process.

A final tip: before performing a Value Stream Mapping workshop at a customer, it might be a good idea to join a more experienced colleague first, just to get a grasp of the dynamics in such a workshop. The fact that all participants are at the same table, outlining the delivery process together and talking about it, will allow you to come up with an optimized process that each person buys into. But still, it takes some effort to get the workshop going. Take your time, do not rush it.

For now, I hope you can use the steps above to identify the current largest bottlenecks within your own organization and get going. In a next blog post, if there is sufficient interest, I will write about possible solutions to the bottlenecks in my example. If you have any ideas, just drop a line below so we can discuss! My aim would be to work towards a solution that caters for Continuous Delivery of software.

Michiel Sens.

Integrating Geb with FitNesse using the Groovy ConfigSlurper

Xebia Blog - Fri, 10/03/2014 - 18:01

We've been playing around with Geb for a while now, and writing tests using WebDriver and Groovy has been a delight! Geb integrates well with JUnit, TestNG, Spock, and Cucumber. All that's left to do is integrate it with FitNesse ... or not :-).

Setup Gradle and Dependencies

First we start by grabbing the Gradle FitNesse classpath builder from Arjan Molenaar.
Add the following dependencies to the Gradle build file:

compile 'org.codehaus.groovy:groovy-all:2.3.7'
compile 'org.gebish:geb-core:0.9.3'
compile 'org.seleniumhq.selenium:selenium-java:2.43.1'

Configure different drivers with the ConfigSlurper

Geb provides a configuration mechanism using the Groovy ConfigSlurper. It's perfect for environment sensitive configuration. Geb uses the geb.env system property to determine the environment to use. So we use the ConfigSlurper to configure different drivers.

import org.openqa.selenium.chrome.ChromeDriver
import org.openqa.selenium.firefox.FirefoxDriver

driver = { new FirefoxDriver() }

environments {
  chrome {
    driver = { new ChromeDriver() }
  }
}

FitNesse using the ConfigSlurper

We need to tweak the gradle build script to let FitNesse play nice with the ConfigSlurper. So we pass the geb.env system property as a JVM argument. Look for the gradle task "wiki" in the gradle build script and add the following lines.

def gebEnv = (System.getProperty("geb.env")) ? (System.getProperty("geb.env")) : "firefox"
jvmArgs "-Dgeb.env=${gebEnv}"

Since FitNesse spins up a separate 'service' process when you execute a test, we need to pass the geb.env system property into the COMMAND_PATTERN of FitNesse. That service needs the geb.env system property to let Geb know which environment to use. Put the following lines in the FitNesse page.

!define COMMAND_PATTERN {java -Dgeb.env=${geb.env} -cp %p %m}

Now you can control the Geb environment by specifying it on the following command line.

gradle wiki -Dgeb.env=chrome

The gradle build script will pass the geb.env system property as JVM argument when FitNesse starts up. And the COMMAND_PATTERN will pass it to the test runner service.

Want to see it in action? Sources can be found here.

Stuff The Internet Says On Scalability For October 3rd, 2014

Hey, it's HighScalability time:


Is the database landscape evolving or devolving?

 

  • 76 million: once more through the data breach; 2016: when a Zettabyte will be transferred over the Internet in one year
  • Quotable Quotes:
    • @wattersjames: Words missing from the Oracle PaaS keynote: agile, continuous delivery, microservices, scalability, polyglot, open source, community #oow14
    • @samcharrington: At last count, there were over 1,000,000 million containers running in the wild. http://stats.openvz.org  @jejb_ #ccevent
    • @mappingbabel: Oracle's cloud has 30,000 computers. Google has about two million computers. Amazon over a million. Rackspace over 100,000.
    • Andrew Auernheimer: The world should have given the GNU project some money to hire developers and security auditors. Hell, it should have given Stallman a place to sleep that isn't a couch at a university. There is no f*cking justice in this world.
    • John Nagle: The right answer is to track wins and losses on delayed and non-delayed ACKs. Don't turn on ACK delay unless you're sending a lot of non-delayed ACKs closely followed by packets on which the ACK could have been piggybacked. Turn it off when a delayed ACK has to be sent. I should have pushed for this in the 1980s.
    • @neil_conway: The number of < 15 node Hadoop clusters is >> the number of > 15 node Hadoop clusters. Unfortunately not reflected in SW architecture.

  • In the meat world Google wants devices to talk to you. The Physical Web. This will be better than Apple's beacons because Apple is severely limiting the functionality of beacons by requiring IDs be baked into applications. It's a very static and controlled world. In other words, it's very Apple. By using URLs Google is supporting both the web and apps; and adding flexibility because a single app can dynamically and generically handle the interaction from any kind of device. In other words, it's very Google. Apple has the numbers though, with hundreds of millions of beacon enabled phones in customer hands. Since it's just another protocol over BLE it should work on Apple devices as well.

  • Did Netflix survive the great AWS rebootathon? The Chaos Monkey says yes, yes they did: Out of our 2700+ production Cassandra nodes, 218 were rebooted. 22 Cassandra nodes were on hardware that did not reboot successfully. This led to those Cassandra nodes not coming back online. Our automation detected the failed nodes and replaced them all, with minimal human intervention. Netflix experienced 0 downtime that weekend. 

  • Google Compute Engine is following Moore's Law by announcing a 10% discount. Bandwidth is still expensive because networks don't care about silly laws. And margin has to come from somewhere.

  • Software is eating...well you've heard it before. Mesosphere cofounder envisions future data center as ‘one big computer’: The data center of the future will be fully virtualized, with everything from power supplies to storage devices consolidated into a single pool and managed by software, according to an executive whose company intends to lead the way.

  • Companies, startups, hacker spaces, teams are all intentional communities. People choose to work together towards some end. A consistent group killer is that people can be really sh*tty to each other. There's a lot of work that has been done around how to make intentional communities work. Holacracy is just one option. Here's a really interesting interview with Diana Leafe Christian on what makes communities work. It requires creating Community Glue, Good Process and Communication Skill, Effective Project Management, and good Governance and Decision making. Which is why most communities fail. Did you know there's even something called Non-Defensive Communication? If followed the Internet would collapse.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture