
Software Development Blogs: Programming, Software Testing, Agile Project Management

Architecture

In-Memory Computing at Aerospike Scale: When to Choose and How to Effectively Use JEMalloc

This is a guest post by Psi Mankoski, Principal Engineer, Aerospike.

When a customer’s business really starts gaining traction and their web traffic ramps up in production, they know to expect increased server resource load. But what do you do when memory usage keeps growing beyond all expectations? Have you found a memory leak in the server? Or is memory being lost to fragmentation? While you may be able to throw hardware at the problem for a while, DRAM is expensive, and real machines do have finite address space. At Aerospike, we have encountered these scenarios along with our customers as they continue to press through the frontiers of high scalability.

In the summer of 2013 we faced exactly this problem: big-memory (192 GB RAM) server nodes were running out of memory and crashing again within days of being restarted. We wrote an innovative memory accounting instrumentation package, ASMalloc [13], which revealed there was no discernible memory leak. We were being bitten by fragmentation.

This article focuses specifically on the techniques we developed for combating memory fragmentation, first by understanding the problem, then by choosing the best dynamic memory allocator for the problem, and finally by strategically integrating the allocator into our database server codebase to take best advantage of the disparate life-cycles of transient and persistent data objects in a heavily multi-threaded environment. For the benefit of the community, we are sharing our findings in this article, and the relevant source code is available in the Aerospike server open source GitHub repo. [12]

Executive Summary
  • Memory fragmentation can severely limit scalability and stability by wasting precious RAM and causing server node failures.

  • Aerospike evaluated memory allocators for its in-memory database use-case and chose the open source JEMalloc dynamic memory allocator.

  • Effective allocator integration must consider memory object life-cycle and purpose.

  • Aerospike optimized memory utilization by using JEMalloc extensions to create and manage per-thread (private) and per-namespace (shared) memory arenas.

  • Using these techniques, Aerospike saw a substantial reduction in fragmentation, and the production systems have been running non-stop for over 1.5 years.

Introduction
Categories: Architecture

How and Why Swiftype Moved from EC2 to Real Hardware

This is a guest post by Oleksiy Kovyrin, Head of Technical Operations at Swiftype. Swiftype currently powers search on over 100,000 websites and serves more than 1 billion queries every month.

When Matt and Quin founded Swiftype in 2012, they chose to build the company’s infrastructure using Amazon Web Services. The cloud seemed like the best fit because it was easy to add new servers without managing hardware and there were no upfront costs.

Unfortunately, while some of the services (like Route53 and S3) ended up being really useful and incredibly stable for us, the decision to use EC2 created several major problems that plagued the team during our first year.

Swiftype’s customers demand exceptional performance and always-on availability, and our ability to provide that depends heavily on how stable and reliable our underlying infrastructure is. With Amazon we experienced networking issues, hanging VM instances, unpredictable performance degradation (probably due to noisy neighbors sharing our hardware, but there was no way to know) and numerous other problems. No matter what problems we experienced, Amazon always had the same solution: pay Amazon more money by purchasing redundant or higher-end services.

The more time we spent working around the problems with EC2, the less time we could spend developing new features for our customers. We knew it was possible to make our infrastructure work in the cloud, but the effort, time and resources it would take to do so were much greater than what it would take to migrate away.

After a year of fighting the cloud, we decided to leave EC2 for real hardware. Fortunately, this no longer means buying your own servers and racking them up in a colo. Managed hosting providers offer a good balance of physical hardware, virtualized instances, and rapid provisioning. Given our previous experience with hosting providers, we chose SoftLayer. Their excellent service and infrastructure quality, provisioning speed, and customer support made them the best choice for us.

After more than a month of hard work preparing the inter-data center migration, we were able to execute the transition with zero downtime and no negative impact on our customers. The migration to real hardware resulted in enormous improvements in service stability from day one, provided a huge (~2x) performance boost to all key infrastructure components, and reduced our monthly hosting bill by ~50%.

This article will explain how we planned for and implemented the migration process, detail the performance improvements we saw after the transition, and offer insight for younger companies about when it might make sense to do the same.

Preparing for the switch
Categories: Architecture

Stuff The Internet Says On Scalability For March 13th, 2015

Hey, it's HighScalability time:


1957: 13 men delivering a computer. 2017: a person may wear 13 computing devices (via Vala Afshar)
  • 5.3 million/second: LinkedIn metrics collected; 1.7 billion: Tinder ratings per day
  • Quotable Quotes:
    • @jankoum: WhatsApp crossed 1B Android downloads. btw our android team is four people + Brian. very small team, very big impact.
    • @daverog: Unlike milk, software gets more expensive, per unit, in larger quantities (diseconomies of scale) @KevlinHenney #qconlondon
    • @DevOpsGuys: Really? Wow! RT @DevOpsGuys: Vast majority of #Google software is in a single repository. 60million builds per year. #qconlondon
    • Ikea: We are world champions in making mistakes, but we’re really good at correcting them.
    • Chris Lalonde: I know a dozen startups that failed from their own success, these problems are only going to get bigger.
    • Chip Overclock: Configuration Files Are Just Another Form of Message Passing (or Maybe Vice Versa)
    • @rbranson: OH: "call me back when the number of errors per second is a top 100 web site."
    • @wattersjames: RT @datacenter: Worldwide #server sales reportedly reach $50.9 billion in 2014
    • @johnrobb: Costs of surveillance.  Bots are cutting the costs by a couple of orders of magnitude more.

  • Is this a sordid story of intrigue? How GitHub Conquered Google, Microsoft, and Everyone Else. Not really. The plot arises from the confluence of the creation of Git, the evolution of Open Source, small network effects, and a tide rising so slowly we may have missed it. Brian Doll (a GitHub VP) is letting us know the water is about nose height: Of GitHub’s ranking in the top 100, Doll says, “What that tells me is that software is becoming as important as the written word.”

  • This is audacious. Peter Lawrey Describes Petabyte JVMs. For when you really really want to avoid a network hit and access RAM locally. The approach is beautifully twisted: It’s running on a machine with 6TB and 6 NUMA regions. Since, as previously noted, we want to try and restrict the heap or at least the JVM to a single NUMA region, you end up with 5 JVMs with a heap of up to 64GB each, and memory mapped caches for both the indexes and the raw data, plus a 6th NUMA region reserved just for the operating system and monitoring tasks.

  • There's a parallel between networks and microprocessors: network latency is not improving and microprocessor clock speeds are not increasing, yet we are getting more and more bandwidth and more and more MIPS per box. Programming for this reality will be a new skill. Observed while listening to How Did We End Up Here?

  • Advances in medicine will happen because of big data. We need larger sample sizes to figure things out. Sharing Longevity Data.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Paper: Staring into the Abyss: An Evaluation of Concurrency Control with One Thousand Cores

How would your OLTP database perform if it had to scale up to 1024 cores? Not very well, according to this fascinating paper: Staring into the Abyss: An Evaluation of Concurrency Control with One Thousand Cores, in which a few intrepid chaos monkeys report the results of their fiendish experiment. The conclusion: we need a DBMS architecture redesigned and rebuilt from the ground up.

Summary:

Categories: Architecture

Video - Agility and the essence of software architecture

Coding the Architecture - Simon Brown - Wed, 03/11/2015 - 22:59

This is just a quick note to say that the video of my "Agility and the essence of software architecture" talk from YOW! 2014 in Brisbane is now available to watch online. This talk covers the subject of software architecture and agile from a number of perspectives, focusing on how to create agile software systems in an agile way.

Agility and the essence of software architecture

The slides are also available to view online/download. A huge thanks to everybody who attended for making it such a fun session. :-)

Categories: Architecture

Cassandra Migration to EC2

This is a guest post by Tommaso Barbugli, the CTO of getstream.io, a web service for building scalable newsfeeds and activity streams.

In January we migrated our entire infrastructure from dedicated servers in Germany to EC2 in the US. The migration included a wide variety of components, web workers, background task workers, RabbitMQ, Postgresql, Redis, Memcached and our Cassandra cluster. Our main requirement was to execute this migration without downtime.

This article covers the migration of our Cassandra cluster. If you’ve never run a Cassandra migration before, you’ll be surprised to see how easy this is. We were able to migrate Cassandra with zero downtime using its awesome multi-data center support. Cassandra allows you to distribute your data in such a way that a complete set of data is guaranteed to be placed on every logical group of nodes (e.g. nodes in the same data center, rack, or EC2 region...). This feature is a perfect fit for migrating data from one data center to another. Let’s start by introducing the basics of a Cassandra multi-datacenter deployment.
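
To get a feel for what this looks like in practice, here is a minimal sketch of the keyspace change that drives such a migration. The keyspace name (stream), the datacenter names (old-dc, ec2-us-east) and the contact point are invented for illustration, and the wiring assumes the DataStax Java driver; in a real cluster the datacenter names come from your snitch configuration.

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class KeyspaceReplicationSketch {
        public static void main(String[] args) {
            // Connect to any node in the existing (source) data center.
            try (Cluster cluster = Cluster.builder()
                    .addContactPoint("10.0.0.1")          // hypothetical seed node
                    .build();
                 Session session = cluster.connect()) {

                // NetworkTopologyStrategy places a full replica set in every named
                // data center. Declaring replicas in both the old and the new data
                // center is what makes a zero-downtime move possible: "old-dc" keeps
                // serving traffic while "ec2-us-east" is populated.
                session.execute(
                    "ALTER KEYSPACE stream WITH replication = {"
                  + " 'class': 'NetworkTopologyStrategy',"
                  + " 'old-dc': 3,"
                  + " 'ec2-us-east': 3 }");

                // Next step (not shown): run `nodetool rebuild old-dc` on each node in
                // the new data center to stream the existing data across.
            }
        }
    }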

Cassandra, Snitches and Replication strategies
Categories: Architecture

AppLovin: Marketing to Mobile Consumers Worldwide by Processing 30 Billion Requests a Day

This is a guest post from AppLovin's VP of engineering, Basil Shikin, on the infrastructure of its mobile marketing platform. Major brands like Uber, Disney, Yelp and Hotels.com use the platform, which processes 30 billion requests and 60 terabytes of data a day.

AppLovin's marketing platform provides marketing automation and analytics for brands who want to reach their consumers on mobile. The platform enables brands to use real-time data signals to make effective marketing decisions across one billion mobile consumers worldwide.

Core Stats
  • 30 Billion ad requests per day

  • 300,000 ad requests per second, peaking at 500,000 ad requests per second

  • 5ms average response latency

  • 3 Million events per second

  • 60TB of data processed daily

  • ~1000 servers

  • 9 data centers

  • ~40 reporting dimensions

  • 500,000 metrics data points per minute

  • 1 PB Spark cluster

  • 15GB/s peak disk writes across all servers

  • 9GB/s peak disk reads across all servers

  • Founded in 2012, AppLovin is headquartered in Palo Alto, with offices in San Francisco, New York, London and Berlin.

 

Technology Stack
Categories: Architecture

The Architecture of Algolia’s Distributed Search Network

Guest post by Julien Lemoine, co-founder & CTO of Algolia, a developer-friendly search-as-a-service API.

Algolia started in 2012 as an offline search engine SDK for mobile. At the time, we had no idea that within two years we would have built a worldwide distributed search network.

Today Algolia serves more than 2 billion user-generated queries per month from 12 regions worldwide, our average server response time is 6.7ms, and 90% of queries are answered in less than 15ms. Our unavailability rate on search is below 10^-6, which represents less than 3 seconds per month.

The challenges we faced with the offline mobile SDK were technical limitations imposed by the nature of mobile. These challenges forced us to think differently when developing our algorithms because classic server-side approaches would not work.

Our product has evolved greatly since then. We would like to share our experiences with building and scaling our REST API built on top of those algorithms.

We will explain how we use distributed consensus for high availability and synchronization of data across different regions around the world, and how we route queries to the closest location via anycast DNS.

The data size misconception
Categories: Architecture

Package by component and architecturally-aligned testing

Coding the Architecture - Simon Brown - Sun, 03/08/2015 - 12:47

I've seen and had lots of discussion about "package by layer" vs "package by feature" over the past couple of weeks. They both have their benefits but there's a hybrid approach I now use that I call "package by component". To recap...

Package by layer

Let's assume that we're building a web application based upon the Web-MVC pattern. Packaging code by layer is typically the default approach because, after all, that's what the books, tutorials and framework samples tell us to do. Here we're organising code by grouping things of the same type.

Package by layer

There's one top-level package for controllers, one for services (e.g. "business logic") and one for data access. Layers are the primary organisation mechanism for the code. Terms such as "separation of concerns" are thrown around to justify this approach and generally layered architectures are thought of as a "good thing". Need to switch out the data access mechanism? No problem, everything is in one place. Each layer can also be tested in isolation from the others around it, using appropriate mocking techniques, etc. The problem with layered architectures is that they often turn into a big ball of mud because, in Java anyway, you need to mark your classes as public for much of this to work.
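
As a rough sketch (all class and package names here are invented for illustration), a package-by-layer structure typically ends up with every type public so that each layer can reach the one below it:

    // --- com/example/web/OrderController.java --------------------------------
    package com.example.web;

    import com.example.service.OrderService;

    public class OrderController {               // public: the web framework needs it
        private final OrderService orders;

        public OrderController(OrderService orders) { this.orders = orders; }

        public String viewOrders(String customerId) {
            return String.join(", ", orders.ordersFor(customerId));
        }
    }

    // --- com/example/service/OrderService.java -------------------------------
    package com.example.service;

    import com.example.data.OrderDao;
    import java.util.List;

    public class OrderService {                  // public so the web layer can see it
        private final OrderDao dao;

        public OrderService(OrderDao dao) { this.dao = dao; }

        public List<String> ordersFor(String customerId) {
            return dao.findByCustomerId(customerId);
        }
    }

    // --- com/example/data/OrderDao.java ---------------------------------------
    package com.example.data;

    import java.util.List;

    public interface OrderDao {                  // public so the service layer can see it
        List<String> findByCustomerId(String customerId);
    }

Nothing stops feature code elsewhere from importing OrderDao directly, which is exactly how the big ball of mud creeps in.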

Package by feature

Instead of organising code by horizontal slice, package by feature seeks to do the opposite by organising code by vertical slice.

Package by feature

Now everything related to a single feature (or feature set) resides in a single place. You can still have a layered architecture, but the layers reside inside the feature packages. In other words, layering is the secondary organisation mechanism. The often cited benefit is that it's "easier to navigate the codebase when you want to make a change to a feature", but this is a minor thing given the power of modern IDEs.

What you can do now though is hide feature specific classes and keep them out of sight from the rest of the codebase. For example, if you need any feature specific view models, you can create these as package-protected classes. The big question though is what happens when that new feature set C needs to access data from features A and B? Again, in Java, you'll need to start making classes publicly accessible from outside of the packages and the big ball of mud will again emerge.

Package by layer and package by feature both have their advantages and disadvantages. To quote Jason Gorman from Schools of Package Architecture - An Illustration, written seven years ago:

To round off, then, I would urge you to be mindful of leaning to far towards either school of package architecture. Don't just mindlessly put socks in the sock draw and pants in the pants draw, but don't be 100% driven by package coupling and cohesion to make those decisions, either. The real skill is finding the right balance, and creating packages that make stuff easier to find but are as cohesive and loosely coupled as you can make them at the same time.

Package by component

This is a hybrid approach with increased modularity and an architecturally-evident coding style as the primary goals.

Package by component

The basic premise here is that I want my codebase to be made up of a number of coarse-grained components, with some sort of presentation layer (web UI, desktop UI, API, standalone app, etc) built on top. A "component" in this sense is a combination of the business and data access logic related to a specific thing (e.g. domain concept, bounded context, etc). As I've described before, I give these components a public interface and package-protected implementation details, which includes the data access code. If that new feature set C needs to access data related to A and B, it is forced to go through the public interface of components A and B. No direct access to the data access layer is allowed, and you can enforce this if you use Java's access modifiers properly. Again, "architectural layering" is a secondary organisation mechanism. For this to work, you have to stop using the public keyword by default. This structure raises some interesting questions about testing, not least about how we mock-out the data access code to create quick-running "unit tests".
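
Reusing the invented orders example from above, a minimal sketch of a component might look like this, with the interface as the only public type and everything else package-protected:

    // --- com/example/orders/OrdersComponent.java ------------------------------
    // The only public type in the package: the component's interface.
    package com.example.orders;

    import java.util.List;

    public interface OrdersComponent {
        List<String> ordersFor(String customerId);
    }

    // --- com/example/orders/OrdersComponentImpl.java --------------------------
    // Package-protected: the business logic is invisible outside the component.
    package com.example.orders;

    import java.util.List;

    class OrdersComponentImpl implements OrdersComponent {
        private final OrdersRepository repository;

        OrdersComponentImpl(OrdersRepository repository) {
            this.repository = repository;
        }

        @Override
        public List<String> ordersFor(String customerId) {
            return repository.findByCustomerId(customerId);
        }
    }

    // --- com/example/orders/OrdersRepository.java ------------------------------
    // Package-protected data access: code in other packages (a web controller,
    // or that new feature set C) cannot reach it and is forced through the
    // OrdersComponent interface above.
    package com.example.orders;

    import java.util.List;

    interface OrdersRepository {
        List<String> findByCustomerId(String customerId);
    }

A controller in some other package can depend on OrdersComponent, but the compiler will not let it touch OrdersRepository or the implementation class, which is what enforces the component boundary.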

Architecturally-aligned testing

The short answer is don't bother, unless you really need to. I've spoken about and written about this before, but architecture and testing are related. Instead of the typical testing triangle (lots of "unit" tests, fewer slower running "integration" tests and even fewer slower UI tests), consider this.

Architecturally-aligned testing

I'm trying to make a conscious effort to not use the term "unit testing" because everybody has a different view of how big a "unit" is. Instead, I've adopted a strategy where some classes can and should be tested in isolation. This includes things like domain classes, utility classes, web controllers (with mocked components), etc. Then there are some things that are easiest to test as components, through the public interface. If I have a component that stores data in a MySQL database, I want to test everything from the public interface right back to the MySQL database. These are typically called "integration tests", but again, this term means different things to different people. Of course, treating the component as a black box is easier if I have control over everything it touches. If you have a component that is sending asynchronous messages or using an external, third-party service, you'll probably still need to consider adding dependency injection points (e.g. ports and adapters) to adequately test the component, but this is the exception not the rule. All of this still applies if you are building a microservices style of architecture. You'll probably have some low-level class tests, hopefully a bunch of service tests where you're testing your microservices through their public interface, and some system tests that run scenarios end-to-end. Oh, and you can still write all of this in a test-first, TDD style if that's how you work.
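
Staying with the invented orders component above, a component-level test might look like the sketch below (JUnit 4 assumed). The wiring helper is hypothetical and left unimplemented here; the point is that the test drives the component through its public interface, with its real data access code and a real test database behind it, rather than mocking the repository.

    package com.example.orders;

    import static org.junit.Assert.assertEquals;

    import java.util.Arrays;
    import java.util.List;
    import org.junit.Test;

    public class OrdersComponentTest {

        @Test
        public void returnsTheOrdersStoredForACustomer() {
            // Hypothetical wiring: builds the package-protected implementation with
            // its real, MySQL-backed repository pointed at a test schema.
            OrdersComponent orders = newOrdersComponent("jdbc:mysql://localhost/orders_test");

            List<String> result = orders.ordersFor("customer-42");

            // Exercises the whole component: public interface to database and back.
            assertEquals(Arrays.asList("order-1", "order-2"), result);
        }

        private OrdersComponent newOrdersComponent(String jdbcUrl) {
            // Construction of the JDBC-backed repository is omitted from this sketch.
            throw new UnsupportedOperationException("wiring omitted");
        }
    }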

I'm using this strategy for some systems that I'm building and it seems to work really well. I have a relatively simple, clean and (to be honest) boring codebase with understandable dependencies, minimal test-induced design damage and a manageable quantity of test code. This strategy also bridges the model-code gap, where the resulting code actually reflects the architectural intent. In other words, we often draw "components" on a whiteboard when having architecture discussions, but those components are hard to find in the resulting codebase. Packaging code by layer is a major reason why this mismatch between the diagram and the code exists. Those of you who are familiar with my C4 model will probably have noticed the use of the terms "class" and "component". This is no coincidence. Architecture and testing are more related than perhaps we've admitted in the past.

p.s. I'll be speaking about this topic over the next few months at events across Europe, the US and (hopefully) Australia

Categories: Architecture

10 Leadership Ideas to Improve Your Influence and Impact

"In a battle between two ideas, the best one doesn't necessarily win. No, the idea that wins is the one with the most fearless heretic behind it." -- Seth Godin

One leadership idea can change your life in an instant.

Like the right key, the right leadership idea can instantly unlock or unleash what you’re capable of.

I’ve seen some leaders lose their jobs because they didn’t know how to adapt their leadership style.   I’ve seen other leaders crumble with anxiety because they didn’t know how to balance connection and conviction.

I’ve seen other leaders operate at a higher level, and influence without authority.  I’ve seen amazing leaders in action that inspire others through their stories of the art of the possible.

Here are a handful of leadership ideas that you can put into practice.

  1. 3 Stories Leaders Need to Tell
  2. 10 Ways Leaders Hold People Accountable
  3. A Leader is the Trustee of the Intangibles
  4. Balance Connection and Conviction to Reduce Anxiety and Lead Effectively
  5. Boldness Has Genius, Magic, and Power in It
  6. Change Your Leadership Style Based on Capability and Motivation
  7. Guide Your Path with Vision, Values, and Goals
  8. Lead Yourself and Others with Stories
  9. Warrior Leaders Reveal Great Character
  10. What Executive Leaders at Microsoft Taught Me

None of these leadership ideas are new.  They may be new for you.  But they are proven practices for leadership that many leaders have learned the hard way.

The nice thing about ideas is that all you have to do is try them, and find what works for you.  (Always remember to try ideas in a Bruce Lee sort of way, “adapt what is useful”, and don’t throw the baby out with the bathwater.)

If you want to get hard-core, I also have a roundup of the best business books that have influenced Microsoft leaders:

36 Best Business Books that Influence Microsoft Leaders

You’ll find a strange but potent mix of business books ranging from skills you learned in kindergarten to ways to change the world by spreading your ideas like a virus.

Enjoy.

You Might Like

7 Metaphors for Leadership Transformation

10 Free Leadership Tools for Work and Life

347 Personal Effectiveness Articles to Help You Change Your Game

Leadership Development in a Box

Leadership Quotes

Categories: Architecture, Programming

Security Concerns for Legacy Systems

Coding the Architecture - Simon Brown - Sat, 03/07/2015 - 15:12

Information security is a quality attribute that can’t easily be retrofitted. Concerns such as authorisation, authentication, access and data protection need to be defined early so they can influence the solution's design.

However, many aspects of information security aren’t static. External security threats are constantly evolving and the maintainers of a system need to keep up-to-date to analyse them. This may force change on an otherwise stable system.

Functional changes to a legacy system also need to be analysed from a security standpoint. The initial design may have taken the security requirements into consideration (a quality attribute workshop is a good way to capture these) but are they re-considered when features are added or changed? What if a sub-component is replaced or services moved to a remote location? Is the analysis re-performed?

It can be tempting to view information security as a macho battle between evil, overseas hackers (people always think they come from another country) and your own underpaid heroes, but many issues have simple roots. Many data breaches are not hacks but basic errors - I once worked at a company where an accountant intern accidentally emailed a spreadsheet with everyone’s salary to the whole company.

Let’s have a quick look at some of the issues that a long running, line-of-business application might face:

Lack of Patching

Have you applied all the vendors’ patches? Not just to the application but to the software stack beneath? Has the vendor applied patches to the third-party libraries that they rely upon? What about the version of Java/.NET that the application is running, or the OS beneath that? When an application is initially developed it will use the latest versions, but unless a full dependency tree is recorded the required upgrades can be difficult to track. It is easy to forget these dependent upgrades even on an actively developed system.

Even if you do have a record of all components and subcomponents, there is no guarantee that, when upgraded, they will be compatible or work as before. The amount of testing required can be high, and this acts as a deterrent to change - you only need a single broken component for the entire system to be at risk.

Passwords

Passwords are every operations team’s nightmare. Over the last 20 years the best-practice advice for generating and storing passwords has changed dramatically. Users used to be advised to think of an unusual password and not write it down. However, it turns out that ‘unusual’ is actually very common, with many people picking the same ‘unusual’ word. Leaked password lists from large websites have demonstrated how many users pick the same password. Therefore the advice, and the passwords allowed by modern systems, have changed (often to multi-word passphrases). Does your legacy system enforce this or is it filled with passwords from a brute-force list?

Passwords also tend to get shared over time. What happens when someone goes on holiday, a weekly report needs to be run, but the template exists within a specific user’s account? Often they are phoned up and asked for their password. This may indicate a feature flaw in the product, but it is very common. There are many ways to improve this, from frequent password changes to two-factor authentication, but these increase the burden on the operations team.

Does your organisation have an employee leaver’s process? Do you suspend account access? If you have shared accounts (“everyone knows the admin password”) this may be difficult or disruptive. Having a simple list (or preferably an automated script) to execute for each employee that leaves is important.

There are similar problems with cryptographic keys. Are they long enough to comply with the latest advice? Do they use a best practice algorithm or one with a known issue? It is amazing how many websites use old certificates that should be replaced or have even expired. How secure is your storage of these keys?

Are any of your passwords or keys embedded in system files? This may have seemed safe when the entire system was on a single machine in a secure location but if the system has been restructured this may no longer be the case. For example, if some of the files have been moved to a shared or remote location, it may be possible for a non-authorised party to scan them.

Moving from Closed to Open Networks

A legacy system might have used a private, closed network for reasons of speed and reliability, but it may now be possible to meet those quality attributes on an open network and vastly reduce costs. However, if you move services from closed networks to open networks, you have to reconsider the use of encryption on the connection. The security against eavesdropping/network sniffing was a fortunate side-effect of the network being private, so the requirement may not have been captured - it was a given. This can be dangerous if the original requirements are used for restructuring. These implicit quality attributes are important, and you should consider whether a feature change creates new ones. You might find these cost-saving changes dropped on you by an excited accountant (who thinks their brilliance has just halved communications charges) with little warning!

Moving to an open network will make services reachable by unknown clients. This raises issues ranging from Denial-of-Service attacks through to malicious clients attempting to use bad messages (such as SQL injection) to compromise a system. There are various techniques that can be applied at the network level to help here (VPNs, blocking unknown IPs, deep packet inspection, etc.) but ultimately the code running in the services needs to be security aware - this is very, very hard to do to an entire system after it is written.
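
To make the SQL injection point concrete, here is a minimal Java/JDBC sketch (the accounts table and column names are invented): the first method splices untrusted input into the SQL text, while the second binds it as a parameter so it can never change the meaning of the query.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class AccountLookup {

        // Vulnerable: a client-supplied value like "x' OR '1'='1" becomes part of
        // the SQL statement and changes what the query does.
        static boolean accountExistsUnsafe(Connection conn, String accountName) throws SQLException {
            String sql = "SELECT 1 FROM accounts WHERE name = '" + accountName + "'";
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                return rs.next();
            }
        }

        // Safer: the untrusted value is bound as a parameter, so whatever the client
        // sends is treated as data, never as SQL.
        static boolean accountExistsSafe(Connection conn, String accountName) throws SQLException {
            try (PreparedStatement stmt =
                     conn.prepareStatement("SELECT 1 FROM accounts WHERE name = ?")) {
                stmt.setString(1, accountName);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next();
                }
            }
        }
    }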

Migrating to an SOA or micro-service architecture increases these effects as the larger number of connections and end-points now need to be secured. A well modularised system may be easy to distribute but intra-process communication is much more secure than inter-process or inter-machine.

Modernising Data Formats

Migrating from a closed, binary data format to an open one (e.g. XML) for messaging or storage makes navigating the data easier, but this applies to casual scanning by an attacker as well. Relying on security by obscurity isn’t a good idea (and this is not an excuse to avoid improving the readability of data), but many systems do. When improving data formats you should re-consider where the data is being stored, what has access to it and whether encryption is required.

Similar concerns should be addressed when making source code open source. Badly written code is now available for inspection and attack vectors can be examined. In particular, you should be careful to avoid leaking configuration into the source code if you intend to make it open.

New Development and Copied Data

If new features are developed for a system that has been static for a while, it is likely that new developer, test, QA and pre-production environments will be created. (The originals will either be out of date or not kept due to cost). The quickest and most accurate way to create test environments is to clone production. This works well but copied data is as important as the original. Do you treat this copied data with the same security measures as production? If you have proprietary or confidential customer information then it should be. Note that the definition of ‘confidential’ varies but you might be surprised at how broad some regulators make it. You may also be restricted in the information that you can move out of the country - is your development or QA team located overseas?

Remember, you are not just restricting access to your system but your data as well.

Server Consolidation

Systems that pushed the boundaries of computing power 15 years ago can now be run on a cheap commodity server. Many organisations consolidate their systems on a regular basis, replacing multiple old servers with a single powerful one. An organisation may have been through this process many times. If so, how has this been done and has this increased the visibility of these processes/services to others? If done correctly, with virtualisation tools, then the virtual machines should still be isolated, but this is still worth checking. However, a more subtle problem can be caused by the removal of the infrastructure between services. There may no longer be routers or firewalls between the services (or virtual ones with a different setup) as they now sit on the same physical device. This means that a vulnerable, insecure server is less restricted - and therefore a more dangerous staging point if compromised.

A server consolidation process should, instead, be used as an opportunity to increase the security and isolation of services as virtual firewalls are easy to create and monitoring can be improved.

Improved Infrastructure Processes

Modifications to support processes can create security holes. For example, consider the daily backup of an application’s data. The architect of a legacy system may have originally expected backups to be placed onto magnetic tapes and stored in a fire-safe near to the server itself (with periodic backups taken securely offsite).

A more modern process would use offsite, real-time replication. Many legacy systems have had their backup-to-tape processes replaced with a backup-to-SAN which is replicated offsite. This is simple to implement, faster, more reliable and allows quicker restoration. However, who now has access to these backups? When a tape was placed in a fire-safe, the only people with access to the copied data were those with physical access to the safe. Now it can be accessed by anyone with read permission in any location the data is copied to. Is this the same group of people as before? It is likely to be a much larger group (over a wide physical area) and could include those with borrowed passwords or those that have left the organisation.

Any modifications to the backup processes need to be analysed from an information security perspective. This is not just for the initial backup location but anywhere else the data is copied to.

Conclusion

Information security is an ongoing process that has multiple drivers, both internal and external to your system. The actions required will vary greatly between systems and depend on the system architecture, its business function and the environment it exists within. Any of these can change and affect the security. Architectural thinking and awareness are central to providing this and a good place to start is a diagram and a risk storming session (with a taxonomy).

Categories: Architecture

Stuff The Internet Says On Scalability For March 6th, 2015

Hey, it's HighScalability time:


The future of technology in one simple graph (via swardley)
  • $50 billion: the worth of AWS (is it low?); 21 petabytes: size of the Internet Archive; 41 million: # of views of posts about a certain dress

  • Quotable Quotes:
    • @bpoetz: programming is awesome if you like feeling dumb and then eventually feeling less dumb but then feeling dumb about something else pretty soon
    • @Steve_Yegge: Saying you don't need debuggers because you have unit tests is like saying you don't need detectives because you have jails.
    • Nasser Manesh: “Some of the things we ran into are issues [with Docker] that developers won’t see on laptops. They only show up on real servers with real BIOSes, 12 disk drives, three NICs, etc., and then they start showing up in a real way. It took quite some time to find and work around these issues.”
    • Guerrilla Mantras Online: Best practices are an admission of failure.
    • Keine Kommentare: Using master-master for MySQL? To be frankly we need to get rid of that architecture. We are skipping the active-active setup and show why master-master even for failover reasons is the wrong decision.
    • Ed Felten: the NSA’s actions in the ‘90s to weaken exportable cryptography boomeranged on the agency, undermining the security of its own site twenty years later.
    • @trisha_gee: "Java is both viable and profitable for low latency" @giltene at #qconlondon
    • @michaelklishin: Saw Eric Brewer trending worldwide. Thought the CAP theorem finally went mainstream. Apparently some Canadian hockey player got traded.
    • @ThomasFrey: Mobile game revenues will grow 16.5% in 2015, to more than $3B
    • John Allspaw: #NoEstimates is an example of something that engineers seem to do a lot, communicating a concept by saying what it’s not.

  • Improvements to thread contention handling, NDB API receive processing, and scans and PK lookups in the data nodes have led to a monster 200M reads per second in MySQL Cluster 7.4. That's on an impressive hardware configuration to be sure, but it doesn't matter how mighty the hardware is if your software can't drive it.

  • Have you heard of this before? LinkedIn shows how to use SDCH, an HTTP/1.1-compatible extension, which reduces the required bandwidth through the use of a dictionary shared between the client and the server, to achieve impressive results: When sdch and gzip are combined, we have seen additional compression as high as 81% on certain files when compared to gzip only. Average additional compression across our static content was about 24%. 

  • Double awesome description of How we [StackExchange] upgrade a live data center. It was an intricate, multi-day, highly choreographed dance. Toes were stepped on, but there was also great artistry. The result of the new beefier hardware: the decrease in question render times (from approx. 30-35ms to 10-15ms) is only part of the fun. Great comment thread on reddit.

  • A jaunty exploration of Microservices: What are They and Why Should You Care?  Indix is following Twitter and Netflix by tossing their monolith for microservices. The main idea: Microservices decouples your systems and gives more options and choices to evolve them independently.

  • Wired's new stack: WordPress, PHP, Stylus for CSS, jQuery, starting with React.js, JSON, Vagrant, Gulp for task automation, Git hooks, linting, GitHub, Jenkins.

  • Have you ever wanted to search Hacker News? Algolia has created a great search engine for HN.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Successful Digital Vision Starts at the Top

Business change is tough.   Just try it at Cloud speed, and you’ll know what I mean.

That said, digital business transformation is reshaping companies and industries around the world, at a rapid rate.

If you don’t cross the Cloud chasm, and learn how to play in the new digital economy,  you might just get left behind.

Sadly, not every executive has a digital vision.

That’s a big deal because the pattern here is that successful digital business transformation starts at the top of the company.  And it starts with digital vision.

But just having a digital vision is not enough.

It has to be a shared transformative digital vision.   Not a mandate, but a shared digital vision from the top, that’s led and made real by the people in the middle and lower levels.

In the book, Leading Digital: Turning Technology into Business Transformation, George Westerman, Didier Bonnet, and Andrew McAfee, share how successful companies and executives drive digital business transformation through shared transformative digital visions.

Employees Don’t Always Get the WHY, WHAT, or HOW of Digital Business Transformation

You need a digital vision at the top.   Otherwise, it’s like pushing rocks uphill.  Worse, not everybody will be in the game, or know what position they play, or even how to play the game.

Via Leading Digital: Turning Technology into Business Transformation:

“The changes being wrought through digital transformation are real.  Yet, even when leaders see the digital threats and opportunity, employees may need to be convinced.  Many employees feel they are paid to do a job, not to change that job.  And they have lived through big initiatives in the past that failed to turn into reality.  To many, digital transformations is either irrelevant or just another passing fad.  Still other people may not understand how the change affects their jobs or how they might make the transition.”

Only Senior Executives Can Create a Compelling Vision of the Future

Digital business transformation must be led.   Senior executives are in the right position to create a compelling future all up, and communicate it across the board.

Via Leading Digital: Turning Technology into Business Transformation:

“Our research shows that successful digital transformation starts at the top of the company.  Only the senior-most executives can create a compelling vision of the future and communicate it throughout the organization.  Then people in the middle and lower levels can make the vision a reality.  Managers can redesign process, workers can start to work differently, and everyone can identify new ways to meet the vision.  This kind of change doesn't happen through simple mandate.  It must be led.

Among the companies we studied, none have created true digital transformation through a bottom-up approach.  Some executives have changed their parts of the business--for example, product design and supply chain at Nike--but the executives stopped at the boundaries of their business units.  Changing part of your business is not enough.  Often, the real benefits of transformation come from seeing potential synergies across silos and then creating conditions through which everyone can unlock that value.  Only senior executives are positioned to drive this kind of boundary-spanning change.”

Digital Masters Have a Shared Digital Vision (While Others Do Not)

As the business landscape is reshaping, you are either a disruptor or the disrupted.  The Digital Masters that are creating the disruption in their business and in their industries have shared digital visions, and re-imagine their business for a mobile-first, Cloud-first world, and a new digital economy.

Via Leading Digital: Turning Technology into Business Transformation:

“So how prevalent is digital vision? In our global survey of 431 executives in 391 companies, only 42 percent said that their senior executive had a digital vision.  Only 35 percent said the vision was shared among senior and middle managers.  These numbers are surprisingly low, given the rapid rate at which digital transformation is reshaping companies and industries.  But the low overall numbers mask an important distinction.  Digital Masters have a shared digital vision, while others do not. 

Among the Digital Masters that we surveyed, 82 percent agreed that their senior leaders shared a common vision of digital transformation, and 71 percent said it was shared between senior and middle managers.  The picture is quite different for firms outside our Digital Masters category, where less than 30 percent said their senior leaders had a shared digital vision and only 17 percent said the shared vision extended to middle management.”

Digital Vision is Not Enough (You Need a Transformative Digital Vision)

It’s bad enough that many executives don’t have a shared digital vision.   But what makes it worse, is that even fewer have a transformative digital vision, which is the key to success in the digital frontier.

Via Leading Digital: Turning Technology into Business Transformation:

“But having a shared digital vision is not quite enough.  Many organizations fail to capture the full potential of digital technologies because their leaders lack a truly transformative vision of the digital future.  On average, only 31 percent of our respondents said that they had a vision which represented radical change, and 41 percent said their vision crossed internal organizational units.  Digital Masters were far more transformative in their vision, with two-thirds agreeing they had a radical vision, and 82 percent agreeing their vision crossed organizational silos.  Meanwhile, nonmasters were far less transformative in their visions.”

Where there is no vision, the businesses perish.

You Might Also Like

Cloud Changes the Game from Deployment to Adoption

Drive Business Transformation by Reenvisioning Operations

Drive Business Transformation by Reenvisioning Your Customer Experience

Dual-Speed IT Drives Business Transformation and Improves IT-Business Relationships

How Leaders are Building Digital Skills

How To Build a Better Business Case for Digital Initiatives

How To Improve the IT-Business Relationship

How To Build a Roadmap for Your Digital Business Transformation

Categories: Architecture, Programming

Lightweight software architecture - an interview with Fog Creek

Coding the Architecture - Simon Brown - Thu, 03/05/2015 - 08:48

I recently did a short interview with the folks from Fog Creek (creators of Stack Exchange, Trello, FogBugz, etc) about lightweight approaches to software architecture, my book and so on. The entire interview is only about 8 minutes in length and you can watch/listen/read it on the Fog Creek blog.

Read more...

Categories: Architecture

10 Reasons to Consider a Multi-Model Database

This is a guest post by Nikhil Palekar, Solutions Architect, FoundationDB.

The proliferation of NoSQL databases is a response to the needs of modern applications. Still, not all data can be shoehorned into a particular NoSQL model, which is why so many different database options exist in the market. As a result, organizations are now facing serious database bloat within their infrastructure.

But a new class of database engine recently has emerged that can address the business needs of each of those applications and use cases without also requiring the enterprise to maintain separate systems, software licenses, developers, and administrators.

These multi-model databases can provide a single back end that exposes multiple data models to the applications it supports. In that way, multi-model databases eliminate fragmentation and provide a consistent, well-understood backend that supports many different products and applications. The benefits to the organization are extensive, but some of the most significant benefits include:

1. Consolidation
Categories: Architecture

Sponsored Post: Apple, InMemory.Net, Sentient, Couchbase, VividCortex, Internap, Transversal, MemSQL, Scalyr, FoundationDB, AiScaler, Aerospike, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Apple is hiring a Software Engineer for Maps Services. The Maps Team is looking for a developer to support and grow some of the core backend services that support Apple Map's Front End Services. Ideal candidate would have experience with system architecture, as well as the design, implementation, and testing of individual components but also be comfortable with multiple scripting languages. Please apply here.

  • Sentient Technologies is hiring several Senior Distributed Systems Engineers and a Senior Distributed Systems QA Engineer. Sentient Technologies, is a privately held company seeking to solve the world’s most complex problems through massively scaled artificial intelligence running on one of the largest distributed compute resources in the world. Help us expand our existing million+ distributed cores to many, many more. Please apply here.

  • Linux Web Server Systems Engineer, Transversal. We are seeking an experienced and motivated Linux System Engineer to join our Engineering team. This new role is to design, test, install, and provide ongoing daily support of our information technology systems infrastructure. As an experienced Engineer you will have comprehensive capabilities for understanding hardware/software configurations that comprise system, security, and library management, backup/recovery, operating computer systems in different operating environments, sizing, performance tuning, hardware/software troubleshooting and resource allocation. Apply here.

  • UI Engineer, AppDynamics. AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data, AppDynamics. AppDynamics, leader in next-generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • Rise of the Multi-Model Database. FoundationDB Webinar: March 10th at 1pm EST. Do you want a SQL, JSON, Graph, Time Series, or Key Value database? Or maybe it’s all of them? Not all NoSQL databases are created equal. The latest development in this space is the multi-model database. Please join FoundationDB for an interactive webinar as we discuss the Rise of the Multi-Model Database and what to consider when choosing the right tool for the job.
Cool Products and Services
  • InMemory.Net provides a Dot Net native in memory database for analysing large amounts of data. It runs natively on .Net, and provides a native .Net, COM & ODBC apis for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • Top Enterprise Use Cases for NoSQL. Discover how the largest enterprises in the world are leveraging NoSQL in mission-critical applications with real-world success stories. Get the Guide.
    http://info.couchbase.com/HS_SO_Top_10_Enterprise_NoSQL_Use_Cases.html

  • VividCortex Developer edition delivers a groundbreaking performance management solution to startups, open-source projects, nonprofits, and other organizations free of charge. It integrates high-resolution metrics on queries, metrics, processes, databases, and the OS and hardware to deliver an unprecedented level of visibility into production database activity.

  • SQL for Big Data: Price-performance Advantages of Bare Metal. When building your big data infrastructure, price-performance is a critical factor to evaluate. Data-intensive workloads with the capacity to rapidly scale to hundreds of servers can escalate costs beyond your expectations. The inevitable growth of the Internet of Things (IoT) and fast big data will only lead to larger datasets, and a high-performance infrastructure and database platform will be essential to extracting business value while keeping costs under control. Read more.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • Aerospike demonstrates RAM-like performance with Google Compute Engine Local SSDs. After scaling to 1 M Writes/Second with 6x fewer servers than Cassandra on Google Compute Engine, we certified Google’s new Local SSDs using the Aerospike Certification Tool for SSDs (ACT) and found RAM-like performance and 15x storage cost savings. Read more.

  • Diagnose server issues from a single tab. The Scalyr log management tool replaces all your monitoring and analysis services with one, so you can pinpoint and resolve issues without juggling multiple tools and tabs. It's a universal tool for visibility into your production systems. Log aggregation, server metrics, monitoring, alerting, dashboards, and more. Not just “hosted grep” or “hosted graphs,” but enterprise-grade functionality with sane pricing and insane performance. Trusted by in-the-know companies like Codecademy – try it free! (See how Scalyr is different if you're looking for a Splunk alternative.)

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required. http://aiscaler.com

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

101 Proven Practices for Focus

“Lack of direction, not lack of time, is the problem. We all have twenty-four hour days.” -- Zig Ziglar

Here is my collection of 101 Proven Practices for Focus.   It still needs work to improve it, but I wanted to share it, as is, because focus is one of the most important skills we can develop for work and life.

Focus is the backbone of personal effectiveness, personal development, productivity, time management, leadership skills, and just about anything that matters.   Focus is a key ingredient to helping us achieve the things we set out to do, and to learn the things we need to learn.

Without focus, we can’t achieve great results.

I have a very healthy respect for the power of focus to amplify impact, to create amazing breakthroughs, and to make things happen.

The Power of Focus

Long ago one of my most impactful mentors said that focus is what separates the best from the rest.  In all of his experience, what exceptional people had, that others did not, was focus.

Here are a few relevant definitions of focus:
A main purpose or interest.
A center of interest or activity.
Close or narrow attention; concentration.

I think of focus simply as  the skill or ability to direct and hold our attention.

Focus is a Skill

Too many people think of focus as something either you are good at, or you are not.  It’s just like delayed gratification.

Focus is a skill you can build.

Focus is actually a skill and you can develop it.   In fact, you can develop it quite a bit.  For example, I helped a colleague get themselves off of their ADD medication by learning some new ways to retrain their brain.   It turned out that the medication only helped so much, the side effects sucked, and in the end, what they really needed was coping mechanisms for their mind, to better direct and hold their attention.

Here’s the surprise, though.  You can actually learn how to direct your attention very quickly.  Simply ask new questions.  You can direct your attention by asking questions.   If you want to change your focus, change the question.

101 Proven Practices at a Glance

Here is a list of the 101 Proven Practices for Focus:

  1. Align  your focus and your values
  2. Ask new questions to change your focus
  3. Ask yourself, “What are you rushing through for?”
  4. Beware of random, intermittent rewards
  5. Bite off what you can chew
  6. Breathe
  7. Capture all of your ideas in one place
  8. Capture all of your To-Dos all in one place
  9. Carry the good forward
  10. Change your environment
  11. Change your physiology
  12. Choose one project or one thing to focus on
  13. Choose to do it
  14. Clear away all distractions
  15. Clear away external distractions
  16. Clear away internal distractions
  17. Close your distractions
  18. Consolidate and batch your tasks
  19. Create routines to help you focus
  20. Decide to finish it
  21. Delay gratification
  22. Develop a routine
  23. Develop an effective startup routine
  24. Develop an effective shutdown routine
  25. Develop effective email routines
  26. Develop effective renewal activities
  27. Develop effective social media routines
  28. Direct your attention with skill
  29. Do less, focus more
  30. Do now what you could put off until later
  31. Do things you enjoy focusing on
  32. Do worst things first
  33. Don’t chase every interesting idea
  34. Edit later
  35. Exercise your body
  36. Exercise your mind
  37. Expand your attention span
  38. Find a way to refocus
  39. Find the best time to do your routine tasks
  40. Find your flow
  41. Finish what you started
  42. Focus on what you control
  43. Force yourself to focus
  44. Get clear on what you want
  45. Give it the time and attention it deserves
  46. Have a time and place for things
  47. Hold a clear picture in your mind of what you want to accomplish
  48. Keep it simple
  49. Keep your energy up
  50. Know the tests for success
  51. Know what’s on your plate
  52. Know your limits
  53. Know your personal patterns
  54. Know your priorities
  55. Learn to say no – to yourself and others
  56. Limit your starts and stops
  57. Limit your task switching
  58. Link it to good feelings
  59. Make it easy to pick back up where you left off
  60. Make it relentless
  61. Make it work, then make it right
  62. Master your mindset
  63. Multi-Task with skill
  64. Music everywhere
  65. Narrow your focus
  66. Pair up
  67. Pick up where you left off
  68. Practice meditation
  69. Put the focus on something bigger than yourself
  70. Rate your focus each day
  71. Reduce friction
  72. Reduce open work
  73. Reward yourself along the way
  74. See it, do it
  75. Set a time frame for focus 
  76. Set goals
  77. Set goals with hard deadlines
  78. Set mini-goals
  79. Set quantity limits
  80. Set time limits
  81. Shelve things you aren’t actively working on
  82. Single Task
  83. Spend your attention with skill
  84. Start with WHY
  85. Stop starting new projects
  86. Take breaks
  87. Take care of the basics
  88. Use lists to avoid getting overwhelmed or overloaded
  89. Use metaphors
  90. Use Sprints to scope your focus
  91. Use the Rule of Three
  92. Use verbal cues
  93. Use visual cues
  94. Visualize your performance
  95. Wake up at the same time each day
  96. Wiggle your toes – it’s a fast way to bring yourself back to the present
  97. Write down your goals
  98. Write down your steps
  99. Write down your tasks
  100. Write down your thoughts
  101. Work when you are most comfortable

When you go through the 101 Proven Practices for Focus, don’t expect it to be perfect.  It’s a work in progress.   Some of the practices for focus need to be fleshed out better.   There is also some duplication and overlap, as I re-organize the list and find better ways to group and label ideas.

In the future, I’m going to revamp this collection with more precision, better naming, links to relevant quotes, and some science where possible.   There is a lot of relevant science that explains why some of these techniques work, and why some work so well.

What’s important is that you find the practices that resonate for you, and the things that you can actually practice.

Getting Started

You might find that of all the practices, only one or two really resonate or help you change your game.   And that’s great.   The point of having a large list is that you have more to choose from.  The bigger your toolbox, the more you can choose the right tool for the job.  If you only have a hammer, then everything looks like a nail.

If you don’t consider yourself an expert in focus, that’s fine.  Everybody has to start somewhere.  In fact, you might even use one of the practices to help you get better:  Rate your focus each day.

Simply rate yourself on a scale of 1-10, where 10 is awesome and 1 means you’re a squirrel with a sugar high, dazed and confused, and chasing all the shiny objects that come into sight.   And then see if your focus improves over the course of a week.

If you adopt just one practice, try either Align your focus and your values or Ask new questions to change your focus.

Feel Free to Share It With Friends

At the bottom of the 101 Proven Practices for Focus, you’ll find the standard sharing buttons for social media to make it easier to share.

Share it with friends, family, your world, the world.

The ability to focus is really a challenge for a lot of people.   The way to improve your attention and focus is through proven practices, techniques, and skill building.  Too many people hope the answer lies in a pill, but pills don’t teach you skills.

Even if you struggle a bit in the beginning, remind yourself that growth feels awkward.   You will get better with practice.  Practice deliberately.  In fact, the side benefit of focusing on improving your focus is, well, you guessed it … you’ll improve your focus.

What we focus on expands, and the more we focus our attention, and apply deliberate practice, the deeper our ability to focus will grow.

Grow your focus with skill.

You Might Also Like

The Great Inspirational Quotes Revamped

The Great Happiness Quotes Collection Revamped

The Great Leadership Quotes Collection Revamped

The Great Love Quotes Collection Revamped

The Great Motivational Quotes Revamped

The Great Personal Development Quotes Collection Revamped

The Great Positive Thinking Quotes Collection

The Great Productivity Quotes Collection Revamped

Categories: Architecture, Programming

A product manager's perfection....

Xebia Blog - Tue, 03/03/2015 - 15:59

is achieved not when there are no more features to add, but when there are no more features to take away. -- Antoine de Saint Exupéry

Not only was Antoine a brilliant writer, philosopher and pilot (well, arguably, since he crashed in the Mediterranean), but most of all he had a sharp mind about engineering, and I frequently quote him when I train product owners, product managers, or product companies in general about what makes a good product great. I also tell them the most important word in their vocabulary is "no". But the question then becomes: what are the criteria to say "yes"?

Typically we will look at the value of a feature and use different ways to prioritise and rank features, break them down to their minimal proposition, and get the team going. But what if you already have a product, and its rate of development is slowing? Features have been stacked on each other for years or even decades, and it has become more and more difficult for the teams to wade through the proverbial swamp the code has become.

Too many features

It turns out there are a number of criteria that you can follow:

1.) Working software means it’s actually being used.

Though it may sound obvious, it’s not that easy to figure out. I was once part of a team that had to rebuild a rather large (read: huge) piece of software for an air traffic control system. The managers assured us that every piece of functionality was a must-keep, but the cost would have been prohibitively high.

One of the functions of the system was a record-and-replay mode for legal purposes. It registers all events throughout the system to serve as evidence, so that the actions of picture compilers would be accountable, or at least verifiable. One of our engineers had the bright insight that we could catalogue this data anonymously to figure out which functions were used and which were not.

It turned out the Standish Group was largely right in its claim that 80% of software is never used. Carving that out was met with fierce resistance, but it was easier to convince management (and users) with data than with gut feeling.

Another upside? We also knew which functions they were using a lot, and figured out how to improve those substantially.

2.) The cost of platforms

Yippee, we got it running on a gazillion platforms! And boy do we have reach; the marketing guys are going into a frenzy. Even if it is the right choice at the time, you need to revisit this assumption constantly, and be prepared to clean up! This is often looked upon as a disinvestment: “we spent so much money on finally getting Blackberry working” or “it’s so cost effective that we can also offer it on platform XYZ”.

In the web world it’s often the number of browsers we support, but for larger software systems it is more often operating systems, database versions or even hardware. For one customer we would refurbish hardware systems, simply because it was cheaper than moving on to a more modern machine.

Key takeaway: if the platform is deprecated, remove it entirely from the codebase. It will bog the team down, and you need the team’s speed to respond to an ever-increasing pace of new platforms.

3.) Old strategies

Every market and every product company pivots at least every few years (or dies). Focus shifts from consumer groups to types of clients, types of delivery, a shift to services, or something else that is novel, hip and, most of all, profitable. Code bases tend to have a certain inertia: the larger the product, the bigger the inertia, and before you know it there are tons of features in there that are far less valuable in the new situation. Cutting away perfectly good features is always painful, but at some point you end up with the toolbars of Microsoft Word: nice features, but complete overkill for the average user.

4.) The cause and effect trap

When people are faced with an issue, they tend to focus on fixing the issue as it manifests itself. It's hard for our brain to think in problems; it tries to think in solutions. There is an excellent blog post here that provides a powerful method to overcome this phenomenon by asking "why" five times.

  • "We need the system to automatically export account details at the end of the day."
  • "Why?"
  • "So we can enter the records into the finance system"
  • "So it sounds like the real problem is getting the data into the finance system, not exporting it. Exporting just complicates the issue. Let's implement a data feed that automatically feeds the data to the finance system"

The hard job is to continuously keep evaluating your features and remove those that are no longer valuable. It may seem like you're throwing away good code, but ultimately it is not the largest product that survives, but the one that is able to adapt fast enough to the changing market. (Freely after Darwin)

 

Tutum, first impressions

Xebia Blog - Mon, 03/02/2015 - 16:40

Tutum is a platform to build, run and manage your Docker containers. After playing with it briefly some time ago, I decided to take a more serious look at it this time. This article describes first impressions of using the platform, looking at it specifically from a continuous delivery perspective.

The web interface

First thing to notice is the clean and simple web interface. Basically there are two main sections, which are services and nodes. The services view lists the services or containers you have deployed with status information and two buttons, one to stop (or start) and one to terminate the container, which means to throw it away.

You can drill down to a specific service, which provides you with more detailed information. The detail page shows information about the containers, a slider to scale up and down, endpoints, logging, some metrics for monitoring, and more.


The second view is a list of nodes. The list contains the VMs on which containers can be deployed, again with two simple buttons to start/stop and to terminate the node. For each node it displays useful information about the current status, where it runs, and how many containers are deployed on it.

The node page also allows you to drill down to get more information on a specific node.  The screenshot below shows some metrics in fancy graphs for a node, which can potentially be used to impress your boss.

[Screenshot: node metrics graphs]

 

Creating a new node

You’ll need a node to deploy containers on. In the node view you see two big green buttons. One states: “Launch new node cluster”. This brings up a form with, currently, four popular providers: Amazon, Digital Ocean, Microsoft Azure and SoftLayer. If you have linked your account(s) in the settings, you can select that provider from a dropdown box. It only takes a few clicks to get a node up and running. In fact you create a node cluster, which allows you to easily scale up or down by adding or removing nodes from the cluster.

You also have the option to ‘Bring your own node’. This allows you to add your own Ubuntu Linux systems as nodes to Tutum. You need to install an agent onto your system and open up a firewall port to make your node available to Tutum. Again, very easy and straightforward.

Creating a new service

Once you have created a node, you probably want to do something with it. Tutum provides jumpstart images with popular types of services for storage, caching, queueing and more, providing for example MongoDB, Elasticsearch or Tomcat. Using a wizard, it takes only four steps to get a particular service up and running.

Besides the jumpstart images that Tutum provides, you can also search public repositories for your image of choice. Eventually you would like to have your own images running your homegrown software. You can upload your image to a Tutum private registry. You can either pull it from Docker Hub or upload your local images directly to Tutum.

Automating

We all know real (wo)men (and automated processes) don’t use GUIs. Tutum provides a nice and extensive command line interface for both Linux and Mac. I installed it using brew on my MBP, and seconds later I was logged in and doing all kinds of cool stuff with the command line.


The CLI is actually making REST calls, so you can skip the CLI altogether and talk HTTP directly to the REST API, or, if it pleases you, use the Python API to create scripts that are actually maintainable. You can pretty much automate all management of your nodes, containers, and services using the API, which is a must-have in this era of continuous everything.

A simple deployment example

So let's say we've built a new version of our software on our build server. Now we want to get this software deployed to do some integration testing, or, if you're feeling lucky, drop it straight into production.

Build the docker image:

tutum build -t test/myimage .

Upload the image to Tutum registry:

tutum image push <image_id>

Create the service:

tutum service create <image_id>

Run it on a node:

tutum service run -p <port> -n <name> <image_id>

That's it. Of course there are lots of options to play with, for example deployment strategy, memory settings, auto-starting, etc. But the above steps are enough to get your image built, deployed and running. Most of the time I spent was waiting while uploading my image over the flaky-but-expensive hotel wifi.
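To show how these steps could slot into a build pipeline, here is a minimal sketch of a deploy script that simply chains the same tutum commands demonstrated above. The image tag, service name and port are hypothetical placeholders (where the example above uses <image_id>, the tag is substituted), and anything beyond the commands already shown should be treated as an assumption to verify against the Tutum documentation.

#!/usr/bin/env bash
# Minimal deploy sketch chaining the tutum CLI commands shown above.
# IMAGE, SERVICE and PORT are hypothetical placeholders for your own values.
set -euo pipefail

IMAGE="test/myimage"
SERVICE="myservice"
PORT=8080

tutum build -t "$IMAGE" .                              # build the docker image
tutum image push "$IMAGE"                              # upload the image to the Tutum registry
tutum service create "$IMAGE"                          # create the service
tutum service run -p "$PORT" -n "$SERVICE" "$IMAGE"    # run it on a node

Hung on the end of a successful CI build, a script like this turns every green build into a freshly deployed container, which is exactly the kind of automation the CLI and API are there for.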

Conclusion for now

Tutum is clean, simple, and it just works. I’m impressed with the ease and speed with which you can get your containers up and running. It takes only minutes to get from zero to running using the jumpstart services, or even your own containers. Although they still call it beta, everything I did just worked, without the need to read through lots of complex documentation. The web interface is self-explanatory, and the REST API or CLI provides everything you need to integrate Tutum into your build pipeline, so you can get your new features into production at automation speed.

I'm wondering how challenging management would be at a scale of hundreds of nodes and even more containers when using the web interface. You'd need a meta-overview or aggregate view or something. But then again, you have a very nice API to build exactly that yourself.

The Great Positive Thinking Quotes Collection

I revamped my positive thinking quotes collection on Sources of Insight to help you amp up your ability to generate more positive thoughts. 

It’s a powerful one.

Why positive thinking?

Maybe Zig Ziglar said it best:

“Positive thinking will let you do everything better than negative thinking will.”

Positive Thinking Can Help You Defeat Learned Helplessness

Actually, there’s a more important reason for positive thinking: 

It’s how you avoid learned helplessness.

Learned helplessness is where you give up, because you don’t think you have any control over the situation, or what happens in your life.  You explain negative situations as permanent, personal, and pervasive, instead of temporary, situational, and specific.

That’s a big deal.

If you fall into the learned helplessness trap, you spiral down.  You stop taking action.  After all, why take action if it won’t matter?  And this can lead to depression.

But that’s a tale of woe for others, not you.   Because you know how to defeat learned helplessness and how to build the skill of learned optimism.

You can do it by reducing negative thinking, and by practicing positive thinking.   And what better way to improve your positive thinking than through positive thinking quotes?

Keep a Few Favorite Positive Thinking Quotes Handy

Always keep a few positive thinking quotes at your fingertips so that they are there when you need them.

Here is a quick taste of a few of my favorites from the positive thinking quotes collection:

“A positive attitude may not solve all your problems, but it will annoy enough people to make it worth the effort.” –  Herm Albright

"Attitudes are contagious.  Are yours worth catching?" — Dennis and Wendy Mannering

"Be enthusiastic.  Remember the placebo effect – 30% of medicine is showbiz." — Ronald Spark

"I will not let anyone walk through my mind with their dirty feet." — Mahatma Gandhi

"If the sky falls, hold up your hands." – Author Unknown

Think Deeper About Positivity By Using Categories for Positive Thinking

But this positive thinking quotes collection is so much more.  I’ve organized the positive thinking quotes into a set of categories to chunk it up, and to make it more insightful:

Actions
Adaptability and Flexibility
Anger and Frustration
Appreciation and Gratitude
Attitude, Disposition, and Character
Defeat, Setbacks, and Failures
Expectations
Focus and Perspective
Hope and Fear
Humor
Letting Things Go and Forgiveness
Love and Liking
Opportunity and Possibility
Positive Thinking (General)

The more distinctions you can add to your mental repertoire, the more powerful a positive thinker you will be.

You can think of each positive thinking quote as a distinction that can add more depth.

Draw from Wisdom of the Ages and Modern Sages on the Art and Science of Positive Thinking

I've included positive thinking quotes from a wide range of people including  Anne Frank, Epictetus, Johann Wolfgang von Goethe, Napoleon, Oscar Wilde, Ralph Waldo Emerson, Robert Frost, Voltaire, Winston Churchill, and many, many more.

You might even find useful positive thinking mantras from the people that you work with, or the people right around you. 

For example, here are a few positive thinking thoughts from Satya Nadella:

“The future we're going to invent together, express ourselves in the most creative ways.”

“I want to work in a place where everybody gets more meaning out of their work on an everyday basis.

I want each of us to give ourselves permission to be able to move things forward.  Each of us sometimes overestimate the power others have to do things vs. our own ability to make things happen.”

Challenge Yourself to New Levels of Positive Thinking through Positive Quotes

As you explore the positive thinking quotes collection, try to find the quotes that challenge you the most, that really make you think, and give you a new way to generate more positive thoughts in your worst situations. 

In the words of Friedrich Nietzsche, "That which does not kill us makes us stronger."

You can use your daily trials and tribulations in the workplace as your personal dojo to practice and build your positive thinking skills.

The more positivity you can bring to the table, the more you’ll empower yourself in ways you never thought possible.

As you get tested by your worst scenarios, it’s good to keep in mind the words of F. Scott Fitzgerald:

"The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.  One should, for example, be able to see that things are hopeless and yet be determined to make them otherwise."

I’ll also point out that as you grow your toolbox of positive thinking quotes and you build your positive thinking skills, you need to also focus on taking positive action.

Don’t just imagine a better garden, get out and actually weed it.

Don’t Just Imagine Everything Going Well – Imagine How You’ll Deal with the Challenges

Here’s another important tip about positivity and positive thinking …

If you use visualization as part of your approach to getting results, it’s important to include dealing with setbacks and challenges.   It’s actually more effective to imagine the most likely challenges coming up and to walk through how you’ll deal with them if they occur, than to just picture the perfect plan where everything goes without a hitch.

The reality is things happen, stuff comes up, and setbacks occur.

But your ability to mentally prepare for setbacks, and have a plan of action, will make you much more effective in dealing with the challenges that actually do occur.  This will help you respond rather than react in more situations, and stay in a better place mentally while you evaluate options and decide on a course of action.  (Winging it under stress doesn’t work very well, because we shut down our prefrontal cortex – the smart part of our brain – when we go into fight-or-flight mode.)

If I missed any of your favorite positive thinking quotes in my positive thinking quotes collection, please let me know.

In closing, please keep handy one of the most powerful positive thinking quotes of all time:

“May the force be with you.”

Always.

You Might Also Like

The Great Inspirational Quotes Revamped

The Great Happiness Quotes Collection Revamped

The Great Leadership Quotes Collection Revamped

The Great Love Quotes Collection Revamped

The Great Motivational Quotes Revamped

The Great Personal Development Quotes Collection Revamped

The Great Productivity Quotes Collection Revamped

Categories: Architecture, Programming