When a customer’s business really starts gaining traction and their web traffic ramps up in production, they know to expect increased server resource load. But what do you do when memory usage still keeps on growing beyond all expectations? Have you found a memory leak in the server? Or else is memory perhaps being lost due to fragmentation? While you may be able to throw hardware at the problem for a while, DRAM is expensive, and real machines do have finite address space. At Aerospike, we have encountered these scenarios along with our customers as they continue to press through the frontiers of high scalability.
In the summer of 2013 we faced exactly this problem: big-memory (192 GB RAM) server nodes were running out of memory and crashing again within days of being restarted. We wrote an innovative memory accounting instrumentation package, ASMalloc, which revealed there was no discernible memory leak. We were being bitten by fragmentation.
This article focuses specifically on the techniques we developed for combating memory fragmentation, first by understanding the problem, then by choosing the best dynamic memory allocator for the problem, and finally by strategically integrating the allocator into our database server codebase to take best advantage of the disparate life-cycles of transient and persistent data objects in a heavily multi-threaded environment. For the benefit of the community, we are sharing our findings in this article, and the relevant source code is available in the Aerospike server open source GitHub repo.
Executive Summary
Memory fragmentation can severely limit scalability and stability by wasting precious RAM and causing server node failures.
Aerospike evaluated memory allocators for its in-memory database use-case and chose the open source JEMalloc dynamic memory allocator.
Effective allocator integration must consider memory object life-cycle and purpose.
Aerospike optimized memory utilization by using JEMalloc extensions to create and manage per-thread (private) and per-namespace (shared) memory arenas.
Using these techniques, Aerospike saw substantial reduction in fragmentation, and the production systems have been running non-stop for over 1.5 years.
When Matt and Quin founded Swiftype in 2012, they chose to build the company’s infrastructure using Amazon Web Services. The cloud seemed like the best fit because it was easy to add new servers without managing hardware and there were no upfront costs.
Unfortunately, while some of the services (like Route53 and S3) ended up being really useful and incredibly stable for us, the decision to use EC2 created several major problems that plagued the team during our first year.
Swiftype’s customers demand exceptional performance and always-on availability and our ability to provide that is heavily dependent on how stable and reliable our basic infrastructure is. With Amazon we experienced networking issues, hanging VM instances, unpredictable performance degradation (probably due to noisy neighbors sharing our hardware, but there was no way to know) and numerous other problems. No matter what problems we experienced, Amazon always had the same solution: pay Amazon more money by purchasing redundant or higher-end services.
The more time we spent working around the problems with EC2, the less time we could spend developing new features for our customers. We knew it was possible to make our infrastructure work in the cloud, but the effort, time and resources it would take to do so were much greater than the cost of migrating away.
After a year of fighting the cloud, we made a decision to leave EC2 for real hardware. Fortunately, this no longer means buying your own servers and racking them up in a colo. Managed hosting providers facilitate a good balance of physical hardware, virtualized instances, and rapid provisioning. Given our previous experience with hosting providers, we made the decision to choose SoftLayer. Their excellent service and infrastructure quality, provisioning speed, and customer support made them the best choice for us.
After more than a month of hard work preparing the inter-data center migration, we were able to execute the transition with zero downtime and no negative impact on our customers. The migration to real hardware resulted in enormous improvements in service stability from day one, provided a huge (~2x) performance boost to all key infrastructure components, and reduced our monthly hosting bill by ~50%.
This article will explain how we planned for and implemented the migration process, detail the performance improvements we saw after the transition, and offer insight for younger companies about when it might make sense to do the same.
Preparing for the switch
Hey, it's HighScalability time:
How would your OLTP database perform if it had to scale up to 1024 cores? Not very well, according to this fascinating paper: Staring into the Abyss: An Evaluation of Concurrency Control with One Thousand Cores, where a few intrepid chaos monkeys report the results of their fiendish experiment. The conclusion: we need a completely redesigned DBMS architecture that is rebuilt from the ground up.
This is just a quick note to say that the video of my "Agility and the essence of software architecture" talk from YOW! 2014 in Brisbane is now available to watch online. This talk covers the subject of software architecture and agile from a number of perspectives, focusing on how to create agile software systems in an agile way.
The slides are also available to view online/download. A huge thanks to everybody who attended for making it such a fun session. :-)
This is a guest post by Tommaso Barbugli, the CTO of getstream.io, a web service for building scalable newsfeeds and activity streams.
In January we migrated our entire infrastructure from dedicated servers in Germany to EC2 in the US. The migration included a wide variety of components, web workers, background task workers, RabbitMQ, Postgresql, Redis, Memcached and our Cassandra cluster. Our main requirement was to execute this migration without downtime.
This article covers the migration of our Cassandra cluster. If you've never run a Cassandra migration before, you'll be surprised to see how easy this is. We were able to migrate Cassandra with zero downtime using its awesome multi-data center support. Cassandra allows you to distribute your data in such a way that a complete set of data is guaranteed to be placed on every logical group of nodes (e.g. nodes that are in the same data center, rack, or EC2 region...). This feature is a perfect fit for migrating data from one data center to another. Let's start by introducing the basics of a Cassandra multi-datacenter deployment.
Cassandra, Snitches and Replication strategies
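As a hedged sketch of the kind of replication change such a migration relies on, the snippet below uses the DataStax Java driver to switch a keyspace to NetworkTopologyStrategy with a full replica set in both the old and the new data center; the contact point, keyspace name and data-center names are illustrative and not taken from the article.

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class AddDataCenterReplication {
    public static void main(String[] args) {
        // Contact point, keyspace and data-center names are placeholders.
        Cluster cluster = Cluster.builder().addContactPoint("10.0.0.1").build();
        Session session = cluster.connect();

        // NetworkTopologyStrategy keeps a complete copy of the data in every named
        // data center, which is what lets the new data center be populated while the
        // old one keeps serving traffic.
        session.execute(
            "ALTER KEYSPACE feeds WITH replication = {"
          + " 'class': 'NetworkTopologyStrategy',"
          + " 'old_dc': 3,"
          + " 'us_east': 3 }");

        cluster.close();
    }
}
```

In a typical multi-DC setup, the existing data is then streamed into the new data center (for example with nodetool rebuild) before clients are switched over.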
This is a guest post from AppLovin's VP of engineering, Basil Shikin, on the infrastructure of its mobile marketing platform. Major brands like Uber, Disney, Yelp and Hotels.com use AppLovin's mobile marketing platform. It processes 30 billion requests a day and 60 terabytes of data a day.
AppLovin's marketing platform provides marketing automation and analytics for brands who want to reach their consumers on mobile. The platform enables brands to use real-time data signals to make effective marketing decisions across one billion mobile consumers worldwide.
Core Stats
30 Billion ad requests per day
300,000 ad requests per second, peaking at 500,000 ad requests per second
5ms average response latency
3 Million events per second
60TB of data processed daily
9 data centers
~40 reporting dimensions
500,000 metrics data points per minute
1 PB Spark cluster
15GB/s peak disk writes across all servers
9GB/s peak disk reads across all servers
Founded in 2012, AppLovin is headquartered in Palo Alto, with offices in San Francisco, New York, London and Berlin.
Algolia started in 2012 as an offline search engine SDK for mobile. At this time we had no idea that within two years we would have built a worldwide distributed search network.
Today Algolia serves more than 2 billion user-generated queries per month from 12 regions worldwide, our average server response time is 6.7ms and 90% of queries are answered in less than 15ms. Our unavailability rate on search is below 10⁻⁶, which represents less than 3 seconds per month.
The challenges we faced with the offline mobile SDK were technical limitations imposed by the nature of mobile. These challenges forced us to think differently when developing our algorithms because classic server-side approaches would not work.
Our product has evolved greatly since then. We would like to share our experiences with building and scaling our REST API built on top of those algorithms.
We will explain how we use distributed consensus for high availability and synchronization of data in different regions around the world, and how we route queries to the closest locations via anycast DNS.
The data size misconception
I've seen and had lots of discussion about "package by layer" vs "package by feature" over the past couple of weeks. They both have their benefits but there's a hybrid approach I now use that I call "package by component". To recap...
Package by layer
Let's assume that we're building a web application based upon the Web-MVC pattern. Packaging code by layer is typically the default approach because, after all, that's what the books, tutorials and framework samples tell us to do. Here we're organising code by grouping things of the same type.
There's one top-level package for controllers, one for services (e.g. "business logic") and one for data access. Layers are the primary organisation mechanism for the code. Terms such as "separation of concerns" are thrown around to justify this approach, and generally layered architectures are thought of as a "good thing". Need to switch out the data access mechanism? No problem, everything is in one place. Each layer can also be tested in isolation from the others around it, using appropriate mocking techniques, etc. The problem with layered architectures is that they often turn into a big ball of mud because, in Java anyway, you need to mark your classes as public for much of this to work.
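As a rough illustration of that shape (package and class names are hypothetical, and each class would live in its own file), every type is public so that each layer can reach the one below it:

```java
// com/example/store/web/OrderController.java
package com.example.store.web;

public class OrderController {
    private final com.example.store.service.OrderService service =
            new com.example.store.service.OrderService();

    public String viewOrder(long id) {
        return service.describeOrder(id); // web layer calls down into the service layer
    }
}

// com/example/store/service/OrderService.java
package com.example.store.service;

public class OrderService {
    private final com.example.store.data.OrderRepository repository =
            new com.example.store.data.OrderRepository();

    public String describeOrder(long id) {
        return "Order " + id + " total: " + repository.findTotal(id);
    }
}

// com/example/store/data/OrderRepository.java
package com.example.store.data;

public class OrderRepository {
    public long findTotal(long id) {
        return 0L; // the JDBC/ORM call would live here
    }
}
```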
Package by feature
Instead of organising code by horizontal slice, package by feature seeks to do the opposite by organising code by vertical slice.
Now everything related to a single feature (or feature set) resides in a single place. You can still have a layered architecture, but the layers reside inside the feature packages. In other words, layering is the secondary organisation mechanism. The often cited benefit is that it's "easier to navigate the codebase when you want to make a change to a feature", but this is a minor thing given the power of modern IDEs.
What you can do now though is hide feature specific classes and keep them out of sight from the rest of the codebase. For example, if you need any feature specific view models, you can create these as package-protected classes. The big question though is what happens when that new feature set C needs to access data from features A and B? Again, in Java, you'll need to start making classes publicly accessible from outside of the packages and the big ball of mud will again emerge.
Package by layer and package by feature both have their advantages and disadvantages. To quote Jason Gorman from Schools of Package Architecture - An Illustration, which was written seven years ago: "To round off, then, I would urge you to be mindful of leaning too far towards either school of package architecture. Don't just mindlessly put socks in the sock drawer and pants in the pants drawer, but don't be 100% driven by package coupling and cohesion to make those decisions, either. The real skill is finding the right balance, and creating packages that make stuff easier to find but are as cohesive and loosely coupled as you can make them at the same time."
Package by component
This is a hybrid approach with increased modularity and an architecturally-evident coding style as the primary goals.
The basic premise here is that I want my codebase to be made up of a number of coarse-grained components, with some sort of presentation layer (web UI, desktop UI, API, standalone app, etc) built on top. A "component" in this sense is a combination of the business and data access logic related to a specific thing (e.g. domain concept, bounded context, etc). As I've described before, I give these components a public interface and package-protected implementation details, which includes the data access code. If that new feature set C needs to access data related to A and B, it is forced to go through the public interface of components A and B. No direct access to the data access layer is allowed, and you can enforce this if you use Java's access modifiers properly. Again, "architectural layering" is a secondary organisation mechanism. For this to work, you have to stop using the public keyword by default. This structure raises some interesting questions about testing, not least about how we mock-out the data access code to create quick-running "unit tests".
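To make that concrete before turning to testing, here is a minimal sketch (names are hypothetical, one type per file): only the component's interface and the type it returns are public, while the implementation and its data access code stay package-protected.

```java
// com/example/store/orders/OrdersComponent.java -- the only way into the component
package com.example.store.orders;

public interface OrdersComponent {
    OrderSummary findOrder(long orderId);
}

// com/example/store/orders/OrderSummary.java -- public because it crosses the boundary
package com.example.store.orders;

public final class OrderSummary {
    public final long orderId;

    public OrderSummary(long orderId) {
        this.orderId = orderId;
    }
}

// com/example/store/orders/OrdersComponentImpl.java -- package-protected implementation
package com.example.store.orders;

class OrdersComponentImpl implements OrdersComponent {
    private final OrdersRepository repository = new OrdersRepository();

    @Override
    public OrderSummary findOrder(long orderId) {
        return repository.load(orderId);
    }
}

// com/example/store/orders/OrdersRepository.java -- data access, invisible outside the package
package com.example.store.orders;

class OrdersRepository {
    OrderSummary load(long orderId) {
        // the JDBC/MySQL access would live here
        return new OrderSummary(orderId);
    }
}
```

A controller in another package can only see OrdersComponent and OrderSummary; it cannot instantiate OrdersRepository and bypass the component.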
Architecturally-aligned testing
The short answer is don't bother, unless you really need to. I've spoken about and written about this before, but architecture and testing are related. Instead of the typical testing triangle (lots of "unit" tests, fewer slower running "integration" tests and even fewer slower UI tests), consider this.
I'm trying to make a conscious effort to not use the term "unit testing" because everybody has a different view of how big a "unit" is. Instead, I've adopted a strategy where some classes can and should be tested in isolation. This includes things like domain classes, utility classes, web controllers (with mocked components), etc. Then there are some things that are easiest to test as components, through the public interface. If I have a component that stores data in a MySQL database, I want to test everything from the public interface right back to the MySQL database. These are typically called "integration tests", but again, this term means different things to different people. Of course, treating the component as a black box is easier if I have control over everything it touches. If you have a component that is sending asynchronous messages or using an external, third-party service, you'll probably still need to consider adding dependency injection points (e.g. ports and adapters) to adequately test the component, but this is the exception not the rule. All of this still applies if you are building a microservices style of architecture. You'll probably have some low-level class tests, hopefully a bunch of service tests where you're testing your microservices through their public interface, and some system tests that run scenarios end-to-end. Oh, and you can still write all of this in a test-first, TDD style if that's how you work.
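As a hedged sketch of such a component test, reusing the hypothetical Orders component from above and JUnit 4: the test lives in the component's package, drives it only through its public interface, and lets the data access code talk to a real local MySQL instance rather than a mock.

```java
package com.example.store.orders;

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class OrdersComponentTest {

    // wiring is illustrative; in practice the component might come from a factory or DI
    private final OrdersComponent orders = new OrdersComponentImpl();

    @Test
    public void returnsTheOrderThatWasPreviouslyStored() {
        // assumes a setup script has loaded order 42 into the local test database
        OrderSummary summary = orders.findOrder(42L);
        assertEquals(42L, summary.orderId);
    }
}
```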
I'm using this strategy for some systems that I'm building and it seems to work really well. I have a relatively simple, clean and (to be honest) boring codebase with understandable dependencies, minimal test-induced design damage and a manageable quantity of test code. This strategy also bridges the model-code gap, where the resulting code actually reflects the architectural intent. In other words, we often draw "components" on a whiteboard when having architecture discussions, but those components are hard to find in the resulting codebase. Packaging code by layer is a major reason why this mismatch between the diagram and the code exists. Those of you who are familiar with my C4 model will probably have noticed the use of the terms "class" and "component". This is no coincidence. Architecture and testing are more related than perhaps we've admitted in the past.
"In a battle between two ideas, the best one doesn't necessarily win. No, the idea that wins is the one with the most fearless heretic behind it." -- Seth Godin
One leadership idea can change your life in an instant.
Like the right key, the right leadership idea can instantly unlock or unleash what you're capable of.
I've seen some leaders lose their jobs because they didn't know how to adapt their leadership style. I've seen other leaders crumble with anxiety because they didn't know how to balance connection and conviction.
I've seen other leaders operate at a higher level, and influence without authority. I've seen amazing leaders in action that inspire others through their stories of the art of the possible.
Here are a handful of leadership ideas that you can put into practice.
None of these leadership ideas are new. They may be new for you. But they are proven practices for leadership that many leaders have learned the hard way.
The nice thing about ideas is that all you have to do is try them, and find what works for you. (Always remember to try ideas in a Bruce Lee sort of way, "adapt what is useful", and don't throw the baby out with the bathwater.)
If you want to get hard-core, I also have a roundup of the best business books that have influenced Microsoft leaders:
You'll find a strange but potent mix of business books ranging from skills you learned in kindergarten to ways to change the world by spreading your ideas like a virus.
Enjoy.
Information security is a quality attribute that can't easily be retrofitted. Concerns such as authorisation, authentication, access and data protection need to be defined early so they can influence the solution's design.
However, many aspects of information security aren't static. External security threats are constantly evolving and the maintainers of a system need to keep up-to-date to analyse them. This may force change on an otherwise stable system.
Functional changes to a legacy system also need to be analysed from a security standpoint. The initial design may have taken the security requirements into consideration (a quality attribute workshop is a good way to capture these) but are they re-considered when features are added or changed? What if a sub-component is replaced or services moved to a remote location? Is the analysis re-performed?
It can be tempting to view information security as a macho battle between evil, overseas (people always think they come from another country) hackers and your own underpaid heroes, but many issues have simple roots. Many data breaches are not hacks but basic errors - I once worked at a company where an accountant intern accidentally emailed a spreadsheet with everyone's salary to the whole company.
Let's have a quick look at some of the issues that a long-running, line-of-business application might face:
Lack of Patching
Have you applied all the vendors' patches? Not just to the application but to the software stack beneath? Has the vendor applied patches to the third-party libraries that they rely upon? What about the version of Java/.NET that the application is running, or the OS beneath that? When an application is initially developed it will use the latest versions, but unless a full dependency tree is recorded the required upgrades can be difficult to track. It is easy to forget these dependent upgrades even on an actively developed system.
Even if you do have a record of all components and subcomponents, there is no guarantee that, when upgraded, they will be compatible or work as before. The amount of testing required can be high and this acts as a deterrent to change - yet you only need a single broken component for the entire system to be at risk.
Passwords are every operations team's nightmare. Over the last 20 years the advice for best practice in generating and storing passwords has changed dramatically. Users used to be advised to think of an unusual password and not write it down. However, it turns out that "unusual" is actually very common, with people picking the same "unusual" word. Leaked password lists from large websites have demonstrated how many users pick the same password. Therefore the advice and allowable passwords for modern systems have changed (often to multi-word sentences). Does your legacy system enforce this, or is it filled with passwords from a brute-force list?
Passwords also tend to get shared over time. What happens when someone goes on holiday, a weekly report needs to be run, but the template exists within a specific user's account? Often they are phoned up and asked for their password. This may indicate a feature flaw in the product but is very common. There are many ways to improve this, from frequent password changes to two-factor authentication, but these increase the burden on the operations team.
Does your organisation have an employee leaver's process? Do you suspend account access? If you have shared accounts ("everyone knows the admin password") this may be difficult or disruptive. Having a simple list (or preferably an automated script) to execute for each employee that leaves is important.
There are similar problems with cryptographic keys. Are they long enough to comply with the latest advice? Do they use a best practice algorithm or one with a known issue? It is amazing how many websites use old certificates that should be replaced or have even expired. How secure is your storage of these keys?
Are any of your passwords or keys embedded in system files? This may have seemed safe when the entire system was on a single machine in a secure location but if the system has been restructured this may no longer be the case. For example, if some of the files have been moved to a shared or remote location, it may be possible for a non-authorised party to scan them.
Moving from Closed to Open Networks
A legacy system might have used a private, closed network for reasons of speed and reliability, but it may now be possible to meet those quality attributes on an open network and vastly reduce costs. However, if you move services from closed networks to open networks you have to reconsider the use of encryption on the connection. The security against eavesdropping/network sniffing was a fortunate side-effect of the network being private, so the requirement may not have been captured - it was a given. This can be dangerous if the original requirements are used for restructuring. These implicit quality attributes are important, and whether a feature change creates new quality attributes should be considered. You might find these cost-saving changes dropped on you by an excited accountant (who thinks their brilliance has just halved communications charges) with little warning!
Moving to an open network will make services reachable by unknown clients. This raises issues from Denial-of-Service attacks through to malicious clients attempting to use bad messages (such as SQL injection) to compromise a system. There are various techniques that can be applied at the network level to help here (VPNs, blocking unknown IPs, deep packet inspection etc.) but ultimately the code running in the services needs to be security aware - and this is very, very hard to do to an entire system after it is written.
Migrating to an SOA or micro-service architecture increases these effects as the larger number of connections and end-points now need to be secured. A well modularised system may be easy to distribute but intra-process communication is much more secure than inter-process or inter-machine.
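One concrete instance of making that service code security aware is refusing to build SQL out of client-supplied strings. A minimal sketch in Java, assuming JDBC and illustrative table and column names:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class AccountLookup {

    public static ResultSet findAccount(Connection connection, String requestedName)
            throws SQLException {
        // Unsafe: "SELECT id, name FROM accounts WHERE name = '" + requestedName + "'"
        // Safe: the client-supplied value is bound as a parameter, never spliced into the SQL text.
        PreparedStatement statement =
                connection.prepareStatement("SELECT id, name FROM accounts WHERE name = ?");
        statement.setString(1, requestedName);
        return statement.executeQuery();
    }
}
```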
Modernising Data Formats
Migrating from a closed, binary data format to an open one (e.g. XML) for messaging or storage makes navigating the data easier, but this applies to casual scanning by an attacker as well. Relying on security by obscurity isn't a good idea (and this is not an excuse to avoid improving the readability of data) but many systems do. When improving data formats you should reconsider where the data is being stored, what has access to it, and whether encryption is required.
Similar concerns should be addressed when making source code open source. Badly written code is now available for inspection and attack vectors can be examined. In particular, you should be careful to avoid leaking configuration into the source code if you intend to make it open.
New Development and Copied Data
If new features are developed for a system that has been static for a while, it is likely that new developer, test, QA and pre-production environments will be created (the originals will either be out of date or not have been kept, due to cost). The quickest and most accurate way to create test environments is to clone production. This works well, but copied data is as important as the original. Do you treat this copied data with the same security measures as production? If you have proprietary or confidential customer information then it should be. Note that the definition of "confidential" varies, but you might be surprised at how broad some regulators make it. You may also be restricted in the information that you can move out of the country - is your development or QA team located overseas?
Remember, you are not just restricting access to your system but your data as well.
Systems that pushed the boundaries of computing power 15 years ago can now be run on a cheap commodity server. Many organisations consolidate their systems on a regular basis, replacing multiple old servers with a single powerful one. An organisation may have been through this process many times. If so, how has this been done, and has it increased the visibility of these processes/services to others? If done correctly, with virtualisation tools, the virtual machines should still be isolated, but this is worth checking. However, a more subtle problem can be caused by the removal of the infrastructure between services. There may no longer be routers or firewalls between the services (or there may be virtual ones with a different setup) as they now sit on the same physical device. This means that a vulnerable, insecure server is less restricted - and therefore a more dangerous staging point if compromised.
A server consolidation process should, instead, be used as an opportunity to increase the security and isolation of services as virtual firewalls are easy to create and monitoring can be improved.
Improved Infrastructure Processes
Modifications to support processes can create security holes. For example, consider the daily backup of an application's data. The architect of a legacy system may have originally expected backups to be placed onto magnetic tapes and stored in a fire safe near to the server itself (with periodic backups taken securely offsite).
A more modern process would use offsite, real-time replication. Many legacy systems have had their backup-to-tape processes replaced with a backup-to-SAN which is replicated offsite. This is simple to implement, faster, more reliable and allows quicker restoration. However, who now has access to these backups? When a tape was placed in a fire safe, the only people with access to the copied data were those with physical access to the safe. Now it can be accessed by anyone with read permission in any location the data is copied to. Is this the same group of people as before? It is likely to be a much larger group (over a wide physical area) and could include those with borrowed passwords or those that have left the organisation.
Any modifications to the backup processes need to be analysed from an information security perspective. This is not just for the initial backup location but anywhere else the data is copied to.
Information security is an ongoing process that has multiple drivers, both internal and external to your system. The actions required will vary greatly between systems and depend on the system architecture, its business function and the environment it exists within. Any of these can change and affect the security. Architectural thinking and awareness are central to providing this and a good place to start is a diagram and a risk storming session (with a taxonomy).
Business change is tough. Just try it at Cloud speed, and you'll know what I mean.
That said, digital business transformation is reshaping companies and industries around the world, at a rapid rate.
If you don't cross the Cloud chasm, and learn how to play in the new digital economy, you might just get left behind.
Sadly, not every executive has a digital vision.
That's a big deal because the pattern here is that successful digital business transformation starts at the top of the company. And it starts with digital vision.
But just having a digital vision is not enough.
It has to be a shared transformative digital vision. Not a mandate, but a shared digital vision from the top, one that's led and made real by the people in the middle and lower levels.
In the book Leading Digital: Turning Technology into Business Transformation, George Westerman, Didier Bonnet, and Andrew McAfee share how successful companies and executives drive digital business transformation through shared transformative digital visions.
Employees Don't Always Get the WHY, WHAT, or HOW of Digital Business Transformation
You need a digital vision at the top. Otherwise, it's like pushing rocks uphill. Worse, not everybody will be in the game, or know what position they play, or even how to play the game.
"The changes being wrought through digital transformation are real. Yet, even when leaders see the digital threats and opportunity, employees may need to be convinced. Many employees feel they are paid to do a job, not to change that job. And they have lived through big initiatives in the past that failed to turn into reality. To many, digital transformation is either irrelevant or just another passing fad. Still other people may not understand how the change affects their jobs or how they might make the transition."
Only Senior Executives Can Create a Compelling Vision of the Future
Digital business transformation must be led. Senior executives are in the right position to create a compelling future all up, and communicate it across the board.
"Our research shows that successful digital transformation starts at the top of the company. Only the senior-most executives can create a compelling vision of the future and communicate it throughout the organization. Then people in the middle and lower levels can make the vision a reality. Managers can redesign process, workers can start to work differently, and everyone can identify new ways to meet the vision. This kind of change doesn't happen through simple mandate. It must be led.
Among the companies we studied, none have created true digital transformation through a bottom-up approach. Some executives have changed their parts of the business--for example, product design and supply chain at Nike--but the executives stopped at the boundaries of their business units. Changing part of your business is not enough. Often, the real benefits of transformation come from seeing potential synergies across silos and then creating conditions through which everyone can unlock that value. Only senior executives are positioned to drive this kind of boundary-spanning change."
Digital Masters Have a Shared Digital Vision (While Others Do Not)
As the business landscape is reshaping, you are either a disruptor or the disrupted. The Digital Masters that are creating the disruption in their business and in their industries have shared digital visions, and re-imagine their business for a mobile-first, Cloud-first world, and a new digital economy.
"So how prevalent is digital vision? In our global survey of 431 executives in 391 companies, only 42 percent said that their senior executive had a digital vision. Only 35 percent said the vision was shared among senior and middle managers. These numbers are surprisingly low, given the rapid rate at which digital transformation is reshaping companies and industries. But the low overall numbers mask an important distinction. Digital Masters have a shared digital vision, while others do not.
Among the Digital Masters that we surveyed, 82 percent agreed that their senior leaders shared a common vision of digital transformation, and 71 percent said it was shared between senior and middle managers. The picture is quite different for firms outside our Digital Masters category, where less than 30 percent said their senior leaders had a shared digital vision and only 17 percent said the shared vision extended to middle management."
Digital Vision is Not Enough (You Need a Transformative Digital Vision)
It's bad enough that many executives don't have a shared digital vision. What makes it worse is that even fewer have a transformative digital vision, which is the key to success on the digital frontier.
"But having a shared digital vision is not quite enough. Many organizations fail to capture the full potential of digital technologies because their leaders lack a truly transformative vision of the digital future. On average, only 31 percent of our respondents said that they had a vision which represented radical change, and 41 percent said their vision crossed internal organizational units. Digital Masters were far more transformative in their vision, with two-thirds agreeing they had a radical vision, and 82 percent agreeing their vision crossed organizational silos. Meanwhile, nonmasters were far less transformative in their visions."
Where there is no vision, the businesses perish.
I recently did a short interview with the folks from Fog Creek (creators of Stack Exchange, Trello, FogBugz, etc) about lightweight approaches to software architecture, my book and so on. The entire interview is only about 8 minutes in length and you can watch/listen/read it on the Fog Creek blog.
The proliferation of NoSQL databases is a response to the needs of modern applications. Still, not all data can be shoehorned into a particular NoSQL model, which is why so many different database options exist in the market. As a result, organizations are now facing serious database bloat within their infrastructure.
But a new class of database engine recently has emerged that can address the business needs of each of those applications and use cases without also requiring the enterprise to maintain separate systems, software licenses, developers, and administrators.
These multi-model databases can provide a single back end that exposes multiple data models to the applications it supports. In that way, multi-model databases eliminate fragmentation and provide a consistent, well-understood back end that supports many different products and applications. The benefits to the organization are extensive, but some of the most significant benefits include:
1. Consolidation
"Lack of direction, not lack of time, is the problem. We all have twenty-four hour days." -- Zig Ziglar
Here is my collection of 101 Proven Practices for Focus. It still needs work to improve it, but I wanted to share it, as is, because focus is one of the most important skills we can develop for work and life.
Focus is the backbone of personal effectiveness, personal development, productivity, time management, leadership skills, and just about anything that matters. Focus is a key ingredient to helping us achieve the things we set out to do, and to learn the things we need to learn.
Without focus, we can't achieve great results.
I have a very healthy respect for the power of focus to amplify impact, to create amazing breakthroughs, and to make things happen.
The Power of Focus
Long ago one of my most impactful mentors said that focus is what separates the best from the rest. In all of his experience, what exceptional people had, that others did not, was focus.
Here are a few relevant definitions of focus:
A main purpose or interest.
A center of interest or activity.
Close or narrow attention; concentration.
I think of focus simply as the skill or ability to direct and hold our attention.
Focus is a Skill
Too many people think of focus as something either you are good at, or you are not. It's just like delayed gratification.
Focus is a skill you can build.
Focus is actually a skill and you can develop it. In fact, you can develop it quite a bit. For example, I helped a colleague get themselves off of their ADD medication by learning some new ways to retrain their brain. It turned out that the medication only helped so much, the side effects sucked, and in the end, what they really needed was coping mechanisms for their mind, to better direct and hold their attention.
Here's the surprise, though. You can actually learn how to direct your attention very quickly. Simply ask new questions. You can direct your attention by asking questions. If you want to change your focus, change the question.
101 Proven Practices at a Glance
Here is a list of the 101 Proven Practices for Focus:
When you go through the 101 Proven Practices for Focus, don't expect it to be perfect. It's a work in progress. Some of the practices for focus need to be fleshed out better. There is also some duplication and overlap, as I re-organize the list and find better ways to group and label ideas.
In the future, I'm going to revamp this collection to have more precision, better naming, some links to relevant quotes, and some science where possible. There is a lot more relevant science that explains why some of these techniques work, and why some work so well.
What's important is that you find the practices that resonate for you, and the things that you can actually practice.
Getting Started
You might find that from all the practices, only one or two really resonate, or help you change your game. And that's great. The idea of having a large list is that there's more to choose from. The bigger your toolbox, the more you can choose the right tool for the job. If you only have a hammer, then everything looks like a nail.
If you don't consider yourself an expert in focus, that's fine. Everybody has to start somewhere. In fact, you might even use one of the practices to help you get better: rate your focus each day.
Simply rate yourself on a scale of 1-10, where 10 is awesome and 1 means you're a squirrel with a sugar high, dazed and confused, and chasing all the shiny objects that come into sight. And then see if your focus improves over the course of a week.
If you adopt just one practice, try either "Align your focus and your values" or "Ask new questions to change your focus."
Feel Free to Share It With Friends
At the bottom of the 101 Proven Practices for Focus, you'll find the standard sharing buttons for social media to make it easier to share.
Share it with friends, family, your world, the world.
The ability to focus is really a challenge for a lot of people. The answer to improving your attention and focus is through proven practices, techniques, and skill building. Too many people hope the answer lies in a pill, but pills don't teach you skills.
Even if you struggle a bit in the beginning, remind yourself that growth feels awkward. You will get better with practice. Practice deliberately. In fact, the side benefit of focusing on improving your focus is, well, you guessed it ... you'll improve your focus.
What we focus on expands, and the more we focus our attention, and apply deliberate practice, the deeper our ability to focus will grow.
Grow your focus with skill.
Perfection is achieved, not when there are no more features to add, but when there are no more features to take away. -- Antoine de Saint-Exupéry
Not only was Antoine a brilliant writer, philosopher and pilot (well, arguably, since he crashed in the Mediterranean) but most of all he had a sharp mind about engineering, and I frequently quote him when I train product owners, product managers or, in general, product companies about what makes a good product great. I also tell them the most important word in their vocabulary is "no". But the question then becomes, what are the criteria to say "yes"?
Typically we will look at the value of a feature and use different ways to prioritise and rank different features, break them down to their minimal proposition and get the team going. But what if you already have a product, and its rate of development is slowing? Features have been stacked on each other for years or even decades, and it's become more and more difficult for the teams to wade through the proverbial swamp the code has become.
Turns out there are a number of criteria that you can follow:
1.) Working software means it's actually being used.
Though it may sound obvious, it's not that easy to figure out. I was once part of a team that had to rebuild a rather large (read: huge) piece of software for an air traffic control system. The managers assured us that every piece of functionality was a must-keep, but the cost would have been prohibitively high.
One of the functions of the system was a record and replay mode for legal purposes. It basically registers all events throughout the system to serve as evidence, so that the work of picture compilers would be accountable, or at least verifiable. One of our engineers had the bright insight that we could catalogue this data anonymously to figure out which functions were used and which were not.
Turned out the Standish Group was pretty right in their claim that 80% of the software is never used. Carving that out was met with fierce resistance, but it was easier to convince management (and users) with data than with gut feeling.
Another upside? We also knew which functions they were using a lot, and figured out how to improve those substantially.
2.) The cost of platforms
Yippee, we got it running on a gazillion platforms! And boy do we have reach; the marketing guys are going into a frenzy. Even if it is the right choice at the time, you need to revisit this assumption all the time, and be prepared to clean up! This is often looked upon as a disinvestment: "we spent so much money on finally getting Blackberry working" or "it's so cost effective that we can also offer it on platform XYZ".
In the web world it's often the number of browsers we support, but for larger software systems it is more often operating systems, database versions or even hardware. For one customer we would refurbish hardware systems, simply because it was cheaper than moving on to a more modern machine.
Key takeaway: if the platform is deprecated, remove it entirely from the codebase. It will bog the team down, and you need the team's speed to respond to an ever-increasing pace of new platforms.
3.) Old strategies
Every market and every product company pivots at least every few years (or dies). Focus shifts between consumer groups, types of clients, types of delivery, a shift to services, or something else that is novel, hip and, most of all, profitable. Code bases tend to have a certain inertia. The larger the product, the bigger the inertia, and before you know it there are tons of features in there that are far less valuable in the new situation. Cutting away perfectly good features is always painful, but at some point you end up with the toolbars of Microsoft Word. Nice features, but complete overkill for the average user.
4.) The cause and effect trap
When people are faced with an issue they tend to focus on fixing the issue as it manifests itself. It's hard for our brain to think in problems; it tries to think in solutions. There is an excellent blog post here that provides a powerful method to overcome this phenomenon by asking "why" five times.
The hard job is to continuously keep evaluating your features, and remove those that are no longer valuable. It may seem like you're throwing away good code, but ultimately it is not the largest product that survives, but the one that is able to adapt fast enough to the changing market. (Freely after Darwin)
Tutum is a platform to build, run and manage your Docker containers. After playing with it briefly some time ago, I decided to take a more serious look at it this time. This article describes first impressions of using this platform, looking at it specifically from a continuous delivery perspective.
The web interface
First thing to notice is the clean and simple web interface. Basically there are two main sections, which are services and nodes. The services view lists the services or containers you have deployed with status information and two buttons, one to stop (or start) and one to terminate the container, which means to throw it away.
You can drill down to a specific service, which provides you with more detailed information per service. The detail page provides information about the containers, a slider to scale up and down, endpoints, logging, some metrics for monitoring and more.
The second view is a list of nodes. The list contains the VM's on which containers can be deployed. Again with two simple buttons to start/stop and to terminate the node. For each node it displays useful information about the current status, where it runs, and how many containers are deployed on it.
The node page also allows you to drill down to get more information on a specific node. The screenshot below shows some metrics in fancy graphs for a node, which can potentially be used to impress your boss.
Creating a new node
You'll need a node to deploy containers on. In the node view you see two big green buttons. One states: "Launch new node cluster". This brings up a form with, currently, four popular providers: Amazon, Digital Ocean, Microsoft Azure and SoftLayer. If you have linked your account(s) in the settings you can select that provider from a dropdown box. It only takes a few clicks to get a node up and running. In fact you create a node cluster, which allows you to easily scale up or down by adding or removing nodes from the cluster.
You also have an option to "Bring your own node". This allows you to add your own Ubuntu Linux systems as nodes to Tutum. You need to install an agent onto your system and open up a firewall port to make your node available to Tutum. Again very easy and straightforward.
Creating a new service
Once you have created a node, you probably want to do something with it. Tutum provides jumpstart images with popular types of services for storage, caching, queueing and more, providing for example MongoDB, Elasticsearch or Tomcat. Using a wizard it takes only four steps to get a particular service up and running.
Besides the jumpstart images that Tutum provides, you can also search public repositories for your image of choice. Eventually you would like to have your own images running your homegrown software. You can upload your image to a Tutum private registry. You can either pull it from Docker Hub or upload your local images directly to Tutum.
We all know real (wo)men (and automated processes) don't use GUIs. Tutum provides a nice and extensive command line interface for both Linux and Mac. I installed it using brew on my MBP and seconds later I was logged in and doing all kinds of cool stuff with the command line.
The CLI is actually making REST calls, so you can skip the CLI altogether and talk HTTP directly to a REST API, or, if it pleases you, you can use the Python API to create scripts that are actually maintainable. You can pretty much automate all management of your nodes, containers, and services using the API, which is a must-have in this era of continuous everything.
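As a hedged illustration of driving that REST API from a build script, the snippet below lists the deployed services over plain HTTP; the endpoint path and the ApiKey header format are assumptions based on Tutum's v1 API documentation of the time, and the credentials are placeholders.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ListTutumServices {
    public static void main(String[] args) throws Exception {
        // Endpoint and "ApiKey user:key" header are assumptions; substitute real credentials.
        URL url = new URL("https://dashboard.tutum.co/api/v1/service/");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestProperty("Authorization", "ApiKey myuser:0123456789abcdef");
        connection.setRequestProperty("Accept", "application/json");

        try (BufferedReader reader =
                     new BufferedReader(new InputStreamReader(connection.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // JSON describing the same services the web UI shows
            }
        }
    }
}
```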
A simple deployment example
So let's say we've built a new version of our software on our build server. Now we want to get this software deployed to do some integration testing, or, if you're feeling lucky, just drop it straight into production.
Build the Docker image:
tutum build -t test/myimage .
Upload the image to the Tutum registry:
tutum image push <image_id>
Create the service:
tutum service create <image_id>
Run it on a node:
tutum service run -p <port> -n <name> <image_id>
That's it. Of course there are lots of options to play with, for example the deployment strategy, setting memory, auto-starting, etc. But the above steps are enough to get your image built, deployed and running. Most of the time I spent was waiting while uploading my image over the flaky-but-expensive hotel wifi.
Conclusion for now
Tutum is clean, simple and just works. I'm impressed with the ease and speed with which you can get your containers up and running. It takes only minutes to get from zero to running using the jumpstart services, or even your own containers. Although they still call it beta, everything I did just worked, and without the need to read through lots of complex documentation. The web interface is self-explanatory, and the REST API or CLI provides everything you need to integrate Tutum into your build pipeline, so you can get your new features into production with automation speed.
I'm wondering how challenging management would be at a scale of hundreds of nodes and even more containers when using the web interface. You'd need a meta-overview or aggregate view or something. But then again, you have a very nice API to build that yourself.
I revamped my positive thinking quotes collection on Sources of Insight to help you amp up your ability to generate more positive thoughts.
It's a powerful one.
Why positive thinking?
Maybe Zig Ziglar said it best:
"Positive thinking will let you do everything better than negative thinking will."
Positive Thinking Can Help You Defeat Learned Helplessness
Actually, there's a more important reason for positive thinking:
It's how you avoid learned helplessness.
Learned helplessness is where you give up, because you don't think you have any control over the situation, or what happens in your life. You explain negative situations as permanent, personal, and pervasive, instead of temporary, situational, and specific.
That's a big deal.
If you fall into the learned helplessness trap, you spiral down. You stop taking action. After all, why take action if it won't matter? And this can lead to depression.
But that's a tale of woe for others, not you. Because you know how to defeat learned helplessness and how to build the skill of learned optimism.
You can do it by reducing negative thinking, and by practicing positive thinking. And what better way to improve your positive thinking than through positive thinking quotes?
Keep a Few Favorite Positive Thinking Quotes Handy
Always keep a few positive thinking quotes at your fingertips so that they are there when you need them.
Here is a quick taste of a few of my favorites from the positive thinking quotes collection:
"A positive attitude may not solve all your problems, but it will annoy enough people to make it worth the effort." - Herm Albright
"Attitudes are contagious. Are yours worth catching?" - Dennis and Wendy Mannering
"Be enthusiastic. Remember the placebo effect - 30% of medicine is showbiz." - Ronald Spark
"I will not let anyone walk through my mind with their dirty feet." - Mahatma Gandhi
"If the sky falls, hold up your hands." - Author Unknown
Think Deeper About Positivity By Using Categories for Positive Thinking
But this positive thinking quotes collection is so much more. I've organized the positive thinking quotes into a set of categories to chunk it up, and to make it more insightful:
Adaptability and Flexibility
Anger and Frustration
Appreciation and Gratitude
Attitude, Disposition, and Character
Defeat, Setbacks, and Failures
Focus and Perspective
Hope and Fear
Letting Things Go and Forgiveness
Love and Liking
Opportunity and Possibility
Positive Thinking (General)
The more distinctions you can add to your mental repertoire, the more powerful a positive thinker you will be.
You can think of each positive thinking quote as a distinction that can add more depth.
Draw from Wisdom of the Ages and Modern Sages on the Art and Science of Positive Thinking
I've included positive thinking quotes from a wide range of people including Anne Frank, Epictetus, Johann Wolfgang von Goethe, Napoleon, Oscar Wilde, Ralph Waldo Emerson, Robert Frost, Voltaire, Winston Churchill, and many, many more.
You might even find useful positive thinking mantras from the people that you work with, or the people right around you.
For example, here are a few positive thinking thoughts from Satya Nadella:
"The future we're going to invent together, express ourselves in the most creative ways."
"I want to work in a place where everybody gets more meaning out of their work on an everyday basis.
I want each of us to give ourselves permission to be able to move things forward. Each of us sometimes overestimate the power others have to do things vs. our own ability to make things happen."
Challenge Yourself to New Levels of Positive Thinking through Positive Quotes
As you explore the positive thinking quotes collection, try to find the quotes that challenge you the most, that really make you think, and give you a new way to generate more positive thoughts in your worst situations.
In the words of Friedrich Nietzsche, "That which does not kill us makes us stronger."
You can use your daily trials and tribulations in the workplace as your personal dojo to practice and build your positive thinking skills.
The more positivity you can bring to the table, the more you'll empower yourself in ways you never thought possible.
As you get tested by your worst scenarios, it's good to keep in mind the words of F. Scott Fitzgerald:
"The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function. One should, for example, be able to see that things are hopeless and yet be determined to make them otherwise."
I'll also point out that as you grow your toolbox of positive thinking quotes and build your positive thinking skills, you need to also focus on taking positive action.
Don't just imagine a better garden; get out and actually weed it.
Don't Just Imagine Everything Going Well - Imagine How You'll Deal with the Challenges
Here's another important tip about positivity and positive thinking ...
If you use visualization as part of your approach to getting results, it's important to include dealing with setbacks and challenges. It's actually more effective to imagine the most likely challenges coming up, and to walk through how you'll deal with them if they occur. This is way more effective than just picturing the perfect plan where everything goes without a hitch.
The reality is things happen, stuff comes up, and setbacks occur.
But your ability to mentally prepare for the setbacks, and have a plan of action, will make you much more effective in dealing with the challenges that actually do occur. This will help you respond vs. react in more situations, and stay in a better place mentally while you evaluate options and decide on a course of action. (Winging it under stress doesn't work very well, because we shut down our prefrontal cortex - the smart part of our brain - when we go into fight-or-flight mode.)
If I missed any of your favorite positive thinking quotes in my positive thinking quotes collection, please let me know.
In closing, please keep handy one of the most powerful positive thinking quotes of all time:
"May the force be with you."
Always.