Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!
Software Development Blogs: Programming, Software Testing, Agile Project Management
However, many organizations fail with their SharePoint implementations. So with this article, we are trying to make it simpler for an organization's in-house IT administrators to implement SharePoint in a virtual server environment.
Here we are going to look at the following key points:
We have written agents deployed/distributed across the network. Agents send data every 15 seconds, maybe even every 5 seconds. We are working on a service/system to which all agents can post data/tuples with a marginal payload. Up to a 5% drop rate is acceptable. Ultimately the data will be segregated and stored in a DBMS (currently we are using MySQL).
Questions I am looking to answer:
1. Client/server communication: Agents can post data. The status of sending data is not that important. But there is a requirement for agents to be notified if the server-side system generates an event based on the data sent.
- A lot of advice on the internet suggests using a message bus (ActiveMQ) for async communication. Multicast and UDP are the alternatives.
2. Persistence: After some evaluation, the data is to be stored in a DBMS.
- The end result of processing the data is an aggregated record, for which MySQL looks scalable. But the volume of data is growing exponentially, so we are considering HBase as an option.
I am looking for any alternatives to the above two scenarios, and for expert advice.
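Since the question tolerates up to a 5% drop rate, a fire-and-forget UDP sender is one of the simplest things to prototype before committing to a message bus. The sketch below is only an illustration of that option, not a recommendation over ActiveMQ; the collector address and JSON payload format are hypothetical.

```python
import json
import socket
import time

COLLECTOR = ("127.0.0.1", 9999)  # hypothetical collector address

def send_sample(sock, agent_id, value):
    # Fire-and-forget: UDP gives no delivery guarantee, which is
    # acceptable here because up to a 5% drop rate is tolerated.
    payload = json.dumps({"agent": agent_id, "ts": time.time(), "value": value})
    sock.sendto(payload.encode("utf-8"), COLLECTOR)
    return payload

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = send_sample(sock, "agent-01", 42)
sock.close()
```

For the notification path back to the agents, a broker with pub/sub topics (ActiveMQ, RabbitMQ) is the usual answer, since plain UDP offers no addressing of interested agents.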
Michael Laing, a Systems Architect at NYTimes, gave this great description of their use of RabbitMQ and their overall architecture on the RabbitMQ mailing list. The closing sentiment marks this as definitely an architecture to learn from:
Although it may seem complex, Fabrik has simple components and is mostly principles and plumbing. The key point to grasp is that there is no head, no master, no single point of failure. As I write this I can see components failing (not RabbitMQ), and we are fixing them so they are more reliable. But the system doesn't fail, users can connect, and messages are delivered, regardless - all within design parameters.
Since it's short, to the point, and I couldn't say it better, I'll just reproduce several original sources here:
Hey, it's HighScalability time:
Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge...
When's the last time you went for your personal Epic Win? If it's been a while, no worries. Let's go big this year.
I'll give you the tools.
I realize time and again that Bruce Lee was so right when he said, "To hell with circumstances; I create opportunities." Similarly, William B. Sprague told us, "Do not wait to strike till the iron is hot; but make it hot by striking."
And Peter Drucker said, "The best way to predict the future is to create it." Similarly, Alan Kay said, "The best way to predict the future is to invent it."
Well then? Game on!
By the way, if you're not feeling very inspired, check out either my 37 Inspirational Quotes That Will Change Your Life, Motivational Quotes, or my Inspirational Quotes. They are intense, and I bet you can find your favorite three.
As I've been diving deep into goal setting and goal planning, I've put together a set of deep dive posts that will give you a very in-depth look at how to set and achieve any goal you want. Here is my roundup so far:
Hopefully, my posts on goal setting and goal planning save you many hours (if not days, weeks, etc.) of time, effort, and frustration on trying to figure out how to really set and achieve your goals. If you only read one post, at least read Goal Setting vs. Goal Planning because this will put you well ahead of the majority of people who regularly don't achieve their goals.
In terms of actions, if there is one thing to decide, make it Commit to Your Best Year Ever.
Enjoy and best wishes for your greatest year ever and a powerful 2014.
Adrian Cockcroft on the future of Cloud, Open Source, SaaS and the End of Enterprise Computing:
Most big enterprise companies are actively working on their AWS rollout now. Most of them are also trying to get an in-house cloud to work, with varying amounts of success, but even the best private clouds are still years behind the feature set of public clouds, which has a big impact on the agility and speed of product development.
While the Snowden revelations have tattered the thin veil of trust separating Big Brother from We the People, they may also be driving a fascinating new tension in architecture choices between Cloud Native (scale-out, IaaS), Amazon Native (rich service dependencies), and Enterprise Native (raw hardware, scale-up).
This tension became evident in a recent HipChat interview, where HipChat, makers of an AWS-based SaaS chat product, were busy creating an on-premises version of their product that could operate behind the firewall in enterprise datacenters. This is consistent with other products from Atlassian in that they offer hosted services as well as installable services, but it is also an indication of customer concerns over privacy and security.
The result is a potential shattering of backend architectures into many fragments like we’ve seen on the front-end. On the front-end you can develop for iOS, Android, HTML5, Windows, OSX, and so on. Any choice you make is like declaring for a cold war power in a winner take all battle for your development resources. Unifying this mess has been the refuge of cloud based services over HTTP. Now that safe place is threatened.
To see why...
If any of these items interest you there's a full description of each sponsor below. Please click to read more...
HipChat started in an unusual space, one you might not think would have much promise, enterprise group messaging, but as we are learning there is gold in them there enterprise hills. Which is why Atlassian, makers of well thought of tools like JIRA and Confluence, acquired HipChat in 2012.
And in a tale not often heard, the resources and connections of a larger parent have actually helped HipChat enter an exponential growth cycle. Having reached the 1.2 billion message storage mark they are now doubling the number of messages sent, stored, and indexed every few months.
That kind of growth puts a lot of pressure on a once adequate infrastructure. HipChat exhibited a common scaling pattern. Start simple, experience traffic spikes, and then think what do we do now? Using bigger computers is usually the first and best answer. And they did that. That gave them some breathing room to figure out what to do next. On AWS, after a certain inflection point, you start going Cloud Native, that is, scaling horizontally. And that’s what they did.
But there's a twist to the story. Security concerns have driven the development of an on-premises version of HipChat in addition to its cloud/SaaS version. We'll talk more about this interesting development in a post later this week.
While HipChat isn’t Google scale, there is good stuff to learn from HipChat about how they index and search billions of messages in a timely manner, which is the key difference between something like IRC and HipChat. Indexing and storing messages under load while not losing messages is the challenge.
This is the road that HipChat took, what can you learn? Let’s see…
Maybe software developers are naturally optimistic but in my experience they rarely consider system failure or disaster scenarios when designing software. Failures are varied and range from the likely (local disk failure) to the rare (tsunami) and from low impact to fatal (where fatal may be the death of people or bankruptcy of a business).
Failure planning broadly fits into the following areas:
Avoiding failure is what a software architect is most likely to think about at design time. This may involve a number of High Availability (HA) techniques and tools, including redundant servers, distributed databases, or real-time replication of data and state. This usually involves removing any single point of failure, but you should be careful not to consider only the software and the hardware it immediately runs on - you should also remove any single dependency on infrastructure such as power (battery backup, generators or multiple power supplies) or telecoms (multiple wired connections, satellite or radio backups, etc.).
Failing safely is a complex topic that I touched on recently and may not apply to your problem domain (although you should always consider if it does).
Failure recovery usually goes hand-in-hand with High Availability and ensures that when single components are lost they can be re-created/started to join the system. There is no point in having redundancy if components cannot be recovered as you will eventually lose enough components for the system to fail!
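As a toy illustration of the recovery point above, a supervisor can re-create a failed component with exponential backoff so it can rejoin the system. This is only a minimal sketch; the component, restart budget, and delays are hypothetical.

```python
import time

def run_with_restart(component, max_restarts=3, base_delay=0.01):
    """Restart a failing component with exponential backoff.

    Returns the component's result, or re-raises once the restart
    budget is exhausted - redundancy is pointless if lost components
    can never be re-created to rejoin the system.
    """
    for attempt in range(max_restarts + 1):
        try:
            return component()
        except Exception:
            if attempt == max_restarts:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Hypothetical flaky component: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("component crashed")
    return "recovered"

result = run_with_restart(flaky)
```

In a real system the "component" would be a process or service restarted by a supervisor (systemd, an orchestrator, etc.), but the bounded-retry-with-backoff shape is the same.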
Disaster Recovery Scenarios and Planning
However, the main topic I want to discuss here is disaster recovery. This is the process that a system and its operators have to execute in order to recreate a fully operational system after a disaster scenario. This differs from a failure in that the entire system (potentially all the components but at least enough to render it inoperable) stops working. As I stated earlier, many software architects don't consider these scenarios but they can include:
These are usually classified into either natural or man-made disasters. Importantly these are likely to cause outright system failure and require some manual intervention - the system will not automatically recover. Therefore an organisation should have a Disaster Recovery (DR) Plan for the operational staff to follow when this occurs.
A disaster recovery plan should consider a range of scenarios and give very clear and precise instructions on what to do for each of them. In the event of a disaster scenario the staff members are likely to be stressed and not thinking as clearly as they would otherwise. Keep any steps required simple and don't worry about stating the obvious or being patronising - remember that the staff executing the plan may not be the usual maintainers of the system.
Please remember that 'cloud hosted' systems still require disaster recovery plans! Your provider could have issues and you are still affected by scenarios that involve corrupt data and disgruntled staff. Can you roll-back your data store to a known point in the past before corruption?
The aims and actions of any recovery will depend on the scenario that occurs. Therefore the scenarios listed should each refer to a strategy which contains some actions.
Before any strategy is executed you need to be able to detect the event has occurred. This may sound obvious but a common mistake is to have insufficient monitoring in place to actually detect it. Once detected there needs to be comprehensive notification in place so that all systems and people are aware that actions are now required.
For each strategy there has to be an aim for the actions. For example, do you wish to try to bring up a complete system with all data (no data loss) or do you just need something up and running? Perhaps missing data can be imported at a later time or maybe some permanent data-loss is tolerated? Does the recovered system have to provide full functionality or is an emergency subset sufficient?
This is hugely dependent on the problem domain and scenario but the key metrics are recovery point objectives (RPO) and recovery time objectives (RTO) along with level of service. Your RPO and RTO are key non-functional (quality) requirements and should be listed in your software architecture document. These metrics should influence your replication, backup strategies and necessary actions.
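As a small sketch of how RPO and RTO constrain operational choices, the checks below compare a backup cadence and a recovery sequence against those targets. The figures are illustrative only, not recommendations for any particular system.

```python
def meets_rpo(backup_interval_hours, rpo_hours):
    # Worst-case data loss is the time since the last backup,
    # so the backup interval must not exceed the RPO.
    return backup_interval_hours <= rpo_hours

def meets_rto(restore_minutes, failover_minutes, rto_minutes):
    # Recovery time is the sum of every step needed to bring the
    # system back; the total must fit within the RTO.
    return restore_minutes + failover_minutes <= rto_minutes

# Illustrative targets: RPO of 4 hours, RTO of 60 minutes.
hourly_backups_ok = meets_rpo(1, 4)
nightly_backups_ok = meets_rpo(24, 4)
recovery_ok = meets_rto(30, 20, 60)
```

Here nightly backups would violate a 4-hour RPO, which is exactly the kind of mismatch that should be caught when these metrics are stated in the architecture document rather than discovered during a disaster.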
The disaster recovery plans for the IT systems are actually a subset of the broader 'business continuity' plans (BCP) that an organisation should have. These cover all the aspects of keeping an organisation running in the event of a disaster. BCPs also include manual processes, staff coverage, building access etc. You need to make sure that the IT disaster recovery plan fits into the business continuity plan and that you state the dependencies between them.
There are a range of official standards covering Business Continuity Planning such as ISO22301, ISO22313 and ISO27031. Depending on your business and location you might have a legal obligation to comply with these or other local standards. I would strongly recommend that you investigate whether your organisation needs to be compliant - if you fail to do so then there could be legal consequences.
This is a complex topic which I have only touched upon - if it raises concerns then you may have a lot of work to do! If you don't know where to start then I'd suggest getting your team together and running a risk storming workshop.
FitNesse is an acceptance testing framework. It allows business users, testers and developers to collaborate on executable specifications (for example in BDD style and/or implementing Specification by Example), and allows for testing both the back-end and the front-end. Aside from partly automating acceptance testing and serving as a tool to help build a common understanding between developers and business users, a selection of the tests from a FitNesse test suite often doubles as a regression test suite.
In contrast to unit tests, FitNesse tests should usually be focused but still test a feature in an 'end-to-end' way. It is not uncommon for a FitNesse test to, for example, start mocked versions of external systems, start e.g. a Spring context, and connect to a real test database rather than an in-memory one.
Running FitNesse during development
The downside of end-to-end testing is that setting up all this context makes running a single test locally relatively slow. This is part of the reason you should keep in mind the testing pyramid while writing tests, and write tests at the lowest possible level (though not lower).
Still, when used correctly, FitNesse tests can provide enormous value. Luckily, versions of FitNesse since 06-01-2014 make it relatively easy to significantly reduce this round-trip time.
A bit of background
Most modern FitNesse tests are written using the SLIM test system. When executing a test, a separate 'service' process is spun up to actually execute the test code ('fixtures' and code-under-test). This has a couple of advantages: the classpath of the service process can be kept relatively clean - in fact, you can even use a service process written in an entirely different language, such as .Net or Ruby, as long as it implements the SLIM protocol.
In the common case of using the Java SLIM service, however, this means spinning up a JVM, loading your classes into the classloader, and possibly additional tasks such as initializing part of your backend and mocking services. This can take a while, and it slows down your development round-trip, making FitNesse less pleasant to work with.
How to speed up your FitNesse round-trip times
One way to tremendously speed up test round-trip times is to, instead of initializing the complete context every time you run a test, start the SlimService manually and keep it running. When done from your IDE this allows you to take advantage of selective reloading of updated classes and easily setting breakpoints.
To locally use FitNesse in this way, put the FitNesse non-standalone jar on your classpath, and start the main method of fitnesse.slim.SlimService with parameters like '-d 8090': '-d' prevents the SlimService from shutting down after the first test disconnects, and '8090' specifies the port number on which to listen.
Example: java -cp *yourclasspath* fitnesse.slim.SlimService -d 8090
Now, when starting the FitNesse web UI, use the 'slim.port' property to specify the port to connect to and set 'slim.pool.size' to '1', and FitNesse will connect to the already-running SLIM service instead of spinning up a new process each time.
Example: java -Dslim.port=8090 -Dslim.pool.size=1 -jar fitnesse-standalone.jar -p 8000 -e 0
We've seen improvements in the time it takes to re-run one of our tests from a typical ~15 seconds to about 2-3 seconds. Not only a productivity improvement, but more importantly this makes it much more pleasant to use FitNesse tests where they make sense.
I was reading The Fruits of Innovation: Top 10 IT Trends in 2014, by Mark Harris.
Harris had this to say about the evolving role of the CIO:
"In the end, these leaders are now tasked to accurately manage, predict, execute and justify. Hence, the CIO's role will evolve. Previously, CIOs were mostly technologists who were measured almost exclusively by availability and uptime. The CIO's job was all about crafting a level of IT services that the company could count on, and the budgeting process needed to do so was mostly a formality."
Harris had this to say about the best qualities in a CIO:
"The most effective CIOs in 2014 will be business managers who understand the wealth of technology options now available, the costs associated with each, as well as the business value of each of the various services they are chartered to deliver. He or she will map out a plan that delivers just the right amount of service within their agreed business plan. Email, for instance, may have an entirely different value to a company than their online store, so the means to deliver these diverse services will need to be different. It is the 2014 CIO's job to empower their organizations to deliver just the right services at just the right cost."
That matches what I've been seeing.
CIOs need business acumen and the ability to connect IT to business impact.
Another way to think of it is, the CIO needs to help accelerate and realize business value from IT investments.
Value Realization is hot.
There's a little trick I learned about how to have your best year ever:
And, it actually works.
When you decide to have your best year ever, and you make it a mission, you find a way to make it happen.
You embrace the challenges and the changes that come your way.
You make better choices throughout the year, in a way that moves you towards your best year ever.
A while back, our team did exactly that. We decided we wanted to make the coming year our best year ever. We wanted a year we could look back on, and know that we gave it our best shot. We wanted a year that mattered. And we were willing to work for it.
And, it worked like a champ.
In fact, most of us got our best reviews at Microsoft. Ever.
It's not like it's magic. It works because it sets the stage. It sets the stage for great expectations. And, when you expect more, from yourself, or from the world, you start to look for and leverage more opportunities to make that come true.
It also helps you roll with the punches. You find ways to turn around negative situations into more positive ones. You find ways to take setbacks as learning opportunities to experience your greatest growth. You look for ways to turn ordinary events into extraordinary adventures.
And when you get knocked down. You try again. Because you're on a mission.
When you make it a mission to have your best year ever, you stretch yourself a little more. You try new things. You take old things to new heights.
But there's a very important distinction here. You have to own the decision.
It has to be your choice. YOU have to choose it so that you internalize it, and actually believe it, so that you actually act on it.
Otherwise, it's just a neat idea, but you won't live it.
And if you don't live it, it won't happen.
But, as soon as you decide that no matter what, this will be YOUR best year ever, you unleash your most resourceful self.
If you've forgotten what it's like to go for the epic win, then watch this TED talk and my notes:
Best wishes for your best year.
Hey, it's HighScalability time, can you handle the truth?
The great thing about standards is there are so many to choose from. What is it about human nature that makes this so recognizably true?
How do you turn Big Data into fast, useful, and interesting visualizations? Using R and a technology called Nanocubes. The visualizations are stunning and amazingly reactive. Almost as interesting as the technologies behind them.
David Smith wrote a great article explaining the technology and showing a demo by Simon Urbanek of a visualization that uses 32 TB of Twitter data. It runs smoothly and interactively on a single machine with 16 GB of RAM. For more information and demos go to nanocubes.net.
David Smith sums it up nicely:
Despite the massive number of data points and the beauty and complexity of the real-time data visualization, it runs impressively quickly. The underlying data structure is based on Nanocubes, a fast data structure for in-memory data cubes. The basic idea is that nanocubes aggregate data hierarchically, so that as you zoom in and out of the interactive application, one pixel on the screen is mapped to just one data point, aggregated from the many that sit "behind" that pixel. Learn more about nanocubes, and try out the application yourself (modern browser required) at the link below.
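The hierarchical-aggregation idea described above can be illustrated with a toy sketch (this is not the nanocubes implementation): raw points are bucketed into grid cells, so each "pixel" at a given zoom level holds one aggregate of the many points behind it, and zooming in just re-buckets into finer cells.

```python
from collections import Counter

def aggregate(points, cell_size):
    """Bucket (x, y) points into grid cells of the given size,
    so each cell ('pixel') holds the count of points behind it."""
    counts = Counter()
    for x, y in points:
        counts[(int(x // cell_size), int(y // cell_size))] += 1
    return counts

points = [(0.2, 0.3), (0.8, 0.1), (3.5, 3.9), (3.1, 3.2)]
coarse = aggregate(points, cell_size=1.0)   # zoomed out: points merge
finer = aggregate(points, cell_size=0.5)    # zoomed in: cells split
```

Nanocubes precompute these aggregates across many levels at once so the re-bucketing never happens at query time, which is what makes the interaction feel instant.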
You might already know the Agile Manifesto:
But do you know the Declaration of Interdependence:
While the Agile Manifesto is geared toward Agile practitioners, the Declaration of Interdependence is geared towards Agile project leaders.
When you know the values that shape things, it helps you better understand why things are the way they are.
Notice how you can read the Agile Manifesto as "we value this more than that," and you can read the Declaration of Interdependence as "this benefit we achieve through this." Those are actually powerful and repeatable language patterns. I've found myself drawing from those patterns over the years, whenever I was trying to articulate operating principles (which is a good name for principles that guide how you operate).
Talk about taking some things for granted. Especially when it's a love-hate relationship. I'm talking about Annual Reviews.
I didn't realize how valuable they can be when you own the process and you line them up with your bigger goal setting for life. I've done them for so long, in this way, that I forgot how much they are a part of my process for carving out a high-impact year.
I know I might do things a bit differently in terms of how I do my review, so I highlighted key things in my post:
Note that if you hate the term Annual Review because it conjures up a bunch of bad memories, then consider calling it your Annual Retrospective. If you're a Scrum fan, you'll appreciate the twist.
Here's the big idea:
If you "own" your Annual Review, you can use taking a look back to take a leap forward.
What I mean is that if you are pro-active in your approach, and if you really use feedback as a gift, you can gain tremendous insights into your personal growth and capabilities.
Here's a summary of what I do in terms of my overall review process:
It's not an easy process. But that's just it. That's what makes it worth it. It's a tough look at the hard stuff that matters. The parts of the process that make it a challenge are the opportunities for growth. Looking back, I can see how much easier it is for me to really plan out a year of high impact where I live my values and play to my strengths. I can also see early warning signs and anticipate downstream challenges. I know when I first started, it was daunting to figure out what a year might look like. Now, it's almost too easy.
This gives me a great chance up front to play out a lot of "What If?" scenarios. This also gives me a great chance right up front to ask the question, if this is how the year will play out, is that the ride I want to be on? The ability to plan out our future capability vision, design a better future, and change our course is part of owning our destiny.
In my experience, a solid plan at the right level gives you more flexibility and helps you make smarter choices, before you become a frog in the boiling pot.
If you haven't taken the chance to really own and drive your Annual Review, then consider doing an Annual Retrospective, and use the process to help you leapfrog ahead.
Make this YOUR year.
Ever had deadlines that must be met cause short-term decisions to be made? Ever worked overtime with your team to meet an important deadline, after which the delivered product wasn't used for a couple of weeks?
I believe we all know these examples where deadlines are imposed on the team for questionable reasons.
Yet, deadlines are part of reality and we have to deal with them. Certainly, there is business value in meeting them, but they also have costs.
The Never-Ending Story of Shifting Deadlines…
Some time ago I was involved in a project for delivering personalised advertisements on mobile phones. At that time this was quite revolutionary and we didn't know how the market would react. Therefore, a team of skilled engineers and marketeers was assembled, and we set out to deliver a prototype in a couple of months and test it in real life, i.e. with real phones and in the mobile network. This was a success, and we got the assignment to make it into a commercial product, version 1.0.
At this time there was no paying customer for the product yet and we built it targeted at multiple potential customers.
For the investment to make sense the deadline for delivering version 1.0 was set to 8 months.
The prototype worked fine but how to achieve a robust product when the product is scaled to millions of subscribers and thousands of advertisements per second? What architecture to use? Should we build upon the prototype or throw it away and start all over with the acquired knowledge?
A new architecture would require us to use new technology, which would require training and time to get acquainted with it. Time we did not have, as the deadline was only 8 months away. We double-checked whether the deadline could be moved to a later date. Of course, this wasn't possible as it would invalidate the business case. We decided not to throw away the prototype but to enhance it further.
As the deadline approached it became clear that we were not going to deliver a 1.0 product. The causes were multiple: the prototype's architecture did not quite fit the 1.0 needs, the scope changed along the way as marketing got new insights from the changing market, the team grew in size, and the integration with other network components took time as well.
So, the deadline was extended by another 6 months.
The deadline got shifted 2 more times.
This felt really bad. It felt like we had let down both the company and the product owner by not delivering on time. We had the best people on the team and already had a working prototype. How come we weren't able to deliver? What happened? What could we do to prevent this from happening a third time?
Then the new product was going to be announced at a large telecom conference. This was what the product (and the team) needed; we still had a deadline, but this time there was a clear goal associated with it, namely a demo version for attracting the first customers! Moreover, there was a small time window for delivering the product; missing the deadline would mean an important opportunity was lost, with severe impact on the business case. This time we made the deadline.
The conference was a success and we got our first customers; of course new deadlines followed, and this time with clear goals related to the needs of specific customers.
The Effect Deadlines Have
Looking back, which is always easy, we could have finished the product much earlier if the initial deadline had been set to a much later date. Certainly, there was value in being able to deliver a product very fast, i.e. having short-term deadlines. On the other hand, there were also costs associated with these short-term deadlines, including:
In this case the short-term deadline put pressure on the team to deliver quickly, causing the team to take short-cuts along the way, which in turn caused delays and refactoring at a later time. Over time, fewer results will be delivered.
What makes this pattern hard to fix is that imposing a deadline delivers short-term results and seems like a good idea for getting results from the team.
This pattern is known as 'Shifting the Burden' [Wik] and is depicted below. In the previous example the root cause is not trusting the team to get the job done. The distrust is addressed by imposing a deadline as a way to 'force' the team to deliver.
The balancing (B) loop on top is the short-term solution to the problem of getting results from the team. The 'fear' of lacking focus, and therefore results, leads to imposing a deadline and thereby increasing the focus (and results) of the team. The problem symptom will reduce but will reappear, causing an 'addiction' to the loop of imposing deadlines.
The fundamental solution of trusting the team, prioritising, and giving them goals is often overlooked. This fundamental solution is also less evident and costs the organisation energy and effort to implement. The short-term solution has unwanted side effects that in the long run - slashed arrow - have a negative impact on the team results.
In the example above, the fundamental solution consisted of setting, and prioritising towards, the goal of a successful demo at the conference. This worked because it was a short-term and realistic goal. Furthermore, the urgency was evident to the team: there was not going to be a second chance if this deadline was missed.
In practice I also encounter the situation in which deadlines are imposed on a team that seems to lack focus. The underlying problem is the lack of a (business) vision and goals. The symptom as experienced by the organisation is the lack of concrete results. In fact the team does work hard, but does so by working on multiple goals at the same time. Here, clear goals and prioritising the work to be done first will help.
Also in this example, imposing a deadline to 'solve' the problem has the unintended side effect of not addressing the underlying problem. This will make the problem of the team not delivering results reappear.
Goals & Deadlines
The deadlines in the examples of the previous section are what I call phoney deadlines. When, in practice, a deadline is imposed, it usually also implies a fixed scope and a fixed date.
Deadlines should be business case related and induce large costs if not met. For the deadlines in the above examples this is not the case.
Examples of deadlines that have associated large costs if not met, are:
In the story above, the 'real' deadline actually was 2 years instead of 8 months. In this case the deadline was probably used as a management tool, with all good intentions, to get the team focussed on producing results. Whereas in fact it caused short-cuts to be made by the team in order to meet the deadline.
Getting focus in teams is done by giving the team a goal: a series of subgoals leading to the end-goal [Bur13]. Establish a vision and derive a series of (sub)goals to realise the vision. Relate these goals to the business case. Story mapping is one of the tools available to define a series of goals [Lit].
Conclusion
Avoid setting deadlines as a means to get results from a team. In the short term this will give results, but in the long run it negatively impacts the results that the organisation wants to achieve.
Reserve deadlines for events that have a very high Cost of Delay when missed, i.e. the cost of missing the deadline is very large.
Instead, set a vision (both for the organisation and the product) that is consistent with the fundamental solution. In addition, derive a series of goals and prioritise them to help the team focus on achieving results. To derive a series of goals, several techniques can be used, like Story mapping and Goal-Impact Mapping.
References
[Bur13] Daniel Burm, 2013, The Secret to 3-Digit Productivity Growth
[Wik] Shifting the Burden, Wikipedia, Systems Thinking
[Lit] Lithespeed, Story Mapping