
Architecture

Stuff The Internet Says On Scalability For July 31st, 2015

Hey, it's HighScalability time:


Where does IBM's Watson or Google Translate fit? (SciencePorn)
  • 40Tb/s: Bandwidth for Windows 10 launch; 4.04B: Facebook Q2 revenue; 37M: Americans who don't use the web;
  • Quotable Quotes:
    • @BoredElonMusk: We would have already discovered Earth 6.0 if NASA got the same budget as the DOD.
    • David Blight: Something I've always believed as a historian and more and more it seems true to me is what really moves history, or brings about change in rather sudden and almost always unpredictable ways, is events.
    • Quentyn Kennemer: Tom Brady replaces Android with iPhone, gets suspended 4 games
    • @BenedictEvans: Apple Maps has ~300m users to iOS GMaps 100m, of 4-500m iPhones. Spotify has 20m paying & 70m free users. And then there’s YouTube
    • Ben: Some scale problems should go unsolved. No. Most scale problems should go unsolved.
    • @mikedicarlo: 3.5 million Redis ops per/sec across our cluster. Wondering how that compares with other production deployments out there. 
    • @Carnage4Life: $1 billion valuation for a caller ID app with $800K in revenues? Unicorn valuations are officially meaningless 

  • Is shooting down a drone that's filming a video of your potentially intimate moments considered a crime? Kentucky man shoots down drone hovering over his backyard

  • Death through premature scaling. Larry Berman determined this was the cause of death of RewardMe, his once scrappy startup. In the next turn of the wheel the dharma is: Be a 1-man growth team; Get customers online as opposed to through a long sales cycle; Don’t hold inventory; Focus on product and support. The new enlightenment: Don’t scale until you’re ready for it. Cash is king, and you need to extend your runway as long as possible until you’ve found product-market fit.

  • What about scaling for the rest of us? That's the topic addressed in Scaling Ruby Apps to 1000 Requests per Minute - A Beginner's Guide. A very good resource. It explains the path of a request through Heroku, dispels some myths (like the idea that scaling up makes a system faster), explains queue time, and covers other good stuff.

  • Not quite as sexy as Zero Point energy, but 3D Xpoint memory sounds pretty cool: Intel and Micron have unveiled what appears to be the holy grail of memory. Called 3D XPoint (pronounced "cross point"), this is an entirely new type of non-volatile memory, with roughly 1,000 times the performance and 1,000 times the endurance of conventional NAND flash, while also being 10 times denser than conventional DRAM.

  • So what is 3D XPoint Memory really? Here's a great analysis at DailyTech by Jason Mick. More than analysis, it's a detective story. Jason puts together clues from history and recently filed patents to deduce that this new wonder RAM is most likely to be PRAM or Phase-change Memory, which stores data "in the form of a phase change to a tiny atomic-level structure." Jason thinks that in many usage scenarios it may be possible to run exclusively off PRAM. Forgetting just got even harder.

  • Damn. I may die after all. The Brain vs Deep Learning Part I: Computational Complexity — Or Why the Singularity Is Nowhere Near: My model shows that it can be estimated that the brain operates at least 10^21 operations per second. With current rates of growth in computational power we could achieve supercomputers with brain-like capabilities by the year 2037, but estimates after the year 2080 seem more realistic.

  • It has always struck me that telcos, who desperately want to get into the cloud business where they are just an also-ran, control some of the most desired potential colo space in the world: cell towers. Turn those towers into location-aware clouds and we can really get some revolutionary edge computing going on. Transiting traffic back to a centralized cloud is such a waste. Could 'Supercomputing at the Edge' provide a scalable platform for new mobile services?

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

How Debugging is Like Hunting Serial Killers

Warning: A quote I use in this article is quite graphic. That's the power of the writing, but if you are at all squirmy you may want to turn back now.

Debugging requires a particular sympathy for the machine. You must be able to run the machine and networks of machines in your mind while simulating what-ifs based on mere wisps of insight.

There's another process that is surprisingly similar to debugging: hunting down serial killers.

I ran across this parallel while reading Mindhunter: Inside the FBI's Elite Serial Crime Unit by John E. Douglas, an FBI profiler whose specialty is the dark debugging of twisted human minds.

Here's how John describes profiling:

You have to be able to re-create the crime scene in your head. You need to know as much as you can about the victim so that you can imagine how she might have reacted. You have to be able to put yourself in her place as the attacker threatens her with a gun or a knife, a rock, his fists, or whatever. You have to be able to feel her fear as he approaches her. You have to be able to feel her pain as he rapes her or beats her or cuts her. You have to try to imagine what she was going through when he tortured her for his sexual gratification. You have to understand what it’s like to scream in terror and agony, realizing that it won’t help, that it won’t get him to stop. You have to know what it was like. And that is a heavy burden to have to carry.

Serial killers are like bugs in the societal machine. They hide. They blend in. They can pass for "normal," which makes them tough to find. They attack weakness, causing untold damage, and they will keep causing damage until they are caught. They are always hunting for opportunity.

After reading the book I'm quite grateful that the only bugs I've had to deal with are of the computer variety. The human bugs are very very scary.

Here are some other quotes from the book you may also appreciate:

Categories: Architecture

What is Insight?

"A moment's insight is sometimes worth a life's experience." -- Oliver Wendell Holmes, Sr.

Some say we’re in the Age of Insight.  Others say insight is the new currency in the Digital Economy.

And still others say that insight is the backbone of innovation.

Either way, we use “insight” an awful lot without talking about what insight actually is.

So, what is insight?

I thought it was time to finally do a deeper dive on what insight actually is.  Here is my elaboration of “insight” on Sources of Insight:

Insight

You can think of it as “insight explained.”

The simple way that I think of insight, or those “ah ha” moments, is by remembering a question Ward Cunningham uses a lot:

“What did you learn that you didn’t expect?” or “What surprised you?”

Ward uses these questions to reveal insights, rather than have somebody tell him a bunch of obvious or uneventful things he already knows.  For example, if you ask somebody what they learned at their presentation training, they’ll tell you that they learned how to present more effectively, speak more confidently, and communicate their ideas better.

No kidding.

But if you instead ask them, “What did you learn that you didn’t expect?” they might actually reveal some insight and say something more like this:

“Even though we say don’t shoot the messenger all the time, you ARE the message.”

Or

“If you win the heart, the mind follows.”

It’s the non-obvious stuff that surprises you (at least at first).  Or sometimes, insight strikes us as something that should have been obvious all along and becomes the new obvious, or the new normal.

Ward used this insight-gathering technique to share software patterns more effectively.  He wanted stories and insights from people, rather than descriptions of the obvious.

I’ve used it myself over the years and it really helps get to deeper truths.  If you are a truth seeker or a lover of insights, you’ll enjoy how you can tease out more insights, just by changing your questions.  For example, if you have kids, don’t ask, “How was your day?”  Ask them, “What was your favorite part of your day?” or “What did you learn that surprised you?”

Wow, I know this is a short post, but I almost left without defining insight.

According to the dictionary, insight is “The capacity to gain an accurate and deep intuitive understanding of a person or thing.”   Or you may see insight explained as inner sight, mental vision, or wisdom.

I like Edward de Bono’s simple description of insight as “Eureka moments.”

Some people count steps in their day.  I count my “ah-ha” moments.  After all, the most important ingredient of effective ideation and innovation is …yep, you guessed it – insight!

For a deeper dive on the power of insight, read my page on Insight explained, on Sources Of Insight.com

Categories: Architecture, Programming

A Well Known But Forgotten Trick: Object Pooling

This is a guest repost by Alex Petrov. Find the original article here.

Most problems are quite straightforward to solve: when something is slow, you can either optimize it or parallelize it. When you hit a throughput barrier, you partition the workload across more workers. But when you face problems that involve Garbage Collection pauses, or simply hit the limits of the virtual machine you're working with, they get much harder to fix.

When you're working on top of a VM, you may face things that are simply out of your control, namely time drifts and latency. Luckily, there are enough battle-tested solutions; they just require a bit of understanding of how the JVM works.

If you can serve 10K requests per second while staying within certain performance constraints (memory and CPU), it doesn't automatically mean that you'll be able to scale linearly up to 20K. If you're allocating too many objects on the heap, or wasting CPU cycles on work that can be avoided, you'll eventually hit the wall.

The simplest (yet underrated) way of saving on memory allocations is object pooling. Even though the concept sounds similar to pooling connections and socket descriptors, there's a slight difference.

When we're talking about socket descriptors, we have a limited, rather small (tens, hundreds, or at most thousands) number of descriptors to go through. These resources are pooled because of the high initialization cost (establishing a connection, performing a handshake over the network, memory-mapping a file, or whatever else). In this article we'll talk about pooling larger amounts of short-lived objects which are not so expensive to initialize, in order to save allocation and deallocation costs and avoid memory fragmentation.
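To make the idea concrete before the full article, here is a minimal sketch of what such a pool can look like on the JVM. This is not code from the original post; the fixed-size byte-buffer pool and its capacity are illustrative assumptions.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal sketch of an object pool for short-lived, cheap-to-reset objects.
// Instead of allocating a new buffer per request (and leaving it to the GC),
// we acquire one from the pool and return it when we are done.
public final class BufferPool {

  private final BlockingQueue<byte[]> pool;
  private final int bufferSize;

  public BufferPool(int capacity, int bufferSize) {
    this.pool = new ArrayBlockingQueue<>(capacity);
    this.bufferSize = bufferSize;
    for (int i = 0; i < capacity; i++) {
      pool.offer(new byte[bufferSize]); // pre-allocate everything up front
    }
  }

  // Take a buffer from the pool, or allocate a fresh one if the pool is empty.
  public byte[] acquire() {
    byte[] buffer = pool.poll();
    return buffer != null ? buffer : new byte[bufferSize];
  }

  // Return a buffer so it can be reused; silently drop it if the pool is full.
  public void release(byte[] buffer) {
    pool.offer(buffer);
  }
}

A request handler would acquire() at the start of processing and release() in a finally block; the win is that steady-state traffic stops producing short-lived garbage for the collector to chase.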

Object Pooling
Categories: Architecture

Algolia's Fury Road to a Worldwide API Part 3

The most frequent questions we answer for developers and devops are about our architecture and how we achieve such high availability. Some of them are very skeptical about high availability with bare metal servers, while others are skeptical about how we distribute data worldwide. However, the question I prefer is “How is it possible for a startup to build an infrastructure like this?” It is true that our current architecture is impressive for a young company:

  • Our high-end dedicated machines are hosted in 13 worldwide regions with 25 data-centers

  • our master-master setup replicates our search engine on at least 3 different machines

  • we process over 6 billion queries per month

  • we receive and handle over 20 billion write operations per month

Just like Rome, our infrastructure wasn't built in a day. This series of posts will explore the 15 instrumental steps we took when building our infrastructure. I will even discuss our outages and bugs so that you can understand how we used them to improve our architecture.

The first blog post of this series focused on our early days in beta and the second post on the first 18 months of the service, including our first outages. In this last post, I will describe how we transformed our "startup" architecture into something new that was able to meet the expectations of big public companies.

Step 11: February 2015 Launch of our Synchronized Worldwide infrastructure
Categories: Architecture

The monolithic frontend in the microservices architecture

Xebia Blog - Mon, 07/27/2015 - 16:39

When you are implementing a microservices architecture you want to keep services small. This should also apply to the frontend. If you don't, you will only reap the benefits of microservices for the backend services. An easy solution is to split your application up into separate frontends. When you have a big monolithic frontend that can’t be split up easily, you have to think about making it smaller. You can decompose the frontend into separate components independently developed by different teams.

Imagine you are working at a company that is switching from a monolithic architecture to a microservices architecture. The application you are working on is a big client-facing web application. You have recently identified a couple of self-contained features and created microservices to provide each functionality. Your former monolith has been carved down to the bare essentials for providing the user interface, which is your public-facing web frontend. This microservice has only one functionality, which is providing the user interface. It can be scaled and deployed separately from the other backend services.

You are happy with the transition: individual services can fit in your head, multiple teams can work on different applications, and you are speaking at conferences about your experiences with the transition. However, you're not quite there yet: the frontend is still a monolith that spans the different backends. This means that on the frontend you still have some of the same problems you had before switching to microservices. The image below shows a simplification of the current architecture.

Single frontend


Backend teams can't deliver business value without the frontend being updated since an API without a user interface doesn't do much. More backend teams means more new features, and therefore more pressure is put on the frontend team(s) to integrate new features. To compensate for this it is possible to make the frontend team bigger or have multiple teams working on the same project. Because the frontend still has to be deployed in one go, teams cannot work independently. Changes have to be integrated in the same project and the whole project needs to be tested since a change can break other features.
Another option is to have the backend teams integrate their new features with the frontend and submit a pull request. This helps in dividing the work, but to do this effectively a lot of knowledge has to be shared across the teams to keep the code consistent and at the same quality level. This would basically mean that the teams are not working independently. With a monolithic frontend you never get the flexibility to scale across teams as promised by microservices.

Besides not being able to scale, there is also the classical overhead of a separate backend and frontend team. Each time there is a breaking change in the API of one of the services, the frontend has to be updated. Especially when a feature is added to a service, the frontend has to be updated to ensure your customers can even use the feature. If your frontend is small enough, it can be maintained by a team which is also responsible for one or more of the services that are coupled to the frontend. Then there is no overhead in cross-team communication. But because the frontend and the backend cannot be worked on independently, you are not really doing microservices. For an application which is small enough to be maintained by a single team, it is probably a good idea not to do microservices.

If you do have multiple teams working on your platform, but you have multiple smaller frontend applications instead of a single monolithic one, there is no problem. Each frontend acts as the interface to one or more services, and each of these services has its own persistence layer. This is known as vertical decomposition. See the image below.

frontend-per-service

When splitting up your application you have to make sure you are making the right split, which is the same as for the backend services. First you have to recognize the bounded contexts into which your domain can be split. A bounded context is a partition of the domain model with a clear boundary. Within a bounded context there is high coupling, and between different bounded contexts there is low coupling. These bounded contexts will be mapped to microservices within your application. This way the communication between services is also limited; in other words, you limit your API surface. This in turn will limit the need to make changes in the API and ensure truly separately operating teams.

Often you are unable to separate your web application into multiple entirely separate applications. A consistent look and feel has to be maintained and the application should behave as a single application. However, the application and the development team are big enough to justify a microservices architecture. Examples of such big client-facing applications can be found in online retail, news, social networks and other online platforms.

Although a total split of your application might not be possible, it might be possible to have multiple teams working on separate parts of the frontend as if they were entirely separate applications. Instead of splitting your web app entirely you are splitting it up in components, which can be maintained separately. This way you are doing a form of vertical decomposition while you still have a single consistent web application. To achieve this you have a couple of options.

Share code

You can share code to make sure that the look and feel of the different frontends is consistent. However, you then risk coupling services via the common code. This could even result in not being able to deploy and release separately. It will also require some coordination regarding the shared code.

Therefore, when you are going to share code it is generally a good idea to think about the API it’s going to provide. Calling your shared library “common”, for example, is generally a bad idea. The name suggests developers should put any code which could be shared by some other service in the library. Common is not a functional term, but a technical term. This means that the library doesn’t focus on providing a specific functionality, which will result in an API without a specific goal that is subject to change often. This is especially bad for microservices, where multiple teams have to migrate to the new version whenever the API breaks.

Although sharing code between microservices has disadvantages, generally all microservices will share code by using open source libraries. Because this code is used by a lot of projects, special care is taken not to break compatibility. When you’re going to share code, it is a good idea to hold your shared code to the same standards. When your library is not specific to your business, you might as well release it publicly; that encourages you to think twice about breaking the API or putting business-specific logic in the library.

Composite frontend

It is possible to compose your frontend out of different components. Each of these components could be maintained by a separate team and deployed independently of the others. Again it is important to split along bounded contexts to limit the API surface between the components. The image below shows an example of such a composite frontend.

composite-design

Admittedly this is an idea we already saw in portlets during the SOA age. However, in a microservices architecture you want the frontend components to be able to deploy fully independently, and you want to make a clean separation which ensures there is no, or only limited, two-way communication needed between the components.

It is possible to integrate during development, at deployment time or at runtime. At each of these integration stages there are different tradeoffs between flexibility and consistency. If you want to have separate deployment pipelines for your components, you want a more flexible approach like runtime integration. If it is more likely that different versions of components might break functionality, you need more consistency, which you get with development-time integration. Integration at deployment time could give you the same flexibility as runtime integration, if you are able to integrate different versions of components in different environments of your build pipeline. However, this would mean creating a different deployment artifact for each environment.

Software architecture should never be a goal, but a means to an end

Combining multiple components via shared libraries into a single frontend is an example of development-time integration. However, it doesn't give you much flexibility with regard to separate deployment. It is still a classical integration technique. But since software architecture should never be a goal, but a means to an end, it can be the best solution for the problem you are trying to solve.

More flexibility can be found in runtime integration. An example of this is using AJAX to load the HTML and other dependencies of a component. The main application then only needs to know where to retrieve the component from, which is a good example of a small API surface. Of course, doing a request after page load means that users might see components loading. It also means that clients that don’t execute JavaScript will not see the content at all: bots and spiders that don’t execute JavaScript, real users who are blocking JavaScript, or users of a screen reader that doesn’t execute JavaScript.

When runtime integration via JavaScript is not an option, it is also possible to integrate components using a middleware layer. This layer fetches the HTML of the different components and composes them into a full page before returning it to the client. This means that clients will always retrieve all of the HTML at once. An example of such middleware is the Edge Side Includes feature of Varnish. To get more flexibility it is also possible to implement such a composing server yourself; an open source example of such a server is Compoxure.
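As an illustration of the composition idea (a toy sketch, not Compoxure or Varnish ESI; the fragment URLs and component names are made up), a minimal composing server could look like this in Java, using the JDK's built-in HttpServer and HttpClient:

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Naive page-composition middleware: fetch HTML fragments from two
// (hypothetical) frontend component services and return one full page.
public class CompositionServer {

  private static final HttpClient CLIENT = HttpClient.newHttpClient();

  public static void main(String[] args) throws Exception {
    HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
    server.createContext("/", exchange -> {
      String header = fetch("http://header-component.internal/fragment");
      String catalog = fetch("http://catalog-component.internal/fragment");
      byte[] page = ("<html><body>" + header + catalog + "</body></html>").getBytes();
      exchange.getResponseHeaders().add("Content-Type", "text/html");
      exchange.sendResponseHeaders(200, page.length);
      exchange.getResponseBody().write(page);
      exchange.close();
    });
    server.start();
  }

  private static String fetch(String url) {
    try {
      HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
      return CLIENT.send(request, HttpResponse.BodyHandlers.ofString()).body();
    } catch (Exception e) {
      // A real composer would serve a cached or fallback fragment instead of failing the page.
      return "<!-- component unavailable -->";
    }
  }
}

A production-grade composer adds caching, timeouts and per-fragment fallbacks, but the API surface between the page and each component stays this small: a URL that returns an HTML fragment.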

Once you have your composite frontend up and running, you can start to think about the next step: optimization. Having separate components from different sources means that many resources have to be retrieved by the client. Since retrieving multiple resources takes longer than retrieving a single resource, you want to combine resources. Again, this can be done at development time or at runtime, depending on the integration techniques you chose when decomposing your frontend.

Conclusion

When transitioning an application to a microservices architecture, you will run into issues if you keep the frontend a monolith. The goal is to achieve good vertical decomposition. What goes for the backend services goes for the frontend as well: split into bounded contexts to limit the API surface between components, and use integration techniques that avoid coupling. When you are working on a single big frontend it might be difficult to make this decomposition, but when you want to deliver faster by using multiple teams working on a microservices architecture, you cannot exclude the frontend from decomposition.

Resources

Sam Newman - From Macro to Micro: How Big Should Your Services Be?
Dan North - Microservices: software that fits in your head

Super fast unit test execution with WallabyJS

Xebia Blog - Mon, 07/27/2015 - 11:24

Our current AngularJS project has been under development for about 2.5 years, so the number of unit tests has increased enormously. We tend to have a coverage percentage near 100%, which has led to 4000+ unit tests, including service specs and view specs. You may know that AngularJS - when abused a bit - is not suited for super large applications, but since we tamed the beast and have an application with more than 16,000 lines of high-performing AngularJS code, we want to stay in control of the total development process without any performance losses.

We are using Karma Runner with Jasmine, which is fine for a small number of specs and for debugging, but running the full test suite takes up to 3 minutes on a 2.8 GHz MacBook Pro.

We are testing our code continuously, so we came up with a solution to split all the unit tests into several shards. This parallel execution of the unit tests decreased the execution time a lot. We will write about the details of this Karma parallelization on this blog later. Sharding helps us a lot when we want to run the full unit test suite, i.e. in the pre-push hook, but during development you want quick feedback cycles about coverage and failing specs (red-green testing).

With such a long unit test cycle, even when running in parallel, many of our developers are fdescribe-ing the specs on which they are working, so that the feedback is instant. However, this is quite labor intensive and sometimes an fdescribe is pushed accidentally.

And then... we discovered WallabyJS. It is just an ordinary test runner like Karma; even the configuration file is almost a copy of our karma.conf.js.
The difference is in the details. Out of the box it runs the unit test suite in 50 seconds, thanks to its extensive use of Web Workers. Then the fun starts.

Screenshot of Wallaby In action (IntelliJ). Shamelessly grabbed from wallaby.com

I use Wallaby as an IntelliJ IDEA plugin, which adds colored annotations to the left margin of my code. Green squares indicate covered lines/statements, orange indicates partly covered code, and grey means "please write a test for this functionality or I'll introduce hard-to-find bugs". Colorblind people see just kale-green squares on every line, since the default colors are not chosen very well, but the colors are adjustable via the Preferences menu.

Clicking on a square pops up a box with a list of the tests that induce the coverage. When a test fails, it also tells me why.


A dialog box showing contextual information (wallaby.com)

Since the implementation and the tests are now instrumented, finding bugs and increasing your coverage goes a lot faster. Besides that, you don't need to hassle with fdescribes and fits to run individual tests during development. Thanks to the instrumentation, Wallaby runs your tests continuously and re-runs only the relevant tests for the parts you are working on. In real time.

5 Reasons why you should test your code

Xebia Blog - Mon, 07/27/2015 - 09:37

It is just like in mathematics class: when I had to prove Thales’ theorem I wrote “Can’t you see that B has a right angle?! Q.E.D.”, but the teacher still gave me an F.

You want to make things work, right? So you start programming until your feature is implemented. When it is implemented, it works, so you do not need any tests. You want to proceed and make more cool features.

Suddenly feature 1 breaks, because you did something weird in some service that is reused all over your application. Ok, let’s fix it, keep refreshing the page until everything is stable again. This is the point in time where you regret that you (or even better, your teammate) did not write tests.

In this article I give you 5 reasons why you should write them.

1. Regression testing

The scenario described in the introduction is a typical example of a regression bug. Something works, but it breaks when you are looking the other way.
If you had tests with 100% code coverage, a red error would have appeared in the console or – even better – a siren would have gone off in the room where you are working.

Although there are some misconceptions about coverage, it at least tells others that there is a fully functional test suite. And it may give you a high grade when an audit company like SIG inspects your software.


100% Coverage feels so good

100% code coverage does not mean that you have tested everything. It means that the test suite is implemented in such a way that it calls every line of the tested code, but it says nothing about the assertions made during the test run. If you want to measure whether your specs do a fair amount of assertions, you have to do mutation testing.

This works as follows.

An automated task runs the test suite once. Then some parts of your code are modified (mainly conditions flipped, for-loops made shorter or longer, etc.), and the test suite is run a second time. If tests fail after the modifications have been made, there is an assertion covering that case, which is good.
However, 100% coverage does feel really good if you are an OCD-person.
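To make the idea concrete, here is a tiny made-up example (in Java/JUnit rather than the JavaScript stack this post uses; the discount method is hypothetical). The first test gives 100% line coverage but kills no mutants; the second test actually asserts the result and catches a flipped condition.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class DiscountTest {

  // Hypothetical production code under test.
  static int discount(int orderTotal) {
    // A mutation tool might flip ">" into ">=" or "<" here.
    return orderTotal > 100 ? 10 : 0;
  }

  @Test
  public void coversTheLineButKillsNoMutants() {
    discount(150); // 100% line coverage, zero assertions
  }

  @Test
  public void killsTheMutants() {
    assertEquals(10, discount(150)); // fails when ">" becomes "<"
    assertEquals(0, discount(100));  // fails when ">" becomes ">="
  }
}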

The better your test coverage and assertion density are, the higher the probability of catching regression bugs. Especially when an application grows, you may encounter a lot of regression bugs during development, and catching them there is a good thing.

Suppose that a form shows a funny easter egg when the filled-in birthdate is 06-06-2006, and the line of code responsible for this behaviour is hidden in a complex method. A fellow developer may make changes to this line, not because he is not funny, but because he just does not know. A failing test notifies him immediately that he is removing your easter egg, while without a test you would only find out about the removal 2 years later.

Still, every application contains bugs which you are unaware of. When an end user tells you about a broken page, you may find out that the link he clicked on was generated with some missing information, i.e. users//edit instead of users/24/edit.

When you find a bug, first write a (failing) test that reproduces the bug, then fix the bug. This will never happen again. You win.
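As a sketch of that workflow for the broken users//edit link above (a hypothetical path-building helper, and Java/JUnit instead of the JavaScript stack this post is about):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class UserEditPathTest {

  // Hypothetical helper that originally produced "users//edit" when the id
  // was missing; the fix validates the id before building the link.
  static String userEditPath(Integer userId) {
    if (userId == null) {
      throw new IllegalArgumentException("userId is required");
    }
    return "users/" + userId + "/edit";
  }

  @Test
  public void buildsTheCorrectEditLink() {
    // Written as a failing test first, then the fix above made it pass.
    assertEquals("users/24/edit", userEditPath(24));
  }

  @Test(expected = IllegalArgumentException.class)
  public void aMissingIdNoLongerProducesABrokenLink() {
    userEditPath(null);
  }
}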

2. Improve the implementation via new insights

“Premature optimization is the root of all evil” is something you hear a lot. This does not mean that you have to implement your solution pragmatically, without code reuse.

Good software craftsmanship is not only about solving a problem effectively; it is also about maintainability, durability, performance and architecture. Tests can help you with this. They force you to slow down and think.

If you start writing your tests and you have trouble with it, this may be an indication that your implementation can be improved. Furthermore, your tests make you think about input and output, corner cases and dependencies. So do you think that you understand all aspects of that super method you wrote that can handle everything? Write tests for it and better code is guaranteed.

Test Driven Development even helps you optimize your code before you write it, but that is another discussion.

3. It saves time, really

The number one excuse not to write tests is that you do not have time for it, or that your client does not want to pay for it. Writing tests can indeed cost you some time, even if you are using boilerplate-code-elimination frameworks like Mox.

However, if I ask you whether you would make other design choices if you had the chance (and time) to start over, you would probably say yes. A total codebase refactoring is a ‘no go’ because you cannot oversee which parts of your application will fail. If you still accept the refactoring challenge, it will at least give you a lot of headaches and cost you a lot of time that could have been spent writing the tests. But you had no time for writing tests, right? So your crappy implementation stays.

Dilbert bugfix

A bug can always be introduced, even in well-refactored code. How many times have you said to yourself, after a day of hard work, that you spent 90% of your time finding and fixing a nasty bug? You want to write cool applications, not fix bugs.
When you have tested your code very well, 90% of the bugs introduced are caught by your tests. Phew, that saved the day. You can focus on writing cool stuff. And tests.

In the beginning, writing tests can take up more than half of your time, but when you get the hang of it, writing tests becomes second nature. It is important that you are writing code for the long term. As an application grows, it really pays off to have tests. It saves you time, and developing becomes more fun as you are not being blocked by hard-to-find bugs.

4. Self-updating documentation

Writing clean, self-documenting code is one of the main things we adhere to. Not only for yourself, especially when you have not seen the code for a while, but also for your fellow developers. We only write comments if a piece of code is particularly hard to understand. Whatever style you prefer, it has to be clear in some way what the code does.

  // Beware! Dragons beyond this point!

Some people like to read the comments, some read the implementation itself, and some read the tests. What I like about the tests, for example when you are using a framework like Jasmine, is that they give a structured overview of all of a method's features. When you have a separate documentation file, it is as structured as you want, but the main issue with documentation is that it is never up to date. Developers do not like to write documentation, they forget to update it when a method signature changes, and eventually they stop writing docs.

Developers also do not like to write tests, but at least tests serve more purposes than docs. If you use the test suite as documentation, your documentation is always up to date with no extra effort!

5. It is fun

Nowadays there is no separation between testers and developers: the developers are the testers. People that write good tests are also the best programmers. Actually, your test is also a program, so if you like programming, you should like writing tests.
The reason why writing tests may feel non-productive is that it gives you the idea that you are not producing something new.


Is the build red? Fix it immediately!

However, with a modern software development approach, your tests should be an integrated part of your application. The tests can be executed automatically using build tools like Grunt and Gulp. They may run in a continuous integration pipeline, via Jenkins for example. If you are really cool, a new deploy to production is done automatically when the tests pass and everything else is OK. With tests you have more confidence that your code is production ready.

A lot of measurements can be generated as well, like coverage and mutation-testing scores, giving the OCD-oriented developers a big smile when everything is green and the score is 100%.

If the test suite fails, fixing it is the first priority, to keep the codebase in good shape. It takes some discipline, but when you get used to it, you have more fun developing new features and making cool stuff.

Stuff The Internet Says On Scalability For July 24th, 2015

Hey, it's HighScalability time:


Walt Disney doesn't mouse around. Here's how he makes a goofy business plan.

 

  • 81%: AWS YOY growth; 400: hours of video uploaded to YouTube EVERY MINUTE; 9,000: # of mineable asteroids near earth; 1,400: light years to Earth's high latency backup node; 10K: in the future hard disks will be this many times faster 
  • Quotable Quotes:
    • @BenedictEvans: Chinese govt: At the end of 2014 China had 112.7 billion static webpages and 77.2 billion dynamic webpages. They used 9,310,312,446,467 KB
    • Michael Franklin (AMPLab): This is always a pendulum where you swing from highly distributed to more centralized and back in. My guess is there’s going to be another swing of the pendulum, where we really need to start thinking about how do you distribute processing throughout a wide area network.
    • Sherlock Holmes: Singularity is almost invariably a clue. 
    • @jpetazzo: OH: "In any team you need a tank, a healer, a damage dealer, someone with crowd control abilities, and another who knows iptables"
    • Jeff Sussna: Ultimately, the impact of containers will reach even beyond IT, and play a part in transforming the entire nature of the enterprise. 
    • @CarlosAlimurung: Impressive.  The number of #youtube channels making six figures grew by 50%. 
    • harlowja: Overall, no the community isn't perfect, yes there are issues, yes it burns some people out, but software isn't rainbows and butterflies after all.
    • werner: BTW nobody wants eventual consistency, it is a fact of live among many trade-offs. I would rather not expose it but it comes with other advantages ...
    • @VideoInkNews: We’re focused on our top three priorities – mobile, mobile and mobile, said @YouTube CEO @SusanWojcicki #VidCon2015 #keynote
    • Ivan Pepelnjak: Use a combination of MPLS/VPN and Internet VPN, or Internet VPN with 3G backup. Use multiple access methods, so the cable-seeking backhoe doesn’t bring down all uplinks.
    • @randybias: Repeat after me: containers do little to enable application portability.  If you want portability use a PaaS.  PaaS != Containers.
    • To see even more quotes please click through to see the rest of the post.

  • Can't we all just get along? And by "we" I mean humans and robots. Maybe. Inside Amazon shows by example how one new utopian community is bridging the categorical divide. Forget all your skepticism and technopanic, humans and robots can really work together in a highly efficient system.

  • A Brief History of Scaling LinkedIn. Not so brief actually. Lots of really good details. They of course started off with a monolith and ended up with a service oriented architecture. One of the most interesting ideas is the super block: groupings of backend services with a single access API. This allows us to have a specific team optimize the block, while keeping our call graph in check for each client.

  • If you want to move at the speed of software, doesn't your datacenter infrastructure have to move at the same speed? Network Break 45 from Packet Pushers talks about an open source virtual software router, CloudRouter, running the latest release of OpenDaylight's SDN controller and ONOS. The idea is to make a dead-simple router you can just instantiate as needed. Greg Ferro makes the point that if you don't have to care whether you are starting 100 or 1000 virtual routers, it changes how you go about building infrastructure. Running a Cisco router, an F5 load balancer, and a virtual firewall, how much will it cost to spin up virtual datacenters for hundreds of developers? How long will it take? How does it even work?

Categories: Architecture

Android: Custom ViewMatchers in Espresso

Xebia Blog - Fri, 07/24/2015 - 16:03

Somehow it seems that testing is still treated like an afterthought in mobile development. The introduction of the Espresso test framework in the Android Testing Support Library improved the situation a little bit, but the documentation is limited and it can be hard to debug problems. And you will run into problems, because testing is hard to learn when there are so few examples to learn from.

Anyway, I recently created my first custom ViewMatcher for Espresso and I figured I would like to share it here. I was building a simple form with some EditText views as input fields, and these fields should display an error message when the user entered an invalid input.


Android TextView with error message

In order to test this, my Espresso test enters an invalid value in one of the fields, presses "submit" and checks that the field is actually displaying an error message.

@Test
public void check() {
  Espresso
      .onView(ViewMatchers.withId((R.id.email)))
      .perform(ViewActions.typeText("foo"));
  Espresso
      .onView(ViewMatchers.withId(R.id.submit))
      .perform(ViewActions.click());
  Espresso
      .onView(ViewMatchers.withId((R.id.email)))
      .check(ViewAssertions.matches(
          ErrorTextMatchers.withErrorText(Matchers.containsString("email address is invalid"))));
}

The real magic happens inside the ErrorTextMatchers helper class:

public final class ErrorTextMatchers {

  /**
   * Returns a matcher that matches {@link TextView}s based on text property value.
   *
   * @param stringMatcher {@link Matcher} of {@link String} with text to match
   */
  @NonNull
  public static Matcher<View> withErrorText(final Matcher<String> stringMatcher) {

    return new BoundedMatcher<View, TextView>(TextView.class) {

      @Override
      public void describeTo(final Description description) {
        description.appendText("with error text: ");
        stringMatcher.describeTo(description);
      }

      @Override
      public boolean matchesSafely(final TextView textView) {
        // getError() returns null when no error is set; guard against that so
        // the matcher simply fails to match instead of throwing an NPE.
        final CharSequence error = textView.getError();
        return error != null && stringMatcher.matches(error.toString());
      }
    };
  }
} 

The main details of the implementation are as follows. We make sure that the matcher will only match instances of TextView (and its subclasses) by returning a BoundedMatcher from withErrorText(). This makes it very easy to implement the matching logic itself in BoundedMatcher.matchesSafely(): simply take the result of the TextView's getError() method and feed it to the wrapped string Matcher. Finally, we have a simple implementation of the describeTo() method, which is only used to generate debug output to the console.

In conclusion, it turns out to be pretty straightforward to create your own custom ViewMatcher. Who knew? Perhaps there is still hope for testing mobile apps...

You can find an example project with the ErrorTextMatchers on GitHub: github.com/smuldr/espresso-errortext-matcher.

The Best Productivity Book for Free

image"At our core, Microsoft is the productivity and platform company for the mobile-first and cloud-first world." -- Satya Nadella

We take productivity seriously at Microsoft. Ask any Softie. I never have a lack of things to do, or too much time in my day, and I can't ever make "too much" impact.

To be super productive, I've had to learn hard-core prioritization techniques, extreme energy management, stakeholder management, time management, and a wealth of productivity hacks to produce better, faster results.

We don’t learn these skills in school.  But if we’re lucky, we learn from the right mentors and people all around us how to bring out our best when we need it the most.

Download the 30 Days of Getting Results Free eBook

You can save years of pain for free:

30 Days of Getting Results Free eBook

There’s always a gap between books you read and what you do in the real world. I wanted to bridge this gap. I wanted 30 Days of Getting Results to be raw and real to help you learn what it really takes to master productivity and time management so you can survive and thrive with the best in the world.

It’s not pretty.  It’s super effective.

30 Days of Getting Results is a 30 Day Personal Productivity Improvement Sprint

I wrote 30 Days of Getting Results using a 30 Day Sprint. Each day for that 30 Day Sprint, I wrote down the best information I learned from the school of hard knocks about productivity, time management, work-life balance, and more.

For each day, I share a lesson, a story, and an exercise.

I wanted to make it easy to practice productivity habits.

Agile Results is a Fire Starter for Personal Productivity

The thing that’s really different about Agile Results as a time management system is that it’s focused on meaningful results.  Time is treated as a first-class citizen so that you hit your meaningful windows of opportunity, and get fresh starts each day, each week, each month, each year.  As a metaphor, you get to be the author of your life and write your story forward.

For years, I’ve received emails from people around the world about how 30 Days of Getting Results was a breath of fresh air for them.

It helped them find their focus, get more productive, enjoy what they do, renew their energy, and spend more time in their strengths and their passions, while pursuing their purpose.

It’s helped doctors, teachers, students, lawyers, developers, grandmothers, and more.

Learn a New Language, Change Careers, or Start a Business

You can use Agile Results to learn better, faster, and deeper because it helps you think better, feel better, and take better action.

You can use Agile Results to help you learn a new language, build new skills, learn an instrument, or whatever your heart desires.

I used the system to accidentally write a book in a month.

I didn’t set out to write a book. I set out to share the world’s best insight and action for productivity and time management. I wrote for 20 minutes each day, during that month, to share the best lessons and the best insights I could with one purpose:

Help everyone thrive in work and life.

Over the coming months, I had more and more people ask for a book version. As much as they liked the easy to flip through Web pages, they wanted to consume it as an eBook. So I turned 30 Days of Getting Results into a free eBook and made that available.

Here's the funny part:

I forgot I had done that.

The Accidental Free Productivity Book that Might Just Change Your Life

One day, I was having a conversation with one of my readers, and they said that I should sell 30 Days of Getting Results as a $30 workbook. They liked it much more than the book, Getting Results the Agile Way. They found it to be more actionable and easier to get started with, and they liked that I used the system as a way to teach the system.

They said I should make the effort to put it together as a PDF and sell it as a workbook. They said people would want to pay for it because it’s high-value, real-world training, and that it was better than any live training they had ever taken (and they had taken a lot).

I got excited by the idea, and it made perfect sense. After all, wouldn’t people want to learn something that could impact every single day of their lives, and help them achieve more in work and life and help them adapt and compete more effectively in our ever-changing world?

I went to go put it together, and I had already done it.

Set Your Productivity on Fire

When you’re super productive, it’s easy to forget some of the things you create because they so naturally flow from spending the right time, on the right things, with the right energy. You’ll naturally leave a trail of results from experimenting and learning.

Whether you want to be super productive, or do less, but accomplish more, check out the ultimate free productivity guide:

30 Days of Getting Results Free eBook

Share it with friends, family, colleagues, and whoever else you want to have an unfair advantage in our hyper-competitive world.

Lifting others up, lifts you up in the process.

If you have a personal story of how 30 Days of Getting Results has helped you in some way, feel free to share it with me.  It’s always fun to hear how people are using Agile Results to take on new challenges, re-invent their productivity, and operate at a higher level.

Or simply get started again … like a fresh start, for the first time, full of new zest to be your best.

Categories: Architecture, Programming

Architecting Backend for a Social Product

This post aims to take you through the key architectural decisions which will make a social application a true next-generation social product. The proposed changes address the following attributes: a) availability, b) reliability, c) scalability, d) performance and flexibility towards extensions (not modifications).

Goals

a) Ensuring that the user’s content is easily discoverable and always available.

b) Ensuring that the content pushed is relevant, not only semantically but also from the user’s device perspective.

c) Ensuring that real-time updates are generated, pushed and analyzed.

d) An eye towards saving the user’s resources as much as possible.

e) Irrespective of server load, the user’s experience should remain intact.

f) Ensuring overall application security.

In summary, we are taking on an amazing challenge: we must deal with a mega sea of ever-expanding user-generated content, an increasing number of users, and a constant stream of new items, all while ensuring excellent performance. Considering this challenge, it is imperative that we study certain key architectural elements which will influence the overall system design. Here are a few key decisions and analyses.

Data Storage
Categories: Architecture

Step away from the code!

Coding the Architecture - Simon Brown - Tue, 07/21/2015 - 19:57

While at the Devoxx UK conference recently, I was interviewed by Lucy Carey from Voxxed about software architecture, diagrams, monoliths, microservices, design thinking and modularity. You can watch this short interview (~5 minutes) at Step Away from the Code! ... enjoy!

Categories: Architecture

Sponsored Post: Redis Labs, Jut.io, VoltDB, Datadog, Tumblr, Power Admin, MongoDB, SignalFx, InMemory.Net, Couchbase, VividCortex, MemSQL, Scalyr, AiScaler, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Make Tumblr fast, reliable and available for hundreds of millions of visitors and tens of millions of users.  As a Site Reliability Engineer you are a software developer with a love of highly performant, fault-tolerant, massively distributed systems. Apply here now! 

  • At Scalyr, we're analyzing multi-gigabyte server logs in a fraction of a second. That requires serious innovation in every part of the technology stack, from frontend to backend. Help us push the envelope on low-latency browser applications, high-speed data processing, and reliable distributed systems. Help extract meaningful data from live servers and present it to users in meaningful ways. At Scalyr, you’ll learn new things, and invent a few of your own. Learn more and apply.

  • UI Engineer. AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data. AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • Surge 2015. Want to mingle with some of the leading practitioners in the scalability, performance, and web operations space? Looking for a conference that isn't just about pitching you highly polished success stories, but that actually puts an emphasis on learning from real world experiences, including failures? Surge is the conference for you.

  • Your event could be here. How cool is that?
Cool Products and Services
  • New Research Shows MongoDB Outperforms NoSQL Vendors. Independent evaluators United Software Associates recently released a report in which they demonstrated that MongoDB overwhelmingly outperforms key value stores both in terms of throughput and latency across a number of configurations. Download the report to get access to performance test results comparing Couchbase, Cassandra, and MongoDB with WiredTiger. Get the report.

  • In a recent benchmark of NoSQL databases on the AWS cloud, Redis Labs Enterprise Cluster's performance obliterated Couchbase, Cassandra and Aerospike in this real-life, write-intensive use case. A full backstage pass and all the juicy details are available in this downloadable report.

  • Real-time correlation across your logs, metrics and events.  Jut.io just released its operations data hub into beta and we are already streaming in billions of log, metric and event data points each day. Using our streaming analytics platform, you can get real-time monitoring of your application performance, deep troubleshooting, and even product analytics. We allow you to easily aggregate logs and metrics by micro-service, calculate percentiles and moving window averages, forecast anomalies, and create interactive views for your whole organization. Try it for free, at any scale.

  • VoltDB is a full-featured fast data platform that has all of the data processing capabilities of Apache Storm and Spark Streaming, but adds a tightly coupled, blazing fast ACID-relational database, scalable ingestion with backpressure; all with the flexibility and interactivity of SQL queries. Learn more.

  • In a recent benchmark conducted on Google Compute Engine, Couchbase Server 3.0 outperformed Cassandra by 6x in resource efficiency and price/performance. The benchmark sustained over 1 million writes per second using only one-sixth as many nodes and one-third as many cores as Cassandra, resulting in 83% lower cost than Cassandra. Download Now.

  • Datadog is a monitoring service for scaling cloud infrastructures that bridges together data from servers, databases, apps and other tools. Datadog provides Dev and Ops teams with insights from their cloud environments that keep applications running smoothly. Datadog is available for a 14 day free trial at datadoghq.com.

  • Here's a little quiz for you: What do these companies all have in common? Symantec, RiteAid, CarMax, NASA, Comcast, Chevron, HSBC, Sauder Woodworking, Syracuse University, USDA, and many, many more? Maybe you guessed it? Yep! They are all customers who use and trust our software, PA Server Monitor, as their monitoring solution. Try it out for yourself and see why we’re trusted by so many. Click here for your free, 30-Day instant trial download!

  • Turn chaotic logs and metrics into actionable data. Scalyr replaces all your tools for monitoring and analyzing logs and system metrics. Imagine being able to pinpoint and resolve operations issues without juggling multiple tools and tabs. Get visibility into your production systems: log aggregation, server metrics, monitoring, intelligent alerting, dashboards, and more. Trusted by companies like Codecademy and InsideSales. Learn more and get started with an easy 2-minute setup. Or see how Scalyr is different if you're looking for a Splunk alternative or Loggly alternative.

  • SignalFx: just launched an advanced monitoring platform for modern applications that's already processing 10s of billions of data points per day. SignalFx lets you create custom analytics pipelines on metrics data collected from thousands or more sources to create meaningful aggregations--such as percentiles, moving averages and growth rates--within seconds of receiving data. Start a free 30-day trial!

  • InMemory.Net provides a Dot Net native in memory database for analysing large amounts of data. It runs natively on .Net, and provides a native .Net, COM & ODBC apis for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex goes beyond monitoring and measures the system's work on your MySQL and PostgreSQL servers, providing unparalleled insight and query-level analysis. This unique approach ultimately enables your team to work more effectively, ship more often, and delight more customers.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required. http://aiscaler.com

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

Parallax image scrolling using Storyboards

Xebia Blog - Tue, 07/21/2015 - 07:37

Parallax image scrolling is a popular concept that is being adopted by many apps these days. It's small attention to detail like this that can really make an app great. Parallax scrolling gives you the illusion of depth by letting objects in the background scroll slower than objects in the foreground. It has been used in the past by many 2D games to make them feel more 3D. True parallax scrolling can become quite complex, but it's not very hard to create a simple parallax image scrolling effect on iOS yourself. This post will show you how to add it to a table view using Storyboards.

NOTE: You can find all source code used by this post on https://github.com/lammertw/ParallaxImageScrolling.

The idea here is to create a UITableView with an image header that has a parallax scrolling effect. When we scroll down the table view (i.e. swipe up), the image should scroll at half the speed of the table. And when we scroll up (i.e. swipe down), the image should become bigger so that it feels like it's stretching while we scroll. The latter is not really a parallax scrolling effect, but it is commonly used in combination with it. The following animation shows these effects:

[Animation: the table view with the parallax and stretch effects]

But what if we want a "Pull down to Refresh" effect and need to add a UIRefreshControl? Well, then we just drop the stretch effect when scrolling up:  

[Animation: the same table view with Pull to Refresh and without the stretch effect]

And as you might expect, the variation with Pull to Refresh is actually a lot easier to accomplish than the one without.

Parallax Scrolling Libraries

While you can find several Objective-C or Swift libraries that provide parallax scrolling similar to what's shown here, it's not that hard to create these effects yourself. Doing it yourself has the benefit of customizing it exactly the way you want it, and of course it will add to your experience. Plus it might be less code than integrating with such a library. However, if a library provides exactly what you need, then using it might work better for you.

The basics

NOTE: You can find all the code of this section at the no-parallax-scrolling branch.

Let's start with a simple example that doesn't have any parallax scrolling effects yet.

[Screenshot: the basic table view with an image cell and a text cell]

Here we have a standard UITableViewController with a cell containing our image at the top and another cell below it with some text. Here is the code:

class ImageTableViewController: UITableViewController {

  override func numberOfSectionsInTableView(tableView: UITableView) -> Int {
    return 2
  }

  override func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return 1
  }

  override func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
    var cellIdentifier = ""
    switch indexPath.section {
    case 0:
      cellIdentifier = "ImageCell"
    case 1:
      cellIdentifier = "TextCell"
    default: ()
    }

    let cell = tableView.dequeueReusableCellWithIdentifier(cellIdentifier, forIndexPath: indexPath) as! UITableViewCell

    return cell
  }

  override func tableView(tableView: UITableView, heightForRowAtIndexPath indexPath: NSIndexPath) -> CGFloat {
    switch indexPath.section {
    case 0:
      return UITableViewAutomaticDimension
    default:
      return 50
    }
  }

  override func tableView(tableView: UITableView, estimatedHeightForRowAtIndexPath indexPath: NSIndexPath) -> CGFloat {
    switch indexPath.section {
    case 0:
      return 200
    default:
      return 50
    }
  }

}

The only thing of note here is that we're using UITableViewAutomaticDimension for automatic cell heights determined by constraints in the cell: we have a UIImageView with constraints to use the full width and height of the cell and a fixed aspect ratio of 2:1. Because of this aspect ratio, the height of the image (and therefore of the cell) is always half of the width. In landscape it looks like this:

[Screenshot: the same table view in landscape, with the image height at half the width]

We'll see later why this matters.

Parallax scrolling with Pull to Refresh

NOTE: You can find all the code of this section at the pull-to-refresh branch.

As mentioned before, creating the parallax scrolling effect is easiest when it doesn't need to stretch. Commonly you'll only want that if you have a Pull to Refresh. Adding the UIRefreshControl is done in the standard way so I won't go into that.

Container view
The rest is also quite simple. With the basics from the previous section as our starting point, the first thing we need to do is add a UIView around our UIImageView that acts as a container. Since our image will change its position while we scroll, we can no longer use it to calculate the height of the cell. The container view gets exactly the constraints that our image view had: use the full width and height of the cell and keep an aspect ratio of 2:1. Also enable Clip Subviews on the container view so that the image view is clipped by it.

Align Center Y constraint
The image view, which is now inside the container view, will keep its aspect ratio constraint and use the full width of the container view. For the y position we'll add an Align Center Y constraint to vertically center the image within the container. All that looks something like this: [Screenshot: the image view constraints in Interface Builder]
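
If you find constraints easier to read as code than as screenshots, here is a rough sketch of what the Storyboard setup described above corresponds to (illustrative only; the post does all of this in Interface Builder, and containerView/imageView stand in for the views created there):

// Sketch: the code equivalent of the Storyboard constraints described above.
containerView.clipsToBounds = true // "Clip Subviews"

// The image view uses the full width of the container...
containerView.addConstraint(NSLayoutConstraint(item: imageView, attribute: .Leading,
  relatedBy: .Equal, toItem: containerView, attribute: .Leading, multiplier: 1, constant: 0))
containerView.addConstraint(NSLayoutConstraint(item: imageView, attribute: .Trailing,
  relatedBy: .Equal, toItem: containerView, attribute: .Trailing, multiplier: 1, constant: 0))

// ...keeps its own 2:1 aspect ratio...
imageView.addConstraint(NSLayoutConstraint(item: imageView, attribute: .Width,
  relatedBy: .Equal, toItem: imageView, attribute: .Height, multiplier: 2, constant: 0))

// ...and is vertically centered in the container. The constant of this
// constraint is the one we will change while scrolling.
let centerYConstraint = NSLayoutConstraint(item: imageView, attribute: .CenterY,
  relatedBy: .Equal, toItem: containerView, attribute: .CenterY, multiplier: 1, constant: 0)
containerView.addConstraint(centerYConstraint)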

Parallax scrolling using constraint
When we run this code now, it will still behave exactly as before. What we need to do is make the image view scroll at half the speed of the table view when scrolling down. We can do that by changing the constant of the Align Center Y constraint that we just created. First we need to connect it to an outlet of a custom UITableViewCell subclass:

class ImageCell: UITableViewCell {
  @IBOutlet weak var imageCenterYConstraint: NSLayoutConstraint!
}

When the table view scrolls down, we need to lower the Y position of the image by half the amount that we scrolled. To do that we can use scrollViewDidScroll and the content offset of the table view. Since our UITableViewController already conforms to UIScrollViewDelegate, overriding that method is enough:

override func scrollViewDidScroll(scrollView: UIScrollView) {
  imageCenterYConstraint?.constant = min(0, -scrollView.contentOffset.y / 2.0) // only when scrolling down so we never let it be higher than 0
}

We're left with one small problem. The imageCenterYConstraint is connected to the ImageCell that we created, while the scrollViewDidScroll method lives in the view controller. So what's left is to create an imageCenterYConstraint property in the view controller and assign it when the cell is created:

weak var imageCenterYConstraint: NSLayoutConstraint?

override func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
  var cellIdentifier = ""
  switch indexPath.section {
  case 0:
    cellIdentifier = "ImageCell"
  case 1:
    cellIdentifier = "TextCell"
  default: ()
  }

  // the new part of code:
  let cell = tableView.dequeueReusableCellWithIdentifier(cellIdentifier, forIndexPath: indexPath) as! UITableViewCell
  if let imageCell = cell as? ImageCell {
    imageCenterYConstraint = imageCell.imageCenterYConstraint
  }

  return cell
}

That's all we need to do for our first variation of the parallax image scrolling. Let's go on with something a little more complicated.

Parallax scrolling without Pull to Refresh

NOTE: You can find all the code of this section at the no-pull-to-refresh branch.

When starting from the basics, we need to add a container view again like we did in the Container view paragraph from the previous section. The image view needs some different constraints though. Add the following constraints to the image view:

  • As before, keep the 2:1 aspect ratio
  • Add a Leading Space and Trailing Space of 0 to the Superview (our container view) and set the priority to 900. We will break these constraints when stretching the image because the image will become wider than the container view. However we still need them to determine the preferred width.
  • Align Center X to the Superview. We need this one to keep the image in the center when we break the Leading and Trailing Space constraints.
  • Add a Bottom Space and Top Space of 0 to the Superview. Create two outlets to the cell class ImageCell like we did in the previous section for the center Y constraint. We'll call these bottomSpaceConstraint and topSpaceConstraint. Also assign these from the cell to the view controller like we did before so we can access them in our scrollViewDidScroll method. A minimal sketch of the resulting cell class follows this list.
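
For reference, here is a minimal sketch of what the cell class could look like for this variation (the outlet names match the ones used above; connect them in the Storyboard just like we did for the center Y constraint):

class ImageCell: UITableViewCell {
  // Outlets for the Top Space and Bottom Space constraints described above.
  @IBOutlet weak var topSpaceConstraint: NSLayoutConstraint!
  @IBOutlet weak var bottomSpaceConstraint: NSLayoutConstraint!
  // Later in this section we'll also connect the container view itself to an
  // outlet so we can toggle its clipsToBounds while scrolling.
}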

The result: [Screenshot: the image view constraints in Interface Builder]. We now have all the constraints we need to do the effects for scrolling up and down.

Scrolling down
When we scroll down (swipe up) we want the same effect as in our previous section. Instead of having an 'Align Center Y' constraint that we can change, we now need to do the following:

  • Set the bottom space to minus half of the content offset so it will fall below the container view.
  • Set the top space to plus half of the content offset so it will be below the top of the container view.

With these two changes we effectively slow the image view down to half the scrolling speed of the table view.

bottomSpaceConstraint?.constant = -scrollView.contentOffset.y / 2
topSpaceConstraint?.constant = scrollView.contentOffset.y / 2

Scrolling up
When the table view scrolls up (swipe down), the container view moves down. What we want here is for the image view to stick to the top of the screen instead of moving down as well. All we need for that is to set the constant of the topSpaceConstraint to the content offset. That means the height of the image will increase. Because of our 2:1 aspect ratio, the width of the image will grow as well. This is why we had to lower the priority of the Leading and Trailing constraints: the image no longer fits inside the container and breaks those constraints.

topSpaceConstraint?.constant = scrollView.contentOffset.y

We're left with one problem now. When the image sticks to the top while the container view goes down, it means that the image falls outside the container view. And since we had to enable Clip Subviews for scrolling down, we now get something like this: [Screenshot: the top of the image clipped off by the container view]

We can't see the top of the image since it's outside the container view. So what we need is to clip when scrolling down and not clip when scrolling up. We can only do that in code so we need to connect the container view to an outlet, just as we've done with the constraints. Then the final code in scrollViewDidScroll becomes:

override func scrollViewDidScroll(scrollView: UIScrollView) {
  if scrollView.contentOffset.y >= 0 { 
    // scrolling down 
    containerView.clipsToBounds = true 
    bottomSpaceConstraint?.constant = -scrollView.contentOffset.y / 2 
    topSpaceConstraint?.constant = scrollView.contentOffset.y / 2 
  } else { 
    // scrolling up 
    topSpaceConstraint?.constant = scrollView.contentOffset.y 
    containerView.clipsToBounds = false 
  } 
}
Conclusion

So there you have it. Two variations of parallax scrolling without too much effort. As mentioned before, use a dedicated library if you have to, but don't be afraid that it's too complicated to do it yourself.

Additional notes

If you've seen the source code on GitHub you might have noticed a few additional things. I didn't want to mention them in the main body of this post to avoid distractions, but they're important enough to mention anyway.

  • The aspect ratio constraints need to have a priority lower than 1000. Set them to 999 or 950 or something similar (make sure they're higher than the Leading and Trailing Space constraints that we set to 900 in the last section). This is because of an issue related to cells with dynamic height (using UITableViewAutomaticDimension) and rotation. When the user rotates the device, the cell gets its new width while still having its previous height. The new height calculation is not yet done at the beginning of the rotation animation, so at that moment the 2:1 aspect ratio cannot be satisfied, which is why we cannot set the priority to 1000 (required). Right after the new height is calculated, the aspect ratio constraint kicks back in. The state in which the aspect ratio cannot be satisfied doesn't even seem to be visible, so don't worry about your cell looking strange. Leaving the priority at 1000 only seems to generate an error message about the constraint in the console, after which layout continues as expected.
  • Instead of assigning the outlets from the ImageCell to new variables in the view controller, you can also create a scrollViewDidScroll method in the cell, which is then called from the scrollViewDidScroll of your view controller. You can get the cell using cellForRowAtIndexPath. See the code on GitHub for this approach; a rough sketch follows below.
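
As a rough sketch of that alternative, using the center Y variation from the first section (the actual code is in the GitHub repository):

// In the cell: let it update its own constraint.
class ImageCell: UITableViewCell {
  @IBOutlet weak var imageCenterYConstraint: NSLayoutConstraint!

  func scrollViewDidScroll(scrollView: UIScrollView) {
    imageCenterYConstraint?.constant = min(0, -scrollView.contentOffset.y / 2.0)
  }
}

// In the view controller: forward the scroll event to the cell.
override func scrollViewDidScroll(scrollView: UIScrollView) {
  let indexPath = NSIndexPath(forRow: 0, inSection: 0) // the image cell's position in this example
  if let imageCell = tableView.cellForRowAtIndexPath(indexPath) as? ImageCell {
    imageCell.scrollViewDidScroll(scrollView)
  }
}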

The Agile Revolution - Episode 91

Coding the Architecture - Simon Brown - Tue, 07/21/2015 - 06:47

While at the YOW! conference in Australia during December 2014, I was interviewed by Craig Smith and Tony Ponton for The Agile Revolution podcast. It's a short episode (28 minutes) but we certainly packed a lot in, with the discussion covering software architecture, my C4 model, technical leadership, agility, lightweight documentation and even a little bit about enterprise architecture.

Speaking at YOW! in Australia

Thanks to Craig and Tony for taking the time to do this and I hope you enjoy The Agile Revolution - Episode 91: Coding The Architecture with Simon Brown.

Categories: Architecture

Algolia's Fury Road to a Worldwide API Steps Part 2


The most frequent questions we answer for developers and devops are about our architecture and how we achieve such high availability. Some of them are very skeptical about high availability with bare metal servers, while others are skeptical about how we distribute data worldwide. However, the question I prefer is “How is it possible for a startup to build an infrastructure like this?” It is true that our current architecture is impressive for a young company:

  • Our high-end dedicated machines are hosted in 13 worldwide regions with 25 data centers

  • Our master-master setup replicates our search engine on at least 3 different machines

  • We process over 6 billion queries per month

  • We receive and handle over 20 billion write operations per month

Just like Rome, our infrastructure wasn't built in a day. This series of posts will explore the 15 instrumental steps we took when building it. I will even discuss our outages and bugs so you can understand how we used them to improve our architecture.

The first blog post of the series focused on the early days of our beta. This blog post will focus on the first 18 months of our service from September 2013 to December 2014 and even include our first outages!

Step 4: January 2014
Categories: Architecture

Released Today: Visual Studio 2015, ASP.NET 4.6, ASP.NET 5 & EF 7 Previews

ScottGu's Blog - Scott Guthrie - Mon, 07/20/2015 - 16:14

Today is a big day with major release announcements for Visual Studio 2015, Visual Studio 2013 Update 5, and .NET Framework 4.6. All these releases have been covered in great detail on Soma’s Blog, Visual Studio Blog, and .NET Blog.

Join us online for the Visual Studio 2015 Release Event, where you can see Soma, Brian Harry, Scott Hanselman, and many others demo new Visual Studio 2015 features and technologies. This year, in a new segment called “In The Code”, we share how a team of Microsoft engineers created a real app in 3 days. There will be opportunities along the way to interact in live Q&A with the team on subjects such as Agile development, web and cloud development, cross-platform mobile dev and much more.

In this post I’d like to specifically talk about some of the ground we have covered in ASP.NET and Entity Framework.  In this release of Visual Studio, we are releasing ASP.NET 4.6, updating our Visual Studio Web Development Tools, and updating the latest beta release of our new ASP.NET 5 framework.  Below are details on just a few of the great updates available today:

ASP.NET Tooling Improvements

Today’s VS 2015 release delivers some great updates for web development.  Here are just a few of the updates we are shipping in this release:

JSON Editor

JSON has become a first class experience in Visual Studio 2015 and we are now giving you a great editor to allow you to maintain your JSON content.  With support for JSON Schema validation, intellisense, and SchemaStore.org, writing and producing JSON content has never been easier.  We’ve also added intellisense support for bower.json and package.json files for bower and npm package manager use.

[Screenshot: the JSON editor]

HTML Editor Updates

Our HTML editor received a lot of attention in this update.  We wanted to deliver an editor that kept up with HTML 5 standards and provided rich support for popular new frameworks and libraries.  We previously shipped the bootstrap responsive web framework with our ASP.NET templates, and we are now providing intellisense for their classes with an indicator icon to show that they are bootstrap CSS classes.

[Screenshot: Bootstrap CSS class intellisense in the HTML editor]

This helps you keep clear the classes that you wrote in your project, like the page-inner class above, and the bootstrap classes marked with the B icon.

We are also keeping up with the emerging web components standard, with support for the import links that web components use to import markup.

[Screenshot: web components import support in the HTML editor]

We are also providing intellisense for AngularJS directives and attributes with an appropriate Angular icon so you know you’re triggering AngularJS functionality.

[Screenshot: AngularJS directive intellisense]

JavaScript Editor Improvements

With the VS 2015 release we are introducing support for AngularJS structures including controllers, services, factories, directives and animations.  There is also support for the new EcmaScript 6 features such as classes, arrow functions, and template strings. We are also bringing a navigation bar to the editor to help you navigate between the major elements of your JavaScript.  With JSDoc support to deliver intellisense, JavaScript development gets easier.

[Screenshot: the JavaScript editor with the new navigation bar]

ReactJS Editor Support

We spent some time with the folks at Facebook to make sure that we delivered first class capabilities for developers using their ReactJS framework.  With appropriate syntax highlighting and intellisense for React methods, developers should be very comfortable building React applications with the new Visual Studio:

[Screenshot: ReactJS syntax highlighting and intellisense]

Support for JavaScript package managers like Grunt and Gulp and Task Runners

JavaScript and modern web development techniques are the new recommended way to build client-side code for your web application.  We support these tools and programming techniques with our new Task Runner Explorer that executes grunt and gulp task runners.  You can open this tool window with the Ctrl+Alt+Backspace hotkey combination.

[Screenshot: Task Runner Explorer with grunt tasks bound to build events]

Execute any of the tasks defined in your gruntfile.js or gulpfile.js by right-clicking on the task name in the left panel and choosing “Run” from the context menu that appears.  You can even use this context menu to attach grunt or gulp tasks to project build events in Visual Studio like “After Build” as shown in the figure above.  Every time the .NET objects in your web project finish compiling, the ‘build’ task will be executed from the gruntfile.js.

Combined with the intellisense support for JavaScript and JSON editors, we think that developers wanting to use grunt and gulp tasks will really enjoy this new Visual Studio experience.  You can add grunt and gulp tasks with the newly integrated npm package manager capabilities.  When you create a package.json file in your web project, we will install and upgrade local copies of all packages referenced.  Not only do we deliver syntax highlighting and intellisense for package.json terms, we also provide package name and version lookup against the npmjs.org gallery.

[Screenshot: package.json intellisense with package name and version lookup]

The bower package manager is also supported with great intellisense, syntax highlighting and the same package name and version support in the bower.json file that we provide for package.json.

[Screenshot: bower.json intellisense]

These improvements in managing and writing JavaScript configuration files and executing grunt or gulp tasks brings a new level of functionality to Visual Studio 2015 that we think web developers will really enjoy.

ASP.NET 4.6 Runtime Improvements

Today’s release also includes a bunch of enhancements to ASP.NET from a runtime perspective.

HTTP/2 Support

Starting with ASP.NET 4.6 we are introducing support for the HTTP/2 standard.  This new version of the HTTP protocol delivers a true multiplexing of requests and responses between browser and web server.  This exciting update is as easy as enabling SSL in your web projects to immediately improve your ASP.NET application responsiveness.


With SSL enabled (which is a requirement of the HTTP/2 protocol), IISExpress on Windows 10 will begin interacting with the browser using the updated protocol.  The difference between the protocols is clear.  Consider the network performance presented by Microsoft Edge when requesting the same website without SSL (and receiving HTTP/1.x) and with SSL to activate the HTTP/2 protocol:

[Screenshot: network requests for the page over HTTP/1.x]

[Screenshot: network requests for the same page over HTTP/2]

Both samples are showing the default ASP.NET project template’s home page.  In both scenarios the HTML for the page is retrieved in line 1.  In HTTP/1.x on the left, the first six elements are requested and we see grey bars to indicate waiting to request the last two elements.  In HTTP/2 on the right, all eight page elements are loaded concurrently, with no waiting.

Support for the .NET Compiler Platform

We now support the new .NET compilers provided in the .NET Compiler Platform (codenamed Roslyn).  These compilers allow you to access the new language features of Visual Basic and C# throughout your Web Forms markup and MVC view pages.  Our markup can look much simpler and more readable with new language features like string interpolation:

Instead of building a link in Web Forms like this:

  <a href="/Products/<%: model.Id %>/<%: model.Name %>"><%: model.Name %></a>

We can deliver a more readable piece of markup like this:

  <a href="<%: $"/Products/{model.Id}/{model.Name}" %>"><%: model.Name %></a>

We’ve also bundled the Microsoft.CodeDom.Providers.DotNetCompilerPlatform NuGet package to enable your Web Forms assets to compile significantly faster without requiring any changes to your code or project.

Async Model Binding for Web Forms

Model binding was introduced for Web Forms applications in ASP.NET 4, and we introduced async methods in .NET 4.5.  We heard your requests to be able to execute your model binding methods on a Web Form asynchronously with the new language features.  Our team has made this as easy as adding an async="true" attribute to the @Page directive and returning a Task from your model binding methods:

    public async Task<IEnumerable<Product>> myGrid_GetData()
    {
      var repo = new Repository();
      return await repo.GetAll();
    }

We have a blog post with more information and tips about this feature on our MSDN Web Development blog.

ASP.NET 5

I introduced ASP.NET 5 back in February and shared in detail what this release would bring. I’ll reiterate just a few high-level points here; check out my post Introducing ASP.NET 5 for a more complete rundown.

ASP.NET 5 works with .NET Core as well as the full .NET Framework to give you greater flexibility when hosting your web apps. With ASP.NET MVC 6 we are merging the complementary features and functionality from MVC, Web API, and Web Pages. With ASP.NET 5 we are also introducing a new HTTP request pipeline based on our learnings from Katana which enables you to add only the components you need with an opt-in strategy. Additionally, included in this release are multiple development features for improved productivity and to enable you to build better web applications. ASP.NET 5 is also open source. You can find us on GitHub, view and download the code, submit changes, and track when changes are made.

The ASP.NET 5 Beta 5 runtime packages are in preview and not recommended for use in production, so please continue using ASP.NET 4.6 for building production grade apps. For details on the latest ASP.NET 5 beta enhancements added and issues fixed, check out the published release notes for ASP.NET 5 beta 5 on GitHub. To get started with ASP.NET 5, get the docs and tutorials on the ASP.NET site.

To learn more and keep an eye on all updates to ASP.NET, check out the Webdev blog and read along with the tutorials and documentation at www.asp.net/vnext.

Entity Framework

With today’s release, we not only have an update to Entity Framework 6 that primarily includes bug fixes and community contributions, but we have also released a preview version of Entity Framework 7. Keep reading for details.

Entity Framework 6.x

Visual Studio 2015 includes Entity Framework 6.1.3. EF 6.1.3 primarily focuses on bug fixes and community contributions; you can see a list of the changes included in EF 6.1.3 in this EF 6.1.3 announcement blog post. The Entity Framework 6.1.3 runtime is included in a number of places in this release. In EF 6.1.3, when you create a new model using the Entity Framework Tools in a project that does not already have the EF runtime installed, the runtime is automatically installed for you. Additionally, the runtime is pre-installed in new ASP.NET projects, depending on the project template you select.


To learn more and keep an eye on all updates to Entity Framework, check out the ADO.NET blog.

Entity Framework 7

Entity Framework 7 is in preview and not yet ready for production. This new version of Entity Framework enables new platforms and new data stores. Universal Windows Platform, ASP.NET 5, and traditional desktop applications can now use EF7. EF7 can also be used in .NET applications that run on Mac and Linux. Visual Studio 2015 includes an early preview of the EF7 runtime that is installed in new ASP.NET 5 projects.


For more information on EF7, check out the GitHub page explaining what EF7 is all about.

Summary

Today’s Visual Studio release is a big one that we are proud to share with you all. Thank you for your continued support by providing feedback on the interim releases (CTPs, Preview, RC).  We are really looking forward to seeing what you build with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me @scottgu

Categories: Architecture, Programming

How To Get Smarter By Making Distinctions

"Whatever you do in life, surround yourself with smart people who'll argue with you." -- John Wooden

There’s a very simple way to get smarter.

You can get smarter by creating categories.

Not only will you get smarter, but you’ll also be more mindful, and you’ll expand your vocabulary, which will improve your ability to think more deeply about a given topic or domain.

In my post, The More Distinctions You Make, the Smarter You Get, I walk through the ins and outs of creating categories to increase your intelligence, and I use the example of “fat.”   I attempt to show how “Fat is bad” isn’t very insightful, and how by breaking “fat” down into categories, you can dive deeper and reveal new insight to drive better decisions and better outcomes.

In this post, I’m going to walk through an example, using “security” as the topic.

The first time I heard the word “security”, it didn’t mean much to me, beyond “protect.”

The next thing somebody taught me, was how I had to focus on CIA:  Confidentiality, Integrity, and Availability.

That was a simple way to break security down into meaningful parts.

And then along came Defense in Depth.   A colleague explained that Defense in Depth meant thinking about security in terms of multiple layers:  Network, Host, Application, and Data.

But then another colleague said, the real key to thinking about security and Defense in Depth, was to think about it in terms of people, process, and technology.

As much as I enjoyed these thought exercises, I didn’t find them actionable enough to actually improve software or application security.  And my job was to help Enterprise developers build better Line-Of-Business applications that were scalable and secure.

So our team went to the drawing board to map out actionable categories to take application security much deeper.

Right off the bat, just focusing on “application” security vs. “network” security or “host” security helped us to get more specific and make security more tangible and more actionable from a Line-of-Business application perspective.

Security Categories

Here are the original security categories that we used to map out application security and make it more actionable:

  1. Input and Data Validation
  2. Authentication
  3. Authorization
  4. Configuration Management
  5. Sensitive Data
  6. Session Management
  7. Cryptography
  8. Exception Management
  9. Auditing and Logging

Each of these buckets helped us create actionable principles, patterns, and practices for improving security.

Security Categories Explained

Here is a brief description of each application security category:

Input and Data Validation
How do you know that the input your application receives is valid and safe? Input validation refers to how your application filters, scrubs, or rejects input before additional processing. Consider constraining input through entry points and encoding output through exit points. Do you trust data from sources such as databases and file shares?

Authentication
Who are you? Authentication is the process where an entity proves the identity of another entity, typically through credentials, such as a user name and password.

Authorization
What can you do? Authorization is how your application provides access controls for resources and operations.

Configuration Management
Who does your application run as? Which databases does it connect to? How is your application administered? How are these settings secured? Configuration management refers to how your application handles these operational issues.

Sensitive Data
How does your application handle sensitive data? Sensitive data refers to how your application handles any data that must be protected either in memory, over the network, or in persistent stores.

Session Management
How does your application handle and protect user sessions? A session refers to a series of related interactions between a user and your Web application.

Cryptography
How are you keeping secrets (confidentiality)? How are you tamper-proofing your data or libraries (integrity)? How are you providing seeds for random values that must be cryptographically strong? Cryptography refers to how your application enforces confidentiality and integrity.

Exception Management
When a method call in your application fails, what does your application do? How much do you reveal? Do you return friendly error information to end users? Do you pass valuable exception information back to the caller? Does your application fail gracefully?

Auditing and Logging
Who did what and when? Auditing and logging refer to how your application records security-related events.

As you can see, just by calling out these different categories, you suddenly have a way to dive much deeper and explore application security in depth.

The Power of a Security Category

Let’s use a quick example.  Let’s take Input Validation.

Input Validation is a powerful security category, given how many software security flaws, vulnerabilities, and attacks stem from a lack of input validation, including Buffer Overflows.

But here’s the interesting thing.   After quite a bit of research and testing, we found a powerful security pattern that could help more applications stand up to more security attacks.  It boiled down to the following principle:

Validate for length, range, format, and type.

That’s a pithy, but powerful piece of insight when it comes to implementing software security.

And, when you can’t validate the input, make it safe by sanitizing the output.  And along these lines, keep user input out of the control path, where possible.

All of these insights flow from just focusing on Input Validation as a security category.
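
To make that principle concrete, here is a minimal sketch of validating a hypothetical quantity field that arrives as a string, covering length, range, format, and type (Swift is used purely as an illustration language here; the idea is platform-neutral):

func validatedQuantity(input: String) -> Int? {
  // Length: reject empty or absurdly long input up front.
  if count(input) == 0 || count(input) > 4 { return nil }
  // Format and type: the field must parse as an integer (a whitelist check,
  // not a blacklist of "known bad" patterns).
  if let value = input.toInt() {
    // Range: constrain to the values the application actually accepts.
    if value >= 1 && value <= 100 { return value }
  }
  return nil
}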

Threats, Attacks, Vulnerabilities, and Countermeasures

Another distinction our team made was to think in terms of threats, attacks, vulnerabilities, and countermeasures.  We knew that threats could be intentional and malicious (as in the case of attacks), but they could also be accidental and unintended.

We wanted to identify vulnerabilities as weaknesses that could be addressed in some way.

We wanted to identify countermeasures as the actions to take to help mitigate risks, reduce the attack surface, and address vulnerabilities.

Just by chunking up the application security landscape into threats, attacks, vulnerabilities, and countermeasures, we empowered more people to think more deeply about the application security space.

Security Vulnerabilities Organized by Security Categories

Using the security categories above, we could easily focus on finding security vulnerabilities and group them by the relevant security category.

Here are some examples:

Input/Data Validation

  • Using non-validated input in the Hypertext Markup Language (HTML) output stream
  • Using non-validated input to generate SQL queries
  • Relying on client-side validation
  • Using input file names, URLs, or user names for security decisions
  • Using application-only filters for malicious input
  • Looking for known bad patterns of input
  • Trusting data read from databases, file shares, and other network resources
  • Failing to validate input from all sources including cookies, query string parameters, HTTP headers, databases, and network resources

Authentication

  • Using weak passwords
  • Storing clear text credentials in configuration files
  • Passing clear text credentials over the network
  • Permitting over-privileged accounts
  • Permitting prolonged session lifetime
  • Mixing personalization with authentication

Authorization

  • Relying on a single gatekeeper
  • Failing to lock down system resources against application identities
  • Failing to limit database access to specified stored procedures
  • Using inadequate separation of privileges

Configuration Management

  • Using insecure administration interfaces
  • Using insecure configuration stores
  • Storing clear text configuration data
  • Having too many administrators
  • Using over-privileged process accounts and service accounts

Sensitive Data

  • Storing secrets when you do not need to
  • Storing secrets in code
  • Storing secrets in clear text
  • Passing sensitive data in clear text over networks

Session Management

  • Passing session identifiers over unencrypted channels
  • Permitting prolonged session lifetime
  • Having insecure session state stores
  • Placing session identifiers in query strings

Cryptography

  • Using custom cryptography
  • Using the wrong algorithm or a key size that is too small
  • Failing to secure encryption keys
  • Using the same key for a prolonged period of time
  • Distributing keys in an insecure manner

Exception Management

  • Failing to use structured exception handling
  • Revealing too much information to the client

Auditing and Logging

  • Failing to audit failed logons
  • Failing to secure audit files
  • Failing to audit across application tiers

Threats and Attacks Organized by Security Categories

Again, using our security categories, we could then group threats and attacks by relevant security categories.

Here are some examples of security threats and attacks organized by security categories:

Input/Data Validation

  • Buffer overflows
  • Cross-site scripting
  • SQL injection
  • Canonicalization attacks
  • Query string manipulation
  • Form field manipulation
  • Cookie manipulation
  • HTTP header manipulation

Authentication

  • Network eavesdropping
  • Brute force attacks
  • Dictionary attacks
  • Cookie replay attacks
  • Credential theft

Authorization

  • Elevation of privilege
  • Disclosure of confidential data
  • Data tampering
  • Luring attacks

Configuration Management

  • Unauthorized access to administration interfaces
  • Unauthorized access to configuration stores
  • Retrieval of clear text configuration secrets
  • Lack of individual accountability

Sensitive Data

  • Accessing sensitive data in storage
  • Accessing sensitive data in memory (including process dumps)
  • Network eavesdropping
  • Information disclosure

Session Management

  • Session hijacking
  • Session replay
  • Man-in-the-middle attacks

Cryptography

  • Loss of decryption keys
  • Encryption cracking

Exception Management

  • Revealing sensitive system or application details
  • Denial of service attacks

Auditing and Logging

  • User denies performing an operation
  • Attacker exploits an application without trace
  • Attacker covers his tracks

Countermeasures Organized by Security Categories

Now here is where the rubber really meets the road.  We could group security countermeasures by security categories to make them more actionable.

Here are example security countermeasures organized by security categories:

Input/Data Validation

  • Do not trust input
  • Validate input: length, range, format, and type
  • Constrain, reject, and sanitize input
  • Encode output

Authentication

  • Use strong password policies
  • Do not store credentials
  • Use authentication mechanisms that do not require clear text credentials to be passed over the network
  • Encrypt communication channels to secure authentication tokens
  • Use HTTPS only with forms authentication cookies
  • Separate anonymous from authenticated pages

Authorization

  • Use least privilege accounts
  • Consider granularity of access
  • Enforce separation of privileges
  • Use multiple gatekeepers
  • Secure system resources against system identities

Configuration Management

  • Use least privileged service accounts
  • Do not store credentials in clear text
  • Use strong authentication and authorization on administrative interfaces
  • Do not use the Local Security Authority (LSA)
  • Avoid storing sensitive information in the Web space
  • Use only local administration

Sensitive Data

  • Do not store secrets in software
  • Encrypt sensitive data over the network
  • Secure the channel

Session Management

  • Partition site by anonymous, identified, and authenticated users
  • Reduce session timeouts
  • Avoid storing sensitive data in session stores
  • Secure the channel to the session store
  • Authenticate and authorize access to the session store

Cryptography

  • Do not develop and use proprietary algorithms (XOR is not encryption. Use platform-provided cryptography)
  • Use the RNGCryptoServiceProvider method to generate random numbers
  • Avoid key management. Use the Windows Data Protection API (DPAPI) where appropriate
  • Periodically change your keys

Exception Management

  • Use structured exception handling (by using try/catch blocks)
  • Catch and wrap exceptions only if the operation adds value/information
  • Do not reveal sensitive system or application information
  • Do not log private data such as passwords

Auditing and Logging

  • Identify malicious behavior
  • Know your baseline (know what good traffic looks like)
  • Use application instrumentation to expose behavior that can be monitored

As you can see, the security countermeasures can easily be reviewed, updated, and moved forward, because the actionable principles are well organized by the security categories.

There are many ways to use creating categories to get smarter and get better results.

In the future, I’ll walk through how we created an Agile Security approach, using categories.

Meanwhile, check out my post on The More Distinctions You Make, the Smarter You Get to gain some additional insights into how to use empathy and creating categories to dive deeper, learn faster, and get smarter on any topic you want to take on.

Categories: Architecture, Programming

We Help Our Customers Transform

"Innovation—the heart of the knowledge economy—is fundamentally social." -- Malcolm Gladwell

I’m a big believer in having clarity around what you help your customers do.

I was listening to Satya Nadella’s keynote at the Microsoft Worldwide Partner Conference, and I like how he put it so simply, that we help our customers transform.

Here’s what Satya had to say about how we help our customers transform their business:

“These may seem like technical attributes, but they are key to how we drive business success for our customers, business transformation for our customers, because all of what we do, collectively, is centered on this core goal of ours, which is to help our customers transform.

When you think about any customer of ours, they're being transformed through the power of digital technology, and in particular software.

There isn't a company out there that isn't a software company.

And our goal is to help them differentiate using digital technology.

We want to democratize the use of digital technology to drive core differentiation.

It's no longer just about helping them operate their business.

It is about them excelling at their business using software, using digital technology.

It is about our collective ability to drive agility for our customers.

Because if there is one truth that we are all faced with, and our customers are faced with, it's that things are changing rapidly, and they need to be able to adjust to that.

And so everything we do has to support that goal.

How do they move faster, how do they interpret data quicker, how are they taking advantage of that to take intelligent action.

And of course, cost.

But we'll keep coming back to this theme of business transformation throughout this keynote and throughout WPC, because that's where I want us to center in on.

What's the value we are adding to the core of our customer and their ability to compete, their ability to create innovation.

And anchored on that goal is our technical ambition, is our product ambition.”

Transformation is the name of the game.

You Might Also Like

Satya Nadella is All About Customer Focus

Satya Nadella on a Mobile-First, Cloud-First World

Satya Nadella on Empower Every Person on the Planet

Satya Nadella on Everyone Has To Be a Leader

Satya Nadella on How the Key To Longevity is To Be a Learning Organization

Satya Nadella on Live and Work a Meaningful Life

Satya Nadella on The Future of Software

Categories: Architecture, Programming