Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

The Great Leadership Quotes Collection Revamped

A while back I put together a comprehensive collection of leadership quotes.   It’s a combination of the wisdom of the ages + modern sages.   It was time for a revamp.  Here it is:

The Great Leadership Quotes Collection

It's a serious collection of leadership quotes and includes lessons from the likes of John Maxwell, Jim Rohn, Lao Tzu, Ralph Waldo Emerson, and more.

Leadership is Influence

John Maxwell said it best when he defined leadership as influence.  Tom Peters added a powerful twist to leadership when he said that leadership is not about creating followers—it’s about creating more leaders.

I like to think of leadership in terms of incremental spheres of influence, starting with personal or self-leadership, followed by team leadership, followed by organizational leadership, and so on. Effectively, you can expand your sphere of influence, but none of it really works if you can’t lead yourself first.

Leadership is Multi-Faceted (Just Like You)

I also like to think about the various aspects of leadership, such as Courage, Challenges, Character, Communication, Connection, Conviction, Credibility, Encouragement, Failure, Fear, Heart, Influence, Inspiration, Learning, Self-Leadership, Servant-Leadership, Teamwork, and Vision.  As such, I’ve used these categories to help put the leadership quotes into a meaningful collection with simple themes.

I’ve also included special sections on What is Leadership, Leadership Defined, and Leading by Example. 

Sometimes the Source is More Interesting than the Punch line

While I haven’t counted the leadership quotes, there are a lot. But they are well-organized and easy to scan. You’ll notice that the names of the famous people who said each quote pop out at you. I bolded the names for extra impact and to help you quickly jump to interesting people, to see what they have to say about the art and science of leadership.

I bet you can find at least three leadership quotes that you can use on a daily basis to think a little better, feel a little better, or do a little better.

Leadership is Everyone’s Job

For those of you who think that leadership is something that other people do, or something that gets done to you, or that leadership is a position, I’ll share the words of John Maxwell on this topic:

“A great leader’s courage to fulfill his vision comes from passion, not position.” —  John Maxwell

In fact, if you’ve never seen it before or need a quick reminder that everyone is a leader, this is a great video that makes the point hit home:

Everyone is a Leader

It’s one of those cool, simple, cartoon videos that shows how leadership is everyone’s job and that without that philosophy, people, systems, organizations, etc. all fail.

The world moves too fast and things change too much to wait for somebody at the top to tell you what to do.   The closer you are to where the action is, the more context you have, and the more insight you can use to make better decisions and get better results.

Leadership is a body of principles, patterns, and practices that you can use to empower yourself, and others, with skill.

Just like a Jedi, your force gets stronger the more you use it.

If You Want to Grow Your Leadership, Then Give Your Power Away

But always remember the surprise about leadership – the more you give your power away, the more power comes back to you.

It’s not Karma.  It’s caring.  And it’s contagious.

(As Brian Tracy would say, the three C’s of leadership are Consideration, Caring, and Courtesy.)

Well, maybe it is like Karma in that what goes around, comes around, and leadership amplifies when you share it with people and help everyone become all that they are capable of.

Stand strong when tested, and lead yourself from the inside out.

Categories: Architecture, Programming

Happiness Quotes Revamped

I've completely overhauled my collection of happiness quotes.  It was time for a revamp.  Hear what Charlie Brown, Dale Carnegie, Aristotle, Confucius, and more have to say about the art and skill of happiness.

Here is the new collection of happiness quotes fresh from the press:

Happiness Quotes (Sources of Insight)

It should be a lot easier to read and a lot easier to use. And it’s exhaustive, so you’ll most likely find at least a few quotes you’ve never heard before. While I have some general quotes on happiness, I also organized my happiness quotes collection into key themes: What is Happiness, Skilled Happiness, and the Pursuit of Happiness.

Why Happiness Quotes?

Well, life happens. And life has its ups and downs, whether you’re an executive, an IT leader, a Program Manager, a developer, or you name it.

As the world changes faster around us, our thoughts shape our work and life.

When people aren’t happy, they don’t work well.  You can argue it’s the job, but a study in the Journal of Occupational and Organizational Psychology showed that it’s the other way around:

Happiness leads to job satisfaction

They found that the causal relationship from subjective well-being to job satisfaction was stronger than the causal relationship from job satisfaction to subjective well-being.

It helps explain why one person’s treasure is another’s trash.

The HOW of Happiness

If you arm yourself with a great set of happiness quotes, you’ll find ways to think better. Some of the quotes are profound wisdom distilled into a sentence. Some are just entertaining.

Sometimes it just helps to validate your thinking.  For example, are you worried that every time you start to feel happy that something will go wrong?   You are not alone.  Charlie Brown feels the same way:

"I think I’m afraid to be happy, because whenever I get too happy, something bad always happens."Charlie Brown

I think you’ll find a great deal of wisdom within the quotes and they will help you find your own personal HOW of happiness.  While happiness can be a team sport, it’s really an individual skill that you learn to master over a lifetime.   While you may not end up as a shiny, happy Pollyanna, you can find ways to raise your frustration tolerance, enjoy more of your moments, and bounce back faster when life throws you the proverbial curve ball.

The Secret of Happiness

While some might say that the secret of happiness is finding your one thing, I think it’s more than that. As a mentor at Microsoft, I coach a lot of individuals on personal effectiveness, and a big part of it is figuring out how to do what makes you come alive.

But there’s actually more to the story.

The ultimate way to find your happiness is to treat happiness like a verb, right here and right now, and find ways to spend more time in your values, while in the service of others or some greater good.   And, as Zig Ziglar would say, “Enjoy the price of success.”

That’s fulfillment.

And that, my friends, is deep happiness.

Dig deeper.

Categories: Architecture, Programming

Setting up an OpenVPN server inside an AWS VPC

Agile Testing - Grig Gheorghiu - Sat, 01/31/2015 - 23:56
My goal in this post is to show how to set up an OpenVPN server within an AWS VPC so that clients can connect to EC2 instances via a VPN connection. It's fairly well documented how to configure a site-to-site VPN with an AWS VPC gateway, but articles talking about client-based VPN connections into a VPC are harder to find.

== OpenVPN server setup ==

Let's start with setting up the OpenVPN server. I launched a new Ubuntu 14.04 instance in our VPC and I downloaded the latest openvpn source code via:

wget http://swupdate.openvpn.org/community/releases/openvpn-2.3.6.tar.gz

In order for the 'configure' step to succeed, I also had to install the following Ubuntu packages:

apt-get install build-essential openssl libssl-dev lzop liblzo2-dev libpam-dev

I then ran the usual commands:

./configure; make; sudo make install

At this point I proceeded to set up my own Certificate Authority (CA), per the OpenVPN HOWTO guide.  As it turned out, I needed the easy-rsa helper scripts on the server running openvpn. I got them from github:

git clone https://github.com/OpenVPN/easy-rsa.git

To generate the master CA certificate & key, I did the following:

cd ~/easy-rsa/easyrsa3
cp vars.example vars

- edited vars file and set these variables with the proper values for my organization:
set_var EASYRSA_REQ_COUNTRY
set_var EASYRSA_REQ_PROVINCE
set_var EASYRSA_REQ_CITY
set_var EASYRSA_REQ_ORG
set_var EASYRSA_REQ_EMAIL
set_var EASYRSA_REQ_OU

./easyrsa build-ca
(this will use the info specified in the vars file above)

To generate the OpenVPN server certificate and key, I ran:

./easyrsa build-server-full server
(I was prompted for a password for the server key)

To generate an OpenVPN client certificate and key for user myuser, I ran:

./easyrsa build-client-full myuser
(I was prompted for a password for the client key)

The next step was to generate the Diffie Hellman (DH) parameters for the server by running:

./easyrsa gen-dh

I was ready at this point to configure the OpenVPN server.

I created a directory called /etc/openvpn and copied the pki directory under ~/easy-rsa/easyrsa3 to /etc/openvpn. I also copied the sample server configuration file ~/openvpn-2.3.6/sample/sample-config-files/server.conf to /etc/openvpn.

I edited /etc/openvpn/server.conf and specified the following:

ca /etc/openvpn/pki/ca.crt
cert /etc/openvpn/pki/issued/server.crt
key /etc/openvpn/pki/private/server.key  # This file should be kept secret
dh /etc/openvpn/pki/dh.pem

server 10.9.0.0 255.255.255.0
ifconfig-pool-persist /etc/openvpn/ipp.txt

push "route 172.30.0.0 255.255.0.0"

The first block specifies the location of the CA certificate, the server key and certificate, and the DH parameters file.

The 'server' parameter specifies a new subnet from which both the OpenVPN server and the OpenVPN clients connecting to the server will get their IP addresses. I set it to 10.9.0.0/24. The client IP allocations will be saved in the ipp.txt file, as specified in the ifconfig-pool-persist parameter.

One of the most important options, which I missed when I initially configured the server, is the 'push route' one. This makes the specified subnet (i.e. the instances in the VPC that you want to get to via the OpenVPN server) available to the clients connecting to the OpenVPN server without the need to create static routes on the clients. In my case, all the EC2 instances in the VPC are on the 172.30.0.0/16 subnet, so that's what I specified above.

Two more very important steps are needed on the OpenVPN server. It took me quite a while to find them so I hope you will be spared the pain.

The first step was to turn on IP forwarding on the server:

- uncomment the following line in /etc/sysctl.conf:
net.ipv4.ip_forward=1

- run
sysctl -p

The final step in the configuration of the OpenVPN server was to make it do NAT via iptables masquerading (thanks to rbgeek's blog post for these last two critical steps):

- run
iptables -t nat -A POSTROUTING -s 10.9.0.0/24 -o eth0 -j MASQUERADE

- also add the above line to /etc/rc.local so it gets run on reboot

Now all that's needed on the server is to actually run openvpn. You can run it in the foreground for troubleshooting purposes via:

openvpn /etc/openvpn/server.conf

Once everything works, run it in daemon mode via:

openvpn --daemon --config /etc/openvpn/server.conf

You will be prompted for the server key password when you start up openvpn. I haven't yet looked into running the server in a fully automated way.
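
One likely option, which I haven't tried as part of this setup, is openvpn's --askpass flag, which reads the private key password from a file instead of prompting for it:

openvpn --daemon --config /etc/openvpn/server.conf --askpass /etc/openvpn/server.pass

If you go this route, make the password file readable by root only (chmod 600), or generate the server key without a passphrase in the first place.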

Almost forgot to specify that you need to allow incoming traffic to UDP port 1194 in the AWS security group where your OpenVPN server belongs. Also allow traffic from that security group to the security groups of the EC2 instances that you actually want to reach over the OpenVPN tunnel.
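
If you script your security groups with the AWS CLI, the ingress rule would look something like this (the group ID and the source CIDR are placeholders for your own values):

aws ec2 authorize-security-group-ingress --group-id sg-12345678 --protocol udp --port 1194 --cidr 0.0.0.0/0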

== OpenVPN client setup ==

This is on a Mac OSX Mavericks client, but I'm sure it's similar for other clients.

Install tuntap
- download tuntap_20150118.tar.gz from http://tuntaposx.sourceforge.net
- untar and install tuntap_20150118.pkg

Install lzo
wget http://www.oberhumer.com/opensource/lzo/download/lzo-2.06.tar.gz
tar xvfz lzo-2.06.tar.gz
cd lzo-2.06
./configure; make; sudo make install

Install openvpn
- download openvpn-2.3.6.tar.gz from http://openvpn.net/index.php/open-source/downloads.html
tar xvfz openvpn-2.3.6.tar.gz
cd openvpn-2.3.6
./configure; make; sudo make install

At this point ‘openvpn --help’ should work.

The next step for the client setup is to copy the CA certificate ca.crt, and the client key and certificate (myuser.key and myuser.crt), from the OpenVPN server to the local client. I created an openvpn directory under my home directory on my Mac and dropped ca.crt in ~/openvpn/pki, myuser.key in ~/openvpn/pki/private and myuser.crt in ~/openvpn/pki/issued. I also copied the sample file ~/openvpn-2.3.6/sample/sample-config-files/client.conf to ~/openvpn and specified the following parameters in that file:

remote EXTERNAL_IP_ADDRESS_OF_OPENVPN_SERVER 1194
ca /Users/myuser/openvpn/pki/ca.crt
cert /Users/myuser/openvpn/pki/issued/myuser.crt
key /Users/myuser/openvpn/pki/private/myuser.key

Then I started up the OpenVPN client via:

sudo openvpn ~/openvpn/client.conf
(at this point I was prompted for the password for myuser.key)

To verify that the OpenVPN tunnel was up and running, I pinged the internal IP address of the OpenVPN server (in my case it was 10.9.0.1 on the internal subnet I specified in server.conf), as well as the internal IPs of various EC2 instances behind the OpenVPN server. Finally, I ssh-ed into those internal IP addresses and declared victory.
That's about it. Hope this helps!

UPDATE: I have since discovered a very good Mac OSX GUI tool for managing client OpenVPN connections: Tunnelblick. All it took was importing the client.conf file mentioned above.

Interview with The Entrepreneur's Library on Getting Results the Agile Way

What is Agile Results all about?   What are the most important keys to using Agile Results to master productivity, time management, and work-life balance?

If you've ever wondered what Getting Results the Agile Way is all about, or want to know how to make the most of the book, this is it.  I answer these questions and more in my interview with The Entrepreneur's Library on Getting Results the Agile Way:

Interview with The Entrepreneur's Library on Agile Results

In this interview, Wade Danielson, the creator of The Entrepreneur’s Library, asks me the following questions:

  1. What was the inspiration behind writing this book?
  2. What makes this book different from others regarding this same topic?
  3. If the reader could only take one concept/principle/action item out of the entire book, what would you want that to be?
  4. Do you have a favorite quote from your book?
  5. If there was only one book you recommend to our listeners based on the way it has impacted your life, what would that be?

Wade also gives me a chance to give a walkthrough of the book, Getting Results the Agile Way, where I explain how to make the most of the book, and what each section is really about.

It’s a unique chance to get the philosophy behind Agile Results and why it’s really a personal results system for work and life.   It’s not a system that you break yourself against.  Instead, it’s a simple system for meaningful results that supports you and the way you work.  It helps you optimize your productivity by focusing on the wins that matter, playing to your strengths, and using your best energy for your best results.

An amazing thing happens when you become more focused and productive …

You get more out of life.

And you can get more done in a day than other people get done all week.

Categories: Architecture, Programming

Stuff The Internet Says On Scalability For January 30th, 2015

Hey, it's HighScalability time:


It's a strange world...exotic, gigantic molecules Fit Inside Each Other like Russian nesting dolls
  • 1.39 billion: Facebook Monthly Active Users; $18 billion profit: Apple in 3 months; 200 million: Kik users; 11.2 billion: age of the oldest known solar system; 3 billion: videos viewed per day on Facebook
  • Quotable Quotes:
    • @kevinroose: This dude wins SF bingo. RT @caro: An Uber driver is Airbnb'ing the trunk of his Tesla for $85/night.
    • @BenedictEvans: Only 16% of Facebook DAUs aren't using it on mobile
    • @rezendi: Yo's Law: "in the 21st century tech industry, satire and reality are not merely indistinguishable but actually interchangeable."
    • Brent Ozar: I recommend that people back up data, not servers.
    • @AnnaPawlicka: "Shared State is the Root of All Evil"
    • Peter Lawrey: micro-day - about 1/12 of a second. micro-century - 51.3 minutes. femto-parsec - about 30 metres.
    • TapirLiu: OH: docker is like a condom to protect your computer from Node.
    • @DigitCurator: "The Next Decade In Storage": Resistive RAM promises better scaling, efficiency, and 1000x endurance of flash memory 
    • @BenedictEvans: At the end of 2014 Apple had ~650-675m live iOS devices. With zero unit sales growth, 700-720m by end 2015. Consumer PCs in use - 7-800m
    • @MailChimp: We sent 14.1 billion emails in December, including 741 million on Cyber Monday.
    • @mjpt777:  That's in the past. We can now do 20 million per second :-) per stream.
    • @bradwilson: Conclusions: 1. Ethernet over power does not perform as well as WiFi (??) 2. Ethernet over power hates being shared among multiple PCs
    • @mjpt777: Specialized Evolution of the General-Purpose CPU  - note that performance per watt is approx doubling per generation. 
    • @nighitingale: "The Earth is 4.6 billion years old. Scaling to 46 years, humans have been here 4 hours, the industrial..."
    • Joseph Campbell: The hero’s journey always begins with the call. One way or another, a guide must come to say, “Look, you’re in Sleepy Land. Wake. Come on a trip."
    • Frank Herbert: the most persistent principles of the universe were accident and error.

  • Will Facebook ever figure out this mobile thing? Not long ago that was the big question. We have an answer. In the fourth quarter, the percentage of its advertising revenue from mobile devices increased to 69%, up from 66% in the third quarter and 53% a year earlier. Mobile daily active users were 745 million on average for December 2014, an increase of 34 percent year-over-year.

  • The power of smart: Facebook’s Powerful Ad Tools Grew Its Revenue 25X Faster Than User Count. Facebook might be running out of people, but they aren't running out of ways of monetizing those people. Math grows faster than users.

  • The Cathedral of Computation by Ian Bogost. Agree in part. There does seem to be an uncritical acceptance of algorithms, as if because they enliven machines they are somehow pure and objective, when the opposite is the case. Algorithms are made for human purposes by teams of humans and show the biases and hubris of their makers. And like all creatures, algorithms should be subject to skepticism, law, and review.

  • We have many long running debates in tech. Server side vs client side rendering is just one of them. A thoughtful analysis: Tradeoffs in server side and client side rendering by Malte Ubl.  Bret Slatkin boldly claims: Experimentally verified: "Why client-side templating is wrong". He concludes: I hope never to render anything server-side ever again. I feel more comfortable in making that choice than ever thanks to all this data. I see rare occasions when server-side rendering could make sense for performance, but I don't expect to encounter many of those situations in the future.

Categories: Architecture

Run your iOS app without overwriting the App Store version

Xebia Blog - Fri, 01/30/2015 - 13:59

Sometimes when you're developing a new version of your iOS app, you'd like to run it on your iPhone or iPad and still be able to run the current version that is released on the App Store. Normally when you run your app from Xcode on your device, it will overwrite any existing version. If you then want to switch back to the version from the App Store, you'll have to delete the development version and download it again from the App Store.

In this blog post I'll describe how you can run a test version of your app next to your production version. This method also works when you have embedded extensions (like the Today Widget or WatchKit app) in your app or when you want to beta test your app with Apple's TestFlight.

There are two different ways to accomplish this. The first method is to create a new target within your project that runs the same app but with a different name and identifier. With iOS 8 it is now possible to embed extensions in your app. Since these extensions are embedded in the app target, this approach doesn't work when you have extensions and therefore I'll only describe the second method which is based on User-Defined Settings in the build configuration.

Before going into detail, here is a quick explanation of how this works. Xcode already creates two build configurations for us: Release and Debug. By adding some User-Defined Settings to the build configurations we can run the Debug configuration with a different Bundle identifier than the Release configuration. This essentially creates a separate app on your device, keeping the one from the App Store (which used the Release configuration) intact.

To make beta distribution of the app built with the Debug configuration easier, we'll create multiple schemes.

The basics

Follow these steps exactly to run a test version on your device next to your App Store version. These steps are based on Xcode 6.1.1 with an Apple Developer Account with admin privileges.

Click on your project in the Project Navigator and make sure the main target is selected. Under both the General and Info tabs you will find the Bundle identifier. If you change this identifier and run your app, it will create a new app next to the old one. Add -test to your current Bundle identifier so you get something like com.example.myapp-test. Xcode will probably tell you that you don't have a provisioning profile. Let Xcode fix this issue for you and it will create a new Development profile named something like iOS Team Provisioning Profile: com.example.myapp-test.

So we're already able to run a test version of our app next to the App Store version. However, manually changing the Bundle identifier all the time isn't very practical. The Bundle identifier is part of the Info.plist and it's not derived from a Build configuration property like for example the Bundle name (which uses ${PRODUCT_NAME} from the build configuration by default). Therefore we can't simply specify a different Bundle identifier for a different Build configuration using an existing property. But, we can use a User-Defined Setting for this.

Go to the Build Settings tab of your project and click the + button to add a User-Defined Setting. Name the new setting BUNDLE_ID and give it the value of the test Bundle identifier you created earlier (com.example.myapp-test). Then click the small triangle in front of the setting to expand the setting. Remove the -test part from the Release configuration. You should end up with something like this:

[Screenshot: the BUNDLE_ID User-Defined Setting expanded, with the -test suffix removed from the Release configuration]
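
If you prefer managing these values in xcconfig files rather than the Build Settings editor, the equivalent would look roughly like this (the file names are hypothetical, the identifiers are the ones from the steps above):

// Debug.xcconfig
BUNDLE_ID = com.example.myapp-test

// Release.xcconfig
BUNDLE_ID = com.example.myapp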

Now go to the Info tab and change the value of the Bundle identifier to ${BUNDLE_ID}. In the General tab you will still see the correct Bundle Identifier, but if you click on it you see the text is slightly grayed out, which means it's taken from a Build configuration setting.

[Screenshot: the Bundle identifier in the Info tab set to ${BUNDLE_ID}]

To test if this works, you can edit the current Scheme and change the Build Configuration of the Run action from Debug to Release. Close the Scheme window and go to the General tab again to see that the Bundle Identifier has changed to the Release BUNDLE_ID. (If you still had the General tab open and don't see the change, switch to another tab and back; the panel will reload the identifier.) Make sure to change the Build configuration back to Debug in your scheme afterwards.

When you now Archive an app before you release it to the App Store, it will use the correct identifier from the Release configuration and when you run the app from Xcode on your device, it will use the identifier for testing. That way it no longer overwrites your App Store version on your device.

App name

Both our App Store app and test app still have the same name. This makes it hard to know which one is which. To solve this, find the Product Name in the Build Settings and change the name for the Debug configuration to something else, like MyApp Test. You can even use another app icon for your test build. Just change the Asset Catalog App Icon Set Name for the Debug configuration.

Beta distribution

What if you want to distribute a version of the app for Beta testing (through TestFlight or something similar) to other people that also shouldn't overwrite their Apple Store version? Our Archive action is using the Release build configuration. We could change that manually to Debug to have the test Bundle identifier but then we would be getting all of the Debug settings in our app, which is not something we want. We need to create another Build configuration.

Go to the project settings of your project (so not the target settings). Click the + button under Configurations and duplicate the Release configuration. Call the new configuration AdHoc. (You might already have such a Build configuration for other reasons, in that case you can skip this step and use that one for the next steps.)

Now go to the Build Settings of your main target and change the AdHoc value of the User-Defined Setting BUNDLE_ID to the same as the Debug value. Do the same for the Product Name if you changed that in the previous step.

We could already make a Beta test Archive now by manually changing the configuration of the Archive action to Debug. But it's easier if we create a new scheme to do this. So go to Manage Schemes and click the + button at the bottom left to create a new scheme. Make sure that your main target is selected as Target and add " Test" to the name so you end up with something like "MyApp Test". Also check the Shared checkbox if you are sharing your schemes in your version control system.

Double click the new scheme to edit it and change the build configuration of the Archive action to AdHoc. Now you can Archive with the MyApp Test scheme selected to create a test version of your app that you can distribute to your testers without it overwriting their App Store version.

To avoid confusion about which build configuration is used by which scheme action, you should also change the configuration of the Profile action to AdHoc. And in your normal non-test scheme, you can change the build configuration of all actions to Release. This allows you to run the app in both modes of your device, which sometimes might be necessary, for example when you need to test push notifications that only work for the normal Bundle identifier.

Extensions

As mentioned in the intro, the main reason to use multiple schemes with different build configurations and User-Defined settings as opposed to creating multiple targets with different Bundle identifiers is because of Extensions, like the Today extension or a WatchKit extension. An extension can only be part of a single target.

Extensions make things even more complex since their Bundle identifier needs to be prefixed with the parent app's bundle identifier. And since we just made that one dynamic, we need to make the Bundle identifier of our extensions dynamic as well.

If you don't already have an existing extension, create a new one now. I've tested the approach described below with Today extensions and WatchKit extensions but it should work with any other extension as well.

The steps for getting a dynamic Bundle identifier for the extensions are very similar to the ones for the main target, so I won't go into too much detail here.

First open the Info tab of the new target that was created for the extension, e.g. MyAppToday. You'll see here that the Bundle display name is not derived from the PRODUCT_NAME. This is probably because the product name (which is derived from the target name) is something not very user-friendly like MyAppToday, and it is assumed that you will change it. In the case of the Today extension, running a test version of the app next to the App Store version will also create two Today extensions in the Today view of the phone. To be able to differentiate between the two, we'll also make the Bundle display name dynamic.

Change the value of it to ${BUNDLE_DISPLAY_NAME} and then add a User-Defined Setting for BUNDLE_DISPLAY_NAME with different names for Debug/AdHoc and Release.

You might have noticed that the Bundle identifier of the extension is already partly dynamic, something like com.example.myapp.$(PRODUCT_NAME:rfc1034identifier). Change this to ${PARENT_BUNDLE_ID}.$(PRODUCT_NAME:rfc1034identifier) and add a User-Defined Setting for PARENT_BUNDLE_ID to your Build Settings. The values of PARENT_BUNDLE_ID should be exactly the same as the ones you used in your main target, e.g. com.example.myapp for Release and com.example.myapp-test for Debug and AdHoc.

That's it. You can now Run and Archive your app with extensions whose Bundle identifiers are prefixed with the parent's Bundle identifier.

App Groups entitlements

You might have an extension that shares UserDefaults data or Core Data stores with the main app. In that case you need to have matching App Groups entitlements in both your main app and extensions. Since we have dynamic Bundle identifiers that use different provisioning profiles, we also have to make our App Groups dynamic.

If you don't have App Groups entitlements (or other entitlements) yet, go to the Capabilities tab of your main target and switch on App Groups. Add an app group in the form of group.[bundle identifier], e.g. group.com.example.myapp. This will generate an entitlements file for your project (MyApp.entitlements) and set the Code Signing Entitlements of your Build Settings to something like MyApp/MyApp.entitlements.

Locate the entitlements file in Finder and duplicate it. Change the name of the copy by replacing " Copy" with "Test" (MyAppTest.entitlements). Drag the copy into your project. You should now have two entitlement files in your project. Open the Test entitlements file in Xcode's Property List editor and add "-test" to the value of Item 0 under com.apple.security.application-groups to match it with the Bundle identifier we used for testing, e.g. com.example.myapp-test.

Now go back to the Build Settings and change the Debug and AdHoc values of Code Signing Entitlements to match the file name of the Test entitlements.
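
For reference, the relevant entry in the Test entitlements file ends up looking roughly like this in the raw plist XML (the group name being the hypothetical one from the examples above):

<key>com.apple.security.application-groups</key>
<array>
    <string>group.com.example.myapp-test</string>
</array>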

Repeat all these steps for the Extension target. Xcode will also generate the entitlements file in the extension folder. You should end up with two entitlements files for your main target and two entitlements files for every extension.

The Application Groups for testing need to be added to the provisioning profiles which Xcode will handle automatically for you. It might warn/ask you for this while building.

Conclusion

It might be quite some work to follow all these steps and to get everything to work, but if you use your normal iPhone for development and still want to use or show a stable version of your app at the same time, it's definitely worth doing. And your Beta testers will thank you for it as well.

Another elasticsearch blog post, now about Shield

Gridshore - Thu, 01/29/2015 - 21:13

I just wrote another piece of text on my other blog. This time I wrote about the recently released elasticsearch plugin called Shield. If you want to learn more about securing your elasticsearch cluster, please head over to my other blog and start reading.

http://amsterdam.luminis.eu/2015/01/29/elasticsearch-shield-first-steps-using-java/

Categories: Architecture, Programming

Live Webinar with JetBrains: Software Architecture as Code

Coding the Architecture - Simon Brown - Thu, 01/29/2015 - 14:17

I'm doing a live and free webinar with Trisha Gee and the other fine people over at JetBrains on February 12th at 15:00 GMT. The topic is "software architecture as code" and I'll be talking about/showing how you can create a software architecture model in code, rather than drawing static diagrams in tools such as Microsoft Visio.

Over the past few years, I've been distilling software architecture down to its essence, helping organisations adopt a lightweight style of software architecture that complements agile approaches. This includes doing "just enough" up front design to understand the significant structural elements of the software, some lightweight sketches to communicate that vision to the team, identifying the highest priority risks and mitigating them with concrete experiments. Software architecture is inherently about technical leadership, stacking the odds of success in your favour and ensuring that everybody is heading in the same direction.

But it's 2015 and, with so much technology at our disposal, we're still manually drawing software architecture diagrams in tools like Microsoft Visio. Furthermore, these diagrams often don't reflect the implementation in code, and vice versa. This session will look at why this happens and how to resolve the conflict between software architecture and code through the use of architecturally-evident coding styles and the representation of software architecture models as code.

Please sign up here if you'd like to join us.

Categories: Architecture

Instagram Strategy to Radically Reduce Traffic: Kill all the spambots!

RIP to my fallen robot followers on Instagram, if there's a heaven for robot instagram users, you guys are in there

— alldaychubbyboy (@Allday)

How do you scale to handle increased user traffic? Have less traffic. No, this is not a koan. The best way to deal with traffic is not to have it. 

In a two day span Instagram disappeared 18.9 million users or more than 29 percent of their "followers." Justin Bieber lost 3.5 million followers (15 percent), Kim Kardashian lost 1.3 million followers (5.5 percent), Rihanna lost 1.2 million followers.

Instagram explains this dramatic reckoning was achieved by "removing deactivated spam accounts and accounts that violated its community guidelines." 

In an age when high user counts and tantalizing engagement metrics are more valuable than bitcoins, this can't have been an easy decision, but it's one Instagram made after being bought by Facebook.

Why? Gabe Madway, an Instagram spokesman, explains: “We totally get that it’s uncomfortable for people. The overall goal is we want it to be perceived that the people following you are real.”

Uncomfortable is an understatement. A BuzzFeed article nicely captured some of the anger (some of the examples could be NSFW).

Categories: Architecture

Voxxed interview and 20% discount on my Parleys course

Coding the Architecture - Simon Brown - Mon, 01/26/2015 - 19:21

Voxxed have just published a short interview with me about software architecture, sketches, agile and my "Software Architecture for Developers" training course on Parleys where I answer the following questions:

  1. You're an independent consultant - have your experiences in this (sometimes challenging) field fed into your course?
  2. Who is your course aimed at? How experienced do people need to be?
  3. Do you think a good grasp of agile methodology is important for this course?
  4. Can you give us an example of the kind of sketch you'd use to visualize your architecture?
  5. What's wrong with many of the software architecture sketches that you see?
  6. Diagrams that don't reflect the code - why is this a problem?
  7. A recent article suggested young developers should avoid the agile manifesto - what's your take on this?

You can read the full interview on Voxxed and, this week, the first 100 people to sign up to my Parleys course using this link will get a 20% discount.

Software Architecture for Developers

Categories: Architecture

Paper: Immutability Changes Everything by Pat Helland

I was excited to see that Pat Helland has published another thought-provoking paper: Immutability Changes Everything. If video is more your style, Pat gave a wonderful talk on the same subject at RICON 2012 (video, slides).

It's fun to see how Pat's thinking has evolved over time as he's worked at Tandem Computers (Transaction Monitoring Facility), Amazon, Microsoft (Microsoft Transaction Server and SQL Service Broker), and now Salesforce.

You might have enjoyed some of Pat's other visionary papers: Life beyond Distributed Transactions: an Apostate’s Opinion, The end of an architectural era: (it's time for a complete rewrite), and Idempotence Is Not a Medical Condition.

This new paper is a high level overview of why immutability, the idea that destructive updates are not allowed, is a huge architectural win and because of cheaper disk, RAM, and compute, it's now financially feasible to keep all the things. The key insight is that without data updates, coordination in a distributed system becomes a much simpler problem to solve.

Immutability is an architectural concept that's been gaining steam on several fronts. Facebook is using a declarative immutable programming model in both the model and the view. We are seeing the idea of immutable infrastructure rise in DevOps. Aeron is a new messaging system that uses a persistent log to good advantage. The Lambda Architecture makes use of immutability. Datomic is a database that treats data as a time-ordered series of immutable objects.

If that's of interest, then you'll like the paper.

Categories: Architecture

Stuff The Internet Says On Scalability For January 23rd, 2015

Hey, it's HighScalability time:


Elon Musk: The universe is really, really big  [Gigapixels of Andromeda [4K]]
  • 90: the new 50 for women designers; $656.8 million: 3 months of Uber payouts; $10 billion: all it takes to build the Internet in space; 1 billion: registered WeChat users
  • Quotable Quotes:
    • @antirez: Tech stacks, more replaceable than ever: hardware is better, startups get $$ (few nodes + or - who cares), alternatives countless.
    • Olivio Sarikas: If every Star in this Image was a 2 millimeter Sandcorn you would end up with 1110 kg of Sand!!!!!!!!!
    • Chad Cipoletti: In even simpler terms, we see brands as people.
    • @timoreilly: Love it: “We need a stack, not a pile” says @michalmigurski.
    • @neha: I would be very happy to never again see a distributed systems paper eval on a workload that would fit on one machine.
    • @etherealmind: OH: "oh yeah, the extra 4 PB of storage is being installed today. Its about 4 racks of gear".
    • @lintool: Andrew Moore: Google's ecommerce platform ingests 100K-200K events per second continuously. 

  • Programming as myth building. Myths to Live By: The true symbol does not merely point to something else. It contains in itself a structure which awakens our consciousness to a new awareness of the inner meaning of life and of reality itself. A true symbol takes us to the center of the circle, not to another point on the circumference.

  • Not shocking at all: "We found the majority of catastrophic failures could easily have been prevented by performing simple testing on error handling code...A majority (77%) of the failures require more than one input event to manifest, but most of the failures(90%) require no more than 3." Really, who has the time? More on human nature in Simple Testing Can Prevent Most Critical Failures: An Analysis of Production Failures in Distributed Data-Intensive Systems.

  • Let simplicity fail before climbing the complexity ladder. Scalability! But at what COST?: "Big data systems may scale well, but this can often be just because they introduce a lot of overhead. Rather than making your computation go faster, the systems introduce substantial overheads which can require large compute clusters just to bring under control. In many cases, you’d be better off running the same computation on your laptop." But notice the kicker: "it took some work for parallel union-find." Replacing smart work with brute force is often the greater win. What are a few machine cycles between friends?

  • Programming is the ultimate team sport, so Why are Some Teams Smarter Than Others? The smartest teams were distinguished by three characteristics. First, their members contributed more equally to the team’s discussions. Second, their members were better at reading complex emotional states. Third, teams with more women outperformed teams with more men.

  • WhatsApp doesn't understand the web. Interesting design and discussions. Using proprietary Chrome APIs is a tough call, but this is more perplexing: "Your phone needs to stay connected to the internet for our web client to work." Is this for consistency reasons? To make sure the phone and the web stay in sync? Is it for monetization reasons? It does create a closed proxy that effectively prevents monetization leaks. It's tough to judge a solution without understanding the requirements, but there must be something compelling to impose so many limitations.

  • Roman Leventov’s analysis of Redis data structures. In which Salvatore 'antirez' Sanfilippo addresses point by point criticisms of Redis' implementation. People love Redis, and part of that love has to come from what a good guy antirez is. Here he doesn't go all black diamond alpha nerd in the face of a challenge. He admits where things can be improved. He explains design decisions in detail. He advances the discussion with grace, humility, and smarts. A worthy model to emulate.

Categories: Architecture

As a DBA Expert, which database would you choose?

This is a guest post by Jenny Richards, a professional database administrator who is currently employed at Remote DBA.

In the world of databases, there is no single silver bullet that fits every gun. How you select the database to use depends heavily on every other factor of your work:

  • Who are you and what do you do? 
  • What is your end goal – what are you working to achieve?
  • How much data do you intend to store?
  • On what language and OS platforms do your applications run?
  • What is your budget?
  • Will you also require data warehousing, decision support systems and/or BI?
Categories: Architecture

Learn from my pain - 5 Lessons from Ello's Adventures in Rapid Scaling

Within one week Ello went from thousands of sessions a day to a few million sessions a day. Mike Pack wrote a great article sharing what they’ve learned: 5 Early Lessons from Rapid, High Availability Scaling with Rails.

Some of their scaling challenges: quantity of data, team size, DNS, bot prevention, responding to users, inappropriate content, and other forms of caching. What did they learn?

  1. Move the graph. User relationships were implemented on a standard Rails stack using Heroku and Postgres. The relationships table became the bottleneck. Solution: denormalize the social graph and move hot data into Redis. Redis is used for speed and Postgres is used for durability. Lesson: know the core pillar that supports your core offering and make it work.

  2. Create indexes early, or you're screwed. There's a camp that says to create indexes only when they are needed. They are wrong. The lack of btree indexes kills query performance. Forget a unique index and your data becomes corrupted. Once the damage is done it's hard to add unique indexes later: the data has to be cleaned up first, and indexes take a long time to build when there's a lot of data. (See the index sketch just after this list.)

  3. Sharding is cool, but not that cool. Shard all the things only after you've tried vertically scaling as much as possible. Sharding caused a lot of pain. Creating a covering index from the start and adding more RAM so data could be served from memory, not from disk, would have saved a lot of time and stress as the system scaled.

  4. Don't create bottlenecks, or do. Every new user automatically followed a system user that was used for announcements, etc. Scaling problems that would have been months down the road hit quickly as any write to the system user caused a write amplification of millions of records. The lesson here is not what you may think. While scaling to meet the challenge of the system user was a pain, it made them stay ahead of the scaling challenge. Lesson: self-inflict problems early and often.

  5. It always takes 10 times longer. All the solutions mentioned take much longer to implement than you might think. Early estimates of a couple days soon give way to the reality of much longer time hits. Simply moving large amounts of data can take days. Adding indexes to large amounts of data takes time. And with large amounts of data problems tend to happen as you get to the larger data sizes which means you need to apply a fix and start over. 
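
On the indexing lesson: Postgres can at least build an index on a live table without blocking writes. Here is a minimal sketch; the table and column names are made up for illustration, not Ello's actual schema:

CREATE UNIQUE INDEX CONCURRENTLY index_relationships_on_owner_and_subject
    ON relationships (owner_id, subject_id);

Note that CONCURRENTLY cannot run inside a transaction block, so in a Rails migration you'd need disable_ddl_transaction!.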

This full article is excellent and is filled with much more detail that makes it well worth reading.

Categories: Architecture

Continuous Delivery across multiple providers

Xebia Blog - Wed, 01/21/2015 - 13:04

Over the last year, three of the four customers I worked with had a similar challenge with their environments. In different variations, they all had their environments set up across separate domains, ranging from physically separated on-premise networks to environments running across different hosting providers managed by different parties.

Regardless of the reasoning behind these kinds of setups, it's a situation where the continuous delivery concepts really add value. The stereotypical problems that exist with manual deployment and testing practices tend to get amplified when they occur in separated domains. Things get even worse when you add more parties to the mix (like external application developers). Sticking to doing things manually is a recipe for disaster, unless you enjoy going through extensive procedures every time you want to do anything in any of ‘your’ environments. And if you've outsourced your environments to an external party, you probably don't want to have to (re)hire a lot of people just so you can communicate with your supplier.

So how can continuous delivery help in this situation? By automating your provisioning and deployments you make deploying your applications, if nothing else, repeatable and predictable. Regardless of where they need to run.

Just automating your deployments isn't enough, however. A big question that remains is who does what, a question most likely backed by a lengthy contract. Agreements between all the parties are meant to provide an answer to that very question: a development partner develops, an outsourcing partner handles the hardware, etc. But nobody handles bringing everything together...

The process of automating your steps already provides some help with this problem. In order to automate, you need some form of agreement on how to provide input for the tooling. This at least clarifies what the various parties need to produce. It also clarifies what the result of a step will be. This removes some of the fuzziness from the process. Things like whether the JVM is part of the OS or part of the middleware become clear. But not everything is that clearcut. It's in the parts of the puzzle where pieces actually come together that things turn gray. A single tool may need input from various parties. Here you need to resist the common knee-jerk reaction to shield said tool from other people with procedures and red tape. Instead, provide access to those tools to all relevant parties and handle your separation of concerns through a reliable access mechanism. Even then there might be some parts that can't be used by just a single party and in that case, *gasp*, people will need to work together.

What this results in is an automated pipeline that will keep your environments configured properly and allow applications to be deployed onto them when needed, within minutes, wherever they may run.

[Diagram: a continuous delivery pipeline spanning multiple provider domains, orchestrated by XL Release, with XL Deploy and Puppet in each domain]

The diagram above shows how we set this up for one of our clients. Using XL Deploy, XL Release and Puppet as the automation tooling of choice.

In the first domain we have a git repository to which developers commit their code. A Jenkins build is used to extract this code, build it and package it in such a way that the deployment automation tool (XL Deploy) understands. It’s also kind enough to make that package directly available in XL Deploy. From there, XL Deploy is used to deploy the application not only to the target machines but also to another instance of XL Deploy running in the next domain, thus enabling that same package to be deployed there. This same mechanism can then be applied to the next domain. In this instance we ensure that the machines we are deploying to are consistent by using Puppet to manage them.

To round things off we use a single instance of XL Release to orchestrate the entire pipeline. A single release process is able to trigger the build in Jenkins and then deploy the application to all environments spread across the various domains.

A setup like this lowers deployment errors that come with doing manual deployments and cuts out all the hassle that comes with following the required procedures. As an added bonus your deployment pipeline also speeds up significantly. And we haven’t even talked about adding automated testing to the mix…

Sponsored Post: Couchbase, VividCortex, Internap, SocialRadar, Campanja, Transversal, MemSQL, Hypertable, Scalyr, FoundationDB, AiScaler, Aerospike, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Senior DevOps Engineer, SocialRadar. We are a VC funded startup based in Washington, D.C. operated like our West Coast brethren. We specialize in location-based technology. Since we are rapidly consuming large amounts of location data and monitoring all social networks for location events, we have systems that consume vast amounts of data that need to scale. As our Senior DevOps Engineer you’ll take ownership over that infrastructure and, with your expertise, help us grow and scale both our systems and our team as our adoption continues its rapid growth. Full description and application here.

  • Linux Web Server Systems Engineer, Transversal. We are seeking an experienced and motivated Linux System Engineer to join our Engineering team. This new role is to design, test, install, and provide ongoing daily support of our information technology systems infrastructure. As an experienced Engineer you will have comprehensive capabilities for understanding hardware/software configurations that comprise system, security, and library management, backup/recovery, operating computer systems in different operating environments, sizing, performance tuning, hardware/software troubleshooting and resource allocation. Apply here.

  • Campanja is an Internet advertising optimization company born in the cloud, and today we are one of the Nordics' bigger AWS consumers. The time has come for us to embrace the next generation of cloud infrastructure. We believe in immutable infrastructure, container technology and micro services; we hope to use PaaS when we can get away with it but consume at the IaaS layer when we have to. Please apply here.

  • UI Engineer, AppDynamics. Founded in 2008 and led by proven innovators, AppDynamics is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data, AppDynamics. The leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of the software that manages application architectures. Apply here.
Fun and Informative Events
  • Sign Up for New Aerospike Training Courses.  Aerospike now offers two certified training courses; Aerospike for Developers and Aerospike for Administrators & Operators, to help you get the most out of your deployment.  Find a training course near you. http://www.aerospike.com/aerospike-training/
Cool Products and Services
  • See How PayPal Manages 1B Documents & 10TB Data with Couchbase. This presentation showcases PayPal's usage of Couchbase within its architecture, highlighting linear scalability, availability, flexibility & extensibility.

  • VividCortex is a hosted (SaaS) database performance management platform that provides unparalleled insight and query-level analysis for both MySQL and PostgreSQL servers at micro-second detail. It's not just another tool to draw time-series charts from status counters. It's deep analysis of every metric, every process, and every query on your systems, stitched together with statistics and data visualization. Start a free trial today with our famous 15-second installation.

  • SQL for Big Data: Price-performance Advantages of Bare Metal. When building your big data infrastructure, price-performance is a critical factor to evaluate. Data-intensive workloads with the capacity to rapidly scale to hundreds of servers can escalate costs beyond your expectations. The inevitable growth of the Internet of Things (IoT) and fast big data will only lead to larger datasets, and a high-performance infrastructure and database platform will be essential to extracting business value while keeping costs under control. Read more.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • Aerospike demonstrates RAM-like performance with Google Compute Engine Local SSDs. After scaling to 1 M Writes/Second with 6x fewer servers than Cassandra on Google Compute Engine, we certified Google’s new Local SSDs using the Aerospike Certification Tool for SSDs (ACT) and found RAM-like performance and 15x storage cost savings. Read more.

  • FoundationDB 3.0. 3.0 makes the power of a multi-model, ACID transactional database available to a set of new connected device apps that are generating data at previously unheard of speed. It is the fastest, most scalable, transactional database in the cloud - A 32 machine cluster running on Amazon EC2 sustained more than 14M random operations per second.

  • Diagnose server issues from a single tab. The Scalyr log management tool replaces all your monitoring and analysis services with one, so you can pinpoint and resolve issues without juggling multiple tools and tabs. It's a universal tool for visibility into your production systems. Log aggregation, server metrics, monitoring, alerting, dashboards, and more. Not just “hosted grep” or “hosted graphs,” but enterprise-grade functionality with sane pricing and insane performance. Trusted by in-the-know companies like Codecademy – try it free! (See how Scalyr is different if you're looking for a Splunk alternative.)

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Cloud deployable. Free instant trial, no sign-up required.  http://aiscaler.com/

  • ManageEngine Applications Manager: Monitor physical, virtual and cloud applications.

  • www.site24x7.com: Monitor end-user experience from a global monitoring network.

If any of these items interests you, there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

Try is free in the Future

Xebia Blog - Mon, 01/19/2015 - 09:40

Lately I have seen a few developers consistently use a Try inside a Future in order to make error handling easier. Here I will investigate whether this has any merit, or whether a Future on its own offers enough error handling.

If you look at the following code, there is nothing a Try can supply that a Future can't:

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.{Await, Future, Awaitable}
import scala.concurrent.duration._
import scala.util.{Try, Success, Failure}

object Main extends App {

  // Happy Future
  val happyFuture = Future {
    42
  }

  // Bleak future
  val bleakFuture = Future {
    throw new Exception("Mass extinction!")
  }

  // We would want to wrap the result into a hypothetical http response
  case class Response(code: Int, body: String)

  // This is the handler we will use
  def handle[T](future: Future[T]): Future[Response] = {
    future.map {
      case answer: Int => Response(200, answer.toString)
    } recover {
      case t: Throwable => Response(500, "Uh oh!")
    }
  }

  {
    val result = Await.result(handle(happyFuture), 1 second)
    println(result)
  }

  {
    val result = Await.result(handle(bleakFuture), 1 second)
    println(result)
  }
}

After giving it some thought, the only situation where I can imagine a Try being useful in conjunction with a Future is when awaiting a Future but not wanting to deal with error situations yet. In practice, though, the times I would be awaiting a future are very few. But when needed, something like this might do:

object TryAwait {
  def result[T](awaitable: Awaitable[T], atMost: Duration): Try[T] = {
    Try {
      Await.result(awaitable, atMost)
    }
  }
}
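
For illustration, a minimal usage sketch, assuming the imports and the bleakFuture defined above:

// Pattern match on the Try instead of catching the exception
// that Await.result would otherwise throw
TryAwait.result(bleakFuture, 1.second) match {
  case Success(value) => println(s"Got: $value")
  case Failure(error) => println(s"Failed: ${error.getMessage}")
}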

If you do feel that using a Try inside a Future adds value to your codebase, please let me know.

Meteor

Xebia Blog - Sun, 01/18/2015 - 12:11

Did you ever use AngularJS as a frontend framework? Then you should definitely give Meteor a try! Where AngularJS is powerful as just a client framework, Meteor is great as a full-stack framework. That means you write your code in one language as if there were no separate backend and frontend at all. In fact, you get an Android and iOS client for free. Meteor is so incredibly simple that you are productive from the beginning.

Where Meteor kicks Angular

One of the killer features of Meteor is that you'll have a shared code base for frontend and backend. In the next code snippet, you'll see a file shared by backend and frontend:

// Collection shared and synchronized across client, server and database
Todos = new Mongo.Collection('todos');

// Shared validation logic; the errors object (not the todo itself)
// carries the message, so both sides can report it
validateTodo = function (todo) {
  var errors = {};
  if (!todo.title)
    errors.title = "Please fill in a title";
  return errors;
}

Can you imagine how neat the code above is?


With one codebase, you get the full stack!

  1. Both the backend file and the frontend file can access and query the Todos collection. Meteor is responsible for syncing the todos: even when another user adds an item, it will be visible in your client directly. Meteor accomplishes this with a client-side Mongo implementation (MiniMongo).
  2. You can write validation rules once, and they are executed both on the frontend and on the backend. So you can give your users quick feedback about invalid input, while also guaranteeing that no invalid data is processed by the backend (when someone bypasses the client). And all this without duplicated code (see the sketch below).
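
To make this concrete, here is a minimal sketch of how the shared validateTodo could back a Meteor method. The addTodo method is hypothetical and not part of the original snippet:

// Hypothetical Meteor method reusing the shared validateTodo;
// it runs in the client simulation and again on the server.
Meteor.methods({
  addTodo: function (todo) {
    var errors = validateTodo(todo);
    if (errors.title)
      throw new Meteor.Error(400, errors.title);
    return Todos.insert(todo);
  }
});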

Another killer feature of Meteor is that it works out of the box, and it's easy to understand. Angular can be a bit overwhelming: you have to learn concepts like directives, services, factories, filters, isolated scopes, and transclusion, and for some initial scaffolding you need to know Grunt, Yeoman, and so on. With Meteor, every developer can create, run and deploy a full-stack application within minutes. After installing Meteor you can run your app within seconds:

$ curl https://install.meteor.com | /bin/sh
$ meteor create dummyapp
$ cd dummyapp
$ meteor
$ meteor deploy dummyapp.meteor.com

Meteor dummy application

Another nice aspect of Meteor is that it uses DDP, the Distributed Data Protocol. The team invented the protocol and is heavily promoting it as "REST for websockets". It is a simple, pragmatic protocol for delivering live updates as data changes in the backend, and it all works out of the box. This talk walks you through its concepts. The result is that if you change data on one client, it is updated immediately on all other clients.
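
As a rough illustration (the documents below are made up, but the message types come from the DDP specification), a subscription is just JSON messages flowing over a websocket:

// Client subscribes to a publication:
{"msg": "sub", "id": "1", "name": "todos", "params": []}

// Server sends the matching documents, then pushes live updates:
{"msg": "added", "collection": "todos", "id": "xyz1", "fields": {"title": "Buy milk"}}
{"msg": "changed", "collection": "todos", "id": "xyz1", "fields": {"title": "Buy oat milk"}}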

And there is so much more, like...

  1. Latency Compensation. On the client, Meteor prefetches data and simulates models to make it look like server method calls return instantly.
  2. Meteor is open source and integrates with existing open source tools and frameworks.
  3. Services (like an official package server and a build farm).
  4. Command line tools.
  5. Hot deploys.

Where Meteor falls short

Yes, Meteor is not the answer to all your problems. The reason I'll still choose Angular over Meteor for my professional work is that Angular's view framework rocks: it makes it easy to structure your client code into testable units and connect them via dependency injection. With Angular you can separate your HTML from your JavaScript. With Meteor, your JavaScript contains HTML elements (because its UI library is based on Handlebars), which makes testing harder, and large projects will become unstructured very quickly.

Another flaw emerges if your project already has a backend. When you choose Meteor, you choose their full stack; that means Mongo as the database and Node.js as the backend. Although you can create powerful applications with it, Meteor doesn't (easily) allow you to change this stack.

Under the hood

Meteor consists of several subprojects; it is less a single library than a stack: a standard set of core packages that are designed to work well together:

Components used by Meteor

  1. To make Meteor reactive, it includes the Blaze and Tracker components. Blaze is heavily based on Handlebars.
  2. The DDP component is a new protocol, designed by the Meteor team, for modern client-server communication.
  3. Livequery and the full-stack database drivers take all the pain of data synchronization between the database, backend and frontend away! You don't have to think about it anymore.
  4. The Isobuild package is a unified build system for browser, server and mobile.

Conclusion

If you want to create a website or a mobile app with a backend in no time, and get lots of functionality out of the box, Meteor is a very interesting tool. If you want more control or need to connect to an existing backend, then Meteor is probably less suitable.

You can watch this presentation I recently gave to go along with the article.

Stuff The Internet Says On Scalability For January 16th, 2015

Hey, it's HighScalability time:


First people to free-climb the Dawn Wall of El Capitan using nothing but stone knives and bearskins (pics). 
  • $3.3 trillion: mobile revenue in 2014; ~10%: the difference between a good SpaceX landing and a crash; 6: hours for which quantum memory was held stable 
  • Quotable Quotes:
    • @stevesi: "'If you had bought the computing power found inside an iPhone 5S in 1991, it would have cost you $3.56 million.'"
    • @imgurAPI: Where do you buy shares in data structures? The Stack Exchange
    • @postwait: @xaprb agreed. @circonus does per-second monitoring, but *retain* one minute for 7 years; that plus histograms provides magical insight.
    • @iamaaronheld: A single @awscloud datacenter consumes enough electricity to send 24 DeLoreans back in time
    • @rstraub46: "We are becoming aware that the major questions regarding technology are not technical but human questions" - Peter Drucker, 1967
    • @Noahpinion: Behavioral economics IS the economics of information. via @CFCamerer 
    • @sheeshee: "decentralize all the things" (guess what everybody did in the early 90ies & why we happily flocked to "services". ;)
    • New Clues: The Internet is no-thing at all. At its base the Internet is a set of agreements, which the geeky among us (long may their names be hallowed) call "protocols," but which we might, in the temper of the day, call "commandments."

  • Can't agree with this. We Suck at HTTP. HTTP is just a transport. It should only deliver transport-related error codes. Application errors belong in application messages, not spread all over the stack. 

  • Apple has lost the functional high ground. It's funny how microservices are hot, and one of their wins is the independent evolution of services, while Apple's software releases now tie everything together. It's a strategy tax. The watch just extends the rigidity of the structure. But this is a huge upgrade: Apple is moving to a cloud multi-device sync model, which is a complete revolution. It will take a while for all this to shake out. 

  • This is so cool. I'd never heard of Cornelis Drebbel (1620s) or his amazing accomplishments before. The Vulgar Mechanic and His Magical Oven: his oven is one of the earliest devices that gave human control away to a machine, and thus can be seen as a forerunner of the smart machine, the self-deciding automaton, the thinking robot.

  • Do you think there's a DevOps identity crisis, as Baron Schwartz suggests? Does DevOps have a messaging and positioning problem? Is DevOps just old wine in a new skin? Is DevOps made up of echo chambers? I don't know, but it's an interesting analysis by Baron.

  • How does Hyper-threading double your CPU throughput?: If you are optimizing for higher throughput, that may be fine. But if you are optimizing for response time, then you may consider running with HT turned off.

  • Underdog.io shares what's inside Datadog's tech stack: Python, JavaScript and Go; the front end happens in D3 and React; data stores are Kafka, Redis, Cassandra, S3, Elasticsearch and PostgreSQL; DevOps runs on Chef, Capistrano, Jenkins, Hubot, and others.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Bandita Joarder on How Presence is Something You Can Learn

Bandita is one of the most amazing leaders in the technology arena.

She’s not just technical, but she also has business skills, and executive presence.

But she didn’t start out that way.

She had to learn presence from the school of hard knocks.   Many people think presence is something that either you have or you don’t.

Bandita proves otherwise.

Here is a guest post by Bandita Joarder on how presence is something you can learn:

Presence is Something You Can Learn

It’s a personal story.  It’s an empowering story.  It’s a story of a challenge and a change, and how learning the power of presence helped Bandita move forward in her career.

Enjoy.

Categories: Architecture, Programming