
Hands-on Test Automation Tools session wrap-up - Part 1

Xebia Blog - Sun, 09/21/2014 - 15:57

Last week we had our first Hands-on Test Automation sessions.
Developers and Testers were challenged to show and tell their experiences in Test Automation.
That resulted in lots of in-depth discussions and hands-on Test Automation tool shoot-outs.

In this blog post we'll share the outcome of the different sessions, like the famous Cucumber vs. FitNesse debate.

Stay tuned for upcoming updates!

Test Automation Frameworks

The following Test Automation Frameworks were demoed and discussed:

1. FitNesse

FitNesse is a test management and execution tool.
You'll have to write/use fixture code if you want to use Selenium / WebDriver, web services and databases in your tests.

Pros and Cons
You can drill down nicely into test results.
You can make use of scenarios and scenario libraries to make test automation steps reusable.
But refactoring is hard when scenarios are used extensively, since there is no IDE support (yet).
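
To make "fixture code" concrete: for a SLiM decision table, a fixture is just a JVM class whose setters and query methods FitNesse calls reflectively. A minimal sketch in Scala (the Divide fixture below is the classic division example, not code from the sessions):

// Hypothetical SLiM decision-table fixture. FitNesse instantiates the
// class by name, calls a setter per input column and then the query
// method (quotient) to compare against the expected output column.
class Divide {
  private var numerator: Double = 0.0
  private var denominator: Double = 1.0

  def setNumerator(n: Double): Unit = { numerator = n }
  def setDenominator(d: Double): Unit = { denominator = d }
  def quotient(): Double = numerator / denominator
}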

2. Cucumber

Cucumber is a specification tool which describes how software should behave.
You'll have to write/use step definitions if you want to use Selenium / WebDriver, web services and databases in your tests.

Pros and Cons
Cucumber forces you to write specifications/tests as scenarios (behaviour in human-readable language).
You can drill down into test results, but you'll need reporting libraries like Cucumber Reporting.
We recommend using IntelliJ IDEA with the Cucumber plugin since it supports Cucumber seamlessly.
Refactoring becomes less problematic since you're using an IDE.
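
To make "step definitions" concrete, here is a minimal sketch using the cucumber-scala DSL together with Selenium WebDriver (the scenario, URL and element locators are invented for illustration; they are not from the sessions):

import cucumber.api.scala.{EN, ScalaDsl}
import org.openqa.selenium.By
import org.openqa.selenium.firefox.FirefoxDriver

// Each step definition binds a human-readable scenario line to automation code.
class LoginSteps extends ScalaDsl with EN {

  lazy val driver = new FirefoxDriver()

  Given("""^I am on the login page$""") { () =>
    driver.get("http://localhost:8080/login")
  }

  When("""^I log in as "([^"]*)"$""") { (user: String) =>
    driver.findElement(By.id("username")).sendKeys(user)
    driver.findElement(By.id("login-button")).click()
  }

  Then("""^I should see the dashboard$""") { () =>
    assert(driver.getTitle.contains("Dashboard"))
  }
}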

3. Selenium / WebDriver IDE

Selenium / WebDriver automates human interactions with a web browser.
With the Selenium IDE you can record and play back your tests in Firefox.

Pros and Cons
It can get you started very quickly. You can record and play back your test scripts without writing any code.
Reusability of test automation code is not possible. You'll need to export your scripts into an IDE to introduce reusability.
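
After export, reusability usually comes from wrapping page interactions in page objects. A rough sketch in Scala against the Selenium Java bindings (the SearchPage class, URL and locators are invented for illustration):

import org.openqa.selenium.{By, WebDriver}
import org.openqa.selenium.firefox.FirefoxDriver

// A page object: one reusable home for all interactions with a single page,
// instead of the copy-pasted steps a recorded script gives you.
class SearchPage(driver: WebDriver) {
  def open(): Unit = driver.get("http://example.com/search")
  def searchFor(term: String): Unit = {
    driver.findElement(By.name("q")).sendKeys(term)
    driver.findElement(By.name("go")).click()
  }
}

object SearchSmokeTest extends App {
  val driver = new FirefoxDriver()
  try {
    val page = new SearchPage(driver)
    page.open()
    page.searchFor("test automation")
  } finally {
    driver.quit() // always close the browser, even when a step fails
  }
}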

Must-haves in Test Automation Tools

During the parallel sessions we collected the following must-haves for test automation tools.

Testers and Developers becoming best friends

When developers do not feel comfortable with the test automation tool, testers will try to fill the gap all by themselves. Most of the time these efforts result in hard-to-maintain test automation code. At some point test automation will become a bottleneck in Continuous Delivery. When picking a test automation tool, consider each other's needs and pick a tool together. Feeling comfortable writing and maintaining test automation code is really important to make test automation beneficial.

Separating What from How

Tools like FitNesse and Cucumber were designed to separate the What from the How in test automation. When you combine both in those tools, you'll end up getting lost in details and you'll lose focus on what you are testing.
Use tools like FitNesse and Cucumber to describe What you are testing, and put all details about How you are testing in test automation code (like fixture code and step definitions).

Other interesting tools
  • Thucydides: Reporting tests and results (including functional coverage)
  • Vagrant: Provisioning System Under Test instances
  • Liquibase: Treating database schema changes as 'code'

Stay tuned for upcoming updates!

Stuff The Internet Says On Scalability For September 19th, 2014

Hey, it's HighScalability time:


Galactic bolt-hole or supermassive black hole, weighing as much as 21 million suns?
  • Quotable Quotes:
    • @debuggist: Chief takeaway from #velocityconf for me: failure happens so monitor for the ones that are important albeit in systems or culture & fix them
    • @Carnage4Life: The real tech bubble is valuations of Google, Facebook & Twitter are inflated by app install ads from unprofitable startups 
    • Jay Parikh: Android has 2x as many users as iOS. However, iOS average revenue per user is 4x higher than Android. 
    • Joe Armstrong: We’ve made a mess. Need to reverse entropy. Quantum mechanics sets limits to ultimate speed of computation. We need Math. Abolish names and places. Build the condenser. Make low-power computers - no net environmental damage.
    • @reneritchie: iOS 8 is a beefy update. The internet is feeling the strain of millions of downloads. If it’s slow or stuttering, just give it some time.
    • @vishyp: Quip is big on disconnected clients and clients as write-thru caches. Like it! Uses C++ for x-platform. @btaylor #atscale2014
    • @csabacsoma: Instagram: we use Postgres for almost everything now, some Redis and Memcached #AtScale2014
    • @rfberry: "To maximize throughput, we needed an integrated approach to backpressure." #AtScale2014
    • @sharonw: I asked this Facebook / Instagram panel about image upload. Key changes: retry aggressively, and resize and encode on device. #AtScale2014
    • Igor Zaika: Most of the developers who work on Microsoft Office are younger than the codebase. 
    • Chris Marra: There are about 10k different device models accessing Facebook. Designing for high-end smartphones does not cut it.
    • @Carnage4Life: Apple spends $100m on a U2 album you don't want. Microsoft spends that on ads you don't like. Amazon spends it on free shipping #Marketing
    • @beerops: Laziness as a multiplier: train other people to do what you can do to remove yourself as a bottleneck #velocityconf
    • @xaprb: "Failure is a feature of complex systems." - #velocityconf
    • @dalmaer: "Our mobile app is a write through cache (SQLite) to the source of truth (MySQL on AWS)" -- @btaylor #AtScale2014
    • @herminghaus: @BenedictEvans Designed an 11.4TB patent retrieval system in 1993 with slow WORM robots. Cost $140m. Now <$1000.- at BestBuy.
    • @BenedictEvans: You can't ask people to decide on a trade-off when they have experience of one side but not the other.

  • Caching at Scale. There's a need to better manage caching, especially under failure conditions. Solutions are generally in the form of a proxy layer above memcached. Along these lines Box created Tron, Twitter created TwemProxy, and Facebook created a value meal in McRouter. Database people have always countered: why have a separate cache, just build a cache into the database? This hasn't worked for various reasons, mostly because a database always cares more about being a good database than about being a good cache. Vitess wants to fix that. Vitess is an open-source system written in Go, used at YouTube, that challenges the paradigm of treating caching as a separate layer by directly addressing the issues of database scalability and by modifying the handling of SQL queries.

  • Talk about your Chaos Godzilla. Facebook Turned Off Entire Data Center to Test Resiliency. Before flipping that switch there must have been a little pause, perhaps thinking this wouldn't be prudent, but damn the torpedoes and full speed ahead. Apparently some issues were found, but it went fairly smoothly. Huzzah for the chutzpah.

  • Best LAN party ever? Researchers twist four radio beams together to achieve high data transmission speeds. The researchers reached data transmission rates of 32 gigabits per second across 2.5 meters of free space in a basement lab.

  • This is an understatement. iOS 8, thoroughly reviewed. An amazing job. A big takeaway for me is Apple is systematically removing reasons not to buy an iPhone. Bigger phone. Check. Configurable keyboard. Check. Extensions that display in the today view and allow app cooperation. Check. Another takeaway is Apple is abandoning simplicity for configurability, which is embracing complexity, which is a potential experience killer.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Installing Oracle on Docker (Part 1)

Xebia Blog - Fri, 09/19/2014 - 12:56

I’ve spent Xebia’s Innovation Day last August experimenting with Docker in a team with two of my colleagues and a guest. We thought Docker sounded really cool, especially if your goal is to run software that doesn’t require lots of infrastructure and can be easily installed, e.g. because it runs from a jar file. We wondered, however, what would happen if we tried to run enterprise software, like an Oracle database: software that is notoriously difficult to install and choosy about the infrastructure it runs on. Hence our aim for the day: install an Oracle database on CoreOS and Docker.

We chose CoreOS because of its small footprint and the fact that it is easily installed in a VM using Vagrant (see https://coreos.com/docs/running-coreos/platforms/vagrant/). We used the default Vagrantfile and CoreOS files with one modification: $vb_memory = 2024 in config.rb, which gives the VM enough memory for Oracle’s pre-installer to run. The config files we used can be found here: https://github.com/jvermeir/OraDocker/

Starting with a default CoreOS install we then implemented the steps described here: http://www.tecmint.com/oracle-database-11g-release-2-installation-in-linux/.
Below is a snippet from the first version of our Dockerfile (tag: b0a7b56).
FROM centos:centos6
# Step 1: Setting Hostname
ENV HOSTNAME oracle.docker.xebia.com
# Step 2
RUN yum -y install wget
RUN wget --no-check-certificate https://public-yum.oracle.com/public-yum-ol6.repo -O /etc/yum.repos.d/public-yum-ol6.repo
RUN wget --no-check-certificate https://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6 -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
RUN yum -y install oracle-rdbms-server-11gR2-preinstall

Note that this takes a while because the pre-installer downloads a number of CentOS packages that are missing in CoreOS.
Execute this in a shell:
vagrant up
vagrant ssh core-01
cd ~/share/docker
docker build -t oradocker .

This seemed like a good time to do a commit to save our work in Docker:
docker ps # note the container-id and substitute for the last parameter in the line below this one.
docker commit -m "executed pre installer" 07f7389c811e janv/oradocker

At this point we studiously ignore some of the advice listed under ‘Step 2’ in Tecmint’s install manual, namely adding the HOSTNAME to /etc/sysconfig/network, allowing access to the xhost (what would anyone want that for?) and mapping an IP address to a hostname in /etc/hosts (setting the hostname through ‘ENV HOSTNAME’ had no real effect as far as we could tell). We tried that, but it didn’t seem to work. Denying reality and substituting our own, we just ploughed on…

Next we added commands to the Dockerfile that create the oracle user, copy the relevant installer files and unzip them. Docker starts by sending a build context to the Docker daemon. This takes quite some time because the Oracle installer files are large. There’s probably some way to avoid this, but we didn’t look into it. Unfortunately Docker copies the installer files each time you run docker build -t … only to conclude later on that nothing changed.

The next version of our Dockerfile sort of works in the sense that it starts up the installer. The installer then complains about missing swap space. We fixed this temporarily at the CoreOS level by running the following commands:
sudo su -
swapfile=$(losetup -f)
truncate -s 1G /swap
losetup $swapfile /swap
mkswap $swapfile
swapon $swapfile

found here: http://www.spinics.net/lists/linux-btrfs/msg28533.html
This works but it doesn’t survive a reboot.
Now the installer continues only to conclude that networking is configured improperly (one of those IgnoreAndContinue decisions coming back to bite us):
[FATAL] PRVF-0002 : Could not retrieve local nodename

For this to work you need to change /etc/hosts which our Docker version doesn’t allow us to do. Apparently this is fixed in a later version, but we didn’t get around to testing that. And maybe changing /etc/sysconfig/network is even enough, but we didn't try that either.

The latest version of our work is on GitHub (tag: d87c5e0). The repository does not include the Oracle installer files. If you want to try for yourself you can download the files here: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index-092322.html and adapt the Dockerfile if necessary.

Below is a list of ToDo’s:

  1. Avoid copying large installer files if they’re not gonna be used anyway.
  2. Find out what should happen if you call ‘ENV HOSTNAME oracle.docker.xebia.com’.
  3. Make swap file setting permanent on CoreOS.
  4. Upgrade Docker version so we can change /etc/hosts

All in all this was a useful day, even though the end result was not a running database. We hope to continue working on the ToDo’s in the future.

Management Innovation is at the Top of the Innovation Stack

Management Innovation is at the top of the Innovation Stack.  

The Innovation Stack includes the following layers:

  1. Management Innovation
  2. Strategic Innovation
  3. Product Innovation
  4. Operational Innovation

While there is value in all of the layers, some layers of the Innovation Stack are more valuable than others in terms of overall impact.  I wrote a post that walks through each of the layers in the Innovation Stack.

I think it’s often a surprise for people that Product or Service Innovation is not at the top of the stack.   Many people assume that if you figure out the ultimate product, then victory is yours.

History shows that’s not the case, and that Management Innovation is actually where you create a breeding ground for ideas and people to flourish.

Management Innovation is all about new ways of mobilizing talent, allocating resources, and building strategies.

If you want to build an extreme competitive advantage, then build a Management Innovation advantage.  Management Innovation advantages are tough to copy or replicate.

If you’ve followed my blog, you know that I’m a fan of extreme effectiveness.   When it comes to innovation, I’ve had the privilege and pleasure of playing a role in lots of types of innovation over the years at Microsoft.   If I look back, the most significant impact has always been in the area of Management Innovation.

It’s the trump card.

Categories: Architecture, Programming

Full width iOS Today Extension

Xebia Blog - Thu, 09/18/2014 - 10:57

Apple decided that the new Today Extensions of iOS 8 should not be the full width of the notification center. They state in their documentation:

Because space in the Today view is limited and the expected user experience is quick and focused, you shouldn’t create a widget that's too big by default. On both platforms, a widget must fit within the width of the Today view, but it can increase in height to display more content.

Source: https://developer.apple.com/library/ios/documentation/General/Conceptual/ExtensibilityPG/NotificationCenter.html

This means that developers who create Today Extensions can only use a width of 273 points instead of the full 320 points (for iPhones before the iPhone 6) and have a left offset of the remaining 47 points. Though with the release of iOS 8, several apps like Dropbox and Evernote do seem to have a Today Extension that uses the full width. This raises the question whether Apple noticed this and how it got through the approval process. Does Apple not care?

Should you want to create a Today Extension with the full width yourself as well, here is how to do it (in Swift):

override func viewWillAppear(animated: Bool) {
    super.viewWillAppear(animated)
    if let superview = view.superview {
        var frame = superview.frame
        frame = CGRectMake(0, CGRectGetMinY(frame), CGRectGetWidth(frame) + CGRectGetMinX(frame), CGRectGetHeight(frame))
        superview.frame = frame
    }
}

This changes the superview (the Today view) of your Today Extension's view. It doesn't use any private APIs, but Apple might reject it for not following their rules. So think carefully before you use it.

Become high performing. By being happy.  

Xebia Blog - Thu, 09/18/2014 - 03:59

The summer holidays are over. Fall is coming. Like the start of every new year, a good moment for new inspiration.

Recently, I went twice to the Boston area for a client of Xebia. There I met (I dislike the word “assessments”…) a number of experienced Scrum teams. They had an excellent understanding of Scrum, but were not able to convert this into an excellent performance. Actually, they were somewhat frustrated and their performance was slightly going down.

So, they were great teams with great team members, and their agile processes were running smoothly, but still not a single winning team. Which left, in my opinion, only one option: a lack of spirit. Spirit is the fertilizer of Scrum and actually of every framework, methodology and innovation. But how to boost the spirit?

Until a few years ago, I would “just” organize team-building sessions to boost this, in parallel with fixing or escalating the root causes. Noble, but far from effective. It’s much more about mindset and happiness, and about taking your own responsibility there. Let me explain this a little bit more.

These are definitely awkward times. Terrible wars and epidemics we can’t turn our backs on anymore, an economic system that barely survives, and an ever-accelerating, highly demanding society. In all of this we have to find “time” for our friends, family, ourselves, and our job or study. The last ones are essential to regain balance in a more and more depressing world. But how?

One of the most important building blocks of the agile mindset and life is happiness. Happiness is the fuel of acceleration and success. But what is happiness? Happiness is the ultimate moment when you’re not thinking, but enjoying the moment and forgetting the world around you. For example, craftsmanship will do this to you: losing track of time while exercising the craft you love.

But too often we prevent ourselves from being happy. Why should I be happy in this crazy world? With this mentality you’re held hostage by your worrying mind and you ignore the state you were born in: pure, happy, and ready to explore the world (and make mistakes!). It’s not a bad thing to be egocentric sometimes and switch off your dominant mind now and then. Every human being has the state of mind and the ability to do this. But we do it too rarely.

On the other hand, it’s also not a bad thing to be angry, frightened or sad sometimes. These emotions will help you enjoy your moments of happiness more. But often your mind will resist these emotions. They are perceived as a sign of weakness or as a negative thing you should avoid. A wrong assumption. The harder you try to resist these emotions, the longer they will stay in your system and prevent you from being happy.

Being aware of the mechanisms I’ve explained above, you’ll be happier, more productive and better company for your family, friends and colleagues. Parties will no longer be a forced attempt to create happiness, but a celebration of personal responsibility, success and happiness.

The FireBox Warehouse Scale Computer in 2020 Will Have 1K Sockets, 100K Cores, 100PB NV RAM, and a 4Pb/s Network

That's the eye popping prediction from Krste Asanović, University of California, Berkeley, in a presentation he gave at FAST '14 titled: FireBox: A Hardware Building Block for 2020 Warehouse-Scale Computers (pdf).

The FireBox system looks like this:

Trends in Warehouse Scale Computers (WSCs):
Categories: Architecture

Continuous Delivery is about removing waste from the Software Delivery Pipeline

Xebia Blog - Wed, 09/17/2014 - 15:44

On October 22nd I will be speaking at the Continuous Delivery and DevOps Conference in Copenhagen, where I will share experiences from a very successful implementation of a new website serving about 20,000,000 page views a month.

Components and content for this site were developed by five(!) different vendors, and for this project the customer took the initiative to work according to DevOps principles and implement a fully automated Software Delivery Process as they went along. This was a big win for the project, as development teams could now focus on delivering new software instead of fixing issues within the delivery process itself, and I was the lucky one who got to implement this.

This blog post visualizes the 'waste' we addressed within the project; you might find the diagrams handy when communicating Continuous Delivery principles within your own organization.

To enable yourself to work according to Continuous Delivery principles, an effective starting point is to remove waste from the Software Delivery Process. If you look at a traditional Software Delivery Process, you'll find that there are probably many areas in your existing process that do not add any value for the customer at all.

These areas should be seen as pure waste, not adding any value to your customer and costing you time or money (or both) over and over again. Each time new features are developed and pushed to production, many people perform a lot of costly manual work and run into the same issues again and again. The diagram below provides an example of common areas where you might find waste in your existing Software Delivery Pipeline. Imagine this process repeating every time a development team delivers new software. Within your conversation, you might want to use a similar diagram to explain the pain points within your current Software Delivery Process.

a traditional software delivery process

Automation of the Software Delivery Process within this project was all about eliminating known waste as much as possible. This resulted in setting up an Agile project structure and starting to work according to DevOps principles, enabling the team to deliver software on a frequent basis. Next to that, we automated the central build with Jenkins CI, which checks out code from a Git version management system, compiles code using Maven, stores components in Apache Archiva, kicks off static, unit and functional tests covering both the JEE and PHP codebases, and creates Deployment Units for further processing down the line. Deployment automation itself was implemented by introducing XL Deploy. By doing so, every time a developer pushed new JEE or PHP code into Git, freshly baked deployment units were instantly deployed to the target landscape, which in its turn was managed by Puppet. An abstract diagram of this approach and the chosen tooling is provided below.

overview of tooling for automating the software delivery process

When paving the way for Continuous Delivery, I often like to refer to this as working on the six A's: setting up Agile (product-focused) delivery teams, Automating the build, Automating tests, Automating deployments, Automating the provisioning layer, and clean, easy-to-handle software Architectures. The A for Architecture is about making sure that the software being delivered actually supports automation of the Software Delivery Process itself and puts the customer in the position to work according to Continuous Delivery principles. This A is not visible in the diagram.

After automation of the Software Delivery Process, the customer's delivery process behaved like the optimized process below, giving the team the opportunity to push out a constant, fluent flow of new features to the end user. Within your conversation, you might want to use this diagram to explain the advantages to your organization.

an optimized software delivery process

As we automated the Software Delivery Pipeline for this customer, we positioned them to go live at the press of a button. And on the go-live date it was just that: a press of the button, and five minutes later the site was completely live, making this the most boring go-live event I've ever experienced. The project itself was really good fun though! :)

Needless to say, subsequent updates are now moved into live state in a matter of minutes, as the whole process has become very reliable. Deploying code has become a non-event. More details on how we made this project a complete success, how we implemented this environment, the project setting, and the chosen tooling, along with technical details, I will happily share at the Continuous Delivery and DevOps Conference in Copenhagen. But of course you can also contact me directly. For now, I just hope to meet you there.

Michiel Sens.

How To Think Like a Microsoft Executive

One of the things I do, as a patterns and practices kind of guy, is research and share success patterns. 

One of my more interesting bodies of work is my set of patterns and practices for successful executive thinking.

A while back, I interviewed several Microsoft executives to get their take on how to think like an effective executive.

While the styles vary, what I enjoyed is the different mindset that each executive uses as they approach the challenge of how to change the world in a meaningful way.

5 Key Questions to Share Proven Practices for Executive Thinking

My approach was pretty simple.   I tried to think of a simple way to capture and distill the essence. I originally went the path of identifying key thinking scenarios (changing perspective, creating ideas, evaluating ideas, making decisions, making meaning, prioritizing ideas, and solving problems) ... and the path of identifying key thinking techniques (blue ocean/strategic profile, PMI, Six Thinking Hats, PQ/PA, BusinessThink, Five Whys, ... etc.) -- but I think just a simple set of 5 key questions was more effective.

These are the five questions I ended up using:

  1. What frame do you mostly use to evaluate ideas? (for example, one frame is: who's the customer? what's the problem? what's the competition doing? what does success look like?)
  2. How do you think differently, than other people might, that helps you get a better perspective on the problem?
  3. How do you think differently, than other people might, that helps you make a better decision?
  4. What are the top 3 questions you ask yourself the most each day that make the most difference?
  5. How do you get in your best state of mind or frame of mind for your best thinking?

The insights and lessons learned could fill books, but I thought I would share three of the responses that I tend to use and draw from on a regular basis …

Microsoft Executive #1

1) The dominant framework I like to use for decisions is: how can we best help the customer? Prioritizing the customer is nearly always the right way to make good decisions for the long term. While one has to have awareness of the competition and the like, excessively “following taillights” usually fails. The best lens through which to view the competition is, “how are they helping their customers, and is there anything we can learn from them about how to help our own customers?”

2) I don’t think that there is anything magical about executive thinking. The one thing we hopefully have is a greater breadth and depth of experience on key decisions. We use this experience to discern patterns, and those patterns often help us make good decisions on relatively little data.

3) Same answer as #2.

4) How can we help our customers more? Are we being realistic in our assessments of ourselves, our offerings and the needs of our customers? How can we best execute on delivering customer value?

5) It is key to keep some discretionary time for connecting with customers, studying the competition and the marketplace and “white space thinking.” It is too easy to get caught up on being reactionary to lots of short-term details and therefore lose the time to think about the long term.

Microsoft Executive #2

There are three things that I think about as it relates to leading organizations: Vision, People and Results. Some of the principles in each of these components will apply to any organization, whether the organization's goal is to make profit, achieve strategic objectives, or make non-profit social impact.

Vision

In setting the vision and top level objectives, it is very important to pick the right priorities. I like to focus on the big rocks instead of small rocks at the vision-setting stage. In today's world of information overload, it is really easy to get bombarded with too many things needing attention. This can dilute your focus across too many objectives. The negative effect of not having a clear concentrated focus multiplies rapidly across many people when you are running a large organization. So, you need to first ask yourself what are the few ultimate results that are the objectives of your organization and then stay disciplined to focus on those objectives. The ultimate goal might be a single objective or a few, but should not be a laundry list. It is alright to have multiple metrics that are aligned to drive each objective, but the overall objectives themselves should be crisp and focused.

People

The next step in running an organization is to make sure you have the right people in the right jobs. This starts with first identifying the needs of the business to achieve the vision set out above. Then, I try to figure out what types of roles are needed to meet those needs. What will the organization structure look like? What kind of competencies, that is, attributes, skills, and behaviors, are needed in those roles to meet expected results? If there is a mismatch between the role and the person, it can set up both the employee and the business for failure. So, this is a crucial step in making sure you have a well-running organization.

Once you have the right people in the right jobs, I try to make sure that the work environment encourages people to do their best. Selfless leadership, where the leaders have a sense of humility and are committed to the success of the business over their own self, is essential. An inclusive environment where everyone is encouraged to contribute is also a must. People's experience with the organization is for the most part shaped by their interaction with their immediate manager. Therefore, it is very important that a lot of care goes into selecting, encouraging and rewarding people managers who can create a positive environment for their employees.

Results

Finally, the organization needs to produce results towards achieving the vision and the objectives you set out. Do not confuse results with actions. You need to make sure you reward people based on performance towards producing results instead of actions. When setting commitments for people, you need to be thoughtful about what metrics you choose so that you incent the right behavior. This again helps build an environment that encourages people to do their best. Producing results also requires that you have a compelling strategy for the organization. Thus, you need to stay on top of where the market and customers are. This will help you focus your organization's efforts on anticipating customer needs, and proactively taking steps to delight customers. This is necessary to ensure that the organization's resources are prioritized towards those efforts that will produce the highest return on investment.

Microsoft Executive #3
  1. Different situations call for different pivots.  That said, I most often start with the customer, as technology is just a tool; ultimately, people are trying to solve problems.  I should note, however, that “customer” does not always mean the person who licenses or uses our products and/or services.  While they may be the focus, my true “customer” is sometimes the business itself (and its management), a business group, or a government (addressing a policy issue).  Often, the problem presented has to be solved in a multi-disciplinary way (e.g., a mixture of policy changes, education, technological innovation, and business process refinements).  Think, for example, about protecting children on-line.  While technology may help, any comprehensive solution may also involve government laws, parental and child education, a change in website business practices, etc.
  2. As noted above, the key is thinking in a multi-disciplinary way. People gravitate to what they know; thus the old adage that “if you have a hammer, everything you see is a nail.” Think more broadly about an issue, and a more interesting solution to the customer’s problem may present itself. (Scenario focused engineering works this way too.)
  3. It is partially about thinking differently (as discussed above), but also about seeking the right counsel.  There is an interesting truth about hard Presidential decisions.  The more sensitive an issue, the fewer the number of people consulted (because of the sensitivity) and the less informed the decision.  Obtaining good counsel – while avoiding the pitfall of paralysis (either because you have yet to speak to everyone on the planet or because there was not universal consensus on what to do next) – is the key.
  4. (1) What is the right thing to do? (This may be harder than it looks because the different customers described above may have different interests.  For example, a costly solution may be good for customers but bad for shareholders.  A regulatory solution might be convenient for governments but stifle technological innovation.)  (2) What unintended consequences might occur? (The best laid plans….).  (3) Will the solution be achievable?
  5. I need quiet time; time to think deeply.

The big things that really stand out for me are using the customer as the North Star, balancing with multi-disciplinary perspectives, evaluating multiple, cascading ramifications, and leading with vision.

You Might Also Like

100 Articles to Sharpen Your Mind

Rituals for Results

Thinking About Career Paths

Categories: Architecture, Programming

Sponsored Post: Apple, Flipboard, All Your Base, Scalyr, FoundationDB, AiScaler, Aerospike, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Apple has multiple openings. Changing the world is all in a day's work at Apple. Imagine what you could do here. 
    • Siri Operations Developer. Apple is looking for talented developers to help build the next generation internal cloud platform for Siri. This person should be excited about solving difficult distributed systems problems as well as constantly improving user-experience. This person will be working with a highly technical and motivated team solving the hard problems. Please apply here.
    • Site Reliability Engineer. The Apple Pay Site Reliability Team is hiring for multiple roles focused on the front line customer experience and the back end integration of Apple systems with our Network and Banking partners. Please apply here.
    • Senior Software Engineer, iTunes Infrastructure. Hands-on senior software engineering for the iTunes digital media supply chain engineering team. We are looking for a self-starting, energetic individual who is not afraid to question assumptions and who has excellent written and oral communication skills. Please apply here.
    • iTunes - Content Management Tools Engineer. The candidate should have several years' experience developing large-scale web-based applications using object-oriented languages. Excellent understanding of relational databases and data-modeling techniques is also a must. Please apply here.

  • Flipboard's Site Reliability Engineering Team is hiring! This team offers great challenges solving unique problems unlike any you have seen!  They work exclusively in the cloud, ensuring a highly available and performant product to millions of users daily.  If you have a passion for large-scale systems, next generation provisioning and orchestration tools apply here.

  • UI Engineer. AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data. AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • All Your Base is the only curated database conference of its kind in the UK. Listen to talks from database creators, industry leaders and developers working at the coal face on where to store and how to handle your data. Book tickets.
Cool Products and Services
  • FoundationDB launches SQL Layer. SQL Layer is an ANSI SQL engine that stores its data in the FoundationDB Key-Value Store, inheriting its exceptional properties like automatic fault tolerance and scalability. It is best suited for operational (OLTP) applications with high concurrency. Users of the Key Value store will have free access to SQL Layer. SQL Layer is also open source, you can get started with it on GitHub as well.

  • Better, Faster, Cheaper: Pick Three. Scalyr is your universal tool for visibility into your production systems. Log aggregation, server metrics, monitoring, alerting, dashboards, and more. Not just “hosted grep” or “hosted graphs”; our columnar data store enables enterprise-grade functionality with sane pricing and insane performance. Trusted by in-the-know companies like Codecademy – get on board!

  • Whitepaper Clarifies ACID Support in Aerospike. In our latest whitepaper, author and Aerospike VP of Engineering & Operations, Srini Srinivasan, defines ACID support in Aerospike, and explains how Aerospike maintains high consistency by using techniques to reduce the possibility of partitions.  Read the whitepaper: http://www.aerospike.com/docs/architecture/assets/AerospikeACIDSupport.pdf.

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Cloud deployable. Free instant trial, no sign-up required.  http://aiscaler.com/

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

Cuttable Scope

Early on in my Program Management career, I ran into challenges around cutting scope.

The schedule said the project was done by next week, but scope said the project would be done a few months from now.

On the Microsoft patterns & practices team, we optimized around “fix time, flex scope.”   This ensured we were on time, on budget.  This helped constrain risk.  Plus, as soon as you start chasing scope, you become a victim of scope creep, and create a runaway train.  It’s better to get smart people shipping on a cadence, and focus on creating incremental value.  If the trains leave the station on time, then if you miss a train, you know you can count on the next train.  Plus, this builds a reputation for shipping and execution excellence.

And so I would have to cut scope, and feel the pains of impact ripple across multiple dependencies.

Without a simple chunking mechanism, it was a game of trying to cut features and trying to figure out which requirements could be completed and still be useful within a given time frame.

This is where User Stories and System Stories helped.  

Stories created a simple way to chunk up value.   Stories help us put requirements into a context and a testable outcome, share what good looks like, and estimate our work.  So paring stories down is fine, and a good thing, as long as we can still achieve those basic goals.

Stories help us create Cuttable Scope.  

They make it easier to deliver value in incremental chunks.

A healthy project start includes a baseline set of stories that help define a Minimum Credible Release, and additional stories that would add additional, incremental value.

It creates a lot of confidence in your project when there is a clear vision for what your solution will do, along with a healthy path of execution: a baseline release plus a pipeline of additional value, chunked up in the form of user stories that your stakeholders and user community can relate to.

You Might Also Like

Continuous Value Delivery the Agile Way

Experience-Driven Development

Kanban: The Secret of High-Performing Teams at Microsoft

Minimum Credible Release (MCR) and Minimum Viable Product (MVP)

Portfolios, Programs, and Projects

Categories: Architecture, Programming

Getting Things Right: A Look at Centralized vs Decentralized Systems Through the Eyes of Instant Replay

Three baseball umpires were sitting around a bar, talking about how they make calls on each pitch:

First umpire: Some are balls and some are strikes, and I call them as they are.
Second umpire: Some are balls and some are strikes, and I call them as I see 'em.
Third umpire: Some are balls and some are strikes, but they ain’t nothin' until I call 'em.


AT&T's Global Network Operations Center

MLB's Instant Replay Bunker
NHL's Situation Room

It’s fun to look at how concepts we think of as belonging primarily to the domain of computer science play out in other fields. One intriguing example is how Instant Replay reflects and even helps shape the culture of a sport by how replay is implemented: decentralized or centralized.

Lucrative TV deals have pumped huge sums of money into professional sports. With so much money in play, sports have shifted from being pure entertainment to wanting to get things right. The price of making a bad call is just too high to let the human element decide the fate of titans.

Getting things right is also a much talked about subject in computer science. In CS the language of getting things right uses terms like transaction, rollback, quorum, optimistic replication, linearizability, synchronization, lock, eventually consistent, compensating transaction, and so on.

In sports to get things right referees use terms like flag, penalty, by rule, ruling stands, reset the clock, down and distance, line to gain, the whistle blew, ruling confirmed, and ruling overturned.

Though the vocabulary is different, the intent is much the same. Correctness.

Intent is not all tech and sports have in common. As technology evolves we are seeing sports change to take advantage of the new capabilities technology offers. And those changes should be familiar to anyone in software. Sports have gone from a completely decentralized system of officiating to where we now see the NBA, NFL, MLB, and NHL, all converging on some form of a centralized system.

The NHL were the innovators, starting their centralized instant replay system in 2011. It works something like this: officials sit in a war room located in Toronto that looks a lot like every network operations center ever built. Video feeds from all games flow into the room. When there is a controversy or an obvious review-worthy play, Toronto is contacted for a quick review and judgement on the correct call. Every sport will implement its own centralized replay system in its own way, but that's the gist of it.

We’ve seen the exact same transformation as federated services like email have been replaced with centralized services like Twitter and Facebook. It turns out sports and computer science have some deeper similarities. What might those be?

Categories: Architecture

Apache Spark, ETL and Parquet


One of the projects we’re currently running in my group (Amdocs’ Technology Research) is an evaluation of the current state of different options for reporting on top of and near Hadoop (I hope I’ll be able to publish the results when we have them). Anyway, part of the preparations for the benchmark includes ingesting a lot of events (CDRs) into the system and creating different aggregations on top of them. For instance, for voice call billing events we create yearly, monthly, weekly, daily and hourly aggregations at the subscriber level, which include measures like: count of calls, average duration, sum of pricing, median balance, hourly distribution of calls, popular destinations etc.

We are using Spark to do the ingestion, and I thought there are two interesting aspects I can share which I haven’t seen too many examples of on the internet, namely:

  • doing multiple aggregations, i.e. aggregations that go beyond word-count level
  • writing to a Hadoop output format (Parquet in the example)

I created a minimal example, which uses a simple, synthesized input and demonstrates these two issues – you can get the complete code for it on GitHub. The rest of this post will highlight some of the points from the example.

Let’s start with the core Spark code, which is simple enough:

View the code on Gist.

  • Line 1 reads a CSV as a text file.
  • Line 3 does a simple parsing of the file and replaces each line with a class. The parsed RDDs are cached since we iterate them multiple times (once for each aggregation).
  • Lines 5 and 6 group by multiple keys. The nice way to do that is using SparkSQL, but the version I used (1.0.2) had very limited and undocumented SQL support (I had to go to the code); this should be better in future versions. If you still use “regular” Spark, I haven’t found a better way to group on multiple fields than creating the composite key by hand, which is what the getHourly and getWeekly methods do (create a string with the year, month, etc. dimensions) – see the sketch below this list.
  • Lines 11 and 12 do the aggregations themselves (which we’ll look into next). The output is again pair RDDs. This seems useless; however, it turns out you need pair RDDs if you want to save to Hadoop formats (more on that later).
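
Since the gists are embedded rather than reproduced here, a stripped-down sketch of the parse-and-group idea (the field layout, class and variable names are assumed for illustration, not the actual project code):

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._ // pair-RDD operations

case class Call(subscriber: String, year: Int, month: Int, day: Int,
                hour: Int, durationSec: Long, price: Double)

// Composite key built by hand, in the spirit of getHourly/getWeekly:
// just a string concatenation of the dimensions we group on.
def hourlyKey(c: Call): String =
  List(c.subscriber, c.year, c.month, c.day, c.hour).mkString("|")

val sc = new SparkContext("local", "cdr-aggregation")
val calls = sc.textFile("cdrs.csv")
  .map(_.split(','))
  .map(f => Call(f(0), f(1).toInt, f(2).toInt, f(3).toInt,
                 f(4).toInt, f(5).toLong, f(6).toDouble))
  .cache() // cached because we iterate it once per aggregation level

val hourlyGroups = calls.groupBy(hourlyKey) // RDD[(String, Iterable[Call])]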

Aggregations

Once we do the group by key (lines 5 and 6 above) we have a collection of Call records (Iterable[Call]). Now, Scala has a lot of nifty functions to transform Iterables (map, flatMap, foldLeft etc.); however, we have a long list of aggregations to compute, the Calls collection can get rather big (e.g. for yearly aggregates), and we have lots and lots of collections to iterate (terabytes and terabytes). So instead of using all these features I opted for the more traditional Java-like aggregation, which, while being pretty ugly, minimizes the number of iterations I make over the data.

You can see the result below (if you have a nicer looking, yet efficient, idea, I’d be happy to hear about it)

View the code on Gist.
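
The full logic lives in the gist above; building on the sketch from before and reduced to a couple of measures, the shape of the idea looks something like this (the measure set and names are assumed for illustration):

// One explicit pass over the calls instead of a chain of map/foldLeft
// calls: uglier, but each record is touched exactly once.
case class Aggregate(var count: Long = 0L,
                     var durationSum: Long = 0L,
                     var priceSum: Double = 0.0) {
  def avgDuration: Double = if (count == 0) 0.0 else durationSum.toDouble / count
}

def aggregate(calls: Iterable[Call]): Aggregate = {
  val acc = Aggregate()
  val it = calls.iterator
  while (it.hasNext) {
    val c = it.next()
    acc.count += 1
    acc.durationSum += c.durationSec
    acc.priceSum += c.price
  }
  acc
}

val hourlyAggregated = hourlyGroups.mapValues(aggregate) // still a pair RDD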

Save to Parquet

The end result of doing the aggregations is a hierarchical structure – a list of simple measures (avgs, sums, counts etc.) but also a phone book, which has an array of pricings, and an hours breakdown which is also an array. I decided to store that in the Parquet/ORC formats, which are efficient for queries in Hadoop (by Hive/Impala, depending on the Hadoop distribution you are using). For the purpose of the example I included the code to persist to Parquet.

If you can use SparkSQL, then support for Parquet is built in and you can do something as simple as:

View the code on Gist.
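
The gist is embedded above; with SparkSQL (Spark 1.0+) it boils down to something like the following sketch, assuming weeklyAggregated is an RDD of case-class records:

import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)
import sqlContext.createSchemaRDD // implicitly turns an RDD of case classes into a SchemaRDD

// a plain RDD, not a pair RDD, is fine here
weeklyAggregated.saveAsParquetFile("hdfs:///aggregates/weekly.parquet")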

You might have noticed that weeklyAggregated is a simple RDD and not a pair RDD, since a pair RDD isn’t needed here. Unfortunately for me, I had to use Spark 0.9 (our lab is currently on Cloudera 5.0.1) – and in the more general case of writing to other Hadoop file formats you can’t use this trick anyway.

One point specific to Parquet is that you can’t write to it directly – you have to use a “writer” class, and Parquet has Avro, Thrift and ProtoBuf writers available. I thought that was going to be nice and easy, so I looked for a Scala library to serialize to one of these formats and chose Scalavro (which I had used in the past). It turns out that, while its serialization is Avro-compatible, it does not produce the standard Avro classes the Avro writer for Parquet expects. No problem, I thought, I’ll just take another library that works with Thrift – Twitter has one, called Scrooge, which works nice and all – but again it is not completely standard (it doesn’t inherit from TBase). Oh well, maybe protobuf would do the job, so I tried ScalaBuff; alas, this didn’t work well either (it inherits from LightMessage and not Message, as expected by the writer). I ended up using the plain Java generator for protobuf. This further uglified the aggregation logic, but at least it got the job done. The output from the aggregation is a pair RDD of protobuf-serializable aggregations.

So why the pair RDD? It turns out all the interesting save functions that can use a Hadoop file format only exist on pair RDDs and not on regular ones. If you know this little trick, the code is not very complicated:

View the code on Gist.
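
The gist is embedded above; a rough sketch of what the save might look like, assuming a protobuf-generated AggregateProto.Aggregate class, the parquet-protobuf writer, and a pair RDD called hourlyProtoPairs holding the serializable records:

import org.apache.hadoop.mapreduce.Job
import parquet.proto.ProtoParquetOutputFormat

// the first two lines configure the writer and are specific to Parquet
val job = new Job(sc.hadoopConfiguration)
ProtoParquetOutputFormat.setProtobufClass(job, classOf[AggregateProto.Aggregate])

hourlyProtoPairs.saveAsNewAPIHadoopFile(
  "hdfs:///aggregates/hourly.parquet", // output directory
  classOf[Void], // key class (not needed with the Parquet format)
  classOf[AggregateProto.Aggregate], // value class for the records
  classOf[ProtoParquetOutputFormat[AggregateProto.Aggregate]], // Hadoop output format
  job.getConfiguration) // job configuration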

The first two lines in the snippet configure the writer and are specific to Parquet. The last line does the actual save to file – it specifies the output directory, the key class (Void, since we don’t need it with the Parquet format), the value class for the records, the Hadoop output format class (Parquet in our case) and lastly a job configuration.

To summarize: when you want to do more than a simple toy example, the code may end up more complicated than the concise examples you see online. When working with lots and lots of data, the number of iterations you make over the data also begins to be important, and you need to pay attention to that.

Spark is a very powerful library for working on big data; it has a lot of components and capabilities. However, its biggest weakness (in my opinion, anyway) is its documentation: it makes it easy to start working with the platform, but when you want to do something a little more interesting you are left to dig around without proper directions. I hope this post will help others when they do their own digging around :)

Lastly, again, a reminder that you can get the complete code for this on GitHub.

Categories: Architecture

iOS Today Widget in Swift - Tutorial

Xebia Blog - Sat, 09/13/2014 - 18:51

Since the new app extensions of iOS 8 and Swift are both fairly new, I created a sample app that demonstrates how a simple Today extension can be made for iOS 8 in Swift. This widget shows the latest blog posts of the Xebia Blog within the Today view of the notification center.

The source code of the app is available on GitHub. The app is not available on the App Store because it's an example app, though it might be quite useful for anyone following this Blog.

In this tutorial, I'll go through the following steps:

  • New Xcode project
  • Add dependencies with cocoapods
  • Load RSS feed
  • Show items in a table view
  • Preferred content size

New Xcode project

Even though an extension for iOS 8 is a separate binary from the app, it's not possible to create an extension without an app. That unfortunately makes it impossible to create stand-alone widgets, which this sample would otherwise be, since its only purpose is to show the latest posts in the Today view. So we create a new project in Xcode and implement a very simple view. The only thing the app will do for now is tell the user to swipe down from the top of the screen to add the widget.


Time to add our extension target. From the File menu we choose New > Target... and from the Application Extension choose Today Extension.


We'll name our target XebiaBlogRSSWidget and of course use Swift as Language.


The created target will have the following files:

      • TodayViewController.swift
      • MainInterface.storyboard
      • Info.plist

Since we'll be using a storyboard approach, we're fine with this setup. If, however, we wanted to create the view of the widget programmatically, we would delete the storyboard and replace the NSExtensionMainStoryboard key in the Info.plist with NSExtensionPrincipalClass, with TodayViewController as its value. Since (at the time of this writing) Xcode cannot find Swift classes as extension principal classes, we would also have to add the following line to our TodayViewController:

@objc (TodayViewController)

Update: Make sure to set the "Embedded Content Contains Swift Code" build setting of the main app target to YES. Otherwise your widget written in Swift will crash.

Add dependencies with cocoapods

The widget will get the latest blog posts from the RSS feed of the blog: http://blog.xebia.com/feed/. That means we need something that can read and parse this feed. A search on RSS at CocoaPods gives us the BlockRSSParser as the first result. It seems to do exactly what we want, so we don't need to look any further, and we create our Podfile with the following contents:

platform :ios, "8.0"

target "XebiaBlog" do

end

target "XebiaBlogRSSWidget" do
pod 'BlockRSSParser', '~> 2.1'
end

It's important to add the dependency only to the XebiaBlogRSSWidget target, since Xcode will build two binaries: one for the app itself and a separate one for the widget. If we added the dependency to all targets it would be included in both binaries, increasing the total download size of our app. Always add only the necessary dependencies to each of your app and widget targets.

Note: CocoaPods or Xcode might give you problems when you have a target without any pod dependencies. In that case you can add a dependency to your main target and run pod install, after which you might be able to delete it again.

The BlockRSSParser is written in Objective-C, which means we need to add an Objective-C bridging header in order to use it from Swift. We add the file XebiaBlogRSSWidget-Bridging-Header.h to our target and add the import:

#import "RSSParser.h"

We also have to tell the Swift compiler about it through the Objective-C Bridging Header setting in our target's build settings.

Load RSS feed

Finally, time to do some coding. The generated TodayViewController has a function called widgetPerformUpdateWithCompletionHandler. This function gets called every once in a while to ask for new data. It also gets called right after viewDidLoad, when the widget is displayed. The function has a completion handler as a parameter, which we need to call when we're done loading data. A completion handler is used instead of a return value so we can load our feed asynchronously.

In Objective-C we would write the following code to load our feed:

[RSSParser parseRSSFeedForRequest:[NSURLRequest requestWithURL:[NSURL URLWithString:@"http://blog.xebia.com/feed/"]] success:^(NSArray *feedItems) {
  // success
} failure:^(NSError *error) {
  // failure
}];

In Swift this looks slightly different. Here is the complete implementation of widgetPerformUpdateWithCompletionHandler:

func widgetPerformUpdateWithCompletionHandler(completionHandler: ((NCUpdateResult) -> Void)!) {
  let url = NSURL(string: "http://blog.xebia.com/feed/")
  let req = NSURLRequest(URL: url)

  RSSParser.parseRSSFeedForRequest(req,
    success: { feedItems in
      self.items = feedItems as? [RSSItem]
      completionHandler(.NewData)
    },
    failure: { error in
      println(error)
      completionHandler(.Failed)
  })
}

We assign the result to a new optional variable of type RSSItem array:

var items : [RSSItem]?

The completion handler gets called with either NCUpdateResult.NewData if the call was successful or NCUpdateResult.Failed when the call failed. A third option is NCUpdateResult.NoData which is used to indicate that there is no new data. We'll get to that later in this post when we cache our data.

Show items in a table view

Now that we have fetched our items from the RSS feed, we can display them in a table view. We replace our normal View Controller with a Table View Controller in our storyboard, change the superclass of TodayViewController, and add three labels to the prototype cell. This is no different from iOS 7, so I won't go into too much detail here (the complete project is on GitHub).

We also create a new Swift class for our custom Table View Cell subclass and create outlets for our three labels.

import UIKit

class RSSItemTableViewCell: UITableViewCell {

    @IBOutlet weak var titleLabel: UILabel!
    @IBOutlet weak var authorLabel: UILabel!
    @IBOutlet weak var dateLabel: UILabel!

}

Now we can implement our Table View Data Source functions.

override func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
  if let items = items {
    return items.count
  }
  return 0
}

Since items is an optional, we use Optional Binding to check that it's not nil and assign it to a temporary non-optional constant: let items. It's fine to give the temporary constant the same name as the class variable.
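
As an aside, the same method could be written more compactly with the nil coalescing operator; this is just a stylistic alternative, behaving identically:

override func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
  // items?.count is nil when items is nil; ?? then falls back to 0
  return items?.count ?? 0
}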

override func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
  let cell = tableView.dequeueReusableCellWithIdentifier("RSSItem", forIndexPath: indexPath) as RSSItemTableViewCell

  if let item = items?[indexPath.row] {
    cell.titleLabel.text = item.title
    cell.authorLabel.text = item.author
    cell.dateLabel.text = dateFormatter.stringFromDate(item.pubDate)
  }

  return cell
}

In our storyboard we've set the type of the prototype cell to our custom class RSSItemTableViewCell and used RSSItem as identifier, so here we can dequeue a cell as an RSSItemTableViewCell without being afraid it would be nil. We then use Optional Binding to get the item at our row index. We could also use forced unwrapping since we know for sure that items is not nil here:

let item = items![indexPath.row]

But the optional binding makes our code safer and prevents future crashes in case our code changes.

We also need to create the date formatter that we use above to format the publication dates in the cells:

let dateFormatter : NSDateFormatter = {
  let formatter = NSDateFormatter()
  formatter.dateStyle = .ShortStyle
  return formatter
}()

Here we use a closure to create the date formatter and to initialise it with our preferred date style. The return value of the closure will then be assigned to the property.
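
A note on this design choice: because it's a let property, the closure runs when the view controller is created. If you'd rather defer that work until the formatter is first used, a lazy variant is a possible alternative (a sketch; whether laziness is worth it here is a judgment call):

lazy var dateFormatter : NSDateFormatter = {
  // This closure only runs the first time dateFormatter is accessed
  let formatter = NSDateFormatter()
  formatter.dateStyle = .ShortStyle
  return formatter
}()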

Preferred content size

To make sure that we can actually see the table view, we need to set the preferred content size of the widget. We'll add a new function to our class that does this.

func updatePreferredContentSize() {
  let rowCount = tableView(tableView, numberOfRowsInSection: 0)
  let height = CGFloat(rowCount) * tableView.rowHeight + tableView.sectionFooterHeight
  preferredContentSize = CGSizeMake(0, height)
}

Since widgets all have a fixed width, we can simply specify 0 for the width. The height is calculated by multiplying the number of rows by the row height; for example, with a (typical) 44-point row height, six rows would give a preferred height of 264 points plus the footer height. If this sets the preferred height to more than the maximum allowed height of a Today widget, it will automatically shrink. We also add the sectionFooterHeight to our calculation, which is 0 for now, but we'll add a footer later on.

When the preferred content size of a widget changes it will animate the resizing of the widget. To have the table view nicely animating along this transition, we add the following function to our class which gets called automatically:

override func viewWillTransitionToSize(size: CGSize, withTransitionCoordinator coordinator: UIViewControllerTransitionCoordinator) {
  coordinator.animateAlongsideTransition({ context in
    self.tableView.frame = CGRectMake(0, 0, size.width, size.height)
  }, completion: nil)
}

Here we simply set the size of the table view to the size of the widget, which is the first parameter.

Of course we still need to call our update method, as well as reloadData on our tableView. So we add these two calls to the success closure where we load the items from the feed:

success: { feedItems in
  self.items = feedItems as? [RSSItem]
  
  self.tableView.reloadData()
  self.updatePreferredContentSize()

  completionHandler(.NewData)
},

Let's run our widget:

[image: first run of the widget, with default table view styling]

It works, but we can make it look better. Table views have a white background and black text by default, and that's no different within a Today widget. We'd like to match the style of the standard iOS Today widgets, so we give the table view a clear background and make the text of the labels white. Unfortunately that makes our labels practically invisible in the storyboard editor, since Xcode still shows a white background for views that have a clear background color.
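
If the invisible labels in the storyboard editor bother you, a sketch of an alternative is to apply the same styling in code instead of the storyboard:

// In viewDidLoad: give the table view a clear background
tableView.backgroundColor = UIColor.clearColor()

// In tableView(_:cellForRowAtIndexPath:): make the label text white
cell.titleLabel.textColor = UIColor.whiteColor()
cell.authorLabel.textColor = UIColor.whiteColor()
cell.dateLabel.textColor = UIColor.whiteColor()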

If we run again, we get a much better looking result:

[image: the widget with a clear background and white labels]

Open post in Safari

To open a blog post in Safari when tapping on an item, we need to implement the tableView:didSelectRowAtIndexPath: function. In a normal app we would use the openURL: method of UIApplication, but that's not available within a Today extension. Instead we need to use the openURL:completionHandler: method of NSExtensionContext. We can retrieve this context through the extensionContext property of our View Controller.

override func tableView(tableView: UITableView, didSelectRowAtIndexPath indexPath: NSIndexPath) {
  if let item = items?[indexPath.row] {
    if let context = extensionContext {
      context.openURL(item.link, completionHandler: nil)
    }
  }
}

More and Less buttons

Right now our widget takes up a bit too much space in the Notification Center. So let's change this by showing only 3 items by default and 6 items maximum. Toggling between the default and expanded state can be done with a button that we'll add to the footer of the table view. When the user closes and opens the Notification Center, we want to show the widget in the same state as it was before, so we need to remember the expand state. We can use NSUserDefaults for this. A computed property that reads from and writes to the user defaults is a nice way to do this:

let userDefaults = NSUserDefaults.standardUserDefaults()

var expanded : Bool {
  get {
    return userDefaults.boolForKey("expanded")
  }
  set (newExpanded) {
    userDefaults.setBool(newExpanded, forKey: "expanded")
    userDefaults.synchronize()
  }
}

This allows us to use it just like any other property, without noticing that it gets stored in the user defaults. We'll also add variables for our button and for the default and maximum number of rows:

let expandButton = UIButton()

let defaultNumRows = 3
let maxNumberOfRows = 6

Based on the current value of the expanded property we'll determine the number of rows that our table view should have. Of course it should never display more items than we actually have, so we take that into account and change our function into the following:

override func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
  if let items = items {
    return min(items.count, expanded ? maxNumberOfRows : defaultNumRows)
  }
  return 0
}

Then the code to make our button work:

override func viewDidLoad() {
  super.viewDidLoad()

  updateExpandButtonTitle()
  expandButton.addTarget(self, action: "toggleExpand", forControlEvents: .TouchUpInside)
  tableView.sectionFooterHeight = 44
}

override func tableView(tableView: UITableView, viewForFooterInSection section: Int) -> UIView? {
  return expandButton
}

Depending on the current value of our expanded property we will show either "Show less" or "Show more" as the button title.

func updateExpandButtonTitle() {
  expandButton.setTitle(expanded ? "Show less" : "Show more", forState: .Normal)
}

When we tap the button, we'll invert the expanded property, update the button title and preferred content size and reload the table view data.

func toggleExpand() {
  expanded = !expanded
  updateExpandButtonTitle()
  updatePreferredContentSize()
  tableView.reloadData()
}

And as a result we can now toggle the number of rows we want to see.

[images: the widget collapsed with a Show more button, and expanded with a Show less button]

Caching

At this moment, each time we open the widget, we first see an empty list; the items only appear once the feed is loaded. To improve this, we can cache the retrieved items and display those when the widget is opened, before we load the items from the feed. The TMCache library makes this possible with little effort. We can add it to our Podfile and bridging header the same way we did for the BlockRSSParser library.
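
Assuming the library's umbrella header is named TMCache.h, the bridging header would then contain both imports:

#import "RSSParser.h"
#import "TMCache.h"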

Here too, a computed property works nicely for caching the items while hiding the actual implementation:

var cachedItems : [RSSItem]? {
  get {
    return TMCache.sharedCache().objectForKey("feed") as? [RSSItem]
  }
  set (newItems) {
    TMCache.sharedCache().setObject(newItems, forKey: "feed")
  }
}

Since the RSSItem class of the BlockRSSParser library conforms to the NSCoding protocol, we can store instances directly with TMCache. When we retrieve the items from the cache for the first time, we'll get nil since the cache is still empty. Therefore cachedItems needs to be optional, and so does the downcast, which is why we use the as? operator.

We can now update the cache once the items are loaded simply by assigning a value to the property. So in our success closure we add the following:

self.cachedItems = self.items

And then to load the cached items, we add two more lines to the end of viewDidLoad:

items = cachedItems
updatePreferredContentSize()

And we're done. Now each time the widget is opened it will first display the cached items.
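
Should you ever need to throw the cached feed away (while debugging, for example), TMCache lets you remove the stored object again. A small sketch with a hypothetical helper, not part of the original project:

// Hypothetical helper: clear the cached feed
func clearCachedItems() {
  TMCache.sharedCache().removeObjectForKey("feed")
}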

There is one last thing we can do to improve our widget. As mentioned earlier, the completionHandler of widgetPerformUpdateWithCompletionHandler can also be called with NCUpdateResult.NoData. Now that we have the items that we loaded previously we can compare newly loaded items with the old and use NoData in case they haven't changed. Here is our final implementation of the success closure:

success: { feedItems in
  if self.items == nil || self.items! != feedItems {
    self.items = feedItems as? [RSSItem]
    self.tableView.reloadData()
    self.updatePreferredContentSize()
    self.cachedItems = self.items
    completionHandler(.NewData)
  } else {
    completionHandler(.NoData)
  }
},

And since it's Swift, we can simply use the != operator to see if the arrays have unequal content.
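
As a tiny illustration of this element-wise array comparison (plain integers for simplicity):

let a = [1, 2, 3]
let b = [1, 2, 3]
let c = [3, 2, 1]
a != b // false: same elements in the same order
a != c // true: order matters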

Source code on GitHub

As mentioned at the beginning of this post, the source code of the project is available on GitHub, with some minor changes that are not essential to this blog post. Of course pull requests are always welcome. Also let me know in the comments below if you'd like to see this widget released on the App Store.

React In Modern Web Applications: Part 2

Xebia Blog - Fri, 09/12/2014 - 22:32

As mentioned in part 1 of this series, React is an excellent Javascript library for writing highly dynamic client-side components. However, React is not meant to be a complete MVC framework. Its strength is the View part of the Model-View-Controller pattern. AngularJS is one of the best known and most powerful Javascript MVC frameworks. One of the goals of the course was to show how to integrate React with AngularJS.

Integrating React with AngularJS

Developers can create reusable components in AngularJS by using directives. Directives are very powerful but complex, and the code quickly tends to become messy and hard to maintain. Therefore, replacing the directive content in your AngularJS application with React components is an excellent use-case for React. In the course, we created a Timesheet component in React. To use it with AngularJS, create a directive that loads the WeekEntryComponent:

angular.module('timesheet')
    .directive('weekEntry', function () {
        return {
            restrict: 'A',
            link: function (scope, element) {
                React.renderComponent(new WeekEntryComponent({
                                             companies: scope.companies,
                                             model: scope.model}), element.find('#weekEntry')[0]);
            }
        };
    });

As you can see, a simple bit of code. Since this is AngularJS code, I prefer not to use JSX and write the React specific code in plain Javascript. However, if you prefer you could use JSX syntax (do not forget to add the directive at the top of the file!). We are basically creating a component and rendering it on the element with id weekEntry. We pass two values from the directive scope to the WeekEntryComponent as properties. The component will render based on the values passed in.

Reacting to changes in AngularJS

However, the code as shown above has one fatal flaw: the component will not re-render when the companies or model values change in the scope of the directive. The reason is simple: React does not know these values are dynamic, and the component gets rendered once when the directive is initialised. React re-renders a component in only two situations:

  • If the value of a property changes: this allows parent components to force re-rendering of child components
  • If the state object of a component changes: a component can create a state object and modify it

State should be kept to a minimum, and preferably be located in only one component. Most components should be stateless. This makes for simpler, easier to maintain code.

To make the interaction between AngularJS and React dynamic, the WeekEntryComponent must be re-rendered explicitly by AngularJS whenever a value changes in the scope. AngularJS provides the watch function for this:

angular.module('timesheet')
  .directive('weekEntry', function () {
    return {
      restrict: 'A',
      link: function (scope, element) {
        scope.$watchCollection('[clients, model]', function (newData) {
          if (newData[0] !== undefined && newData[1] !== undefined) {
            React.renderComponent(new WeekEntryComponent({
                rows: newData[1].companies,
                clients: newData[0]
              }
            ), element.find('#weekEntry')[0]);
          }
        });
      }
    };
  });

In this code, AngularJS watches two values in the scope, clients and model, and when one of the values is changed by a user action the function gets called, re-rendering the React component only when both values are defined. If most of the scope is needed from AngularJS, it is better to pass in the complete scope as a property and put a watch on the scope itself. Remember, React is very smart about re-rendering to the browser. Only if a change in the scope properties leads to a real change in the DOM will the browser be updated!

Accessing AngularJS code from React

When your component needs to use functionality defined in AngularJS you need to use a function callback. For example, in our code, saving the timesheet is handled by AngularJS, while the save button is managed by React. We solved this by creating a saveTimesheet function and adding it to the scope of the directive. In the WeekEntryComponent we added a new property: saveMethod:

angular.module('timesheet')
  .directive('weekEntry', function () {
    return {
      restrict: 'A',
      link: function (scope, element) {
        scope.$watchCollection('[clients, model]', function (newData) {
          if (newData[0] !== undefined && newData[1] !== undefined) {
            React.renderComponent(WeekEntryComponent({
              rows: newData[1].companies,
              clients: newData[0],
              saveMethod: scope.saveTimesheet
            }), element.find('#weekEntry')[0]);
          }
        });
      }
    };
  });

Since this function is not going to be changed it does not need to be watched. Whenever the save button in the TimeSheetComponent is clicked, the saveTimesheet function is called and the new state of the timesheet is saved.

What's next

In the next part we will look at how to deal with state in your components and how to avoid passing callbacks through your entire component hierarchy when changing state.

Stuff The Internet Says On Scalability For September 12th, 2014

Hey, it's HighScalability time:


Each dot in this image is an entire galaxy containing billions of stars. What's in there?
  • Quotable Quotes:
    • mseepgood: Or "another language that's becoming popular, Node.js"
    • Joe Moreno: What good are billions of cycles of CPU power that make me wait. I shouldn't have to wait longer and longer due to launching, buffering, syncing, I/O and latency.
    • @stevecheney: Apple Pay is the magic that integrated hardware / software produces. No one else in the world can do this.
    • @etherealmind: Next gen Intel Xeon E5 V3 CPU includes packet processor for 40GBE, 30x increase in OpenSSL crypto, 25% increase in DPDK perf. #IDF14
    • @pbailis: There's actually an interesting question in understanding when to break "sharing" -- at core, NUMA domain, server, or cluster level?
    • @fmueller_bln: Just wait some minutes for vagrant to provision a vm with puppet and you’ll know why docker may be better option for dev machines...

  • Encryption will make fighting the spam war much costlier, reveals Mike Hearn in an awesome post: A brief history of the spam war, where he gives insightful color commentary of the punch and counter-punch between World Heavyweight Champion Google and the challenger, Clever Spammer.  Mike worked in the Gmail trenches for over four years and recommends: make sending email cost money; use money to create deposits using bitcoin. 

  • jeswin: No other browser can practically implement or support Dart. If they do, their implementation will be slower than Google's and will get classified as inferior. < Ignoring the merits of Dart, this is an interesting ecosystem effect. By rating sites for reasons other than content quality, Google can in effect select for characteristics over which it has a comparative advantage. It's not an arms length transaction. 

  • Dateline Seattle. Social media users execute a coordinated denial of service attack on cell networks, preventing those in need from accessing emergency services. Who are these terrorists? Football fans. City of Seattle asks people to stop streaming videos, posting photos because of football. Tweets, Instagram, YouTube, and Snapchat are overloading the cell networks so calls can't get through. Should the cell network expand capacity? Should there be an app tax to constrain demand? Should users pay per packet? As a 49ers fan I have another suggestion...move games to a different venue, perhaps the moon. That will help.

  • Are you a militant cable cutter who thinks the future of  TV is the Internet? Not so fast says Dan Rayburn in Internet Traffic Records Could Be Broken This Week Thanks To Apple, NFL, Sony, Xbox, EA and Others: Delivering video over the Internet at the same scale and quality that you do over a cable network isn’t possible. The Internet is not a cable network and if you think otherwise, you will be proven wrong this week. We’re going to see long download times, more buffering of streams, more QoS issues and ISPs that will take steps to deal with the traffic. 

  • Ted Nelson takes on the impossible in on How Bitcoin Actually Works (Computers for Cynics #7). And he does an excellent job, sharing his usual insight with a twist. The title is misleading however. There's hardly any cynicism. How disappointing! Ted is clearly impressed with the design and implementation of bitcoin. For good reason. No matter what you think of bitcoin and its potential role in society, it is a very well thought out and impressive piece of technology. On par with Newton, Mr. Nelson suggests. If you watch this you'll probably realize that you don't actually understand bitcoin, even if you think you do, and that's a good thing.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Azure: SQL Databases, API Management, Media Services, Websites, Role Based Access Control and More

ScottGu's Blog - Scott Guthrie - Fri, 09/12/2014 - 07:14

This week we released a major set of updates to Microsoft Azure. This week’s updates include:

  • SQL Databases: General Availability of Azure SQL Database Service Tiers
  • API Management: General Availability of our API Management Service
  • Media Services: Live Streaming, Content Protection, Faster and Cost Effective Encoding, and Media Indexer
  • Web Sites: Virtual Network integration, new scalable CMS with WordPress and updates to Web Site Backup in the Preview Portal
  • Role-based Access Control: Preview release of role-based access control for Azure Management operations
  • Alerting: General Availability of Azure Alerting and new alerts on events

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

SQL Databases: General Availability of Azure SQL Database Service Tiers

I’m happy to announce the General Availability of our new Azure SQL Database service tiers - Basic, Standard, and Premium.  The SQL Database service within Azure provides a compelling database-as-a-service offering that enables you to quickly innovate & stand up and run SQL databases without having to manage or operate VMs or infrastructure.

Today’s SQL Database Service Tiers all come with a 99.99% SLA, and databases can now grow up to 500GB in size.

Each SQL Database tier now guarantees a consistent performance level that you can depend on within your applications – avoiding the need to worry about “noisy neighbors” who might impact your performance from time to time.

Built-in point-in-time restore support now provides you with the ability to automatically re-create databases at a certain point of time (giving you much more backup flexibility and allowing you to restore to exactly the point before you accidentally did something bad to your data).

Built-in auditing support enables you to gain insight into events and changes that occur with the databases you host.

Built-in active geo-replication support, available with the premium tier, enables you to create up to 4 readable secondary databases in any Azure region.  When active geo-replication is enabled, we will ensure that all transactions committed to the database in your primary region are continuously replicated to the databases in the other regions as well:

[image: diagram of active geo-replication to secondary regions]

One of the primary benefits of active geo-replication is that it provides application control over disaster recovery at a database level.  Having cross-region redundancy enables your applications to recover in the event of a disaster (e.g. a natural disaster, etc).  The new active geo-replication support enables you to initiate/control any failovers – allowing you to shift the primary database to any of your secondary regions:

[image: initiating a failover to a secondary region]

This provides a robust business continuity offering, and enables you to run mission critical solutions in the cloud with confidence.

More Flexible Pricing

SQL Databases are now billed on a per-hour basis – allowing you to quickly create and tear down databases, and dynamically scale up or down databases even more cost effectively.

Basic Tier databases support databases up to 2GB in size and cost $4.99 for a full month of use.  Standard Tier databases support 250GB databases and now start at $15/month (there are also higher performance standard tiers at $30/month and $75/month). Premium Tier databases support 500GB databases as well as the active geo-replication feature and now start at $465/month.

The below table provides a quick look at the different tiers and functionality:

[image: table comparing the Basic, Standard and Premium tiers]

This page provides more details on how to think about DTU performance with each of the above tiers, and provides benchmark details on the number of transactions supported by each of the above service tiers and performance levels.

During the preview, we’ve heard from some ISVs that have a large number of databases with variable performance demands that they need the flexibility to share DTU performance resources across multiple databases, as opposed to managing tiers for each database individually.  For example, some SaaS ISVs may have a separate SQL database for each customer and, as the activity of each database varies, they want to manage a pool of resources with a defined budget across these customer databases.  We are working to enable this scenario within the new service tiers in a future service update. If you are an ISV with a similar scenario, please click here to sign up to learn more.

Learn more about SQL Databases on Azure here.

API Management Service: General Availability Release

I’m excited to announce the General Availability of the Azure API Management Service.

In my last post I discussed how API Management enables customers to securely publish APIs to developers and accelerate partner adoption.  These APIs can be used from mobile and client applications (on any device) as well as other cloud and service based applications.

The API management service supports the ability to take any APIs you already have (either in the cloud or on-premises) and publish them for others to use.  The API Management service enables you to:

  • Throttle, rate limit and quota your APIs
  • Gain analytic insights on how your APIs are being used and by whom
  • Secure your APIs using OAuth or key-based access
  • Track the health of your APIs and quickly identify errors
  • Easily expose a developer portal for your APIs that provides documentation and test experiences to developers who want to use your APIs

Today’s General Availability provides a formal SLA for Standard tier services.  We also have a developer tier of the service that you can use, starting at just $49 per month.

OAuth support in the Developer Portal

The API Management service provides a developer console that enables a great on-boarding and interactive learning experience for developers who want to use your APIs.  The developer console enables you to easily expose documentation as well as enable developers to try and test your APIs.

With this week’s GA release we are also adding support that enables API publishers to register their OAuth Authorization Servers for use in the console, which in turn allows developers to sign in with their own login credentials when interacting with your API - a critical feature for any API that supports OAuth. All normative authorization grant types are supported plus scopes and default scopes.

[image: OAuth authorization server registration in API Management]

For more details on how to enable OAuth 2 support with API Management and integration in the new developer portal, check out this tutorial.

Click here to learn more about the API Management service and try it out for free.

Media Services: Live Streaming, DRM, Faster Cost Effective Encoding, and Media Indexer

This week we are excited to announce the public preview of Live Streaming and Content Protection support with Azure Media Services.

The same Internet scale streaming solution that leading international broadcasters used to live stream the 2014 Winter Olympic Games and 2014 FIFA World Cup to tens of millions of customers globally is now available in public preview to all Azure customers. This means you can now stream live events of any size with the same level of scalability, uptime, and reliability that was available to the Olympics and World Cup.

DRM Content Protection

This week Azure Media Services is also introducing a new Content Protection offering which features both static and dynamic encryption with first party PlayReady license delivery and an AES 128-bit key delivery service.  This makes it easy to DRM protect both your live and pre-recorded video assets – and have them be available for users to easily watch them on any device or platform (Windows, Mac, iOS, Android and more).

Faster and More Cost Effective Media Encoding

This week, we are also introducing faster media encoding speeds and more cost-effective billing. Our enhanced Azure Media Encoder is designed for premium media encoding and is billed based on output GBs. Our previous encoder was billed on both input + output GBs, so the shift to output only billing will result in a substantial price reduction for all of our customers.

To help you further optimize your encoding workflows, we’re introducing Basic, Standard, and Premium Encoding Reserved units, which give you more flexibility and allow you to tailor the encoding capability you pay for to the needs of your specific workflows.

Media Indexer

Additionally, I’m happy to announce the General Availability of Azure Media Indexer, a powerful, market differentiated content extraction service which can be used to enhance the searchability of audio and video files.  With Media Indexer you can automatically analyze your media files and index the audio and video content in them. You can learn more about it here.

More Media Partners

I’m also pleased to announce the addition this week of several media workflow partners and client players to our existing large set of media partners:

  • Azure Media Services and Telestream’s Wirecast are now fully integrated, including a built-in destination that makes it quick and easy to send content from Wirecast’s live streaming production software to Azure.
  • Similarly, Newtek’s Tricaster has also been integrated into the Azure platform, enabling customers to combine the high production value of Tricaster with the scalability and reliability of Azure Media Services.
  • Cires21 and Azure Media have paired up to help make monitoring the health of your live channels simple and easy, and the widely-used JW player is now fully integrated with Azure to enable you to quickly build video playback experiences across virtually all platforms.
Learn More

Visit the Azure Media Services site for more information and to get started for free.

Websites: Virtual Network Integration, new Scalable CMS with WordPress

This week we’ve also released a number of great updates to our Azure Websites service.

Virtual Network Integration

Starting this week you can now integrate your Azure Websites with Azure Virtual Networks. This support enables your Websites to access resources attached to your virtual networks.  For example: this means you can now have a Website directly connect to a database hosted in a non-public VM on a virtual network.  If your Virtual Network is connected to your on-premises network (using a Site-to-Site software VPN or ExpressRoute dedicated fiber VPN) you can also now have your Website connect to resources in your on-premises network as well.

The new Virtual Network support enables both TCP and UDP protocols and will work with your VNET DNS. Hybrid Connections and Virtual Network are compatible such that you can also mix both in the same Website.  The new virtual network support for Web Sites is being released this week in preview.  Standard web hosting plans can have up to 5 virtual networks enabled. A website can only be connected to one virtual network at a time but there is no restriction on the number of websites that can be connected to a virtual network.

You can configure a Website to use a Virtual Network using the new Preview Azure Portal (http://portal.azure.com).  Click the “Virtual Network” tile in your web-site to bring up a virtual network blade that you can use to either create a new virtual network or attach to an existing one you already have:

[image: the Virtual Network blade of an Azure Website]

Note that an Azure Website requires that your Virtual Network has a configured gateway and Point-to-Site enabled. It will remain grayed out in the UI above until you have enabled this.

Scalable CMS with WordPress

This week we also released support for a Scalable CMS solution with WordPress running on Azure Websites.  Scalable CMS with WordPress provides the fastest way to build an optimized and hassle free WordPress Website. It is architected so that your WordPress site loads fast and can support millions of page views a month, and you can easily scale up or scale out as your traffic increases.

It is pre-configured to use Azure Storage, which can be used to store your site’s media library content, and can be easily configured to use the Azure CDN.  Every Scalable CMS site comes with auto-scale, staged publishing, SSL, custom domains, Webjobs, and backup and restore features of Azure Websites enabled. Scalable WordPress also allows you to use Jetpack to supercharge your WordPress site with powerful features available to WordPress.com users.

You can now easily deploy Scalable CMS with WordPress solutions on Azure via the Azure Gallery integrated within the new Azure Preview Portal (http://portal.azure.com).  When you select it within the portal it will walk you through automatically setting up and deploying a complete solution on Azure:

[image: Scalable WordPress in the Azure Gallery]

Scalable WordPress is ideal for Web developers, creative agencies, businesses and enterprises wanting a turn-key solution that maximizes performance of running WordPress on Azure Websites.  It’s fast, simple and secure WordPress hosting on Azure Websites.

Updates to Website Backup

This week we also updated our built-in Backup feature within Azure Websites with a number of nice enhancements.  Starting today, you can now:

  • Choose the exact destination of your backups, including the specific Storage account and blob container you wish to store your backups within.
  • Choose to backup SQL databases or MySQL databases that are declared in the connection strings of the website.
  • On the restore side, you can now restore to both a new site, and to a deployment slot on a site. This makes it possible to verify your backup before you make it live.

These new capabilities make it easier than ever to have a full history of your website and its associated data.

Security: Role Based Access Control for Management of Azure

As organizations move more and more of their workloads to Azure, one of the most requested features has been the ability to control which cloud resources different employees can access and what actions they can perform on those resources.

Today, I’m excited to announce the preview release of Role Based Access Control (RBAC) support in the Azure platform. RBAC is now available in the Azure preview portal and can be used to control access in the portal or access to the Azure Resource Manager APIs. You can use this support to limit the access of users and groups by assigning them roles on Azure resources. Highlights include:

  • A subscription is no longer the access management boundary in Azure. In April, we introduced Resource Groups, a container to group resources that share lifecycle. Now, you can grant users access on a resource group as well as on individual resources like specific Websites or VMs.
  • You can now grant access to both users and groups. RBAC is based on Azure Active Directory, so if your organization already uses groups in Azure Active Directory or Windows Server Active Directory for access management, you will be able to manage access to Azure the same way.

Below are some more details on how this works and can be enabled.

Azure Active Directory

Azure Active Directory is our directory service in the cloud.  You can create organizational tenants within Azure Active Directory and define users and groups within it – without having to have any existing Active Directory setup on-premises.

Alternatively, you can also sync (or federate) users and groups from your existing on-premises Active Directory to Azure Active Directory, and have your existing users and groups automatically be available for use in the cloud with Azure, Office 365, as well as over 2000 other SaaS based applications:

[image: syncing users and groups from on-premises Active Directory to Azure Active Directory]

All users that access your Azure subscriptions are now present in the Azure Active Directory to which the subscription is associated. This enables you to manage what they can do, as well as revoke their access to all Azure subscriptions by disabling their account in the directory.

Role Permissions

In this first preview we are pre-defining three built-in Azure roles that give you a choice of granting restricted access:

  • An Owner can perform all management operations for a resource and its child resources, including access management.
  • A Contributor can perform all management operations for a resource including create and delete resources. A contributor cannot grant access to others.
  • A Reader has read-only access to a resource and its child resources. A Reader cannot read secrets.

In the RBAC model, users who have been configured to be the service administrator and co-administrators of an Azure subscription are mapped as belonging to the Owners role of the subscription. Their access to both the current and preview management portals remains unchanged.

Additional users and groups that you then assign to the new RBAC roles will only have those permissions, and also will only be able to manage Azure resources using the new Azure preview portal and Azure Resource Manager APIs.  RBAC is not supported in the current Azure management portal or via older management APIs (since neither of these were built with the concept of role based security built-in).

Restricting Access based on Role Based Permissions

Let’s assume that your team is using Azure for development, as well as to host the production instance of your application. When doing this you might want to separate the resources employed in development and testing from the production resources using Resource Groups.

You might want to allow everyone in your team to have a read-only view of all resources in your Azure subscription – including the ability to read and review production analytics data. You might then want to only allow certain users to have write/contributor access to the production resources.  Let’s look at how to set this up:

Step 1: Setting up Roles at the Subscription Level

We’ll begin by mapping some users to roles at the subscription level.  These will then by default be inherited by all resources and resource groups within our Azure subscription.

To set this up, open the Billing blade within the Preview Azure Portal (http://portal.azure.com), and within the Billing blade select the Azure subscription that you wish to set up roles for: 

[image: selecting a subscription in the Billing blade]

Then scroll down within the blade of subscription you opened, and locate the Roles tile within it:

[image: the Roles tile in the subscription blade]

Clicking the Roles tile will bring up a blade that lists the pre-defined roles we provide by default (Owner, Contributor, Reader).  You can click any of the roles to bring up a list of the users assigned to the role.  Clicking the Add button will then allow you to search your Azure Active Directory and add either a user or group to that role. 

Below I’ve opened up the default Reader role and added David and Fred to it:

[image: the Reader role blade with David and Fred added]

Once we do this, David and Fred will be able to log into the Preview Azure Portal and will have read-only access to the resources contained within our subscription.  They will not be able to make any changes, though, nor be able to see secrets (passwords, etc).

Note that in addition to adding users and groups from within your directory, you can also use the Invite button above to invite users who are not currently part of your directory, but who have a Microsoft Account (e.g. scott@outlook.com), to also be mapped into a role.

Step 2: Setting up Roles at the Resource Level

Once you’ve defined the default role mappings at the subscription level, they will by default apply to all resources and resource groups contained within it. 

If you wish to scope permissions even further at just an individual resource (e.g. a VM or Website or Database) or at a resource group level (e.g. an entire application and all resources within it), you can also open up the individual resource/resource-group blade and use the Roles tile within it to further specify permissions.

For example, earlier we granted David reader role access to all resources within our Azure subscription.  Let’s now grant him contributor role access to just an individual VM within the subscription.  Once we do this he’ll be able to stop/start the VM as well as make changes to it.

To enable this, I’ve opened up the blade for the VM below.  I’ve then scrolled down the blade and found the Roles tile within the VM.  Clicking the contributor role within the Roles tile will then bring up a blade that allows me to configure which users will be contributors (meaning have read and modify permissions) for this particular VM.  Notice below how I’ve added David to this:

[image: adding David to the Contributor role on a VM]

Using this resource/resource-group level approach enables you to have really fine-grained access control permissions on your resources.

Command Line and API Access for Azure Role Based Access Control

The enforcement of the access policies that you configure using RBAC is done by the Azure Resource Manager APIs.  Both the Azure preview portal as well as the command line tools we ship use the Resource Manager APIs to execute management operations. This ensures that access is consistently enforced regardless of what tools are used to manage Azure resources.

With this week’s release we’ve included a number of new PowerShell APIs that enable you to automate setting up as well as controlling role based access.

Learn More about Role Based Access

Today’s Role Based Access Control Preview provides a lot more flexibility in how you manage the security of your Azure resources.  It is easy to setup and configure.  And because it integrates with Azure Active Directory, you can easily sync/federate it to also integrate with the existing Active Directory configuration you might already have in your on-premises environment.

Getting started with the new Azure Role Based Access Control support is as simple as assigning the appropriate users and groups to roles on your Azure subscription or individual resources. You can read more detailed information on the concepts and capabilities of RBAC here. Your feedback on the preview features is critical for all improvements and new capabilities coming in this space, so please try out the new features and provide us your feedback.

Alerts: General Availability of Azure Alerting and new Alerts on Events support

I’m excited to announce the release of Azure Alerting to General Availability. Azure alerts supports the ability to create alert thresholds on metrics that you are interested in, and then have Azure automatically send an email notification when that threshold is crossed. As part of the general availability release, we are removing the 10 alert rule cap per subscription.

Alerts are available in the full Azure portal by clicking Management Services in the left navigation bar:

[image: alert rules under Management Services in the full Azure portal]

Also, alerting is available on most of the resources in the Azure preview portal:

[image: alerting on a resource in the Azure preview portal]

You can create alerts on metrics from 8 different services today (and we are adding more all the time):

  • Cloud Services
  • Virtual Machines
  • Websites
  • Web hosting plans
  • Storage accounts
  • SQL databases
  • Redis Cache
  • DocumentDB accounts

In addition to general availability for alerts on metrics, we are also previewing the ability to create alerts on operational events. This means you can get an email if someone stops your website, if your virtual machines are deleted, or if your Azure Resource Manager template deployment failed. Like alerts on metrics, you can route these alerts to the service and co-administrators, or, to a custom email address you provide.  You can configure these events on a resource in the Azure Preview Portal.  We have enabled this within the Portal for Websites – we’ll be extending it to all resources in the future.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

Ten at Ten Meetings

Ten at Ten Meetings are a very simple tool for helping teams stay focused and connected, and collaborate more effectively, the Agile way.

I’ve been leading distributed teams and v-teams for years.   I needed a simple way to keep everybody on the same page, expose issues, and help everybody on the team increase their awareness of results and progress, as well as unblock and breakthrough blocking issues.

Why Ten at Ten Meetings?

When people are remote, it’s easy to feel disconnected, and it’s easy to start to feel like different people are just a “black box” or “go dark.”

Ten at Ten Meetings have been my friend and have helped me help everybody on the team stay in sync and appreciate each other’s work, while finding better ways to team up on things and drive to results, in a collaborative way.  I believe I started Ten at Ten Meetings back in 2003 (before that, I wasn’t as consistent … I think 2003 is when I realized that a quick sync each day keeps the “black box” away).

Overview of Ten at Ten Meetings

I’ve written about Ten at Ten Meetings before in my posts on How To Lead High-Performance Distributed Teams, How I Use Agile Results, Interview on Timeboxing for HBR (Harvard Business Review), Agile Results Works for Teams and Leaders Too,  and 10 Free Leadership Tools for Work and Life, but I thought it would be helpful to summarize some of the key information at a glance.

Here is an overview of Ten at Ten Meetings:

This is one of my favorite tools for reducing email and administration overhead and getting everybody on the same page fast.  It's simply a stand-up meeting.  I tend to have them at 10:00, and I set a limit of 10 minutes.  This way people look forward to the meeting as a way to very quickly catch up with each other, and to stay on top of what's going on, and what's important.

The way it works is I go around the (virtual) room, and each person identifies what they got done yesterday, what they're getting done today, and any help they need.  It's a fast process, although it can take practice in the beginning.  When I first started, I had to get in the habit of hanging up on people if it went past 10 minutes.  People very quickly realized that the ten minute meeting was serious.

Also, as issues came up, if they weren't fast to solve on the fly and felt like a distraction, then we had to learn to take them offline.  Eventually, this helped build a case for a recurring team meeting where we could drill deeper into recurring issues or patterns, and focus on improving overall team effectiveness.

3 Steps for Ten at Ten Meetings

Here is more of a step-by-step approach:

  1. I schedule ten minutes for Monday through Thursday, at whatever time the team can agree to, but in the AM. (no meetings on Friday)
  2. During the meeting, we go around and ask three simple questions:  1)  What did you get done?  2) What are you getting done today? (focused on Three Wins), and 3) Where do you need help?
  3. We focus on the process (the 3 questions) and the timebox (10 minutes) so it’s a swift meeting with great results.   We put issues that need more drill-down or exploration into a “parking lot” for follow up.  We focus the meeting on status and clarity of the work, the progress, and the impediments.

You’d be surprised at how quickly people start to pay attention to what they’re working on and on what’s worth working on.  It also helps team members very quickly see each other’s impact and results.  It also helps people raise their bar, especially when they get to hear  and experience what good looks like from their peers.

Most importantly, it shines the light on little, incremental progress, and, if you didn’t already know, progress is the key to happiness in work and life.

You Might Also Like

10 Free Leadership Tools for Work and Life

How I Use Agile Results

How To Lead High-Performance Distributed Teams

Categories: Architecture, Programming

Xcode 6 GM & Learning Swift (with the help of Xebia)

Xebia Blog - Thu, 09/11/2014 - 07:32

I guess re-iterating the announcements Apple did on Tuesday is not needed.

What is most interesting to me about everything that happened on Tuesday is the fact that iOS 8 has now reached GM status and Apple sent out the call to bring in your iOS 8 uploads to iTunes Connect. iOS 8 is around the corner, about a week from now, bringing some great new features to the platform and ... Swift.

Swift

I was thinking about putting together a list of excellent links about Swift. But obviously somebody has done that already
https://github.com/Wolg/awesome-swift
(And best of all, if you find/notice an epic Swift resource out there, submit a pull request to that repo, or leave a comment on this blog post.)

If you are getting started, check out:
https://github.com/nettlep/learn-swift
It's a GitHub repo filled with extra Playgrounds to learn Swift in a hands-on manner. It elaborates a bit further on the later chapters of the Swift language book.
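
To give a flavor of that hands-on style, here is a trivial playground-style example of my own (not taken from the repo):

// Optionals and optional binding, playground style
let numbers = ["one": 1, "two": 2]
if let two = numbers["two"] {
    println("two = \(two)") // prints "two = 2"
}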

But the best way to learn Swift I can come up with is to join Xebia for a day (or two) and attend one of our special purpose update training offers hosted by Daniel Steinberg on 6 and 7 November. More info on that:

Booting a Raspberry Pi B+ with the Raspbian Debian Wheezy image

Agile Testing - Grig Gheorghiu - Thu, 09/11/2014 - 01:32
It took me a while to boot my brand new Raspberry Pi B+ with a usable Linux image. I chose the Raspbian Debian Wheezy image available on the downloads page of the official raspberrypi.org site. Here are the steps I needed:

1) Bought a micro SD card. Note: DO NOT get a regular SD card for the B+, because it will not fit in the SD card slot. You need a micro SD card.

2) Inserted the SD card via an SD USB adaptor in my MacBook Pro.

3) Went to the command line and ran df to see which volume the SD card was mounted as. In my case, it was /dev/disk1s1.

4) Unmounted the SD card. I initially tried 'sudo umount /dev/disk1s1' but the system told me to use 'diskutil unmount', so the command that worked for me was:

diskutil unmount /dev/disk1s1

5) Used dd to copy the Raspbian Debian Wheezy image (which I previously downloaded) per these instructions. Important note: the target of the dd command is /dev/disk1 and NOT /dev/disk1s1. I tried initially with the latter, and the Raspberry Pi wouldn't boot (one symptom that something was wrong, besides nothing appearing on the monitor, was that the green light was solid instead of flashing; a Google search revealed that one possible cause was a problem with the SD card). The dd command I used was:

dd if=2014-06-20-wheezy-raspbian.img of=/dev/disk1 bs=1m

6) At this point, I inserted the micro SD card into the SD slot on the Raspberry Pi, then connected the Pi to a USB power cable, a monitor via an HDMI cable, a USB keyboard and a USB mouse. I was able to boot and change the password for the pi user. The sky is the limit next ;-)