Software Development Blogs: Programming, Software Testing, Agile, Project Management


Architecture

Automated UI Testing with React Native on iOS

Xebia Blog - Mon, 02/08/2016 - 21:30

React Native is a technology for developing mobile apps on iOS and Android that have a near-native feel, all from one codebase. It is a very promising technology, but the documentation on testing could use more depth: there are some pointers in the docs, but they leave you wanting more. In this blog post I will show you how to use XCUITest to record and run automated UI tests on iOS.

Start by generating a brand new react native project and make sure it runs fine:
react-native init XCUITest && cd XCUITest && react-native run-ios
You should now see the default "Welcome to React Native!" screen in your simulator.

Let's add a textfield and display the results on screen by editing index.ios.js:

// React, Component, View, Text, TextInput and styles all come from the
// default index.ios.js template that react-native init generates.
class XCUITest extends Component {

  constructor(props) {
    super(props);
    this.state = { text: '' };
  }

  render() {
    return (
      <View style={styles.container}>
        <TextInput
          testID="test-id-textfield"
          style={{borderWidth: 1, height: 30, margin: 10}}
          onChangeText={(text) => this.setState({text})}
          value={this.state.text}
        />
        <View testID="test-id-textfield-result" >
          <Text style={{fontSize: 20}}>You typed: {this.state.text}</Text>
        </View>
      </View>
    );
  }
}

Notice that I added testID="test-id-textfield" and testID="test-id-textfield-result" to the TextInput and the View. This causes React Native to set an accessibilityIdentifier on the native view, which is something we can use to find the elements in our UI test.

Recording the test

Open the Xcode project in the ios folder and click File > New > Target. Then pick iOS > Test > iOS UI Testing Bundle. The defaults are OK; click Finish. Now there should be an XCUITestUITests folder with an XCUITestUITests.swift file in it.

Let's open XCUITestUITests.swift and place the cursor inside the testExample method. At the bottom left of the editor there is a small red button. If you press it, the app will build and start in the simulator.

Every interaction you now have with the app will be recorded and added to the testExample method, just like in the looping gif at the bottom of this post. Now type "123" and tap on the text that says "You typed: 123". End the recording by clicking on the red dot again.

Something like this should have appeared in your editor:

      let app = XCUIApplication()
      app.textFields["test-id-textfield"].tap()
      app.textFields["test-id-textfield"].typeText("123")
      app.staticTexts["You typed: 123"].tap()

Notice that you can pull down the selectors to change them. Change the "You typed" selector to make it more specific, change the .tap() into .exists and then surround it with XCTAssert to do an actual assert:

      XCTAssert(app.otherElements["test-id-textfield-result"].staticTexts["You typed: 123"].exists)

Now if you run the test it will show you a nice green checkmark in the margin and say "Test Succeeded".

In this short blog post I showed you how to use the React Native testID attribute to tag elements, and how to record and adapt an XCUITest in Xcode. There is a lot more to be told about React Native, so don't forget to follow me on Twitter (@wietsevenema).

Recording UI Tests in Xcode

What's Next? The NFL's Magic Yellow Line Shows the Way to Augmented Reality

 

What’s next? Mobile is entering its comforting middle age period of development. Conversational commerce is a thing, a good thing, but is it really a great thing?

What’s next may be what has been next for decades: augmented reality (AR), and VR with it. AR systems will be here sooner than you might think: a matter of years, not decades. Robert Scoble, for example, thinks Meta, an early startup in the AR industry, will be bigger than the Macintosh. More on that in a later post. Magic Leap has no product and $1.3 billion in funding. Facebook has Oculus. Microsoft has HoloLens. Google may be releasing a VR system later this year. Apple is working on VR. Becoming the next iPhone is up for grabs.

AR is a Huge Opportunity for Programmers and Startups 

This is a technological revolution that will be bigger than mobile. Opportunities in mobile for developers have largely played out. Experience shows the earlier you get in on a revolution the better the opportunity will be. Do you want to be writing free iOS apps forever?

It’s so early we don’t really have an idea of what AR is, what the market will be, or what it means from a developer perspective. But if you watched the Super Bowl, you saw an early example of the power of AR: the benign-looking, yet technically impressive, computer-generated yellow first-down line.

Augmented Reality is Already a Sports Reality
Categories: Architecture

Making Agile even more Awesome. By Nature.

Xebia Blog - Mon, 02/08/2016 - 11:31

Watching the evening news, it should come as no surprise that the world around us is changing ever faster and becoming too complex to fit into a system we as humankind can still control. We have to learn and adapt much faster to solve our epic challenges. The Agile mindset and methodologies are an important mainstay here. Adding some principles from nature makes them even more awesome.

In organizations, and in our lives, we are in a constant battle to “beat the system”: steering the economy, nature, life itself. We are fighting against it, and becoming less and less successful at it. What should change here?

First, we could start to let go of the things we can’t control and fully trust the system we live in: nature. It’s the ultimate Agile system, continuously learning and adapting to changing environments. But how?

We have created planes and boats by observing how nature did it: biomimetics. In my job as an Agile innovation consultant, I use these and other related principles:

  1. Innovation engages in lots of experimentation: life creates success models through making mistakes, survival of the fittest.
  2. Continuously improve by feedback loops.
  3. Use only the energy you need. Work smart and effective.
  4. Fit form to function. Function takes priority over esthetics.
  5. Recycle: Resources are limited, (re)use them smart.
  6. Encourage cooperation.
  7. Positivity is an important source of energy, like sunlight can be for nature.
  8. Aim for diversity. For example, diverse problem solvers working together can outperform groups of high-ability problem solvers.
  9. Demand local expertise, to be aware of the need of local differences.
  10. Create a safe environment to experiment. Like Facebook is able to release functionality every hour for a small group of users.
  11. Outperform frequently to gain endurance and to stay fit.
  12. Reduce complexity by minimizing the number of materials and tools. For example, 96% of life on this planet is made up of six types of atoms: carbon, hydrogen, oxygen, nitrogen, phosphorus and sulphur.
How to kickstart your start-up?

Until a couple of years ago, innovative tools were only available to financially powerful companies. Now, innovative tools like 3D printing and the Internet of Things are accessible to everybody. The same applies to Agile. This enables you to enter new markets at extremely low marginal cost. In these start-ups you can recognize elements of natural agility. A brilliant example is Joe Justice's WikiSpeed: in less than 3 months he succeeded in building a 100 mile-per-gallon street-legal car, defeating companies like Tesla. This all shows you can solve apparently impossible challenges by trusting your natural common sense. It's that simple.

Paul Takken (Xebia) and Joe Justice (Scrum Inc.) are currently working together on several global initiatives, coaching governments and large enterprises in reinventing themselves so they can anticipate today's epic challenges. This is done by making smarter use of people’s talents, tooling and materials, and of the Agile and Lean principles mentioned above.

Setting up Jenkins to run headless Selenium tests in Docker containers

Agile Testing - Grig Gheorghiu - Sat, 02/06/2016 - 01:40
This is the third post in a series on running headless Selenium WebDriver tests. Here are the first two posts:
  1. Running Selenium WebDriver tests using Firefox headless mode on Ubuntu
  2. Running headless Selenium WebDriver tests in Docker containers
In this post I will show how to add the final piece to this workflow, namely how to fully automate the execution of Selenium-based WebDriver tests running Firefox in headless mode in Docker containers. I will use Jenkins for this example, but the same applies to other continuous integration systems.
1) Install docker-engine on the server running Jenkins (I covered this in my post #2 above)
2) Add the jenkins user to the docker group so that Jenkins can run the docker command-line tool in order to communicate with the docker daemon. Remember to restart Jenkins after doing this.
3) Go through the rest of the workflow in my post above ("Running headless Selenium WebDriver tests in Docker containers") and make sure you can run all the commands in that post from the command line of the server running Jenkins.
4) Create a directory structure for your Selenium WebDriver tests (mine are written in Python). 
I have a directory called selenium-docker which contains a directory called tests, under which I put all my Python WebDriver tests, named sel_wd_*.py. I also have a simple shell script named run_all_selenium_tests.sh which does the following:
#!/bin/bash

# target host, e.g. someotherdomain.example.com
# (if not specified, the default is somedomain.example.com)
TARGET=$1

for f in `ls tests/sel_wd_*.py`; do
    echo Running $f against $TARGET
    python $f $TARGET
done
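The individual tests are not shown in this post, so here is a hypothetical sketch of what one sel_wd_*.py file could look like (the file name, URL scheme and assertion are invented for illustration); the target host arrives as the first command-line argument, exactly as the runner script passes it:

```python
import sys

# Hypothetical sel_wd_*.py test; all names are invented for illustration.
def target_url(argv):
    # first command-line argument is the target host, with the same
    # default as the Dockerfile's TARGET variable
    target = argv[1] if len(argv) > 1 else "somedomain.example.com"
    return "http://%s/" % target

def run(argv):
    # selenium is imported lazily so the argument handling above can be
    # exercised without a browser installed
    from selenium import webdriver
    driver = webdriver.Firefox()
    try:
        driver.get(target_url(argv))
        # the assertion message contains FAILED, the string the Jenkins
        # job later greps the container logs for
        assert driver.title, "FAILED: empty title for %s" % target_url(argv)
        print("PASSED: %s" % target_url(argv))
    finally:
        driver.quit()

# invoked by the runner as: python tests/sel_wd_example.py $TARGET
# run(sys.argv)
```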
My selenium-docker directory also contains the xvfb.init file I need for starting up Xvfb in the container, and finally it contains this Dockerfile:
FROM ubuntu:trusty

RUN echo "deb http://ppa.launchpad.net/mozillateam/firefox-next/ubuntu trusty main" > /etc/apt/sources.list.d/mozillateam-firefox-next-trusty.list
RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys CE49EC21
RUN apt-get update
RUN apt-get install -y firefox xvfb python-pip
RUN pip install selenium
RUN mkdir -p /root/selenium/tests

ADD tests /root/selenium/tests
ADD run_all_selenium_tests.sh /root/selenium

ADD xvfb.init /etc/init.d/xvfb
RUN chmod +x /etc/init.d/xvfb
RUN update-rc.d xvfb defaults

ENV TARGET=somedomain.example.com

CMD (service xvfb start; export DISPLAY=:10; cd /root/selenium; ./run_all_selenium_tests.sh $TARGET)
I explained what this Dockerfile achieves in the 2nd post referenced above. The ADD instructions will copy all the files in the tests directory to the directory called /root/selenium/tests, and will copy run_all_selenium_tests.sh to /root/selenium. The ENV variable TARGET represents the URL against which we want to run our Selenium tests. It is set by default to somedomain.example.com, and is used as the first argument when running run_all_selenium_tests.sh in the CMD instruction.
At this point, I checked in the selenium-docker directory and all files and directories under it into a Github repository I will call 'devops'.
5) Create a new Jenkins project (I usually create a New Item and copy it from an existing project).
I specified that the build is parameterized and I indicated a choice parameter called TARGET_HOST with a few host/domain names that I want to test. I also specified Git as the Source Code Management type, and I indicated the URL of the devops repository on Github. Most of the action of course happens in the Jenkins build step, which in my case is of type "Execute shell". Here it is:
#!/bin/bash

set +e

IMAGE_NAME=selenium-wd:v1
cd $WORKSPACE/selenium-docker

# build the image out of the Dockerfile in the current directory
/usr/bin/docker build -t $IMAGE_NAME .

# run a container based on the image
CONTAINER_ID=`/usr/bin/docker run -d -e "TARGET=$TARGET_HOST" $IMAGE_NAME`
echo CONTAINER_ID=$CONTAINER_ID

# while the container is still running, sleep and check logs; repeat every 40 sec
while [ $? -eq 0 ]; do
  sleep 40
  /usr/bin/docker logs $CONTAINER_ID
  /usr/bin/docker ps | grep $IMAGE_NAME
done

# docker logs sends errors to stderr so we need to save its output to a file first
/usr/bin/docker logs $CONTAINER_ID > d.out 2>&1

# remove the container so they don't keep accumulating
docker rm $CONTAINER_ID

# mark jenkins build as failed if log output contains FAILED
grep "FAILED" d.out
if [[ $? -eq 0 ]]; then
    rm d.out
    exit 1
else
    rm d.out
    exit 0
fi
Some notes:
  • it is recommended that you specify #!/bin/bash as the 1st line of your script, to make sure that bash is the shell that is being used
  • use set +e if you want the Jenkins shell script to continue after hitting a non-zero return code (the default behavior is for the script to stop on the first line it encounters an error and for the build to be marked as failed; subsequent lines won't get executed, resulting in much pulling of hair)
  • the Jenkins script will build a new image every time it runs, so that we make sure we have updated Selenium scripts in place
  • when running the container via docker run, we specify -e "TARGET=$TARGET_HOST" as an extra command line argument. This will override the ENV variable named TARGET in the Dockerfile with the value received from the Jenkins multiple choice dropdown for TARGET_HOST
  • the main part of the shell script stays in a while loop that checks for the return code of "/usr/bin/docker ps | grep $IMAGE_NAME". This is so we wait for all the Selenium tests to finish, at which point docker ps will not show the container running anymore (you can still see the container by running docker ps -a)
  • once the tests finish, we save the stdout and stderr of the docker logs command for our container to a file (this is so we capture both stdout and stderr; at first I tried something like docker logs $CONTAINER_ID | grep FAILED but this was never successful, because it was grep-ing against stdout, and errors are sent to stderr)
  • we grep the file (d.out) for the string FAILED and if we find it, we exit with code 1, i.e. unsuccessful as far as Jenkins is concerned. If we don't find it, we exit successfully with code 0.


Stuff The Internet Says On Scalability For February 5th, 2016


We have an early entry for the best vacation photo of the century. 

 

If you like this sort of Stuff then please consider offering your support on Patreon.
  • 1 billion: WhatsApp users; 3.5 billion: Facebook users in 2030; $3.5 billion: art sold online; $150 billion: China's budget for making chips; 37.5MB: DNA information in a single sperm; 

  • Quotable Quotes:
    • @jeffiel: "But seriously developers, trust us next time your needs temporarily overlap our strategic interests. And here's a t-shirt."
    • @feross: Modern websites are the epitome of inefficiency. Using giant multi-MB javascript files to do what static HTML could do in 1999.
    • Rob Joyce (NSA): 'We put the time in … to know [that network] better than the people who designed it and the people who are securing it,' he said. 'You know the technologies you intended to use in that network. We know the technologies that are actually in use in that network. Subtle difference. You'd be surprised about the things that are running on a network vs. the things that you think are supposed to be there.'
    • @MikeIsaac: i just realized how awkward Facebook's f8 conference is gonna be this year
    • @Nick_Craver: Stats correction: Stack Overflow did 157,370,800,409 redis ops in the past 30 days, almost always under 2% CPU:
    • @BenedictEvans: The global SMS system does around 20bn messages a day. WhatsApp is now doing 42bn. With 57 engineers.
    • @jaygoldberg: WhatsApp has the benefit of running on top of the world's data networks which employ a few more engineers... 
    • @anildash: It’s odd that developers think Twitter is so hostile while Facebook shuts down stuff like Parse & FBML + cuts back the Instagram & FB APIs.
    • @asynchio:  I use to think CEP = stateful business rules engine + inference + stream processing. Has it changed?
    • @Marco_Rasp: "SOA is about reuse, MicroServices about time to market." @samnewman #microxchg
    • @pfhllnts: "I predict quantum containers where Docker exists both inside and outside a container." @marcoceppi #fosdem
    • @viktorklang: Awesome story: 295x speedup with Akka Streams on same HW compared to Rails :) 
    • krinchan: Yes. Because a currency almost completely controlled by Chinese miners who are strangling the network at 1MB blocks, causing transaction times in excess of three hours at peak and just introduced the ability to arbitrarily reverse those transactions during the lag is totally going to handle DraftKings and FanDuel.
    • @mpesce: 1/The Apple AX series SOCs are more than powerful enough to run a Hololens-type device very effectively.
    • Matthew Yglesias: Amazon's leadership, from CEO Jeff Bezos on down, are deliberately redeploying every dollar of revenue Amazon earns into making the company bigger and bigger.
    • German forest ranger finds that trees have social networks: trees operate less like individuals and more as communal beings. Working together in networks and sharing resources, they increase their resistance to threats
    • @ValaAfshar: 11 years ago some guy named Mark Zuckerberg talks about his new company. He is now 4th richest person in the world. 
    • Bernard Marr: In China, the government is rolling out a social credit score that aggregates not only a citizen’s financial worthiness, but also how patriotic he or she is, what they post on social media, and who they socialize with
    • @Carnage4Life: Facebook is valued at $326 billion and worth more than Exxon Mobil. Remember when people freaked out at $15B value? 
    • @Nick_Craver: High levels of efficiency at scale aren't one thing; it's a thousand things. Many we haven't really shared in detail...and we should.
    • 2BuellerBells: Things to reinvent: Event loops (done!) Unix (In progress!) Erlang (est. 5 years)
    • @LusciousPear: I'm consistently seeing GETs from @googlecloud storage 2-5x faster than S3. niiiice
    • Kevin Old: The future looks mighty scalable.
    • @BenedictEvans: All curation grows until it requires search. All search grows until it requires curation.
    • @Carnage4Life: Google has 7 services with 1B monthly active users; Gmail, Search, Chrome, Android, Maps, YouTube and Google Play 
    • @jmhodges: That's 1.3 million unique domains in a single day. Yesterday. Let's Encrypt is doing a thing.
    • @danielbryantuk: "60% percent of app users rate performance/response time ahead of features" @grabnerandi  #OOP2016 
    • @tdeekens: Sometimes Monoliths don’t get enough respect. They’re part of our revenue system allowing us to build Microservices. They gave us a business
    • Searching for the Algorithms Underlying Life: Valiant’s self-stated goal is to find “mathematical definitions of learning and evolution which can address all ways in which information can get into systems.” If successful, the resulting “theory of everything”...would literally fuse life science and computer science together.
    • @mountain_ghosts: 1995: the information superhighway will mean anyone can do anything from anywhere 2015: must be willing to relocate to San Francisco

  • Fingerprinting made burglars put on gloves. CCTV made kids pull their hoods up. Spying made honest people use encryption. Forensics: What Bugs, Burns, Prints, DNA and More Tell Us About Crime.

  • So that's what bandwidth means. ucaetano: The bandwidth doesn't depend on the frequency you're occupying, but on the amount of spectrum available: you "usually" get on the order of 1 bps for every Hz of spectrum available for mobile: a 20MHz chunk of spectrum will give you ~20Mbps, no matter if it is 700MHz or 5 GHz. Higher frequencies have awful penetration and range, that's why today you define who wins in the mobile game by the amount of 700MHz and 800MHz spectrum they own. In other words, lower frequency spectrum is (within certain limits) always better.

  • Even spies have limits. Optic Nerve: millions of Yahoo webcam images intercepted by GCHQ. A British surveillance agency suffered the indignity of only saving images every five minutes from user feeds to reduce server load. My kingdom for a cloud! Why? They needed data to train their face recognition algorithms. That's what happens if you aren't Google.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Robot Framework and the keyword-driven approach to test automation - Part 2 of 3

Xebia Blog - Wed, 02/03/2016 - 18:03

In part 1 of our three-part post on the keyword-driven approach, we looked at the position of this approach within the history of test automation frameworks. We elaborated on the differences, similarities and interdependencies between the various types of test automation frameworks. This provided a first impression of the nature and advantages of the keyword-driven approach to test automation.

In this post, we will zoom in on the concept of a 'keyword'.

What are keywords? What is their purpose? And what are the advantages of utilizing keywords in your test automation projects? And are there any disadvantages or risks involved?

As stated in an earlier post, the purpose of this first series of introductory-level posts is to prevent all kinds of intrusive expositions in later posts. These later posts will be of a much more practical, hands-on nature and should be concerned solely with technical solutions, details and instructions. However, for those that are taking their first steps in the field of functional test automation and/or are inexperienced in the area of keyword-driven test automation frameworks, we would like to provide some conceptual and methodological context. By doing so, those readers may grasp the follow-up posts more easily.

Keywords in a nutshell A keyword is a reusable test function

The term ‘keyword’ refers to a callable, reusable, lower-level test function that performs a specific, delimited and recognizable task. For example: ‘Open browser’, ‘Go to url’, ‘Input text’, ‘Click button’, ‘Log in’, 'Search product', ‘Get search results’, ‘Register new customer’.

Most, if not all, of these are recognizable not only for developers and testers, but also for non-technical business stakeholders.

Keywords implement automation layers with varying levels of abstraction

As can be gathered from the examples given above, some keywords are more atomic and specific (or 'specialistic') than others. For instance, ‘Input text’ will merely enter a string into an edit field, while ‘Search product’ will be comprised of a chain (sequence) of such atomic actions (steps), involving multiple operations on various types of controls (assuming GUI-level automation).

Elementary keywords, such as 'Click button' and 'Input text', represent the lowest level of reusable test functions: the technical workflow level. These often do not have to be created, but are provided by existing, external keyword libraries (such as Selenium WebDriver) that can be made available to a framework. A situation that could require the creation of such atomic, lowest-level keywords would be automating at the API level.

The atomic keywords are then reused within the framework to implement composite, functionally richer keywords, such as 'Register new customer', 'Add customer to loyalty program', 'Search product', 'Add product to cart', 'Send gift certificate' or 'Create invoice'. Such keywords represent the domain-specific workflow activity level. They may in turn be reused to form other workflow activity level keywords that automate broader chains of workflow steps. Such keywords then form an extra layer of wrappers within the layer of workflow activity level keywords. For instance, 'Place an order' may be comprised of 'Log customer in', 'Search product', 'Add product to cart', 'Confirm order', etc. The modularization granularity applied to the automation of such broader workflow chains is determined by trading off various factors against each other - mainly factors such as the desired levels of readability (of the test design), of maintainability/reusability and of coverage of possible alternative functional flows through the involved business process. The eventual set of workflow activity level keywords forms the 'core' DSL (Domain Specific Language) vocabulary in which the highest-level specifications/examples/scenarios/test designs/etc. are to be written.

The latter (i.e. scenarios/etc.) represent the business rule level. For example, a high-level scenario might be:  'Given a customer has joined a loyalty program, when the customer places an order of $75,- or higher, then a $5,- digital gift certificate will be sent to the customer's email address'. Such rules may of course be comprised of multiple 'given', 'when' and/or 'then' clauses, e.g. multiple 'then' clauses conjoined through an 'and' or 'or'. Each of these clauses within a test case (scenario/example/etc.) is a call to a workflow activity level, composite keyword. As explicated, the workflow-level keywords, in turn, are calling elementary, technical workflow level keywords that implement the lowest-level, technical steps of the business scenario. The technical workflow level keywords will not appear directly in the high-level test design or specifications, but will only be called by keywords at the workflow activity level. They are not part of the DSL.

Keywords thus live in layers with varying levels of abstraction, where, typically, each layer reuses (and is implemented through) the more specialistic, concrete keywords from lower levels. Lower-level keywords are the building blocks of higher-level keywords, and at the highest level your test cases, too, will consist of keyword calls.

Of course, your automation solution will typically contain other types of abstraction layers, for instance a so-called 'object-map' (or 'gui-map') which maps technical identifiers (such as an xpath expression) onto logical names, thereby enhancing maintainability and readability of your locators. Of course, the latter example once again assumes GUI-level automation.
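As a minimal sketch (all logical names and locators invented for the example), such an object map can be little more than a lookup from logical names to technical locators:

```python
# Object map sketch: logical control names mapped onto technical locators
# (xpath expressions here); tests refer only to the logical names.
OBJECT_MAP = {
    "login.username": "//input[@id='user']",
    "login.password": "//input[@id='pwd']",
    "login.submit":   "//button[@type='submit']",
}

def locator(logical_name):
    # if an xpath changes, only this map needs maintenance, not the tests
    return OBJECT_MAP[logical_name]

print(locator("login.submit"))  # //button[@type='submit']
```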

Keywords are wrappers

Each keyword is a function that automates a simple or (more) composite/complex test action or step. As such, keywords are the 'building blocks' for your automated test designs. When having to add a customer as part of your test cases, you will not write out (hard code) the technical steps (such as entering the first name, entering the surname, etc.), but you will have one statement that calls the generic 'Add a customer' function which contains or 'wraps' these steps. This wrapped code, as a whole, thereby offers a dedicated piece of functionality to the testers.

Consequently, a keyword may encapsulate sizeable and/or complex logic, hiding it and rendering it reusable and maintainable. This mechanism of keyword-wrapping entails modularization, abstraction and, thus, optimal reusability and maintainability. In other words, code duplication is prevented, which dramatically reduces the effort involved in creating and maintaining automation code.

Additionally, the readability of the test design will improve, since the clutter of technical steps is replaced by a human-readable, parameterized call to the function, e.g.: | Log customer in | Bob Plissken | Welcome123 |. Using so-called embedded or interposed arguments, readability may be enhanced even further. For instance, declaring the login function as 'Log ${userName} in with password ${password}' will allow a test scenario to call the function like this: 'Log Bob Plissken in with password Welcome123'.
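Sketched in plain Python rather than actual Robot Framework syntax (the keyword and control names are invented for the example), the wrapping mechanism amounts to this:

```python
# Atomic, technical-workflow-level keywords; stand-ins for what a library
# like Selenium WebDriver would really do against the GUI.
def input_text(field, text):
    return "typed '%s' into %s" % (text, field)

def click_button(name):
    return "clicked %s" % name

# Composite, workflow-activity-level keyword wrapping the atomic steps.
def log_customer_in(user_name, password):
    return [
        input_text("username field", user_name),
        input_text("password field", password),
        click_button("login button"),
    ]

# The test design contains only the single, readable call:
steps = log_customer_in("Bob Plissken", "Welcome123")
print(len(steps))  # 3
```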

Keywords are structured

As mentioned in the previous section, keywords may hide rather complex and sizeable logic. This is because the wrapped keyword sequences may be embedded in control/flow logic and may feature other programmatic constructs. For instance, a keyword may contain:

  • FOR loops
  • Conditionals (‘if’, ‘else if’, ‘else’ branching constructs)
  • Variable assignments
  • Regular expressions
  • Etc.

Of course, keywords will feature such constructs more often than not, since encapsulating the involved complexity is one of the main purposes for a keyword. In the second and third generation of automation frameworks, this complexity was an integral part of the test cases, leading to automation solutions that were inefficient to create, hard to read & understand and even harder to maintain.

Being a reusable, structured function, a keyword can also be made generic, by taking arguments (as briefly touched upon in the previous section). For example, ‘Log in’ takes arguments: ${user}, ${pwd} and perhaps ${language}. This adds to the already high levels of reusability of a keyword, since multiple input conditions can be tested through the same function. As a matter of fact, it is precisely this aspect of a keyword that enables so-called data-driven test designs.

Finally, a keyword may also have return values, e.g. ‘Get search results’ returns ${nrOfItems}. The return value can be used for a myriad of purposes, for instance to perform assertions, as input for decision-making, or to pass into another function as an argument. Some keywords will return nothing, but only perform an action (e.g. change the application state, insert a database record or create a customer).
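A small sketch (names invented for the example) of a keyword whose return value feeds an assertion:

```python
# Keyword with a return value; the body is a stand-in for scraping the
# actual result rows from a page.
def get_search_results():
    return ["item-1", "item-2", "item-3"]

# the returned count can drive an assertion or a decision in the test
nr_of_items = len(get_search_results())
assert nr_of_items == 3, "expected 3 search results, got %d" % nr_of_items
print(nr_of_items)  # 3
```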

Risks involved With great power comes great responsibility

The benefits of using keywords have been explicated above. Amongst other advantages, such as enhanced readability and maintainability, the keyword-driven approach provides a lot of power and flexibility to the test automation engineer. Quasi-paradoxically, it is in harnessing this power and flexibility that the primary risk of the keyword-driven approach is introduced. That this risk should be of topical interest to us will become clear from a short digression into the subject of 'the new testing'.

In many agile teams, both 'coders' and 'non-coders' are expected to contribute to the automation code base. The boundaries between these (and other) roles are blurring. Despite the current (and sometimes rather bitter) polemic surrounding this topic, it seems to be inevitable that the traditional developer role will have to move towards testing (code) and the traditional tester role will have to move towards coding (tests). Both will use testing frameworks and tools, whether it be unit testing frameworks (such as JUnit), keyword-driven functional test automation frameworks (such as RF or Cucumber) and/or non-functional testing frameworks (such as Gatling or Zed Attack Proxy).

To this end, the traditional developer will have to become knowledgeable and gain experience in the field of testing strategies. Test automation that is not based on a sound testing strategy (and attuned to the relevant business and technical risks), will only result in a faster and more frequent execution of ineffective test designs and will thus provide nothing but a false sense of security. The traditional developer must therefore make the transition from the typical tool-centric approach to a strategy-centric approach. Of course, since everyone needs to break out of the silo mentality, both developer and tester should also collaborate on making these tests meaningful, relevant and effective.

The challenge for the traditional tester may prove to be even greater, and it is there that the aforementioned risks are introduced. As stated, the tester will have to contribute test automation code, not only at the highest level of test designs or specifications, but also at the lowest, keyword (fixture/step) level, where most of the intelligence, power and, hence, complexity resides. Just as the developer needs to ascend to the 'higher plane' of test strategy and design, the tester needs to descend into the implementation details of turning a test strategy and design into something executable. More and more testers with a background in 'traditional', non-automated testing are therefore in the process of acquiring enough coding skills to be able to make this contribution.

However, by having (hitherto) inexperienced people authoring code, severe stability and maintainability risks are introduced. Although all current (i.e. keyword-driven) frameworks facilitate and support creating automation code that is reusable, maintainable, robust, reliable, stable and readable, code authors still have to actively realize these qualities, by designing for them and building them into their automation solutions. Non-coders, though, in my experience, (at least initially) have quite some trouble understanding and (even more dangerously) appreciating the critical importance of applying design patterns and other best practices to their code.

That is, most traditional testers seem able to learn how to code (at a sufficiently basic level) rather quickly, partially because, generally, writing automation code is less complex than writing product code. They also get a taste for it: they soon become passionate and ambitious, eager to apply their newly acquired skills and to create lots of code. Caught in this rush, they often forget to refactor their code, downplay the importance of doing so (and the dangers involved) or simply opt to postpone it until it becomes too large a task. Because of this, even testers who have been properly trained in applying design patterns may still deliver code that is monolithic, unstable/brittle, non-generic and hard to maintain. Depending on the level at which the contribution is to be made (lowest level in code or mid-level in scripting), these risks apply to a greater or lesser extent.

Moreover, this risky behaviour may be incited by uneducated stakeholders, as a consequence of them holding unrealistic goals, maintaining a short-term view and (to put it bluntly) being ignorant of the pitfalls, limitations, disadvantages and risks that are inherent to all test automation projects.

Then take responsibility ... and get some help in doing so

Clearly then, the described risks are not so much inherent to the frameworks or to the approach to test automation, but rather flow from inexperience with these frameworks and, in particular, with this approach. That is, to be able to (optimally) benefit from the specific advantages of this approach, applying design patterns is imperative. This is a critical factor for the long-term success of any keyword-driven test automation effort. Without applying patterns to the test code, solutions will not be cost-efficient, maintainable or transferable, amongst other disadvantages. The costs will simply outweigh the benefits in the long run. What's more, essentially the whole purpose and added value of using keyword-driven frameworks is lost, since these frameworks were devised precisely to this end: to counter the severe maintainability and reusability problems of the earlier generations of frameworks. Therefore, of all the approaches to test automation, the keyword-driven approach depends to the greatest extent on the disciplined and rigorous application of standard software development practices, such as modularization, abstraction and genericity of code.

This might seem a truism. However, since typically the traditional testers (and thus novice coders) are nowadays directed by their management towards using keyword-driven frameworks for automating their functional, black-box tests (at the service/API- or GUI-level), automation anti-patterns appear and thus the described risks emerge. To make matters worse, developers remain mostly uninvolved, since a lot of these testers are still working within siloed/compartmented organizational structures.

In our experience, a combination of a comprehensive set of explicit best practices, training and on-the-job coaching, and a disciplined review and testing regime (applied to the test code) is an effective way of mitigating these risks. Additionally, silos need to be broken down, so as to foster collaboration on (and create synergy across) all testing efforts, as well as to be able to coordinate and orchestrate all of these efforts through a single, central, comprehensive and shared overall testing strategy.

Of course, the framework selected to implement a keyword-driven test automation solution is an important enabler as well. As will become apparent from this series of blog posts, the Robot Framework is the platform par excellence to facilitate, support and even stimulate these counter-measures and, consequently, to very swiftly enable and empower seasoned and beginning coders alike to contribute code that is efficient, robust, stable, reusable, generic and maintainable, as well as readable and transferable. That is not to say that it is the platform to use in any given situation, just that it has been designed with the intent of implementing the keyword-driven approach to its fullest extent. As mentioned in a previous post, the RF can be considered the epitome of the keyword-driven approach, bringing that approach to its logical conclusion. As such it optimally facilitates all of the mentioned preconditions for long-term success. Put differently, when using the RF it is hard to fall into the pitfalls inherent to keyword-driven test automation.

Some examples of such enabling features (that we will also encounter in later posts):

  • A straightforward, fully keyword-oriented scripting syntax that is both very powerful and yet very simple, for creating low- and/or mid-level test functions.
  • The availability of dozens of keyword libraries out-of-the-box, holding both convenience functions (for instance to manipulate and perform assertions on xml) and specialized keywords for directly driving various interface types. Interfaces such as REST, SOAP or JDBC can thus be interacted with without having to write a single line of integration code.
  • Very easy, almost intuitive means to apply a broad range of design patterns, such as creating various types of abstraction layers.
  • And lots and lots of other great and unique features.
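One of those design patterns, layered abstraction, can be illustrated with a small Python sketch. A plain dict stands in for the real UI driver (e.g. a Selenium session); the field and keyword names are invented:

```python
# Technical layer: low-level interaction keywords.
# A dict simulates the real UI driver in this sketch.
def fill_field(ui, field, value):
    ui[field] = value

def press(ui, button):
    ui.setdefault("pressed", []).append(button)

# Domain layer: a business-readable keyword composed from technical ones.
def log_in(ui, user, pwd):
    fill_field(ui, "username", user)
    fill_field(ui, "password", pwd)
    press(ui, "submit")

# Top layer: the test case reads as a single business step.
ui = {}
log_in(ui, "alice", "secret")
```

When the login page changes, only the technical-layer keywords need updating; the domain keywords and the test cases above them remain stable. That containment of change is precisely the maintainability benefit layering is meant to buy.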
Summary

We now have an understanding of the characteristics and purpose of keywords and of the advantages of structuring our test automation solution into (various layers of) keywords. At the same time, we have looked at the primary risk involved in the application of such a keyword-driven approach and at ways to deal with it.

Keyword-driven test automation is aimed at solving the problems that were instrumental in the failure of prior automation paradigms. However, for a large part it merely facilitates the involved solutions. That is, to actually reap the benefits that a keyword-driven framework has to offer, we need to use it in an informed, professional and disciplined manner, by actively designing our code for reusability, maintainability and all of the other qualities that make or break long-term success. The specific design as well as the unique richness of powerful features of the Robot Framework will give automators a head start when it comes to creating such code.

Of course, this 'adage' of intelligent and adept usage holds true for any kind of framework that may be used or applied in the course of a software product's life cycle.

Part 3 of this second post will go into the specific implementation of the keyword-driven approach by the Robot Framework.

A Case Study: WordPress Migration for Shift.ms

This case study involves a migration from a custom database to WordPress. The company tasked with the job is Valet, which has a vast portfolio of previous work that includes migrations from custom databases to WordPress, multisite to multisite, and multisite to single site, among others. The client is Shift.ms.

Problem

The client, Shift.ms, presented the team with a taxing problem. Shift.ms had a custom database that they needed migrated to WordPress. They had set up a new WordPress/BuddyPress installation and wanted their data moved into it. All this may seem rather simple. However, there was one complication: the client had some data in the newly installed WordPress that they intended to keep.
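A common way to handle such a merge, sketched below with invented record shapes (WordPress's real wp_posts schema differs in detail), is to offset the incoming primary keys past the highest existing ID, so kept rows and migrated rows cannot collide:

```python
def merge_records(existing, incoming):
    """Remap incoming ids past the highest existing id to avoid collisions."""
    offset = max((row["id"] for row in existing), default=0)
    remapped = []
    for row in incoming:
        new_row = dict(row)           # copy so the source data stays untouched
        new_row["id"] = row["id"] + offset
        remapped.append(new_row)
    return existing + remapped

existing = [{"id": 1, "title": "kept post"}, {"id": 7, "title": "another kept post"}]
incoming = [{"id": 1, "title": "migrated post"}]
merged = merge_records(existing, incoming)
assert [r["id"] for r in merged] == [1, 7, 8]
```

In a real migration, anything referencing the remapped IDs (comments, post meta, activity records) would of course have to be remapped with the same offset.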

Challenges

The main challenge was that the schema of the custom database and that of WordPress are structured very differently. The following issues arose in the effort to deal with the problem:

Categories: Architecture

FitNesse in your IDE

Xebia Blog - Wed, 02/03/2016 - 17:10

FitNesse has been around for a while. The tool was created by Uncle Bob back in 2001. It’s centered around the idea of collaboration: collaboration within a (software) engineering team and with your non-programmer stakeholders. FitNesse tries to achieve that by making it easy for non-programmers to participate in the writing of specifications, examples and acceptance criteria. It can be launched as a wiki web server, which makes it accessible to basically everyone with a web browser.

The key feature of FitNesse is that it allows you to verify the specs against the actual application: the System Under Test (SUT). This means that you have to make the documentation executable. FitNesse considers tables to be executable. When you read ordinary documentation you’ll find that requirements and examples are often outlined in tables, which makes for a natural fit.

There is no such thing as magic, so the link between the documentation and the SUT has to be created. That’s where things become tricky. The documentation lives in our wiki server, but code (that’s what we require to connect documentation and SUT) lives on the file system, in an IDE. What to do? Read a wiki page, remember the class and method names, switch to IDE, create classes and methods, compile, switch back to browser, test, and repeat? Well, so much for fast feedback! When you talk to programmers, you’ll find this to be the biggest problem with FitNesse.

Imagine, as a programmer, you're about to implement an acceptance test defined in FitNesse. With a single click, a fixture class is created and adding fixture methods is just as easy. You can easily jump back and forth between the FitNesse page and the fixture code. Running the test page is as simple as hitting a key combination (Ctrl-Shift-R comes to mind). You can set breakpoints, step through code with ease. And all of this from within the comfort of your IDE.

Acceptance test and BDD tools such as Cucumber and Concordion have IDE plugins to cater for that, but for FitNesse this support was lacking. Was lacking! Such a plugin is finally available for IntelliJ.


Over the last couple of months, a lot of effort has been put in building this plugin. It’s available from the Jetbrains plugin repository, simply named FitNesse. The plugin is tailored for Slim test suites, but also works fine with Fit tables. All table types are supported. References between script, decision tables and scenarios work seamlessly. Running FitNesse test pages is as simple as running a unit test. The plugin automatically finds FitNesseRoot based on the default Run configuration.

The current version (1.4.3) even has (limited) refactoring support: renaming Java fixture classes and methods will automatically update the wiki pages.

Feel free to explore the new IntelliJ plugin for FitNesse and let me know what you think!

(GitHub: https://github.com/gshakhn/idea-fitnesse)

Nine Product Management lessons from the Dojo

Xebia Blog - Tue, 02/02/2016 - 23:00
Are you kidding? a chance to add the Matrix to a blogpost?


As I am gearing up for the belt exams next Saturday, I couldn’t help noticing the similarities between what we learn in the dojo (the place where martial arts are taught) and how we should behave as Product Managers. Here are nine lessons, straight from the dojo, ready for your day job:

1.) Some things are worth fighting for

In Judo we practice randori, free sparring that frequently comes down to ground wrestling. You will find that some grips are worth fighting for, but some you should let go in search of a better path to victory.

In Product Management, we are the heat shield of the product, constantly between engineering striving for perfection, sales wanting something else, marketing pushing the launch date and management hammering on the P&L.

You need to pick your battles: some you deflect, some you disarm, and some you accept, because you are maneuvering yourself so you can make the move that counts.

Good product managers are not those who win the most battles, but those who know which ones to win.

2.) Preserve your partners

It’s fun to send people flying through the air, but the best way to improve yourself is to improve your partner. You are in this journey together, just as in Product Management. Ask yourself the following question today: “whom do I need to train as my successor” and start doing so.

I was delayed to the airport because of the taxi strike, but saved by the strike of the air traffic controllers


3.) There is no such thing as fair

It’s a natural reaction when someone changes the rules of the game. We protest, we go on strike, we say it’s not fair. But in a market-driven environment, what is fair? Disruption, changing the rules of the game, has become the standard (24% of companies experience it already, 58% expect it, and 42% are still in denial). We can go on strike, or we can adapt.

The difference between kata and free sparring is that your opponents will not follow a prescribed path. Get over it.

4.) Behavior leads to outcome

I’m heavily debating the semantics with my colleague from South Africa (you know who you are), so it’s probably wording, but the gist of it is: if you want more of something, you should start doing it. Positive brand experiences will drive people to your products; hence one bad product affects all other products of your brand.

It’s not easy to change your behavior, whether it is in sport, health, customer interaction or product philosophy, but a different outcome starts with different behaviour.

Where did my product go?


5.) If it’s not working try something different

Part of Saturday’s exams will be what in Jujitsu is called “indirect combinations”. This means that you will be judged on the ability to move from one technique to another when the first one fails. Brute force is also an option, but not one that is likely to succeed, even if you are very strong.

Remember Microsoft pouring over a billion marketing dollars into Windows Phone? Brute-forcing its position by buying Nokia? BlackBerry doing something similar with QNX and only now switching to Android? Indirect combinations are not a lack of perseverance but adaptability: achieving results without brute force and with a higher chance of success.

This is where you tap out


6.) Failure is always an option

Tap out! Half of the stuff in Jujitsu was originally designed to break your bones, so tap out if your opponent has got a solid grip. It’s not the end, it’s the beginning. Nobody gets better without failing.

Two-thirds of all product innovations fail; the remaining third takes about five iterations to get right. Test your idea thoroughly, but don’t be afraid to try something else too.

7.) Ask for help

There is no way you know it all. Trust your team, peers and colleagues to help you out. Everyone has something to offer, they may not always have the solution for you but in explaining your problem you will often find the solution.

8.) The only way to get better is to show up

I’m a thinker. I like to get the big picture before I act. This means that I can also overthink something that you just need to do. Though it is okay to study and listen, don’t forget to go out there and start doing. Short feedback loops are key to building the right product, even if the product is not built right. So talk to customers and show them what you are working on, even at an early stage. You will not get better at martial arts or product management if you wait too long to show up.

9.) Be in the moment

Don’t worry about what just happened, or what might happen. Worry about what is right in front of you. The technique you are forcing is probably not the one you want.

 

This blog is part of the Product Samurai series. Sign up here to stay informed of the upcoming book: The Product Manager's Guide to Continuous Innovation.

The Product Manager's guide to Continuous Innovation

Sponsored Post: Netflix, Macmillan Learning, Aerospike, TrueSight Pulse, LaunchDarkly, Robinhood, StatusPage.io, Redis Labs, InMemory.Net, VividCortex, MemSQL, Scalyr, AiScaler, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Macmillan Learning, a premier e-learning institute, is looking for VP of DevOps to manage the DevOps teams based in New York and Austin. This is a very exciting team as the company is committed to fully transitioning to the Cloud, using a DevOps approach, with focus on CI/CD, and using technologies like Chef/Puppet/Docker, etc. Please apply here.

  • DevOps Engineer at Robinhood. We are looking for an Operations Engineer to take responsibility for our development and production environments deployed across multiple AWS regions. Top candidates will have several years experience as a Systems Administrator, Ops Engineer, or SRE at a massive scale. Please apply here.

  • Senior Service Reliability Engineer (SRE): Drive improvements to help reduce both time-to-detect and time-to-resolve while concurrently improving availability through service team engagement.  Ability to analyze and triage production issues on a web-scale system a plus. Find details on the position here: https://jobs.netflix.com/jobs/434

  • Manager - Performance Engineering: Lead the world-class performance team in charge of both optimizing the Netflix cloud stack and developing the performance observability capabilities which 3rd party vendors fail to provide.  Expert on both systems and web-scale application stack performance optimization. Find details on the position here https://jobs.netflix.com/jobs/860482

  • Senior Devops Engineer - StatusPage.io is looking for a senior devops engineer to help us in making the internet more transparent around downtime. Your mission: help us create a fast, scalable infrastructure that can be deployed to quickly and reliably.

  • Software Engineer (DevOps). You are one of those rare engineers who loves to tinker with distributed systems at high scale. You know how to build these from scratch, and how to take a system that has reached a scalability limit and break through that barrier to new heights. You are a hands on doer, a code doctor, who loves to get something done the right way. You love designing clean APIs, data models, code structures and system architectures, but retain the humility to learn from others who see things differently. Apply to AppDynamics

  • Software Engineer (C++). You will be responsible for building everything from proof-of-concepts and usability prototypes to deployment- quality code. You should have at least 1+ years of experience developing C++ libraries and APIs, and be comfortable with daily code submissions, delivering projects in short time frames, multi-tasking, handling interrupts, and collaborating with team members. Apply to AppDynamics
Fun and Informative Events

  • Your event could be here. How cool is that?
Cool Products and Services
  • Aerospike Shows Fivefold Cost Advantage over Cassandra at Higher Performance in DataStax’s Own Benchmark. A recent NoSQL database performance test by DataStax concluded that Cassandra bested Couchbase, MongoDB and HBase. Since Aerospike wasn’t included in the evaluation, we ran the benchmark against Aerospike in the same test cases. The result? Aerospike dramatically outperformed Cassandra AND cost 5 times less. Read the details here

  • Dev teams are using LaunchDarkly’s Feature Flags as a Service to get unprecedented control over feature launches. LaunchDarkly allows you to cleanly separate code deployment from rollout. We make it super easy to enable functionality for whoever you want, whenever you want. See how it works.

  • TrueSight Pulse is SaaS IT performance monitoring with one-second resolution, visualization and alerting. Monitor on-prem, cloud, VMs and containers with custom dashboards and alert on any metric. Start your free trial with no code or credit card.

  • Turn chaotic logs and metrics into actionable data. Scalyr is a tool your entire team will love. Get visibility into your production issues without juggling multiple tools and tabs. Loved and used by teams at Codecademy, ReturnPath, and InsideSales. Learn more today or see why Scalyr is a great alternative to Splunk.

  • InMemory.Net provides a Dot Net native in memory database for analysing large amounts of data. It runs natively on .Net, and provides a native .Net, COM & ODBC apis for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex measures your database servers’ work (queries), not just global counters. If you’re not monitoring query performance at a deep level, you’re missing opportunities to boost availability, turbocharge performance, ship better code faster, and ultimately delight more customers. VividCortex is a next-generation SaaS platform that helps you find and eliminate database performance problems at scale.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required. http://aiscaler.com

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below...


The Big List of Alternatives to Parse

Parse is not going away. It’s going to get better.
Ilya Sukhar — April 25th, 2013 on the Future of Parse

 

Parse is dead. The great diaspora has begun. The gold rush is on. There’s a huge opportunity for some to feed and grow on Parse’s 600,000 fleeing customers.

Where should you go? What should you do? By now you’ve transitioned through all five stages of grief and are ready for stage six: doing something about it. Fortunately there are a lot of options, and I’ve gathered as many resources as I can here in one place.

There is a Lot of Pain Out There

Parse closing is a bigger deal than most shutterings. There’s even a petition: Don't Shut down Parse.com. That doesn’t happen unless you’ve managed to touch people. What could account for such an outpouring of emotion?

Parse and the massive switch to mobile computing grew up at the same time. Mobile is by definition personal. Many programmers capable of handling UI programming challenges were not as experienced with backend programming, and Parse filled that void. When a childhood friend you grew to depend on dies, it hurts. That hurt is deep. It goes into the very nature of how you make stuff, how you grow, how you realize your dreams, how you make a living. That’s a very intimate connection.

For a trip down memory lane, Our Incredible Journey is a tumblr chronicling many services that are no longer with us.

Some reactions from around the net:

maxado_zdl: F*ck you facebook!!!!!!!!!!!!!!!!!!!!!!!!

pacp_ec: Damn it Facebook only George R. R. Martin is allowed to kill my heroes

Mythul: I really hate facebook right now ! Thanks for screwing up my apps with your bad business model!

Mufro: Damn. We've been slowly migrating our smaller apps to Parse as we make annual updates. Now we're trying to figure out what we're gonna do... go back to the pain of rolling our own server backends out? This leaves a pretty big hole in the market IMO. I don't know of anyone who gets you off the ground as quickly and affordably as Parse does. It's been a joy to use their product, but I knew deep down it was too good to be true. I guess we'll have to take a look at AWS again, maybe Azure. We use Firebase in another project, so we might check that out too. This sucks though.

samwize7: When Facebook acquired Parse, I thought it is good news since they ain't profitable, and now they have a backing of a giant, who tried hard to woo developers. I built many mobile apps using Parse, and has always been a fan of how they build a product for developers. Their documentation is awesome, their free tier is generous, their SDK covers widely. Today, their announcement is a sad news. And once again, proves that we can't trust Facebook.

clev1: This literally just ruined my day....I've got 2 major projects near completion that I've been using Parse as a BaaS for. Anyone with experience know how difficult or a transition it is to switch to Firebase?

solumamantis: I just can't believe the service is being retired... I started using three months ago - my new app coming out soon is completely reliant on it..... I will have a look on Firebase, but honestly I think i will build my own Parse/Node.js version and manage it myself....

changingminds: What the f*ck. Wtf am I supposed to do with 120k users who currently use my app that uses parse? I gotta redo the entire f*cking backend? F*cking bullsh*t.

manooka: My entire startup relies on Parse. I developed the website and apps myself as this was perfect for me as a Front-end developer without having to worry about back-end servers/databases etc. This is SERIOUSLY bad news.

stuntmanmikey: I'm a full-stack developer who is part of a startup that depends on Parse. As the only developer, the amount of time we've saved NOT having to write a data access layer and web service layer has been a windfall for us. Now I'm left to either switch to a similar product (Firebase just doesn't have the same appeal to me) or implement the backend myself at great cost.

neckbeardfedoras: The thing is, most of the folks using Parse probably use it because they're not full stack or back end developers. Removal of Parse means more time or money spent on resources to manage a back end system.

Why did Facebook Shut Down Parse?

A Patreon Architecture Short

Patreon recently snagged $30 Million in funding. It seems the model of pledging $1 for individual feature releases or code changes won't support fast enough growth. CEO Jack Conte says: We need to bring in so many people so fast. We need to keep up with hiring and keep up with making all of the things.

Since HighScalability is giving Patreon a try, I've naturally wondered how it's built. Modulo some serious security issues, Patreon has always worked well. So I was interested to dig up this nugget in a thread on the funding round, where the Director of Engineering at Patreon shares a little about how Patreon works:

  • Server is in Python using Flask and SQLAlchemy
  • Runs on AWS (EC2, RDS (MySQL), and some Redis, Celery, SQS, etc. to boot)
  • A few microservices here and there in other languages too (e.g. real-time chat server with Node & Firebase)
  • Web code is written in React (with some legacy code in Angular). We tend to use Redux for the non-component pieces, but are still trying out new React-compatible libraries here and there.
  • iOS and Android code are written in Objective-C and Java, respectively
  • We use Realm on both platforms for data storage
  • Most of the rest is pretty standard modern project stuff (CocoaPods for iOS, Gradle on Android, etc.)
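To make the server-side part of that stack concrete, here is a minimal, purely illustrative Flask endpoint backed by SQLAlchemy. This is not Patreon's actual code: the table, route and column names are invented, and in-memory SQLite stands in for RDS/MySQL.

```python
from flask import Flask, jsonify
from sqlalchemy import create_engine, text

app = Flask(__name__)
engine = create_engine("sqlite:///:memory:")  # RDS/MySQL in a real deployment

# Seed a table standing in for real pledge data (names are invented)
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE pledges (creator TEXT, amount INTEGER)"))
    conn.execute(text("INSERT INTO pledges VALUES ('alice', 5), ('alice', 10)"))

@app.route("/creators/<name>/total")
def pledge_total(name):
    # Sum a creator's pledges and return the result as JSON
    with engine.begin() as conn:
        total = conn.execute(
            text("SELECT COALESCE(SUM(amount), 0) FROM pledges WHERE creator = :c"),
            {"c": name},
        ).scalar()
    return jsonify({"creator": name, "total": total})
```

The appeal of this combination is how little glue is needed: Flask handles routing and JSON serialization, SQLAlchemy handles connection pooling and parameter binding.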

For this time period it seems like a good set of technologies for the type of application Patreon is. It's interesting to see Angular referred to as legacy code. React seems to be winning the framework wars.

The use of Realm is notable on the mobile platform as a common storage layer. Realm's simplicity is attractive.

The use of microservices may have helped Patreon dodge the Parse-shutdown bullet. Instead of trying to find one backend to rule them all, they picked Firebase, a more targeted technology, to implement a specific feature. Service diversification is a great way to manage service-failure risk.


Trends for 2016

Our world is changing faster than ever before.  It can be tough to keep up.  And what you don’t know can sometimes hurt you.

Especially if you get disrupted.

If you want to be a better disruptor vs. be the disrupted, it helps to know what’s going on around the world.  There are amazing people, amazing companies, and amazing discoveries changing the world every day.  Or at least giving it their best shot.

  • You know the Mega-Trends: Cloud, Mobile, Social, and Big Data.
  • You know the Nexus-Of-Forces, where the Mega-Trends (Cloud, Mobile, Social, Big Data) converge around business scenarios.
  • You know the Mega-Trend of Mega-Trends:  Internet-Of-Things (IoT)

But do you know how Virtual Reality is changing the game? 


Disruption is Everywhere

Are you aware of how the breadth and depth of diversity is changing our interactions with the world?  Do you know how “bi-modal” or “dual-speed IT” are really taking shape in the 3rd Era of IT or the 4th Industrial Revolution?

Do you know what you can print now with 3D printers? (And have you seen the 3D-printed car that can actually drive? And did you know we have a new land speed record with the help of the Cloud, IoT, and analytics? And have you seen what driverless cars are up to?)

And what about all of the innovation that’s happening in and around cities? (and maybe a city near you.)

And what’s going on in banking, healthcare, retail, and just about every industry around the world?

Trends for Digital Business Transformation in a Mobile-First, Cloud-First World

Yes, the world is changing, and it’s changing fast.  But there are patterns.  I did my yearly trends post to capture and share some of these trends and insights:

Trends for 2016: The Year of the Bold

Let me warn you now – it’s epic.  It’s not a trivial little blog post of key trends for 2016.  It’s a mega-post, packed full with the ideas, terms, and concepts that are shaping Digital Transformation as we know it.

Even if you just scan the post, you will likely find something you haven’t seen or heard of before.  It’s a bird’s-eye view of many of the big ideas that are changing software and the tech industry as well as what’s changing other industries, and the world around us.

If you are in the game of Digital Business Transformation, you need to know the vocabulary and the big ideas that are influencing the CEOs, CIOs, CDOs (Chief Digital Officers), COOs, CFOs, CISOs (Chief Information Security Officers), CINOs (Chief Innovation Officers), and the business leaders that are funding and driving decisions as they make their Digital Business Transformations and learn how to adapt for our Mobile-First, Cloud-First world.

If you want to be a disruptor, Trends for 2016: The Year of the Bold is a fast way to learn the building blocks of next-generation business in a Digital Economy in a Mobile-First, Cloud-First world.

10 Key Trends for 2016

Here are the 10 key trends at a glance from Trends for 2016: The Year of the Bold to get you started:

  1. Age of the Customer
  2. Beyond Smart Cities
  3. City Innovation
  4. Context is King
  5. Culture is the Critical Path
  6. Cybersecurity
  7. Diversity Finds New Frontiers
  8. Reputation Capital
  9. Smarter Homes
  10. Virtual Reality Gets Real

Perhaps the most interesting trend is how culture is making or breaking companies, and cities, as they transition to a new era of work and life.  It’s a particularly interesting trend because it’s like a mega-trend.  It’s the people and process part that goes along with the technology.  As many people are learning, Digital Transformation is a cultural shift, not a technology problem.

Get ready for an epic ride and read Trends for 2016: The Year of the Bold.

If you read nothing else, at least read the section up front titled, “The Year of the Bold” to get a quick taste of some of the amazing things happening to change the globe. 

Who knows, maybe we’ll team up on tackling some of the Global Goals and put a small dent in the universe.

You Might Also Like

10 Big Ideas from Getting Results the Agile Way

10 Personal Productivity Tools from Agile Results

Agile Results for 2016

How To Be a Visionary Leader

The Future of Jobs

The New Competitive Landscape

What Life is Like with Agile Results

Categories: Architecture, Programming

Which Agile Organizational Model or Framework to use? Use them all!

Xebia Blog - Sat, 01/30/2016 - 22:11

Many organizations are reinventing themselves as we speak.  One of the most difficult questions to answer is: which agile organizational model or framework do we use? SAFe? Holacracy? LeSS? Spotify?

Based on my experience with all these models, my answer is: just use as many agile models and frameworks as you can get your hands on.  Not by choosing one of them specifically, but by experimenting with elements of all these models the agile way: Inspect, Learn and Adapt continuously.

For example, you could use Spotify’s tribe structure, Holacracy’s consent and role principles, and SAFe’s Release Trains in your new agile organization. Most important, though: experiment towards your own “custom-made” agile organizational building blocks.  And remember: taking on the Agile Mindset is 80% of the job; only 20% is implementing this agile "organization".

Probably the worst thing you can do is just copy-paste an existing model.  You will inherit the same rigid situation you just wanted to prevent by implementing a scaled, agile organizational model.

Finally, the main ingredient of this agile recipe is trust.  You have to trust your colleagues and this newborn agile organization to be anti-fragile and self-correcting right from the start.  These are the same principles that the successful agile organizations you probably admire depend on.

Stuff The Internet Says On Scalability For January 29th, 2016

Hey, it's HighScalability time:


This is a trace of a Google search query. A single query might touch a couple thousand machines.

 

If you like this Stuff then please consider supporting me on Patreon.
  • 88: the too short life of Marvin Minsky; $18.4 billion: profit made by Apple in 3 months; 100M: hours of video watched on Facebook each day; 1.59 billion: Facebook users; $115B: size of game market by 2020; 12 years: Mars rover still going strong; 96.3m: barrels of oil produced per day; 570 Billion: object brighter than the Sun; 134 pounds: carried by drones;  $2.4 billion: AWS Q4 sales; 2.5 million: advertisers on Facebook;

  • Quotable Quotes:
    • @ptaoussanis: Real-world scaling 101: be in the habit of routinely, objectively asking what parts of your system could stand to be simplified or removed
    • @Carnage4Life: Azure revenue up 140%. Search revenue from #BingAds up 21%. Microsoft is killing it in the cloud
    • @gabriel_boya: Scaling up a Cloud Service on @azure takes so many hours that your customers may be gone by the time your instances are allocated...
    • AJ007: Facebook is the only platform that lets advertisers target a mass audience with very fine demographic precision. With Google you lose the demographics. With television, you lose the precision.
    • Junaid Anwar: It is to be noted that clustering [node.js] yielded two times the performance as compared to the non-clustering case which shows that performance linearly increases with processing cores when clustering is used.
    • crash41301: Our company has been slowly shrinking the hundreds of services we have down to a handful of larger, automated tested services and the dev team (about 50) likes it much more.
    • @swardley: Compute is the activity, Architecture is the practice
    • van lessen: Self-Contained Systems (SCS) describe an architectural approach to build software systems, e.g. to split monoliths into multiple functionally separated, yet largely autonomous web applications. 
    • R. P. Feynman: What is the cause of management's fantastic faith in the machinery?
    • Steven Max Patterson: Facebook filters much from the raw newstream and gives me what it thinks I want with about 20% accuracy.
    • Brandon Butterworth: a single mega data centre might simply represent a single, large potential point of failure
    • boggzPit: Damn it Facebook. Why did I ever believe you could handle being cool to developers?
    • Vadim Tkachenko: To recap an interesting point in that post: when using 48 cores with the server, the result was worse than with 12 cores. I wanted to understand why that was true, so I started digging. My primary suspicion was that Java (I never trust Java) was not good at dealing with 100GB of memory.
    • Seth Lloyd: Our algorithm shows that you don't need a big quantum computer to kick some serious topological butt...You could find the topology of simple structures on a very simple quantum computer. 
    • Robert Scoble: When he was doing his thesis 20 years ago, it took him two years to analyze just 24 hours of data from farms (he pulls in data from satellites, Doppler radar and even drones). Today, his company does the same thing in seconds.
    • @jgrahamc: Devotees of microservices use 'monolith' as a derogatory term; wait 10 years and we'll be using 'spider's web' as a derogatory term.
    • @mweagle: I see your femtoservice, and pivot with a single source code point: “yoctoservice” :) #disrupt #unicorn #M&A
    • milesrout: The entire point of Docker is that you use it for everything. It's a universal application image format. That is the point. It's contained, secure, and childproof. That is the point. It's not just about scalability. If I could use a desktop operating system where all programs ran as docker containers, I'd do that too. That's what they're for.
    • Bill Wash: I will never pass up an opportunity to help out a colleague, and I’ll remember the days before I knew everything.
    • @CarlHasselskog:  my startup handles ~10 million uploaded files/day with two employees in total (entire company). That's largely thanks to you guys.
    • AJ Kohn: December saw more negative numbers with a 6.96% decrease, year over year, in desktop search volume. Every month in 2015 had lower desktop query volume than the same month in 2014. Every. Month.
    • Jerry Chen: Every startup has a different size unit of value. Bigger is not better, smaller is not better.
    • sacundim: No, the goal of normalization is to eliminate logical inconsistencies—data sets that entail two or more different answers to the same question. 
    • Jake Archibald: Streams can be used to do fun things like turn clouds to butts, transcode MPEG to GIF, but most importantly, they can be combined with service workers to become the fastest way to serve content.
    • Solomon Hykes: Computers do run only one unikernel at a time. It’s just that sometimes they are virtual computers. Remember that virtualization is increasingly hardware-assisted, and the software parts are mature. So for many use cases it’s reasonable to separate concerns and just assume that VMs are just a special type of computer.

  • Relying on a tool backed by a big company is no protection. Facebook is closing down Parse. This is a stunner because Parse was a popular and well-made service, used by millions of now-adrift mobile apps. What happened? This might be it: "Facebook also would have had to invest untold millions of dollars in capital and, more importantly, engineering talent, to get the Parse business fully off the ground to have a better chance at making a dent in competitors like Amazon, Microsoft and Google." How about Firebase? The Firebase founder responds: "We're not going anywhere. What makes us different? Firebase is very complementary to Google's other product offerings. Cloud for one, as well as Angular, Polymer, GCM, etc." The moral of the story is told by bsaul: "parse wasn't a core service for facebook, nor a relevant source of a revenue AND their API wasn't standard. Those points combined made it very risky for people to use it." 

  • The Internet will soon be eating a lot of Brotli, Google's new lossless compression algorithm that is making the Internet 17-25% faster. Support will be in Chrome and other browsers, but server-side support may take longer. Why does it only work with https? Richard Coles: one reason why this is limited to https is to stop it being mangled by proxies, which has been a practical problem in the past with encodings.

  • Young Skynet is continuing its dastardly plan of self-creation by seeding deep learning both far and wide. Microsoft Open Sources Deep Learning, AI Toolkit On GitHub. Twitter released Distributed learning in Torch. And you can Teach Yourself Deep Learning with TensorFlow and Udacity.

  • While the Super Bowl will make a mess of local traffic, it's great for cell phone service. Verizon spent $70 million to triple Bay Area LTE capacity ahead of the Super Bowl. They have more than tripled 4G LTE network capacity; built 16 new area cell sites; installed 75 small cells; boosted capacity by adding XLTE to 37 existing sites; and completed preparations to deploy 14 mobile cell sites in high-traffic locations.
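
On the Brotli item above: server-side support works through standard HTTP content negotiation, where the client advertises `br` in its `Accept-Encoding` header and the server picks the best encoding both sides support. A minimal sketch of that selection logic (the preference order and the https-only rule here follow the reasoning quoted above, but the function itself is illustrative, not from any particular server):

```python
# Minimal sketch of Accept-Encoding negotiation for Brotli ("br").
# Real servers also honor q-values; this only checks membership, and
# offers br solely over https so intermediary proxies can't mangle
# the (to them) unfamiliar encoding.

SERVER_PREFERENCE = ["br", "gzip", "identity"]  # assumed preference order

def choose_encoding(accept_encoding, is_https):
    """Pick the first server-preferred encoding the client accepts."""
    accepted = {token.strip().split(";")[0]
                for token in accept_encoding.split(",")}
    for enc in SERVER_PREFERENCE:
        if enc == "br" and not is_https:
            continue  # serve Brotli only over TLS
        if enc in accepted or enc == "identity":
            return enc
    return "identity"

print(choose_encoding("gzip, deflate, br", is_https=True))   # br
print(choose_encoding("gzip, deflate, br", is_https=False))  # gzip
```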

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Start with Needs and Wants

“The purpose of a business is to create a customer.” – Peter Drucker

So many people start with solutions, and then wonder where the customers are.

It’s the proverbial, “When all you have is a hammer, everything looks like a nail.”

The truth is, if all you have is a hammer, then get better at finding nails.  And while you are looking for those nails, get better at expanding your toolbox.

If you want to be a better Entrepreneur or a trend hunter or a product manager or a visionary leader, then start with needs and wants.  It will help you quickly cut through the overwhelm and overload of ideas, trends, and insights to get to the ideas that matter.

Some say the most valuable thing in the world is ideas.  Many others say that coming up with ideas is not the problem; the problem is execution.  The truth is that so many ideas fail because they didn’t create a customer or raving fans.  They didn’t address relevant pains, needs, and desired outcomes.  Instead, they solve problems that nobody has, or create things that nobody besides the creator wants (unless it’s free), and that’s how you end up with mad scientist syndrome.  Or ideas die because they were not presented in a way that speaks to needs and wants, and you end up a brilliant, misunderstood genius.

Start Viewing the World Through the Lens of Human Needs and Wants

Here are some good insights and timeless truths on how to find trends that matter, and how to create ideas that do too, from the 5 Trends for 2016 report by Trendwatching.com.

Via 5 Trends for 2016:

“Trends emerge as innovators address consumers’ basic needs and wants in novel ways.
As trend watchers, that’s why we look for clusters of innovations which are defining (and redefining) customer expectations.

Start by asking why customers might embrace you using a channel. Next, challenge whether existing channels really satisfy the deep needs and wants of your customers. Could you create any new ones? Finally, imagine entirely new contexts you could leverage (perhaps even those that customers aren’t yet consciously aware of).

As long as the onslaught of technological change continues, we’ll keep shouting this mantra from the rooftops: stop viewing the world through the lens of technology, and start viewing technology through the lens of basic human needs and wants.

Put another way: all those tech trends you’re obsessed with are fine, but can you use them to deliver something people actually want?”

Start with Scenarios to Validate Customer Pains, Needs, and Desired Outcomes

A scenario is simply a story told from the customer's point of view that explains their situation and what they want to achieve.

They are a great tool for validating ideas, capturing ideas, and sharing ideas.  What makes them so powerful is that they are a story told in the Voice-of-the-Customer (VOC).  The Current State story captures the pains and needs.  The Desired Future State captures the vision of the desired outcomes.  Here is an example:

Current State
As a product manager, I'm struggling to keep up with changing customer behavior, and brand perception is eroding.  Competition from new market entrants is creating additional challenges as we face new innovations, lower prices, and better overall customer experiences.

Desired Future State
By tapping into the vast amounts of information from social media, we gain deep customer insight.  We find new opportunities to better understand customer preferences and perceptions of the brand.  We combine social data with internal market data to gain deeper insights into brand awareness and profitable customer segments.  Employees are better able to share ideas, connect with each other, connect with customers, and connect with partners to bring new ideas to market.  We are able to pair up with the key influencers in social media to help reshape the story and perception of our brand.

Customer Wants and Needs are the Breeding Ground of Innovation

Makes total sense, right?  But how often do you see anybody actually do this?  That’s the real gap.

Instead, we see hammers not even looking for nails, but trying to sell hammers.

But maybe people want drills?  No, they don’t want to buy drills or drill bits.  They want to buy holes.  And when you create that kind of clarity, you start to get resourceful, and you can create ideas and solutions in a way that’s connected to what actually counts.

You Might Also Like

6 Steps for Enterprise Architecture as Strategy

10 High-Value Activities in the Enterprise

Agile Methodology in Microsoft patterns & practices

Customer-Connected Engineering

How To Turn IT into an Asset Rather than a Liability

Scenario-Driven Value Realization

Why So Many Ideas Die

Categories: Architecture, Programming


How to Deal with Start-ups

Xebia Blog - Fri, 01/29/2016 - 16:15

This is a question that regularly crosses my mind. In any case, we have to stop continually romanticizing these initiatives, and corporate Netherlands should finally start playing along for real.

But how?

Roughly speaking, a corporate has two strategies: buy them, or do it better yourself! Sounds simple, but it is actually quite complex. The best strategy is probably a mix of both, in which you make maximum use of your own corporate strength (yes, you have one) while fully harnessing the power of start-up innovation at the same time.

This post explores the possibilities, and you should definitely read on if you, too, want to know how to survive the digitalization of the 21st century.

Why should I care?

Actually, I shouldn't even have to write this paragraph anymore, right? The average age of companies is decreasing.
[Chart: average age of Fortune 500 companies]
This is partly because the digitalization of products and services keeps lowering the barriers to entry in many markets. There is more competition, so you have to work harder to stay relevant.
Second, start-ups are hot! Everyone wants to work for a start-up, so that is where the talent goes. Talent from an already (too) tight pool. You therefore have to innovate more than before, or you will lose the "war on talent".
Finally, there is of course a lot to be gained from digital innovation. The speed at which companies can become profitable these days through digital products and services is incredible; do it well, and you are in the game.

What are my options?

There are really only two ways to deal with start-ups. The first is simply to take a stake in a start-up, or to acquire a promising one. The other is to start innovating yourself, from your organization's own strengths.

The advantage of stakes and acquisitions is of course the quick win. Start-ups rarely sell themselves cheaply, but then you get something for your money. It is especially attractive when the start-up is active in a segment or market that you cannot or do not want to enter with your own brand (incumbent inertia). The new acquisition is then complementary to the existing business. For example, a large bank acquiring a start-up focused on selling short-term credit to small and medium-sized businesses.

The downside is of course that it is hard to transfer existing assets. Moreover, the newcomer almost never truly becomes part of the standing organization, and perhaps you do not even want it to. There is a real chance that the acquired start-up is influenced too heavily by the parent company and gets dragged down by the gravity of bureaucracy and counterproductive existing corporate behavior patterns.

On top of that, a successful start-up automatically becomes the target of attacks by yet other start-ups. The "cannibal mindset" has to stay alive! Facebook has therefore always said: if we don't disrupt our own model, someone else will. Perhaps Intel CEO Andy Grove was right when he said: "only the paranoid survive".

Becoming more innovative yourself is of course also an option, but that is quite complex. Usually, innovation within a corporate is still isolated in a lab setting. Not that this is wrong, but start-ups do no such thing. The start-up is the lab!

It is funny that start-ups always seem to go after new markets with new products, and that we usually label this as "real" innovation. In a corporate setting, all product-market combinations are placed in a portfolio (for example, a BCG matrix), and it is all about the balance between current and new business and the right cash-flow ratios.
The fun part is that start-ups could not care less about your portfolio and therefore compete in every quadrant, whether it is business that for you is in the maturity phase, the growth phase, or your lab phase. Start-ups are simply in the majority and operate independently of each other on several fronts. This means that effectively everyone in the corporate setting is under fire from start-ups, and thus that everyone, regardless of their role in the portfolio, has to learn to innovate. An example of how valuable it is to adopt this mindset is the story of this young man from 1973; he worked for Kodak.

Changing your entire company is of course a huge job. As an alternative, you could choose to create the same effect as with an acquisition: deliberately launching start-ups of your own, set up sufficiently apart from the parent organization to accelerate. These home-grown start-ups must become direct competitors of the current business, and become so successful that at least part of the existing own and competing business flows toward them. In this way, large corporates can gradually transform themselves into a network of start-up nodes, with the parent company providing support and acquiring strategically complementary nodes where needed. What such a node looks like, and how you organize this, is material for a next post.

If you can't wait and want to know sooner, you can of course always call for a chat.

Tinder: How does one of the largest recommendation engines decide who you'll see next?

We've heard a lot about the Netflix recommendation algorithm for movies, how Amazon matches you with stuff, and Google's infamous PageRank for search. How about Tinder? It turns out Tinder has a surprisingly thoughtful recommendation system for matching people.
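
This excerpt doesn't give the details, so purely as an illustrative sketch (this is not Tinder's published algorithm), one classic way to score desirability in a swipe-based matcher is an Elo-style rating, where being liked by a highly rated user moves your rating more than being liked by a low-rated one:

```python
# Illustrative Elo-style desirability rating, NOT Tinder's actual system:
# a right swipe counts as a "win" for the swiped-on profile, and
# attracting a highly rated user moves your rating further.

K = 32  # standard Elo update step (an assumption here)

def expected(rating_a, rating_b):
    """Probability-like expectation that A 'wins' against B."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def record_swipe(swiped_on, swiper, liked):
    """Return the swiped-on profile's new rating after one swipe."""
    outcome = 1.0 if liked else 0.0
    return swiped_on + K * (outcome - expected(swiped_on, swiper))

r = 1400
r = record_swipe(r, swiper=1800, liked=True)  # big boost: a 1800 liked you
print(round(r))  # 1429
```

A matcher can then preferentially show people with similar ratings to each other, which is one plausible reason such a system would feel "thoughtful" in practice.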

This is from an extensive profile, Mr. (Swipe) Right?, on Tinder founder Sean Rad:

Categories: Architecture
