Software Development Blogs: Programming, Software Testing, Agile, Project Management

Feed aggregator

Setting up Jenkins to run headless Selenium tests in Docker containers

Agile Testing - Grig Gheorghiu - Sat, 02/06/2016 - 01:40
This is the third post in a series on running headless Selenium WebDriver tests. Here are the first two posts:
  1. Running Selenium WebDriver tests using Firefox headless mode on Ubuntu
  2. Running headless Selenium WebDriver tests in Docker containers
In this post I will show how to add the final piece to this workflow, namely how to fully automate the execution of Selenium-based WebDriver tests running Firefox in headless mode in Docker containers. I will use Jenkins for this example, but the same applies to other continuous integration systems.
1) Install docker-engine on the server running Jenkins (I covered this in my post #2 above)
2) Add the jenkins user to the docker group so that Jenkins can run the docker command-line tool in order to communicate with the docker daemon. Remember to restart Jenkins after doing this.
3) Go through the rest of the workflow in my post above ("Running headless Selenium WebDriver tests in Docker containers") and make sure you can run all the commands in that post from the command line of the server running Jenkins.
4) Create a directory structure for your Selenium WebDriver tests (mine are written in Python). 
I have a directory called selenium-docker which contains a directory called tests, under which I put all my Python WebDriver tests named sel_wd_*.py. I also have a simple shell script named run_all_selenium_tests.sh which does the following:
#!/bin/bash
TARGET=$1 # e.g. someotherdomain.example.com (if not specified, the default is somedomain.example.com)
for f in `ls tests/sel_wd_*.py`; do
    echo Running $f against $TARGET
    python $f $TARGET
done
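For context, a single test from the tests directory might look roughly like the following. This is only a hypothetical sketch (the file name, the page title check and the output messages are invented, not from the original post), assuming the Python selenium bindings and a target domain passed as the first command-line argument:

#!/usr/bin/env python
# sel_wd_homepage.py - hypothetical example test, not taken from the original post
import sys
from selenium import webdriver

target = sys.argv[1] if len(sys.argv) > 1 else "somedomain.example.com"

driver = webdriver.Firefox()  # the display is provided by Xvfb inside the container
try:
    driver.get("http://%s/" % target)
    if "Example" in driver.title:
        print("OK: homepage title check passed for %s" % target)
    else:
        # the Jenkins job described below greps the container logs for the string FAILED
        print("FAILED: unexpected title '%s' on %s" % (driver.title, target))
finally:
    driver.quit()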
My selenium-docker directory also contains the xvfb.init file I need for starting up Xvfb in the container, and finally it contains this Dockerfile:
FROM ubuntu:trusty
RUN echo "deb http://ppa.launchpad.net/mozillateam/firefox-next/ubuntu trusty main" > /etc/apt/sources.list.d//mozillateam-firefox-next-trusty.listRUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys CE49EC21RUN apt-get updateRUN apt-get install -y firefox xvfb python-pipRUN pip install seleniumRUN mkdir -p /root/selenium/tests
ADD tests /root/selenium/testsADD run_all_selenium_tests.sh /root/selenium
ADD xvfb.init /etc/init.d/xvfbRUN chmod +x /etc/init.d/xvfbRUN update-rc.d xvfb defaults
ENV TARGET=somedomain.example.com
CMD (service xvfb start; export DISPLAY=:10; cd /root/selenium; ./run_all_selenium_tests.sh $TARGET)
I explained what this Dockerfile achieves in the 2nd post referenced above. The ADD instructions will copy all the files in the tests directory to the directory called /root/selenium/tests, and will copy run_all_selenium_tests.sh to /root/selenium. The ENV variable TARGET represents the URL against which we want to run our Selenium tests. It is set by default to somedomain.example.com, and is used as the first argument when running run_all_selenium_tests.sh in the CMD instruction.
At this point, I checked in the selenium-docker directory and all files and directories under it into a Github repository I will call 'devops'.
5) Create a new Jenkins project (I usually create a New Item and copy it from an existing project).
I specified that the build is parameterized and I indicated a choice parameter called TARGET_HOST with a few host/domain names that I want to test. I also specified Git as the Source Code Management type, and I indicated the URL of the devops repository on Github. Most of the action of course happens in the Jenkins build step, which in my case is of type "Execute shell". Here it is:
#!/bin/bash
set +e
IMAGE_NAME=selenium-wd:v1
cd $WORKSPACE/selenium-docker
# build the image out of the Dockerfile in the current directory
/usr/bin/docker build -t $IMAGE_NAME .

# run a container based on the image
CONTAINER_ID=`/usr/bin/docker run -d -e "TARGET=$TARGET_HOST" $IMAGE_NAME`
echo CONTAINER_ID=$CONTAINER_ID

# while the container is still running, sleep and check logs; repeat every 40 sec
while [ $? -eq 0 ]; do
  sleep 40
  /usr/bin/docker logs $CONTAINER_ID
  /usr/bin/docker ps | grep $IMAGE_NAME
done

# docker logs sends errors to stderr so we need to save its output to a file first
/usr/bin/docker logs $CONTAINER_ID > d.out 2>&1

# remove the container so they don't keep accumulating
docker rm $CONTAINER_ID

# mark jenkins build as failed if log output contains FAILED
grep "FAILED" d.out
if [[ $? -eq 0 ]]; then
  rm d.out
  exit 1
else
  rm d.out
  exit 0
fi
Some notes:
  • it is recommended that you specify #!/bin/bash as the 1st line of your script, to make sure that bash is the shell that is being used
  • use set +e if you want the Jenkins shell script to continue after hitting a non-zero return code (the default behavior is for the script to stop on the first line it encounters an error and for the build to be marked as failed; subsequent lines won't get executed, resulting in much pulling of hair)
  • the Jenkins script will build a new image every time it runs, so that we make sure we have updated Selenium scripts in place
  • when running the container via docker run, we specify -e "TARGET=$TARGET_HOST" as an extra command line argument. This will override the ENV variable named TARGET in the Dockerfile with the value received from the Jenkins multiple choice dropdown for TARGET_HOST
  • the main part of the shell script stays in a while loop that checks for the return code of "/usr/bin/docker ps | grep $IMAGE_NAME". This is so we wait for all the Selenium tests to finish, at which point docker ps will not show the container running anymore (you can still see the container by running docker ps -a)
  • once the tests finish, we save the stdout and stderr of the docker logs command for our container to a file (this is so we capture both stdout and stderr; at first I tried something like docker logs $CONTAINER_ID | grep FAILED but this was never successful, because it was grep-ing against stdout, and errors are sent to stderr)
  • we grep the file (d.out) for the string FAILED and if we find it, we exit with code 1, i.e. unsuccessful as far as Jenkins is concerned. If we don't find it, we exit successfully with code 0.


Android Studio 2.0 - Beta

Android Developers Blog - Fri, 02/05/2016 - 23:40

Posted by Jamal Eason, Product Manager, Android

Android Studio 2.0 is the latest release of the official Android IDE, focused on build performance and emulator speed to improve the app development experience. With brand new features like Instant Run, which enables you to quickly edit and view code changes, and the new, faster Android emulator, Android Studio 2.0 is the upgrade you do not want to miss. In preparation for the final release, you can download Android Studio 2.0 Beta in the Beta release channel. Overall, the Android Studio 2.0 release has a host of new features, which include:

  • *Updated for Beta* Instant Run - Enables a faster code edit & app deployment cycle.
  • *Updated for Beta* Android Emulator - Brand new emulator that is faster than most real devices, and includes a brand new user interface.
  • *Updated for Beta* Google App Indexing Integration & Testing - Adding App Indexing into your app helps you re-engage your users. In the first preview of Android Studio 2.0 you could add indexing code stubs into your code. With the beta release you can now test and validate your URL links in your app all within the IDE.
  • Fast ADB - Installing and pushing files is now up to 5x faster using Android Studio 2.0 with an updated Android Debug Bridge (ADB) offered in platform-tools 23.1.0.
  • GPU Profiler Preview - For graphics intensive applications, you can now visually step through your OpenGL ES code to optimize your app or game
  • Integration of IntelliJ 15 - Android Studio is based on the efficient coding platform of IntelliJ. Check out the new features from IntelliJ here.

Check out the latest installment of Android Studio Tool Time video below to watch the highlights of the features.



New Features in Android Studio 2.0 Beta
Instant Run

We first previewed Instant Run in November; this latest beta release introduces a new capability called Cold Swap.

Instant Run in Android Studio 2.0 allows you to quickly make changes to your app code while your app is running on an Android device or Android Emulator. Instead of waiting for your entire app to rebuild and redeploy after each code change, Android Studio 2.0 will try to incrementally build and push only the incremental code or resource change. Depending on the code changes you make, you can see the results of your change in under a second. By simply updating your app to use the latest Gradle plugin ('com.android.tools.build:gradle:2.0.0-beta2'), you can take advantage of this time-saving feature with no other modifications to your code. If your project is set up correctly with Instant Run, you will see a lightning bolt next to your Run button on the toolbar:


Instant Run Button

Behind the scenes, Android Studio 2.0 instruments your code during the first compilation and deployment of your app to your device in order to determine where to swap out code and resources. The Instant Run feature updates your app on a best-effort basis and automatically uses one of the following swap methods to update your app:

  • Hot Swap - When only method implementations (including constructors) are changed, the changes are hot swapped. Your application keeps running and the new implementation is used the next time the method is called.
  • Warm Swap - When app resources are changed, the changes are warm swapped. This is similar to a hot swap, except that the current Activity is restarted. You will notice a slight flicker on the screen as the Activity restarts.
  • *New for Beta* Cold Swap - This quickly restarts the whole application, and is typically used for structural code changes, including changes to the class hierarchy, method signatures, static initializers, or fields. Cold Swap is available when you deploy to targets with API level 21 or above.

We made major changes to Instant Run since the first preview of Android Studio 2.0, and now the feature works with more code and resources cases. We will continue to add more code change cases to Instant Run in future releases of Android Studio. If you have any suggestions, please feel free to send us a feature request and learn more about Instant Run here.

App Indexing

Supporting app indexing is now even easier with Android Studio 2.0. App Indexing puts your app in front of users who use Google Search. It works by indexing the URL patterns you provide in your app manifest and using API calls from your app to make content within your app available to both existing and new users. Specifically, when you support URLs for your app content, your users can go directly to those links from Google Search results on their device.

  • Code Generation - Introduced in Android Studio 2.0 Preview, you can right-click on AndroidManifest.xml or an Activity method (or go to Code → Generate… → App Indexing API Code) to insert HTTP URL stub code into your manifest and app code.

  • *New for Beta* URL Testing & Validation - New in Android Studio 2.0 Beta, you can now validate and check the results of your URLs with the built-in validation tool (Tools → Android → Google App Indexing Test). To learn more about app indexing, click here.

Insert App Indexing API Code into your app

App Indexing Testing


App Indexing Test Results

Android Emulator

*Updated for Beta* The new and faster Android emulator also includes fixes and small enhancements for this beta release. Notably, we updated the rotation controls on the emulator toolbar and added multi-touch support to help test apps that use pinch & zoom gestures. To use the multi-touch feature, hold down the Alt key on your keyboard and right-click your mouse to center the point of reference or click & drag the left mouse button to zoom.


Pinch & Zoom Gesture with Multi-Touch

What's Next

Android Studio 2.0 is a big release, and now is a good time to check out the beta release to incorporate the new features into your workflow. The beta release is near stable release quality, and should be relatively bug-free. But as with any beta release, bugs may still exist, so if you do find an issue, let us know so we can work to fix it. If you're already using Android Studio, you can check for updates on the Beta channel from the navigation menu (Help → Check for Update [Windows/Linux], Android Studio → Check for Updates [OS X]). When you update to beta, you will get access to the new version of Android Studio and Android Emulator.

Connect with us, the Android Studio development team, on Google+.

Categories: Programming

Stuff The Internet Says On Scalability For February 5th, 2016


We have an early entry for the best vacation photo of the century. 

 

If you like this sort of Stuff then please consider offering your support on Patreon.
  • 1 billion: WhatsApp users; 3.5 billion: Facebook users in 2030; $3.5 billion: art sold online; $150 billion: China's budget for making chips; 37.5MB: DNA information in a single sperm; 

  • Quotable Quotes:
    • @jeffiel: "But seriously developers, trust us next time your needs temporarily overlap our strategic interests. And here's a t-shirt."
    • @feross: Modern websites are the epitome of inefficiency. Using giant multi-MB javascript files to do what static HTML could do in 1999.
    • Rob Joyce (NSA): We put the time in …to know [that network] better than the people who designed it and the people who are securing it,' he said. 'You know the technologies you intended to use in that network. We know the technologies that are actually in use in that network. Subtle difference. You'd be surprised about the things that are running on a network vs. the things that you think are supposed to be there.
    • @MikeIsaac: i just realized how awkward Facebook's f8 conference is gonna be this year
    • @Nick_Craver: Stats correction: Stack Overflow did 157,370,800,409 redis ops in the past 30 days, almost always under 2% CPU:
    • @BenedictEvans: The global SMS system does around 20bn messages a day. WhatsApp is now doing 42bn. With 57 engineers.
    • @jaygoldberg: WhatsApp has the benefit of running on top of the world's data networks which employ a few more engineers... 
    • @anildash: It’s odd that developers think Twitter is so hostile while Facebook shuts down stuff like Parse & FBML + cuts back the Instagram & FB APIs.
    • @asynchio:  I use to think CEP = stateful business rules engine + inference + stream processing. Has it changed?
    • @Marco_Rasp: "SOA is about reuse, MicroServices about time to market." @samnewman #microxchg
    • @pfhllnts: "I predict quantum containers where Docker exists both inside and outside a container." @marcoceppi #fosdem
    • @viktorklang: Awesome story: 295x speedup with Akka Streams on same HW compared to Rails :) 
    • krinchan: Yes. Because a currency almost completely controlled by Chinese miners who are strangling the network at 1MB blocks, causing transaction times in excess of three hours at peak and just introduced the ability to arbitrarily reverse those transactions during the lag is totally going to handle DraftKings and FanDuel.
    • @mpesce: 1/The Apple AX series SOCs are more than powerful enough to run a Hololens-type device very effectively.
    • Matthew Yglesias: Amazon's leadership, from CEO Jeff Bezos on down, are deliberately redeploying every dollar of revenue Amazon earns into making the company bigger and bigger.
    • German forest ranger finds that trees have social networks: trees operate less like individuals and more as communal beings. Working together in networks and sharing resources, they increase their resistance to threats
    • @ValaAfshar: 11 years ago some guy named Mark Zuckerberg talks about his new company. He is now 4th richest person in the world. 
    • Bernard Marr: In China, the government is rolling out a social credit score that aggregates not only a citizen’s financial worthiness, but also how patriotic he or she is, what they post on social media, and who they socialize with
    • @Carnage4Life: Facebook is valued at $326 billion and worth more than Exxon Mobil. Remember when people freaked out at $15B value? 
    • @Nick_Craver: High levels of efficiency at scale aren't one thing; it's a thousand things. Many we haven't really shared in detail...and we should.
    • 2BuellerBells: Things to reinvent: Event loops (done!) Unix (In progress!) Erlang (est. 5 years)
    • @LusciousPear: I'm consistently seeing GETs from @googlecloud storage 2-5x faster than S3. niiiice
    • Kevin Old: The future looks mighty scalable.
    • @BenedictEvans: All curation grows until it requires search. All search grows until it requires curation.
    • @Carnage4Life: Google has 7 services with 1B monthly active users; Gmail, Search, Chrome, Android, Maps, YouTube and Google Play 
    • @jmhodges: That's 1.3 million unique domains in a single day. Yesterday. Let's Encrypt is doing a thing.
    • @danielbryantuk: "60% percent of app users rate performance/response time ahead of features" @grabnerandi  #OOP2016 
    • @tdeekens: Sometimes Monoliths don’t get enough respect. They’re part of our revenue system allowing us to build Microservices. They gave us a business
    • Searching for the Algorithms Underlying Life: Valiant’s self-stated goal is to find “mathematical definitions of learning and evolution which can address all ways in which information can get into systems.” If successful, the resulting “theory of everything”...would literally fuse life science and computer science together.
    • @mountain_ghosts: 1995: the information superhighway will mean anyone can do anything from anywhere 2015: must be willing to relocate to San Francisco

  • Fingerprinting made burglars put on gloves. CCTV made kids pull their hoods up. Spying made honest people use encryption. Forensics: What Bugs, Burns, Prints, DNA and More Tell Us About Crime.

  • So that's what bandwidth means. ucaetano: The bandwidth doesn't depend on the frequency you're occupying, but on the amount of spectrum available: you "usually" get in the order of 1 bps for every Hz of spectrum available for mobile: a 20Mz chunk of spectrum will give you ~20Mbps, no matter if it is 700MHz or 5 GHz. Higher frequencies have awful penetration and range, that's why today you define who wins in the mobile game by the amount of 700MHz and 800MHz spectrum they own. In other words, lower frequency spectrum is (within certain limits) always better.

  • Even spies have limits. Optic Nerve: millions of Yahoo webcam images intercepted by GCHQ. A British surveillance agency suffered the indignity of only saving images every five minutes from user feeds to reduce server load. My kingdom for a cloud! Why? They needed data to train their face recognition algorithms. That's what happens if you aren't Google.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

What Does Agile Mean to You?

Over on Techwell, my monthly column is Agile Does Not Equal Scrum: Know the Difference.

I wrote the article because I am tired of people saying “Agile/Scrum” as if Scrum was the only way to do agile.

I use iterations, kanban, and the XP technical practices when I work with teams. I am not religious about the “right” way to do agile. I like any combination of approaches that help a team deliver value often. I like anything that helps a team to get feedback on their work and their team process.

I like anything that helps management ask the right questions and create an environment in which teams can succeed.

Dogma doesn’t work very well for me. (I know, you are so surprised.)

If you are thinking about your agile approach, ask yourself, “What does agile mean to me? What value will agile deliver?”

Before you decide on an approach, answer that question. You might be more like Dan in my most recent Pragmatic Manager, Define Your Agile Success. Once you know what agile means to you, you might start to read more about possibilities that fit for you.

If you are a leader in your organization trying to use agile more effectively, consider participating in the Influential Agile Leader.

Categories: Project Management


Storytelling A Tool For More Than Just Presentations

Stories help you visualize your goals

In the Harvard Business Review article The Irresistible Power of Storytelling as a Strategic Business Tool by Harrison Monarth (March 11, 2014), Keith Quesenberry, a researcher from Johns Hopkins, notes "People are attracted to stories because we're social creatures and we relate to other people." The power of storytelling is that it helps us understand each other and develop empathy. Storytelling is a tool that is useful in many scenarios: for presentations, but also to help people frame their thoughts and for gathering information. A story provides both a deeper and more nuanced connection with information than most lists of PowerPoint bullets or even structured requirements documents. Here are just a few scenarios (other than presentations) where stories can be useful:

Imagining the impact of change
Almost every change project begins with a vision statement or a set of high-level goals.  Rarely are these visions and goals inspirational, nor do they provide enough structure to guide behavior as the program progresses.  Stories can be used to generate a more emotive vision of the future in the words of the leader(s) or stakeholders. These stories describe what working for the organization would be like after the change happens or a day in the life of the new organization.  The story that unfolds will provide a more nuanced set of requirements, gaps in the vision and guidance on the expectation of how people will behave in language that provides guidance and motivation.

Establishing goals
Asking an executive or even a group of leaders for their goals will typically generate a crisp bullet-pointed list. While important, these types of goals don't tend to be actionable for anyone other than the leader whose MBO is tied to the goal. For everyone else, the bullet-pointed goals describe an endpoint without the nuance of what that endpoint means and how to get there. In this scenario, a story is the raw material to generate goals at an actionable level. In scenarios where an organization already has developed a vision or specific SMART goals, I often ask the team or leaders to imagine that they have attained those goals and to tell me how that happened or how they got there. I have the storytellers use story patterns such as the Hero's Journey or Freytag's Pyramid as tools to organize their thoughts and stories. The use of patterns helps the storyteller think through and describe the entire journey to a goal. The storyteller envisions the whole process, which provides motivation for attaining the goal.

Storytelling to generate scenarios for epics, features and stories
One of the most interesting dilemmas most Agile teams face is generating an initial backlog. In classic waterfall projects a business analyst or two, a bunch of subject matter experts and possibly a project manager would get together to generate a requirements document. Sometimes Agile projects and products use the same process (thus a hybrid form of Agile). A better option is to assemble a cross-functional team of SMEs, product owners, product managers and the development team personnel (or at least a subset of the developers), and use facilitated storytelling to generate a set of scenarios which are then decomposed into features, epics and user stories using standard grooming techniques. The stories that emerge will include functional, non-functional and technical themes, rather than simply the functional view user requirements documents usually exhibit. The process of beginning by generating story-based scenarios not only provides the team with the information needed to create user stories, but also provides context for what is being built.

Storytelling might seem like just something that you do when you are making a presentation or explaining the current status. Storytelling might even seem a little old-fashioned, yet it is an integral part of the human experience. Stories establish a vision of where we want a journey to end and of how we want to make the journey. Stories provide a rich context and the emotional power needed to help guide a team as they work through the process of writing, testing and demonstrating code. Starting at the beginning, if we think stories are valuable then we need to embrace using storytelling as a tool to develop understanding.

 


Categories: Process Management

Project Tango workshops help bring indoor location apps to life

Android Developers Blog - Thu, 02/04/2016 - 18:21

Posted by Eitan Marder-Eppstein, Developer Engineering Lead, Project Tango

GPS helps us find our way outside, whether it is turn-by-turn navigation to the nearest grocery store or just getting us oriented in a new city. But once we get indoors, it is not quite as easy - GPS doesn't work, accuracy drops, and navigation becomes all but impossible. This is one of the reasons why we started Project Tango, which provides centimeter-scale accuracy of a device's location, allowing better navigation and experiences in indoor spaces.

Over the past few weeks, we’ve been collecting amazing ideas from around the world for great apps for Lenovo’s Project Tango-powered phone. (Have an idea? If you can dream it, you can submit it!) As part of this program we're hosting workshops, focused on specific Tango features. And we just wrapped up a session that we hosted with Westfield Labs devoted to indoor location. Here are some of the highlights:



As you can see, everyone from retail brands to robot startups joined in on the fun, using Project Tango's motion tracking, depth perception, and area learning capabilities to build some amazing location-based apps. Some of our favorites included:

  • Wayfair made it possible to look through your phone and visualize how a piece of furniture would look in your home.
  • Lowe's Innovation Labs improved in-store navigation by overlaying directions to individual items.
  • And Aisle411 created a shop-along experience with some of your favorite celebrities.

The next stop in our series is a utilities workshop, where we'll be going deep on getting things done with Project Tango, like taking 3D measurements or mapping your home or building. In the meantime, keep submitting your ideas to the App Incubator (the deadline is February 15!), and we'll see you soon!

Categories: Programming

Listen To Music While Working: Is It A Good Idea?

Making the Complex Simple - John Sonmez - Thu, 02/04/2016 - 17:00

Today I received a question from a reader asking me about listening to music while working. But… Is it a good idea? Or is it a bad idea? Does it affect your productivity? In this video, I talk about different types of music and how each one of them interacts with your attention and the […]

The post Listen To Music While Working: Is It A Good Idea? appeared first on Simple Programmer.

Categories: Programming

The Dark Sides of Agile and Earned Value Management and Their Fixes

Herding Cats - Glen Alleman - Thu, 02/04/2016 - 16:08

Both Agile development and Earned Value Management are coming together in the Federal Government. DOD and many Federal Civil agencies are adopting Agile software development. At the same time, FAR 34.2 and DFARS 234.2 are flowed down to procurement programs that implement Earned Value Management.

I've written about the benefits of this before. Here are some thoughts on the dark side:

Slides: EVM+Agile the darkside, from Glen Alleman

Related articles: Agile Software Development in the DOD; Seven Principles of Agile Software Development in the US DOD; What Do We Mean When We Say "Agile Community?"
Categories: Project Management

Quote of the Day

Herding Cats - Glen Alleman - Wed, 02/03/2016 - 23:37

It is a capital mistake to theorize before one has data - Sir Arthur Conan Doyle

Categories: Project Management

Tooling in the .net World

Actively Lazy - Wed, 02/03/2016 - 21:51

As a refugee Java developer first washing up on the shores of .net land, I was pretty surprised at the poor state of the tools and libraries that seemed to be in common use. I couldn't believe this shanty town was the state of the art; but with a spring in my step I marched inland. A few years down the road, I can honestly say it really isn't that great – there are some high points, but there are a lot of low points. I hadn't realised at the time but as a Java developer I'd been incredibly spoiled by a vivacious open source community producing loads of great software. In the .net world there seem to be two ways of solving any problem:

  1. The Microsoft way
  2. The other way*

*Note: may not work, probably costs money.

IDE

The obvious place to start is the IDE. I'm sorry, but Visual Studio just isn't good enough. Compared to Eclipse and IntelliJ for day-to-day development, Visual Studio is a poor imitation. Worse, until recently, it was expensive for a lone developer. While there were community editions of Visual Studio, these were even more pared down than the unusable professional edition. Frankly, if you can't run ReSharper, it doesn't count as an IDE.

I hear there are actually people out there who use Visual Studio without ReSharper. What are these people? Sadists? What do they do? I hope it isn't writing software. Perhaps this explains the poor state of so much software; with tools this bad – is it any wonder?

But finally Microsoft have seen sense and recently the free version of Visual Studio supports plugins – which means you can use ReSharper. You still have to pay for it, but with their recent licensing changes I don't feel quite so bad handing over a few quid a month renting ReSharper before I commit to buying an outright license. Obviously the situation for companies is different – it's much easier for a company to justify spending the money on Visual Studio licenses and ReSharper licenses.

With ReSharper Visual Studio is at least closer to Eclipse or IntelliJ. It still falls short, but there is clearly only so much lipstick that JetBrains can put on that particular pig. I do however have great hope for Project Rider, basically IntelliJ for .net.

Restful Services

A few years back another Java refugee and I started trying to write a RESTful, CQRS-style web service in C#. We’d done the same in a previous company on the Java stack and expected our choices to be similarly varied. But instead of a vast plethora of approaches from basic HTTP listeners to servlet containers to full blown app servers we narrowed the field down to two choices:

  1. Microsoft’s WCF
  2. ServiceStack

My fellow developer played with WCF and decided he couldn’t make it easily fit the RESTful, CQRS style he had in mind. After playing with ServiceStack we found it could. But then begins a long, tortuous process of finding all the things ServiceStack hadn’t quite got right; all the things we didn’t quite agree with. Now, this is not entirely uncommon in any technology selection process. But we’d already quickly narrowed the field to two. We were now committed to a technology that at every turn was causing us new, unanticipated problems (most annoyingly, problems that other dev teams weren’t having with vanilla WCF services!)

We joked (not really joking) that it would be simpler to write our own service layer on top of the basic HTTP support in .net. In hindsight, it probably would have been. But really, we’d paid the price for having the temerity to step off the One True Microsoft Way.

Worse, what had started as an open source project went paid for during the development of our service – which meant that our brand new web service was immediately legacy as the service layer it was built on was no longer officially supported.

Dependency Management

The thing I found most surprising on arrival in .net land was that people were checking all their third party binaries into version control like it was the 1990s! Java developers might complain that Maven is nothing more than a DSL for downloading the internet. But you know what, it's bloody good at downloading the internet. Instead, I had the internet checked in to version control.

However, salvation was at hand with NuGet. Except, NuGet really is a bit crap. NuGet doesn’t so much manage my dependencies as break them. All the damned time. Should we restrict all versions of a library (say log4net) to one version across the solution? Nah, let’s have a few variations. Oh, but nuget, now I get random runtime errors because of method signature mis-matches. But it doesn’t make the build fail? Brilliant, thank you, nuget.

So at my current place we’ve written a pre-build script to check that nuget hasn’t screwed the dependencies up. This pre-build check fails more often than I would like to believe.
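The post doesn't show that script; purely as an illustration, a minimal sketch of such a pre-build sanity check might look like the following (a hypothetical Python version, not the author's actual check; it assumes classic packages.config files and Python 3.5+ for recursive globbing):

# check_nuget_versions.py - hypothetical pre-build check, not the author's actual script
# Flags any NuGet package that is referenced with more than one version across the solution.
import glob
import sys
import xml.etree.ElementTree as ET
from collections import defaultdict

versions = defaultdict(set)
for config in glob.glob("**/packages.config", recursive=True):
    for package in ET.parse(config).getroot().iter("package"):
        versions[package.get("id")].add(package.get("version"))

conflicts = {pkg: vers for pkg, vers in versions.items() if len(vers) > 1}
for pkg, vers in sorted(conflicts.items()):
    print("Conflicting versions of %s: %s" % (pkg, ", ".join(sorted(vers))))

sys.exit(1 if conflicts else 0)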

So managing a coherent set of dependencies isn’t the job of the dependency tool, so what does it do? It downloads one file at a time from the internet. Well done. I think wget can do that, too. It’s a damned sight faster than the nuget power shell console, too. Nuget: it might break your builds, but at least it’s slow.

The Microsoft Way

And then I find things that blow me away. At my current place we've written some add-ins to Excel so that people who live and die by Excel can interact with our software. This is pretty cool: adding buttons and menus and ribbons into Excel, all integrated to my back-end services.

In my life as a Java developer I can never even imagine attempting this. The hoops to jump through would have been far too numerous. But in Visual Studio? Create a new, specific type of solution, hit F5, Excel opens. Set a breakpoint, Excel stops. Oh my God – this is an integrated development environment.

Obviously our data is all stored in Microsoft SQLServer. This also has brilliant integration with Visual Studio. For example, we experimented with creating a .net assembly to read some of the binary data we’re storing in the DB. This way we could run efficient queries on complex data types directly in the DB. The dev process for this? Similarly awesome: deploy to the DB and run directly from within Visual Studio. Holy integrated dev cycle, batman!

When there is a Microsoft way, this is why it is so compelling. Whatever they do will be brilliantly integrated with everything else they do. It might not have the flexibility you want, it might not have all the features you want. But it will be spectacularly well integrated with everything else you're already using.

Why?

Why does it have to be this way? C# is a really awesome language: well designed, with a lot of expressive power. But the open source ecosystem around it is barren. If the Microsoft Way doesn't fit the bill, you are often completely stuck.

I think it comes down to the history of Visual Studio being so expensive. Even as a C# developer by day, I am not spending the thick end of £1,000 to get a matching dev environment at home, so I can play. Even if I had a solid idea of an open source project to start, I'm not going to invest a thousand quid just to see.

But finally Microsoft seem to be catching on to the open source thing. Free versions of Visual Studio and projects like CoreCLR can only help. But the Java ecosystem has a decade head start and this ultimately creates a network effect: it’s hard to write good open source software for .net because there’s so little good open source tooling for .net.


Categories: Programming, Testing & QA

Quote of the Day

Herding Cats - Glen Alleman - Wed, 02/03/2016 - 20:07

Winnie-the-Pooh read two notices very carefully, first from left to right, and afterwards, in case he had missed some of it, from right to left - A.A. Milne, Winnie-the-Pooh

Related articles: Architecture-Center ERP Systems in the Manufacturing Domain; IT Risk Management; Why Guessing is not Estimating and Estimating is not Guessing; Making Conjectures Without Testable Outcomes; Deadlines Always Matter
Categories: Project Management

How Do You Estimate Work That's Never Been Done Before?

Herding Cats - Glen Alleman - Wed, 02/03/2016 - 18:09

This is a common lament in the #Noestimates community

We've never done this before, so how can we possibly estimate how long it will take?

Well, the first question is: has it actually never been done before, or is the real question that YOU have never done it before? If the work has truly never been done before by anyone, anywhere on the planet, then it's probably going to be hard to estimate. But if the solution has been done before, this brings up the big question for those paying to have the development done: why would they hire someone who has no prior experience providing solutions in the problem domain?

Let's assume you have prior experience in the problem domain, but the solution is not readily at hand. This means the solution actually has to be engineered. This is the role of the well-developed process of producing an engineering basis of estimate.

Let's start with the notion of Reference Classes. Here are some databases of projects that can be used as reference classes to frame the boundaries of the estimating process for work you may consider new and never seen before, but that has in fact been done by someone else (a small sketch of how such reference data can bound an estimate follows the list).

  • Nederlandse Software Metrieken Association - www.nesma.org
  • International Software Benchmarking Standards Group - www.isbsg.org
  • Common Software Measurement International Consortium - www.cosmicon.com
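
To make the idea concrete, here is a minimal, hypothetical sketch of how a reference class of past project outcomes can put lower and upper bounds on a new estimate. The durations are invented and are not taken from the repositories listed above:

# reference_class_bounds.py - illustrative sketch only; the durations are made up.
# Given completed projects similar to the proposed work (the "reference class"),
# use their actual outcomes to bound the estimate for the new project.
reference_class_weeks = [11, 14, 9, 16, 12, 13, 18, 10, 15, 12]

def percentile(data, p):
    """Return the p-th percentile (0-100) of data, using linear interpolation."""
    s = sorted(data)
    k = (len(s) - 1) * p / 100.0
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

low = percentile(reference_class_weeks, 20)     # optimistic bound
likely = percentile(reference_class_weeks, 50)  # most likely value
high = percentile(reference_class_weeks, 80)    # conservative bound

print("Estimate: most likely %.1f weeks, with 20%%/80%% bounds of %.1f to %.1f weeks"
      % (likely, low, high))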

Next is a notion that every graduate student in almost any discipline - mine was Physics - is taught in the first year of grad school: when you have an idea, first go and check that someone hasn't already solved the problem. This starts with a literature search. This is a simple problem to solve in 2016 - it's called Google. In 1975 it required weeks of long days in the library, looking through journals and books in the Physics Library at UC Irvine. So before anyone says "this is a new and innovative problem and I can't possibly estimate how long it will take," do your homework and go see if that is actually the case.

When we hear "how can I possibly estimate what I haven't done before?", there are more steps to go before answering "I can't":

  • Have you developed a process architecture for what you want this software to do? Is this process architecture in the form of some swim lane diagram?
  • Have the system elements been partitioned and their interfaces defined at the data and process level? This definition can be high level, to assess the coupling and cohesion issues that will drive up cost.
  • Have use cases been developed for the system functions? Have the common actors, processes, and data been identified?

This list goes on for a while. But when I hear I can't possibly estimate something that I haven't done before, it begs the question ...

Do you know what Done looks like in units of measure meaningful to the decision makers?

No? Then no wonder you can't put upper and lower bounds on any estimate of effort - you don't even know what the customer wants, and most importantly you don't know what the units of measure are for what Done looks like.

Go find those answers and then you'll have a clearer idea of what the problem is and possibly how to solve it.

Categories: Project Management

Robot Framework and the keyword-driven approach to test automation - Part 2 of 3

Xebia Blog - Wed, 02/03/2016 - 18:03

In part 1 of our three-part post on the keyword-driven approach, we looked at the position of this approach within the history of test automation frameworks. We elaborated on the differences, similarities and interdependencies between the various types of test automation frameworks. This provided a first impression of the nature and advantages of the keyword-driven approach to test automation.

In this post, we will zoom in on the concept of a 'keyword'.

What are keywords? What is their purpose? And what are the advantages of utilizing keywords in your test automation projects? And are there any disadvantages or risks involved?

As stated in an earlier post, the purpose of this first series of introductory-level posts is to prevent all kinds of intrusive expositions in later posts. These later posts will be of a much more practical, hands-on nature and should be concerned solely with technical solutions, details and instructions. However, for those that are taking their first steps in the field of functional test automation and/or are inexperienced in the area of keyword-driven test automation frameworks, we would like to provide some conceptual and methodological context. By doing so, those readers may grasp the follow-up posts more easily.

Keywords in a nutshell

A keyword is a reusable test function

The term 'keyword' refers to a callable, reusable, lower-level test function that performs a specific, delimited and recognizable task. For example: 'Open browser', 'Go to url', 'Input text', 'Click button', 'Log in', 'Search product', 'Get search results', 'Register new customer'.

Most, if not all, of these are recognizable not only for developers and testers, but also for non-technical business stakeholders.

Keywords implement automation layers with varying levels of abstraction

As can be gathered from the examples given above, some keywords are more atomic and specific (or 'specialistic') than others. For instance, 'Input text' will merely enter a string into an edit field, while 'Search product' will be composed of a chain (sequence) of such atomic actions (steps), involving multiple operations on various types of controls (assuming GUI-level automation).

Elementary keywords, such as 'Click button' and 'Input text', represent the lowest level of reusable test functions: the technical workflow level. These often do not have to be created, but are provided by existing, external keyword libraries (such as Selenium WebDriver) that can be made available to a framework. A situation that could require the creation of such atomic, lowest-level keywords would be automating at the API level.

The atomic keywords are then reused within the framework to implement composite, functionally richer keywords, such as 'Register new customer', 'Add customer to loyalty program', 'Search product', 'Add product to cart', 'Send gift certificate' or 'Create invoice'. Such keywords represent the domain-specific workflow activity level. They may in turn be reused to form other workflow activity level keywords that automate broader chains of workflow steps. Such keywords then form an extra layer of wrappers within the layer of workflow activity level keywords. For instance, 'Place an order' may be composed of 'Log customer in', 'Search product', 'Add product to cart', 'Confirm order', etc. The modularization granularity applied to the automation of such broader workflow chains is determined by trading off various factors against each other - mainly factors such as the desired levels of readability (of the test design), of maintainability/reusability and of coverage of possible alternative functional flows through the involved business process. The eventual set of workflow activity level keywords forms the 'core' DSL (Domain Specific Language) vocabulary in which the highest-level specifications/examples/scenarios/test designs/etc. are to be written.

The latter (i.e. scenarios/etc.) represent the business rule level. For example, a high-level scenario might be:  'Given a customer has joined a loyalty program, when the customer places an order of $75,- or higher, then a $5,- digital gift certificate will be sent to the customer's email address'. Such rules may of course be comprised of multiple 'given', 'when' and/or 'then' clauses, e.g. multiple 'then' clauses conjoined through an 'and' or 'or'. Each of these clauses within a test case (scenario/example/etc.) is a call to a workflow activity level, composite keyword. As explicated, the workflow-level keywords, in turn, are calling elementary, technical workflow level keywords that implement the lowest-level, technical steps of the business scenario. The technical workflow level keywords will not appear directly in the high-level test design or specifications, but will only be called by keywords at the workflow activity level. They are not part of the DSL.

Keywords thus live in layers with varying levels of abstraction, where, typically, each layer reuses (and is implemented through) the more specialistic, concrete keywords from lower levels. Lower-level keywords are the building blocks of higher-level keywords, and at the highest level your test cases will also consist of keyword calls.
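To make this layering tangible, here is a small, purely hypothetical sketch. Robot Framework keywords can be implemented as plain Python library functions; the names, locators and steps below are invented for illustration, and the lowest-level keywords are stubbed out rather than driving a real browser:

# Hypothetical illustration of keyword layers; not actual Robot Framework library code.

# Technical workflow level: atomic keywords (normally supplied by an existing
# library such as a WebDriver-based one; stubbed here for the sake of the example).
def input_text(locator, text):
    print("typing '%s' into %s" % (text, locator))

def click_button(locator):
    print("clicking %s" % locator)

# Domain-specific workflow activity level: composite keywords that wrap a
# sequence of technical keywords behind a business-readable name.
def log_customer_in(user_name, password):
    input_text("login.username", user_name)
    input_text("login.password", password)
    click_button("login.submit")

def search_product(term):
    input_text("search.query", term)
    click_button("search.submit")

# Business rule level: a scenario expressed purely in workflow activity level keywords.
def place_an_order_as_loyalty_member():
    log_customer_in("Bob Plissken", "Welcome123")
    search_product("gift certificate")
    # ... add product to cart, confirm order, check that the certificate is sent, etc.

place_an_order_as_loyalty_member()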

Of course, your automation solution will typically contain other types of abstraction layers, for instance a so-called 'object-map' (or 'gui-map') which maps technical identifiers (such as an xpath expression) onto logical names, thereby enhancing maintainability and readability of your locators. Of course, the latter example once again assumes GUI-level automation.
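A minimal sketch of such an object map, again hypothetical (the logical names and xpath locators are invented):

# Hypothetical object map: logical control names mapped onto technical locators,
# so that keywords and tests never contain raw xpath expressions themselves.
OBJECT_MAP = {
    "login.username": "//input[@id='user-name']",
    "login.password": "//input[@id='password']",
    "login.submit":   "//button[@type='submit']",
}

def locator(logical_name):
    # Keywords look locators up by logical name; only this map changes when the GUI does.
    return OBJECT_MAP[logical_name]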

Keywords are wrappers

Each keyword is a function that automates a simple or (more) composite/complex test action or step. As such, keywords are the 'building blocks' for your automated test designs. When having to add a customer as part of your test cases, you will not write out (hard code) the technical steps (such as entering the first name, entering the surname, etc.), but you will have one statement that calls the generic 'Add a customer' function which contains or 'wraps' these steps. This wrapped code, as a whole, thereby offers a dedicated piece of functionality to the testers.

Consequently, a keyword may encapsulate sizeable and/or complex logic, hiding it and rendering it reusable and maintainable. This mechanism of keyword-wrapping entails modularization, abstraction and, thus, optimal reusability and maintainability. In other words, code duplication is prevented, which dramatically reduces the effort involved in creating and maintaining automation code.

Additionally, the readability of the test design will be improved upon, since the clutter of technical steps is replaced by a human readable, parameterized call to the function, e.g.: | Log customer in | Bob Plissken | Welcome123 |. Using so-called embedded or interposed arguments, readability may be enhanced even further. For instance, declaring the login function as 'Log ${userName} in with password ${password}' will allow for a test scenario to call the function like this: 'Log Bob Plissken in with password Welcome123'.

Keywords are structured

As mentioned in the previous section, keywords may hide rather complex and sizeable logic. This is because the wrapped keyword sequences may be embedded in control/flow logic and may feature other programmatic constructs. For instance, a keyword may contain:

  • FOR loops
  • Conditionals ('if, elseIf, elseIf, …, else' branching constructs)
  • Variable assignments
  • Regular expressions
  • Etc.

Of course, keywords will feature such constructs more often than not, since encapsulating the involved complexity is one of the main purposes for a keyword. In the second and third generation of automation frameworks, this complexity was an integral part of the test cases, leading to automation solutions that were inefficient to create, hard to read & understand and even harder to maintain.

Being a reusable, structured function, a keyword can also be made generic, by taking arguments (as briefly touched upon in the previous section). For example, 'Log in' takes arguments: ${user}, ${pwd} and perhaps ${language}. This adds to the already high levels of reusability of a keyword, since multiple input conditions can be tested through the same function. As a matter of fact, it is precisely this aspect of a keyword that enables so-called data-driven test designs.

Finally, a keyword may also have return values, e.g. 'Get search results' returns ${nrOfItems}. The return value can be used for a myriad of purposes, for instance to perform assertions, as input for decision-making or to pass into another function as an argument. Some keywords will return nothing, but only perform an action (e.g. change the application state, insert a database record or create a customer).
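Continuing the hypothetical Python sketch from above, a keyword that takes arguments, contains control flow and returns a value might look like this (all names and the dummy data are invented):

# Hypothetical structured keyword: parameterized, with control flow and a return value.

def search_product(term):
    # Stub for the workflow-level keyword sketched earlier.
    print("searching for %s" % term)

def read_result_items(page):
    # Stub standing in for real result scraping; returns dummy data.
    return ["item-a", "item-b"] if page == 1 else []

def get_search_results(term, max_pages=3):
    """Search for 'term', page through at most 'max_pages' result pages
    and return the total number of items found."""
    search_product(term)
    nr_of_items = 0
    for page in range(1, max_pages + 1):          # FOR loop over result pages
        items = read_result_items(page)
        nr_of_items += len(items)
        if not items:                             # conditional: stop at an empty page
            break
    return nr_of_items

print(get_search_results("gift certificate"))     # -> 2 with the dummy stubs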

Risks involved

With great power comes great responsibility

The benefits of using keywords have been explicated above. Amongst other advantages, such as enhanced readability and maintainability, the keyword-driven approach provides a lot of power and flexibility to the test automation engineer. Quasi-paradoxically, in harnessing this power and flexibility, the primary risk involved in the keyword-driven approach is being introduced. That this risk should be of topical interest to us, will be established by somewhat digressing into the subject of 'the new testing'.

In many agile teams, both 'coders' and 'non-coders' are expected to contribute to the automation code base. The boundaries between these (and other) roles are blurring. Despite the current (and sometimes rather bitter) polemic surrounding this topic, it seems to be inevitable that the traditional developer role will have to move towards testing (code) and the traditional tester role will have to move towards coding (tests). Both will use testing frameworks and tools, whether it be unit testing frameworks (such as JUnit), keyword-driven functional test automation frameworks (such as RF or Cucumber) and/or non-functional testing frameworks (such as Gatling or Zed Attack Proxy).

To this end, the traditional developer will have to become knowledgeable and gain experience in the field of testing strategies. Test automation that is not based on a sound testing strategy (and attuned to the relevant business and technical risks), will only result in a faster and more frequent execution of ineffective test designs and will thus provide nothing but a false sense of security. The traditional developer must therefore make the transition from the typical tool-centric approach to a strategy-centric approach. Of course, since everyone needs to break out of the silo mentality, both developer and tester should also collaborate on making these tests meaningful, relevant and effective.

The challenge for the traditional tester may prove to be even greater and it is there that the aforementioned risks are introduced. As stated, the tester will have to contribute test automation code. Not only at the highest-level test designs or specifications, but also at the lowest-level-keyword (fixture/step) level, where most of the intelligence, power and, hence, complexity resides. Just as the developer needs to ascend to the 'higher plane' of test strategy and design, the tester needs to descend into the implementation details of turning a test strategy and design into something executable. More and more testers with a background in 'traditional', non-automated testing are therefore entering the process of acquiring enough coding skills to be able to make this contribution.

However, by having (hitherto) inexperienced people authoring code, severe stability and maintainability risks are being introduced. Although all current (i.e. keyword-driven) frameworks facilitate and support creating automation code that is reusable, maintainable, robust, reliable, stable and readable, code authors will still have to actively realize these qualities, by designing for them and building them into their automation solutions. Non-coders though, in my experience, are (at least initially) having quite some trouble understanding and (even more dangerously) appreciating the critical importance of applying design patterns and other best practices to their code. That is, most traditional testers seem to be able to learn how to code (at a sufficiently basic level) rather quickly, partially because, generally, writing automation code is less complex than writing product code. They also get a taste for it: they soon get passionate and ambitious. They become eager to apply their newly acquired skills and to create lots of code. Caught in this rush, they often forget to refactor their code, downplay the importance of doing so (and the dangers involved) or simply opt to postpone it until it becomes too large a task. Because of this, even testers who have been properly trained in applying design patterns may still deliver code that is monolithic, unstable/brittle, non-generic and hard to maintain. Depending on the level at which the contribution is to be made (lowest-level in code or mid-level in scripting), these risks apply to a greater or lesser extent. Moreover, this risky behaviour may be incited by uneducated stakeholders, as a consequence of them holding unrealistic goals, maintaining a short-term view and (to put it bluntly) being ignorant with regard to the pitfalls, limitations, disadvantages and risks that are inherent to all test automation projects.

Then take responsibility ... and get some help in doing so

Clearly then, the described risks are not so much inherent to the frameworks or to the approach to test automation, but rather flow from inexperience with these frameworks and, in particular, from inexperience with this approach. That is, to be able to (optimally) benefit from the specific advantages of this approach, applying design patterns is imperative. This is a critical factor for the long-term success of any keyword-driven test automation effort. Without applying patterns to the test code, solutions will not be cost-efficient, maintainable or transferable, amongst other disadvantages. The costs will simply outweigh the benefits in the long run. What's more, essentially the whole purpose and added value of using keyword-driven frameworks are lost, since these frameworks had been devised precisely to this end: to counter the severe maintainability/reusability problems of the earlier generation of frameworks. Therefore, of all the approaches to test automation, the keyword-driven approach depends to the greatest extent on the disciplined and rigid application of standard software development practices, such as modularization, abstraction and genericity of code.

This might seem a truism. However, since typically the traditional testers (and thus novice coders) are nowadays directed by their management towards using keyword-driven frameworks for automating their functional, black-box tests (at the service/API- or GUI-level), automation anti-patterns appear and thus the described risks emerge. To make matters worse, developers remain mostly uninvolved, since a lot of these testers are still working within siloed/compartmented organizational structures.

In our experience, a combination of a comprehensive set of explicit best practices, training and on-the-job coaching, and a disciplined review and testing regime (applied to the test code) is an effective way of mitigating these risks. Additionally, silos need to be broken down, so as to foster collaboration (and create synergy) across all testing efforts, and to make it possible to coordinate and orchestrate those efforts through a single, central, comprehensive and shared overall testing strategy.

Of course, the framework selected to implement a keyword-driven test automation solution is an important enabler as well. As will become apparent from this series of blog posts, the Robot Framework is the platform par excellence to facilitate, support and even stimulate these counter-measures and, consequently, to very swiftly enable and empower seasoned and beginning coders alike to contribute code that is efficient, robust, stable, reusable, generic, maintainable as well as readable and transferable. That is not to say that it is the platform to use in any given situation, just that it has been designed with the intent of implementing the keyword-driven approach to its fullest extent. As mentioned in a previous post, the RF can be considered the epitome of the keyword-driven approach, bringing that approach to its logical conclusion. As such, it optimally facilitates all of the mentioned preconditions for long-term success. Put differently, using the RF, it will be hard to fall into the pitfalls inherent to keyword-driven test automation.

Some examples of such enabling features (that we will also encounter in later posts):

  • A straightforward, fully keyword-oriented scripting syntax, that is both very powerful and yet very simple, to create low- and/or mid-level test functions.
  • The availability of dozens of keyword libraries out-of-the-box, holding both convenience functions (for instance to manipulate and perform assertions on XML) and specialized keywords for directly driving various interface types. Interfaces such as REST, SOAP or JDBC can thus be interacted with without having to write a single line of integration code.
  • Very easy, almost intuitive means to apply a broad range of design patterns, such as creating various types of abstraction layers (a minimal sketch of such a layered keyword library follows this list).
  • And lots and lots of other great and unique features.
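
To make the layering idea a bit more concrete, below is a minimal sketch of a low-level keyword library, written as a plain Java class (Robot Framework exposes the public methods of such classes as keywords when it runs on Jython; a Python class would look almost identical). All class, method and keyword names here are hypothetical illustrations, not taken from any real project.

import java.util.HashMap;
import java.util.Map;

// Hypothetical low-level keyword library: the public methods become the
// keywords "Create Account With Balance" and "Balance Of Should Be".
public class AccountKeywords {

    private final Map<String, Double> balances = new HashMap<>();

    // Low-level keyword that sets up test data for higher-level keywords.
    public void createAccountWithBalance(String owner, double amount) {
        balances.put(owner, amount);
    }

    // Low-level assertion keyword that fails the step with a readable message.
    public void balanceOfShouldBe(String owner, double expected) {
        Double actual = balances.get(owner);
        if (actual == null || Math.abs(actual - expected) > 0.001) {
            throw new AssertionError(
                "Balance of " + owner + " was " + actual + ", expected " + expected);
        }
    }
}

Mid-level user keywords in the test data could then compose these low-level keywords into business-readable steps, which is exactly where the abstraction layers mentioned above come into play.
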
Summary

We now have an understanding of the characteristics and purpose of keywords and of the advantages of structuring our test automation solution into (various layers of) keywords. At the same time, we have looked at the primary risks involved in the application of such a keyword-driven approach and at ways to deal with these risks.

Keyword-driven test automation is aimed at solving the problems that were instrumental in the failure of prior automation paradigms. For a large part, however, it merely facilitates those solutions. That is, to actually reap the benefits that a keyword-driven framework has to offer, we need to use it in an informed, professional and disciplined manner, by actively designing our code for reusability, maintainability and all of the other qualities that make or break long-term success. The specific design as well as the unique richness of powerful features of the Robot Framework will give automators a head start when it comes to creating such code.

Of course, this principle of intelligent and adept usage holds for any kind of framework that may be used or applied in the course of a software product's life cycle.

Part 3 of this second post will go into the Robot Framework's specific implementation of the keyword-driven approach.

A Case Study: WordPress Migration for Shift.ms

The case study presented involves a migration from a custom database to WordPress. The company tasked with the migration is Valet, which has a vast portfolio of previous projects, including shifts from custom databases to WordPress, multisite-to-multisite migrations, and multisite-to-single-site consolidations, among others. The client is Shift.ms.

Problem

The client, Shift.ms, presented a taxing problem to the team. Shift.ms had a custom database that they needed migrated to WordPress. They had set up a new WordPress/BuddyPress installation and wanted their data moved into it. All this may seem rather simple. However, there was one complication: the client had some data in the newly installed WordPress site that they intended to keep.

Challenges

The main challenge was that the schema of the custom database and that of WordPress are structured very differently. The following issues arose in the effort to deal with the problem:

Categories: Architecture

FitNesse in your IDE

Xebia Blog - Wed, 02/03/2016 - 17:10

FitNesse has been around for a while. The tool was created by Uncle Bob back in 2001. It’s centered around the idea of collaboration. Collaboration within a (software) engineering team and with your non-programmer stakeholders. FitNesse tries to achieve that by making it easy for non-programmers to participate in the writing of specifications, examples and acceptance criteria. It can be launched as a wiki web server, which makes it accessible to basically everyone with a web browser.

The key feature of FitNesse is that it allows you to verify the specs against the actual application: the System Under Test (SUT). This means that you have to make the documentation executable. FitNesse considers tables to be executable. When you read ordinary documentation you’ll find that requirements and examples are often outlined in tables, which makes for a natural fit.

There is no such thing as magic, so the link between the documentation and the SUT has to be created. That’s where things become tricky. The documentation lives in our wiki server, but code (that’s what we require to connect documentation and SUT) lives on the file system, in an IDE. What to do? Read a wiki page, remember the class and method names, switch to IDE, create classes and methods, compile, switch back to browser, test, and repeat? Well, so much for fast feedback! When you talk to programmers, you’ll find this to be the biggest problem with FitNesse.
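
For illustration, here is a minimal sketch of what that glue code typically looks like for a Slim decision table: a plain class whose setters receive the input columns and whose no-argument methods produce the output columns. The class and method names are hypothetical, not taken from any real project.

// Hypothetical Slim decision-table fixture; the wiki table names this class
// in its header row and lists one example per row.
public class DiscountFixture {

    private double orderTotal;

    // Each input column of the table maps onto a setter like this one.
    public void setOrderTotal(double orderTotal) {
        this.orderTotal = orderTotal;
    }

    // Each output column (e.g. "discount?") maps onto a no-argument method.
    public double discount() {
        return orderTotal >= 100.0 ? orderTotal * 0.05 : 0.0;
    }
}

FitNesse instantiates the fixture, feeds each table row through the setters, calls the output methods and colours the cells green or red depending on whether the expectations hold.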

Imagine, as a programmer, you're about to implement an acceptance test defined in FitNesse. With a single click, a fixture class is created and adding fixture methods is just as easy. You can easily jump back and forth between the FitNesse page and the fixture code. Running the test page is as simple as hitting a key combination (Ctrl-Shift-R comes to mind). You can set breakpoints, step through code with ease. And all of this from within the comfort of your IDE.

Acceptance test and BDD tools, such as Cucumber and Concordion, have IDE plugins to cater for that, but for FitNesse this support was lacking. Was lacking! Such a plugin is finally available for IntelliJ.

Over the last couple of months, a lot of effort has been put in building this plugin. It’s available from the Jetbrains plugin repository, simply named FitNesse. The plugin is tailored for Slim test suites, but also works fine with Fit tables. All table types are supported. References between script, decision tables and scenarios work seamlessly. Running FitNesse test pages is as simple as running a unit test. The plugin automatically finds FitNesseRoot based on the default Run configuration.

The current version (1.4.3) even has (limited) refactoring support: renaming Java fixture classes and methods will automatically update the wiki pages.

Feel free to explore the new IntelliJ plugin for FitNesse and let me know what you think!

(GitHub: https://github.com/gshakhn/idea-fitnesse)

Awards Season

Herding Cats - Glen Alleman - Wed, 02/03/2016 - 16:34

Read the full article at CEOLEVEL

Categories: Project Management

Leadership: Are Leaders Born? Or Are They Made?

Making the Complex Simple - John Sonmez - Wed, 02/03/2016 - 14:00

Leadership is one of those perennial topics in our society that we can't stop talking, thinking, and writing about. A search on Google turns up over 400,000,000 results; a search on Amazon brings back over 70,000 books on the topic. Every day, more books are authored, more articles are written, more podcasts are recorded, and […]

The post Leadership: Are Leaders Born? Or Are They Made? appeared first on Simple Programmer.

Categories: Programming

The Scooter Computer

Coding Horror - Jeff Atwood - Wed, 02/03/2016 - 09:52

When we initially deployed our handbuilt colocated servers for Discourse in 2013, I needed a way to provide an isolated VPN channel in for secure remote access and troubleshooting. Rather than dedicate a whole server to this task, I purchased the inexpensive, open source firmware friendly Asus RT-N16 router, flashed it with the popular TomatoUSB open source firmware, removed the antennas, turned off the WiFi and dropped it off in our colocated rack to let it act as a dedicated VPN access point.


Asus RT-N16

And that box – which was $100 then and around $70 now – worked well enough until now. Although the version of OpenSSL in the 2012 era Tomato firmware we used is not vulnerable to Heartbleed, it's still getting out of date in terms of the encryption it supports and allows. And Tomato itself is updated sporadically, chaotically at best.

Let's face it: this is just a little box that runs a chopped up version of Linux, with a bit of specialized wireless hardware and multiple antennas tacked on … that we're not even using. So when it came time to upgrade, we wondered:

Why not just go with a small box that can run a real, full Linux distro? Wouldn't that be simpler and easier to keep up to date?

After doing some research and asking on Twitter, I discovered there are a ton of amazing little Broadwell "mini-PC" boxes available on AliExpress.

The specs are kind of amazing for the price. I paid ~$350 each for the ones I selected:

  • i5-5200U Broadwell 2 core / 4 thread CPU at 2.2 Ghz - 2.7 Ghz
  • 16GB DDR3 RAM
  • 128GB M.2 SSD
  • Dual gigabit Realtek 8168 ethernet
  • front 4 USB 3.0 ports / rear 4 USB 2.0 ports
  • Dual HDMI out

(There's also optical and analog audio connectors on the front, as well as a SD card reader, which I covered with a sticker since we had no need for audio. I also stripped the WiFi out since we didn't need it, but it was included for the price, too.)

Selecting the i5-4258u, 4GB RAM, and 64GB SSD pushes the price down to $270. That's still a solid CPU, only a single generation behind Intel's latest and greatest Skylake, and carrying the midrange i5 moniker; it's no pushover. There are also many, many variants of this box from other AliExpress sellers that have slightly older, cheaper CPUs that are still plenty powerful. You can easily spec a box similar to this one for $200.

That's not a whole lot more than the $200 you'd pay for a high end router these days, and as Ars Technica notes, the average x86 box is radically faster.

Note that in the graphs referenced above, "homebrew" means an old, 1.8 Ghz Ivy Bridge dual core chip, 3 generations behind current CPUs, that doesn't even merit the i3 or i5 designation and has no hyperthreading. Do bear that in mind as you keep reading.

Meet The Scooter Computer

This box may be small, and only 15 watt TDP, but it is mighty. I spun up a new Digital Ocean droplet and ran a quick benchmark:

# Install sysbench and run a CPU benchmark (prime calculation up to 20,000)
sudo apt-get install sysbench
sysbench --test=cpu --cpu-max-prime=20000 run
Tie Shuttle 6
total time:           28.0707s
total num events:     10000
total time taken:     28.0629
per-request stats:
     min:             2.77ms
     avg:             2.81ms
     max:             3.99ms
     ~95 percentile:  3.00ms
Digital Ocean Droplet
total time:          35.9541s
total num events:    10000
total time taken:    35.9492
per-request stats:
     min:             3.50ms
     avg:             3.59ms
     max:             13.31ms
     ~95 percentile:  3.79ms

Results will of course vary by cloud provider, but rest assured this box is just as fast as and possibly even faster than the average cloud box you could spin up right now. Of course it is "only" 2 cores / 4 threads, but the more cores you need, the slower they tend to go because of the overall TDP limits of the core package.

One thing that's not immediately obvious in photos is that this thing is indeed small but hefty, like holding a solid chunk of aluminum in your hand. That's because the box is passively cooled — the whole case is the heatsink, as the CPU on the bottom of the motherboard mates with the finned top of the case.

Opening this box you realize just how simple things are inside it; it's barely more than a highly integrated motherboard strapped to an aluminum block. This isn't a Steve Jobs truck, a Mac Mini car, or even a motorcycle. This is a scooter.

Scooters are very primitive machines; it is both their greatest strength and their greatest weakness. It's arguably the simplest personal wheeled vehicle there is. In these short distance scenarios, scooters tend to win over, say, bicycles because there's less setup and teardown necessary – you don't have to lock up a scooter, nor do you have to wear a helmet. Just hop on and go! You get almost all the benefits of gravity and wheeled efficiency with a minimum of fuss and maintenance. And yes, it's fun, too!

Passively cooled computers are paragons of simplicity and reliable consumer electronics, but passively cooling a "real" x86 PC is the holy grail. To get serious performance you usually need to feed the CPU at least 10 to 20 watts – and dissipating that kind of energy with zero fans and ambient airflow alone is not trivial. Let's see how our scooter does overnight running Mersenne Primes, which is the heaviest CPU load possible.

You can place your hand on the top of the box during this, but it's uncomfortable. And the whole box radiates heat, not just the top. Overall it was completely stable for me during overnight mprime torture testing with the 15w TDP CPU I chose, and I am comfortable with these boxes sitting in our rack in the datacenter, even under extended full load. However, I would be very careful putting a 28w TDP CPU in this box unless you are absolutely sure it won't be at full load very often. Passive cooling is hard.

Power consumption, as measured by my Kill-a-Watt, ranged from 7 watts at the Ubuntu Server 14.04 text login screen, to 8-10 watts at an idle Ubuntu 15.10 GUI login screen (the default OS it arrived with), to 14-18 watts in memory testing, to 26 watts in mprime.

I should also mention that even under extreme mprime load, both CPUs stayed at 2.5 Ghz indefinitely, which is unusual in my experience. To achieve 2.7 Ghz you need a single threaded load. Considering the base clock of the i5-5200u is 2.2 Ghz, that's quite good! Many 4-6-8 core CPUs drop all the way down to their base clock pretty fast once they have significant load, which makes the "turbo" moniker a bit of a lie.

(By the way, don't bother using burnP6, it generates way too little heat compared to mprime, which is an absolute monster. If your CPU can survive an overnight run of mprime, I can assure you it's ready for just about anything the real world can throw at it, ever.)

Disk

The machine has M.2 slots for two drives, as well as a SATA port and power cable (not pictured, but was included in the box) if you want to mate a 2.5" drive with the drive mounting holes on the bottom of the case. So if you prefer a mirrored two drive RAID array here for reliability, or a giant honking 2TB 2.5" HDD slapped in there for media storage, all of that is possible!

Be careful, as the internal M.2 slots are 2242, meaning 42mm length. There seem to be mostly (only?) lower cost SSD drives available in this size for whatever reason.

Don't worry, though, the bundled 128GB Phison S9 M.2 SSD has decent performance, roughly equal to a good SSD from a few years ago:

# Sequential write throughput (dd) plus cached and buffered read timings (hdparm)
dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync
hdparm -Tt /dev/sda

536870912 bytes (537 MB) copied, 1.52775 s, 351 MB/s
Timing cached reads:   11434 MB in  2.00 seconds = 5720.61 MB/sec
Timing buffered disk reads:  760 MB in  3.00 seconds = 253.09 MB/sec

That's respectable SSD performance and won't hold you back in most use cases, but it's not a barn-burning disk subsystem, either. I'm not entirely sure retrofitting, say, the state of the art Samsung 950 Pro M.2 2280 drive is possible due to length restrictions.

Of course the Samsung 850 Pro would fit fine as a traditional 2.5" SATA drive mounted to the case cover, and would perform like this:

536870912 bytes (537 MB) copied, 1.20895 s, 444 MB/s
Timing cached reads:   38608 MB in  2.00 seconds = 19330.61 MB/sec
Timing buffered disk reads: 1584 MB in  3.00 seconds = 527.92 MB/sec

RAM

Intel limits these Broadwell U class CPUs to 16GB RAM total, so maxing the box out is only going to set you back around $70. Still, that's a significant percentage of the ~$350 total cost, and you may not need that much RAM for what you have in mind.

However, do be careful that you get dual-channel RAM for lower RAM configurations; you don't want a single 4GB DIMM, you want two 2GB DIMMs. They ship from the vendor with a single DIMM, so beware. It may not matter depending on the task, as noted by AnandTech, but our boxes will be used for OpenSSL, and memory is cheap, so why not?

The Versatile Scooter

When I began looking at this, I was shocked to discover just how low-end the x86 CPUs are in a lot of "dedicated" devices, such as the official pfSense hardware:

Sure, 2.4 Ghz and 8 cores on that C2758 sounds reasonable – until you realize those are old Intel Bay Trail Atom cores. Even the current Cherry Trail Atom cores aren't so hot. Furthermore, those are probably the maximum "turbo" frequencies being quoted, which are unlikely to be sustained under any kind of real multi-core load. Also, did I mention this is being sold as a $1,400 device? Except for the lack of more than 2 dedicated gigabit ethernet ports, I'd put our scooter computer up against that C2758 any day of the week. And you know what? It'd win.

I think this logic applies to a lot of dedicated hardware these days — routers, switches, firewalls, and so on. You're often better off building up a modern high power, low TDP x86 box and slapping a regular Linux distro on there.

You can even kinda-sorta fit six of them in a 1U rack space.

(Well, except for the power bricks and cables. Vertical mounting on a 1U shelf works out a bit better, and each conveniently came with a stand for vertical operation.)

Now that I've worked with these boxes, I've become rather enamored of the Scooter Computer concept. Wherever we were thinking that we had to run either:

  • A virtual machine on big iron for some small but important utility function in our rack.

  • Dedicated, purpose built hardware for networking, firewall, or switching with a custom OS.

… we can now take advantage of cheap, reliable, flexible, totally solid state commodity x86 hardware that's spread across many machines and running standard Linux distributions, like all the rest of our 1U servers.

Categories: Programming

Marshmallow and User Data

Android Developers Blog - Wed, 02/03/2016 - 08:24

Posted by Joanna Smith, Developer Advocate and Giles Hogben, Google Privacy Team

Marshmallow introduced several changes that were designed to help your app look after user data. The goal was to make it easier for developers to do the right thing. So as Android 6.0, Marshmallow, gains traction, we challenge you to do just that.

This post highlights the key considerations for user trust when it comes to runtime permissions and hardware identifiers, and points you to new best practices documentation to clarify what to aim for in your own app.

Permission Changes

With Marshmallow, permissions have moved from install-time to runtime. This is a mandatory change for SDK 23+, meaning it will affect all developers and all applications targeting Android 6.0. Your app will need to be updated anyway, so your challenge is to do so thoughtfully.

Runtime permissions mean that your app can now request access to sensitive information in the context that it will be used. This gives you a chance to explain the need for the permission, without scaring users with a long list of requests.

Permissions are also now organized into groups, so that users can make an informed decision without needing to understand technical jargon. By allowing your users to make a decision, they may decide not to grant a permission or to revoke a previously-granted permission. So, your app needs to be thoughtful when handling API calls requiring permissions that may have been denied, and about building in graceful failure-handling so that your users can still interact with the rest of your app.
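
As a rough sketch of what that looks like in practice with the v23 support library (the activity, the helper methods and the choice of the contacts permission are merely illustrative placeholders, not prescribed by the platform):

import android.Manifest;
import android.content.pm.PackageManager;
import android.support.v4.app.ActivityCompat;
import android.support.v4.content.ContextCompat;
import android.support.v7.app.AppCompatActivity;

public class ContactsActivity extends AppCompatActivity {

    private static final int REQUEST_CONTACTS = 42;

    private void loadContactsIfAllowed() {
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.READ_CONTACTS)
                == PackageManager.PERMISSION_GRANTED) {
            showContacts();
        } else {
            // This is the moment to explain, in context, why the permission is needed.
            ActivityCompat.requestPermissions(
                    this, new String[]{Manifest.permission.READ_CONTACTS}, REQUEST_CONTACTS);
        }
    }

    @Override
    public void onRequestPermissionsResult(int requestCode, String[] permissions,
                                           int[] grantResults) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults);
        if (requestCode == REQUEST_CONTACTS) {
            if (grantResults.length > 0
                    && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
                showContacts();
            } else {
                // Degrade gracefully: the rest of the app stays usable without contacts.
                showContactsUnavailableMessage();
            }
        }
    }

    // Hypothetical helpers standing in for real feature code.
    private void showContacts() { /* ... */ }
    private void showContactsUnavailableMessage() { /* ... */ }
}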

Identifier Changes

The other aspect of user trust is doing the right thing with user data. With Marshmallow, we are turning off access to some kinds of data in order to direct developers down this path.

Most notably, Local WiFi and Bluetooth MAC addresses are no longer available. The getMacAddress() method of a WifiInfo object and the BluetoothAdapter.getDefaultAdapter().getAddress() method will both return 02:00:00:00:00:00 from now on.

However, Google Play Services now provides Instance IDs, which identify an application instance running on a device. Instance IDs provide a reliable alternative to non-resettable, device-scoped hardware IDs, as they will not persist across a factory reset and are scoped to an app instance. See the Google Developer's What is Instance ID? help article for more information.
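
Reading that identifier takes very little code; a hedged sketch follows (the wrapper class is just an illustration, while InstanceID itself comes from the com.google.android.gms.iid package in Google Play Services):

import android.content.Context;
import com.google.android.gms.iid.InstanceID;

public final class AppInstanceId {

    private AppInstanceId() {
    }

    // Returns an identifier scoped to this app installation; unlike a hardware
    // MAC address it does not survive a factory reset or a reinstall of the app.
    public static String get(Context context) {
        return InstanceID.getInstance(context).getId();
    }
}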

What’s Next

User trust depends largely on what users see and how they feel. Mishandling permissions and identifiers increases the risk of unwanted/unintended tracking, and can result in users feeling that your app doesn’t actually care about the user. So to help you get it right, we’ve created new documentation that should enable developers to be certain that their app is doing the right thing for their users.

So happy developing! May your apps make users happy, and may your reviews reflect that. :)

Categories: Programming