Software Development Blogs: Programming, Software Testing, Agile, Project Management

Programming

App Monetization Insights: How Hydro Coach rapidly reached 22 new markets

Google Code Blog - Tue, 02/09/2016 - 20:30
We created The No-nonsense Guide to App Monetization to help you answer a burning question: “What’s the best way to monetize my app?” Our new 5-part blog series provides additional tips straight from successful app developers.

This week, meet Christoph Pferschy, the app developer and designer behind Hydro Coach, a drink reminder and water intake tracker that’s been growing in popularity, now receiving over 5,500 downloads a day. Check out these tips from Christoph.

1. Start small to get big
Christoph is best known for his app Hydro Coach, but his company, Codium App Ideas, has a bigger vision. In his words, they’re on a mission “to create useful and high-quality health-related apps that are combinable to a single health & fitness system.”

But right now, Christoph is focusing on just one app – Hydro Coach. Why?

Because he believes the best way to accomplish a big goal is to focus on a critical piece and nail it. He chose Hydro Coach because he’s personally benefited from drinking more water and he saw that there was less competition in the space.

Now that his business is growing, he’s begun moving forward with his larger plan, connecting with other services and platforms. For example, he explains, “we’ve connected Hydro Coach to Google Fit to synchronize users’ weight and we’re also very proud to be featured as a partner app with Samsung’s S Health, syncing all drink inputs with their platform.”

Consider Christoph’s approach to building a company: start small to get big. Focusing on too much, too early, might not allow you to build an app experience that users love.

2. Assess users’ needs before choosing a business model
Settle on a business model for your app before it launches. Spend time assessing your users’ needs and your business goals to get there. Start by asking yourself guiding questions that will help narrow down the decision, like “Who is your audience?”, “What value does your app provide?” and “How do you intend to promote your app?”. Learn more about these guiding questions and business models in our No-nonsense Guide to App Monetization.

Once you do choose your business model, consider continually improving your app’s user experience, especially when it affects monetization. Christoph is meticulous about providing the best experience for all of his users, explaining,

“We try to improve each user experience segment with small and careful changes over time. For example, we’ve fine-tuned the algorithm that determines when, how and how often ads are displayed, thanks to Manuela (our AdMob consultant). We also invest in small things that mean a lot to our users, like being sure to say ‘Thank you’ after a user makes an in-app purchase and making the flow after a purchase as smooth as possible. It all adds up.”

As you look through your app, be methodical when choosing your overarching monetization strategy. Once you’ve chosen your model, focus on tweaking the user experience and providing the best possible monetization flow.

3. Consider having your users help with localization
Christoph has done an amazing job getting Hydro Coach fully translated into 22 languages. His secret weapon? His users.

As he explains, “It’s no secret that translating the app is the first and most important step. So here’s my tip: ask your users to help translate. It results in a high-quality translation because people who are already using your app have the needed context and interest. There are several services for this that you can use, but you’ll be surprised how many people love to help.”

If you found these tips helpful, don’t forget to check out The No-nonsense Guide to App Monetization. Also, stay connected on all things AdMob by following our Twitter and Google+ pages.

Posted by Joe Salisbury, Product Specialist, AdMob

Categories: Programming

Automated UI Testing with React Native on iOS

Xebia Blog - Mon, 02/08/2016 - 21:30

React Native is a technology to develop mobile apps on iOS and Android that have a near-native feel, all from one codebase. It is a very promising technology, but the documentation on testing could use some more depth. There are some pointers in the docs, but they leave you wanting more. In this blog post I will show you how to use XCUITest to record and run automated UI tests on iOS.

Start by generating a brand new React Native project and make sure it runs fine:
react-native init XCUITest && cd XCUITest && react-native run-ios
You should now see the default "Welcome to React Native!" screen in your simulator.

Let's add a textfield and display the results on screen by editing index.ios.js:

// The generated index.ios.js already provides the imports and the 'styles'
// StyleSheet; in current React Native the imports look like this:
import React, { Component } from 'react';
import { AppRegistry, StyleSheet, Text, TextInput, View } from 'react-native';

class XCUITest extends Component {

  constructor(props) {
    super(props);
    this.state = { text: '' };
  }

  render() {
    return (
      <View style={styles.container}>
        <TextInput
          testID="test-id-textfield"
          style={{borderWidth: 1, height: 30, margin: 10}}
          onChangeText={(text) => this.setState({text})}
          value={this.state.text}
        />
        <View testID="test-id-textfield-result" >
          <Text style={{fontSize: 20}}>You typed: {this.state.text}</Text>
        </View>
      </View>
    );
  }
}

Notice that I added testID="test-id-textfield" and testID="test-id-textfield-result" to the TextInput and the View. This causes React Native to set an accessibilityIdentifier on the native view. This is something we can use to find the elements in our UI test.

Recording the test

Open the Xcode project in the ios folder and click File > New > Target. Then pick iOS > Test > iOS UI Testing Bundle. The defaults are OK; click Finish. Now there should be an XCUITestUITests folder containing an XCUITestUITests.swift file.

Let's open XCUITestUITests.swift and place the cursor inside the testExample method. At the bottom left of the editor there is a small red button. If you press it, the app will build and start in the simulator.

Every interaction you now have with the app will be recorded and added to the testExample method, just like in the looping gif at the bottom of this post. Now type "123" and tap on the text that says "You typed: 123". End the recording by clicking on the red dot again.

Something like this should have appeared in your editor:

      let app = XCUIApplication()
      app.textFields["test-id-textfield"].tap()
      app.textFields["test-id-textfield"].typeText("123")
      app.staticTexts["You typed: 123"].tap()

Notice that you can pull down the selectors to change them. Change the "You typed" selector to make it more specific, change the .tap() into .exists and then surround it with XCTAssert to do an actual assert:

      XCTAssert(app.otherElements["test-id-textfield-result"].staticTexts["You typed: 123"].exists)

Now if you run the test it will show you a nice green checkmark in the margin and say "Test Succeeded".

In this short blog post I showed you how to use the React Native testID attribute to tag elements and how to record and adapt an XCUITest in Xcode. There is a lot more to be told about React Native, so don't forget to follow me on Twitter (@wietsevenema).

Recording UI Tests in Xcode

Pebble Steel Review: 4 Months of a Vibrating Arm

Making the Complex Simple - John Sonmez - Mon, 02/08/2016 - 14:00

I’m seeing smartwatches of various kinds on different people more frequently now. In fact, current market statistics suggest that you can see a smartwatch on 4-6 million people. I’m having a hard time finding direct statistics that I can rely on accurately, but suffice it to say—smartwatches have a strong foothold in our economy. I […]

The post Pebble Steel Review: 4 Months of a Vibrating Arm appeared first on Simple Programmer.

Categories: Programming

Making Agile even more Awesome. By Nature.

Xebia Blog - Mon, 02/08/2016 - 11:31

Watching the evening news, it should be no surprise that the world around us is changing ever faster and becoming too complex to fit in a system we as humankind can still control. We have to learn and adapt much faster to solve our epic challenges. The Agile mindset and methodologies are an important mainstay here. Adding some principles from nature makes it even more awesome.

In organizations, in our lives, we are in a constant battle to “beat the system”: steering the economy, nature, life. We’re fighting against it, and becoming less and less successful at it. What should change here?

First, we could start to let go of the things we can’t control and fully trust the system we live in: nature. It’s the ultimate Agile system, continuously learning and adapting to changing environments. But how?

We have created planes and boats by observing how nature does it: biomimetics. In my job as an Agile Innovation consultant, I use these and other related principles:

  1. Innovation engages in lots of experimentation: life creates success models through making mistakes, survival of the fittest.
  2. Continuously improve by feedback loops.
  3. Use only the energy you need. Work smart and effective.
  4. Fit form to function. Function takes priority over esthetics.
  5. Recycle: Resources are limited, (re)use them smart.
  6. Encourage cooperation.
  7. Positivity is an important source of energy, like sunlight can be for nature.
  8. Aim for diversity. For example, diverse problem solvers working together can outperform groups of high-ability problem solvers.
  9. Demand local expertise, to stay aware of local differences.
  10. Create a safe environment to experiment. Like Facebook is able to release functionality every hour for a small group of users.
  11. Outperform frequently to gain endurance and to stay fit.
  12. Reduce complexity by minimizing the number of materials and tools. For example, 96% of life on this planet is made up of six types of atoms: carbon, hydrogen, oxygen, nitrogen, phosphorus and sulphur.
How to kickstart your start-up?

Until a couple of years ago, innovative tools were only available to financially powerful companies. Now, innovative tools like 3D printing and the Internet of Things are accessible to everybody. The same applies to Agile. This enables you to enter new markets at extremely low marginal costs. In these start-ups you can recognize elements of natural agility. A brilliant example is Joe Justice’s WikiSpeed: in less than 3 months he succeeded in building a 100-mile-per-gallon, street-legal car, defeating companies like Tesla. This all shows you can solve apparently impossible challenges by trusting your natural common sense. It's that simple.

Paul Takken (Xebia) and Joe Justice (Scrum Inc.) are currently working together on several global initiatives, coaching governments and large enterprises in reinventing themselves so they can anticipate today's epic challenges. This is done by a smarter use of people’s talents, tooling and materials, and the Agile and Lean principles mentioned above.

Android Studio 2.0 - Beta

Android Developers Blog - Fri, 02/05/2016 - 23:40

Posted by Jamal Eason, Product Manager, Android

Android Studio 2.0 is the latest release of the official Android IDE, focused on build performance and emulator speed to improve the app development experience. With brand new features like Instant Run, which enables you to quickly edit and view code changes, and the new, faster Android emulator, Android Studio 2.0 is an upgrade you do not want to miss. In preparation for the final release, you can download Android Studio 2.0 Beta from the Beta release channel. Overall, the Android Studio 2.0 release has a host of new features, which include:

  • *Updated for Beta* Instant Run - Enables a faster code edit & app deployment cycle.
  • *Updated for Beta* Android Emulator - Brand new emulator that is faster than most real devices, and includes a brand new user interface.
  • *Updated for Beta* Google App Indexing Integration & Testing - Adding App Indexing into your app helps you re-engage your users. In the first preview of Android Studio 2.0 you could add indexing code stubs into your code. With the beta release you can now test and validate your URL links in your app all within the IDE.
  • Fast ADB - Installing and pushing files is now up to 5x faster using Android Studio 2.0 with an updated Android Debug Bridge (ADB) offered in platform-tools 23.1.0.
  • GPU Profiler Preview - For graphics intensive applications, you can now visually step through your OpenGL ES code to optimize your app or game.
  • Integration of IntelliJ 15 - Android Studio is based on the efficient coding platform of IntelliJ. Check out the new features from IntelliJ here.

Check out the latest installment of the Android Studio Tool Time video series below to watch the highlights of these features.



New Features in Android Studio 2.0 Beta
Instant Run

We first previewed Instant Run in November; this latest beta release introduces a new capability called Cold Swap.

Instant Run in Android Studio 2.0 allows you to quickly make changes to your app code while your app is running on an Android device or the Android Emulator. Instead of waiting for your entire app to rebuild and redeploy after each code change, Android Studio 2.0 will try to build and push only the incremental code or resource change. Depending on the code changes you make, you can see the results of your change in under a second. By simply updating your app to use the latest Gradle plugin ('com.android.tools.build:gradle:2.0.0-beta2'), you can take advantage of this time-saving feature with no other modifications to your code. If your project is set up correctly with Instant Run, you will see a lightning bolt next to your Run button on the toolbar:


Instant Run Button

Behind the scenes, Android Studio 2.0 instruments your code during the first compilation and deployment of your app to your device in order to determine where to swap out code and resources. The Instant Run feature updates your app on a best-effort basis and automatically uses one of the following swap methods:

  • Hot Swap - When only method implementations (including constructors) are changed, the changes are hot swapped. Your application keeps running and the new implementation is used the next time the method is called.
  • Warm Swap - When app resources are changed, the changes are warm swapped. This is similar to a hot swap, except that the current Activity is restarted. You will notice a slight flicker on the screen as the Activity restarts.
  • *New for Beta* Cold Swap - This will quickly restart the whole application. It is typically used for structural code changes, including changes to the class hierarchy, method signatures, static initializers, or fields. Cold Swap is available when you deploy to targets with API level 21 or above.

We made major changes to Instant Run since the first preview of Android Studio 2.0, and now the feature works with more code and resources cases. We will continue to add more code change cases to Instant Run in future releases of Android Studio. If you have any suggestions, please feel free to send us a feature request and learn more about Instant Run here.

App Indexing

Supporting app indexing is now even easier with Android Studio 2.0. App Indexing puts your app in front of users who use Google Search. It works by indexing the URL patterns you provide in your app manifest and using API calls from your app to make content within your app available to both existing and new users. Specifically, when you support URLs for your app content, your users can go directly to those links from Google Search results on their device.

  • Code Generation - Introduced in Android Studio 2.0 Preview, you can right-click on AndroidManifest.xml or an Activity method (or go to Code → Generate → App Indexing API Code) to insert HTTP URL stub code into your manifest and app code.

  • *New for Beta* URL Testing & Validation - New in Android Studio 2.0 Beta, you can now validate and check the results of your URLs with the built-in validation tool (Tools → Android → Google App Indexing Test). To learn more about app indexing, click here.

Insert App Indexing API Code into your app

App Indexing Testing


App Indexing Test Results

Android Emulator

*Updated for Beta* The new and faster Android emulator also includes fixes and small enhancements for this beta release. Notably, we updated the rotation controls on the emulator toolbar and added multi-touch support to help test apps that use pinch & zoom gestures. To use the multi-touch feature, hold down the Alt key on your keyboard and right-click your mouse to center the point of reference or click & drag the left mouse button to zoom.


Pinch & Zoom Gesture with Multi-Touch

What's Next

Android Studio 2.0 is a big release, and now is a good time to check out the beta release and incorporate the new features into your workflow. The beta release is near stable-release quality and should be relatively bug free. But as with any beta release, bugs may still exist, so if you do find an issue, let us know so we can work to fix it. If you’re already using Android Studio, you can check for updates on the Beta channel from the navigation menu (Help → Check for Update [Windows/Linux], Android Studio → Check for Updates [OS X]). When you update to the beta, you will get access to the new version of Android Studio and the Android Emulator.

Connect with us, the Android Studio development team, on Google+.

Categories: Programming

Project Tango workshops help bring indoor location apps to life

Android Developers Blog - Thu, 02/04/2016 - 18:21

Posted by Eitan Marder-Eppstein, Developer Engineering Lead, Project Tango

GPS helps us find our way outside, whether it is turn-by-turn navigation to the nearest grocery store or just getting oriented in a new city. But once we get indoors, it is not quite as easy: GPS doesn't work, accuracy drops and navigation becomes all but impossible. This is one of the reasons why we started Project Tango, which provides centimeter-scale accuracy of a device’s location, allowing better navigation and experiences in indoor spaces.

Over the past few weeks, we’ve been collecting amazing ideas from around the world for great apps for Lenovo’s Project Tango-powered phone. (Have an idea? If you can dream it, you can submit it!) As part of this program we're hosting workshops, focused on specific Tango features. And we just wrapped up a session that we hosted with Westfield Labs devoted to indoor location. Here are some of the highlights:



As you can see, everyone from retail brands to robot startups joined in on the fun—using Project Tango's motion tracking, depth perception, and area learning capabilities to build some amazing location-based apps. Some of our favorites included:

  • Wayfair made it possible to look through your phone and visualize how a piece of furniture would look in your home.
  • Lowe’s Innovation Labs improved in-store navigation by overlaying directions to individual items.
  • And Aisle411 created a shop-along experience with some of your favorite celebrities.

The next stop in our series is a utilities workshop, where we'll be going deep on getting things done with Project Tango—like taking 3D measurements, or mapping your home or building. In the meantime, keep submitting your ideas to the App Incubator (the deadline is February 15!), and we'll see you soon!

Categories: Programming

Listen To Music While Working: Is It A Good Idea?

Making the Complex Simple - John Sonmez - Thu, 02/04/2016 - 17:00

Today I received a question from a reader asking me about listening to music while working. But… Is it a good idea? Or is it a bad idea? Does it affect your productivity? In this video, I talk about different types of music and how each one of them interacts with your attention and the […]

The post Listen To Music While Working: Is It A Good Idea? appeared first on Simple Programmer.

Categories: Programming

Tooling in the .net World

Actively Lazy - Wed, 02/03/2016 - 21:51

As a refugee Java developer first washing up on the shores of .net land, I was pretty surprised at the poor state of the tools and libraries that seemed to be in common use. I couldn’t believe this shanty town was the state of the art; but with a spring in my step I marched inland. A few years down the road, I can honestly say it really isn’t that great – there are some high points, but there are a lot of low points. I hadn’t realised at the time but as a Java developer I’d been incredibly spoiled by a vivacious open source community producing loads of great software. In the .net world there seem to be two ways of solving any problem:

  1. The Microsoft way
  2. The other way*

*Note: may not work, probably costs money.

IDE

The obvious place to start is the IDE. I’m sorry, but Visual Studio just isn’t good enough. Compared to Eclipse and IntelliJ for day-to-day development, Visual Studio is a poor imitation. Worse, until recently it was expensive for a lone developer. While there were community editions of Visual Studio, these were even more pared down than the unusable professional edition. Frankly, if you can’t run ReSharper, it doesn’t count as an IDE.

I hear there are actually people out there who use Visual Studio without ReSharper. What are these people? Sadists? What do they do? I hope it isn’t writing software. Perhaps this explains the poor state of so much software; with tools this bad – is it any wonder?

But finally Microsoft have seen sense and recently the free version of Visual Studio supports plugins – which means you can use ReSharper. You still have to pay for it, but with their recent licensing changes I don’t feel quite so bad handing over a few quid a month renting ReSharper before I commit to buying an outright license. Obviously the situation for companies is different – it’s much easier for a company to justify spending the money on Visual Studio licenses and ReSharper licenses.

With ReSharper, Visual Studio is at least closer to Eclipse or IntelliJ. It still falls short, but there is clearly only so much lipstick that JetBrains can put on that particular pig. I do however have great hope for Project Rider, basically IntelliJ for .net.

RESTful Services

A few years back another Java refugee and I started trying to write a RESTful, CQRS-style web service in C#. We’d done the same in a previous company on the Java stack and expected our choices to be similarly varied. But instead of a vast plethora of approaches from basic HTTP listeners to servlet containers to full blown app servers we narrowed the field down to two choices:

  1. Microsoft’s WCF
  2. ServiceStack

My fellow developer played with WCF and decided he couldn’t make it easily fit the RESTful, CQRS style he had in mind. After playing with ServiceStack we found it could. But then begins a long, tortuous process of finding all the things ServiceStack hadn’t quite got right; all the things we didn’t quite agree with. Now, this is not entirely uncommon in any technology selection process. But we’d already quickly narrowed the field to two. We were now committed to a technology that at every turn was causing us new, unanticipated problems (most annoyingly, problems that other dev teams weren’t having with vanilla WCF services!)

We joked (not really joking) that it would be simpler to write our own service layer on top of the basic HTTP support in .net. In hindsight, it probably would have been. But really, we’d paid the price for having the temerity to step off the One True Microsoft Way.

Worse, what had started as an open source project went paid-for during the development of our service – which meant that our brand new web service was immediately legacy, as the service layer it was built on was no longer officially supported.

Dependency Management

The thing I found most surprising on arrival in .net land was that people were checking all their third party binaries into version control like it was the 1990s! Java developers might complain that Maven is nothing more than a DSL for downloading the internet. But you know what, it’s bloody good at downloading the internet. Instead, I had the internet checked in to version control.

However, salvation was at hand with NuGet. Except NuGet really is a bit crap. NuGet doesn’t so much manage my dependencies as break them. All the damned time. Should we restrict a library (say log4net) to one version across the solution? Nah, let’s have a few variations. Oh, but NuGet, now I get random runtime errors because of method signature mismatches. But it doesn’t make the build fail? Brilliant, thank you, NuGet.

So at my current place we’ve written a pre-build script to check that NuGet hasn’t screwed the dependencies up. This pre-build check fails more often than I would like to believe.
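The script itself isn't shown here, but a minimal sketch of such a consistency check might look like the following Python (the recursive glob and the exit-code convention are my assumptions; classic NuGet pins each project's packages in a packages.config file):

#!/usr/bin/env python3
# Sketch of a pre-build check: fail the build when a NuGet package is pinned
# at more than one version across the solution's packages.config files.
import glob
import sys
import xml.etree.ElementTree as ET
from collections import defaultdict

versions = defaultdict(set)  # package id -> set of pinned versions

for config in glob.glob('**/packages.config', recursive=True):
    for package in ET.parse(config).getroot().iter('package'):
        versions[package.get('id')].add(package.get('version'))

conflicts = {pkg: vers for pkg, vers in versions.items() if len(vers) > 1}
for pkg, vers in sorted(conflicts.items()):
    print('CONFLICT: %s pinned at %s' % (pkg, ', '.join(sorted(vers))))

sys.exit(1 if conflicts else 0)  # a non-zero exit code fails the build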

So if managing a coherent set of dependencies isn’t the job of the dependency tool, what does it do? It downloads one file at a time from the internet. Well done. I think wget can do that, too. It’s a damned sight faster than the NuGet PowerShell console, too. NuGet: it might break your builds, but at least it’s slow.

The Microsoft Way

And then I find things that blow me away. At my current place we’ve written some add-ins to Excel so that people who live and die by Excel can interact with our software. This is pretty cool: adding buttons and menus and ribbons into Excel, all integrated to my back-end services.

In my life as a Java developer I can never even imagine attempting this. The hoops to jump through would have been far too numerous. But in Visual Studio? Create a new, specific type of solution, hit F5, Excel opens. Set a breakpoint, Excel stops. Oh my God – this is an integrated development environment.

Obviously our data is all stored in Microsoft SQL Server. This also has brilliant integration with Visual Studio. For example, we experimented with creating a .net assembly to read some of the binary data we’re storing in the DB. This way we could run efficient queries on complex data types directly in the DB. The dev process for this? Similarly awesome: deploy to the DB and run directly from within Visual Studio. Holy integrated dev cycle, Batman!

When there is a Microsoft way, this is why it is so compelling. Whatever they do will be brilliantly integrated with everything else they do. It might not have the flexibility you want, it might not have all the features you want. But it will be spectacularly well integrated with everything else you’re already using.

Why?

Why does it have to be this way? C# is a really awesome language, well designed and with a lot of expressive power. But the open source ecosystem around it is barren. If the Microsoft Way doesn’t fit the bill, you are often completely stuck.

I think it comes down to the history of Visual Studio being so expensive. Even as a C# developer by day, I am not spending the thick end of £1,000 to get a matching dev environment at home, so I can play. Even if I had a solid idea for an open source project to start, I’m not going to invest a thousand quid just to see.

But finally Microsoft seem to be catching on to the open source thing. Free versions of Visual Studio and projects like CoreCLR can only help. But the Java ecosystem has a decade head start and this ultimately creates a network effect: it’s hard to write good open source software for .net because there’s so little good open source tooling for .net.


Categories: Programming, Testing & QA

Robot Framework and the keyword-driven approach to test automation - Part 2 of 3

Xebia Blog - Wed, 02/03/2016 - 18:03

In part 1 of our three-part post on the keyword-driven approach, we looked at the position of this approach within the history of test automation frameworks. We elaborated on the differences, similarities and interdependencies between the various types of test automation frameworks. This provided a first impression of the nature and advantages of the keyword-driven approach to test automation.

In this post, we will zoom in on the concept of a 'keyword'.

What are keywords? What is their purpose? And what are the advantages of utilizing keywords in your test automation projects? And are there any disadvantages or risks involved?

As stated in an earlier post, the purpose of this first series of introductory-level posts is to prevent all kinds of intrusive expositions in later posts. These later posts will be of a much more practical, hands-on nature and should be concerned solely with technical solutions, details and instructions. However, for those that are taking their first steps in the field of functional test automation and/or are inexperienced in the area of keyword-driven test automation frameworks, we would like to provide some conceptual and methodological context. By doing so, those readers may grasp the follow-up posts more easily.

Keywords in a nutshell

A keyword is a reusable test function

The term ‘keyword’ refers to a callable, reusable, lower-level test function that performs a specific, delimited and recognizable task. For example: ‘Open browser’, ‘Go to url’, ‘Input text’, ‘Click button’, ‘Log in’, 'Search product', ‘Get search results’, ‘Register new customer’.

Most, if not all, of these are recognizable not only for developers and testers, but also for non-technical business stakeholders.

Keywords implement automation layers with varying levels of abstraction

As can be gathered from the examples given above, some keywords are more atomic and specific (or 'specialistic') than others. For instance, ‘Input text’ will merely enter a string into an edit field, while ‘Search product’ will be comprised of a chain (sequence) of such atomic actions (steps), involving multiple operations on various types of controls (assuming GUI-level automation).

Elementary keywords, such as 'Click button' and 'Input text', represent the lowest level of reusable test functions: the technical workflow level. These often do not have to be created, but are provided by existing, external keyword libraries (such as Selenium WebDriver) that can be made available to a framework. A situation that could require the creation of such atomic, lowest-level keywords would be automating at the API level.

The atomic keywords are then reused within the framework to implement composite, functionally richer keywords, such as 'Register new customer', 'Add customer to loyalty program', 'Search product', 'Add product to cart', 'Send gift certificate' or 'Create invoice'. Such keywords represent the domain-specific workflow activity level. They may in turn be reused to form other workflow activity level keywords that automate broader chains of workflow steps. Such keywords then form an extra layer of wrappers within the layer of workflow activity level keywords. For instance, 'Place an order' may be comprised of 'Log customer in', 'Search product', 'Add product to cart', 'Confirm order', etc. The modularization granularity applied to the automation of such broader workflow chains is determined by trading off various factors against each other - mainly the desired levels of readability (of the test design), of maintainability/reusability and of coverage of possible alternative functional flows through the involved business process. The eventual set of workflow activity level keywords forms the 'core' DSL (Domain Specific Language) vocabulary in which the highest-level specifications/examples/scenarios/test designs/etc. are to be written.

The latter (i.e. scenarios/etc.) represent the business rule level. For example, a high-level scenario might be:  'Given a customer has joined a loyalty program, when the customer places an order of $75,- or higher, then a $5,- digital gift certificate will be sent to the customer's email address'. Such rules may of course be comprised of multiple 'given', 'when' and/or 'then' clauses, e.g. multiple 'then' clauses conjoined through an 'and' or 'or'. Each of these clauses within a test case (scenario/example/etc.) is a call to a workflow activity level, composite keyword. As explicated, the workflow-level keywords, in turn, are calling elementary, technical workflow level keywords that implement the lowest-level, technical steps of the business scenario. The technical workflow level keywords will not appear directly in the high-level test design or specifications, but will only be called by keywords at the workflow activity level. They are not part of the DSL.

Keywords thus live in layers with varying levels of abstraction, where, typically, each layer reuses (and is implemented through) the more specialistic, concrete keywords from lower levels. Lower level keywords are the building blocks of higher level keywords and at the highest-level your test cases will also be consisting of keyword calls.

Of course, your automation solution will typically contain other types of abstraction layers, for instance a so-called 'object-map' (or 'gui-map') which maps technical identifiers (such as an xpath expression) onto logical names, thereby enhancing maintainability and readability of your locators. Of course, the latter example once again assumes GUI-level automation.
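To make this concrete: the Robot Framework, for example, accepts plain Python modules as variable files, so an object map can be as simple as the sketch below (all logical names and locators here are hypothetical):

# locators.py - a hypothetical 'object map', kept as a Robot Framework variable file.
# Keywords refer to the logical names on the left; when the UI changes,
# only the locators on the right need to be updated.
LOGIN_USERNAME_FIELD = 'id=username'
LOGIN_PASSWORD_FIELD = 'id=password'
LOGIN_BUTTON = 'xpath=//form[@id="login"]//button[@type="submit"]'
SEARCH_RESULT_ITEM = 'css=ul.search-results > li'

A keyword can then state | Input Text | ${LOGIN_USERNAME_FIELD} | Bob Plissken | instead of repeating the xpath expression everywhere.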

Keywords are wrappers

Each keyword is a function that automates a simple or (more) composite/complex test action or step. As such, keywords are the 'building blocks' for your automated test designs. When having to add a customer as part of your test cases, you will not write out (hard code) the technical steps (such as entering the first name, entering the surname, etc.), but you will have one statement that calls the generic 'Add a customer' function which contains or 'wraps' these steps. This wrapped code, as a whole, thereby offers a dedicated piece of functionality to the testers.

Consequently, a keyword may encapsulate sizeable and/or complex logic, hiding it and rendering it reusable and maintainable. This mechanism of keyword-wrapping entails modularization, abstraction and, thus, optimal reusability and maintainability. In other words, code duplication is prevented, which dramatically reduces the effort involved in creating and maintaining automation code.

Additionally, the readability of the test design will be improved upon, since the clutter of technical steps is replaced by a human readable, parameterized call to the function, e.g.: | Log customer in | Bob Plissken | Welcome123 |. Using so-called embedded or interposed arguments, readability may be enhanced even further. For instance, declaring the login function as 'Log ${userName} in with password ${password}' will allow for a test scenario to call the function like this: 'Log Bob Plissken in with password Welcome123'.

Keywords are structured

As mentioned in the previous section, keywords may hide rather complex and sizeable logic. This is because the wrapped keyword sequences may be embedded in control/flow logic and may feature other programmatic constructs. For instance, a keyword may contain:

  • FOR loops
  • Conditionals (‘if, elseIf, 
, else’ branching constructs)
  • Variable assignments
  • Regular expressions
  • Etc.

Of course, keywords will feature such constructs more often than not, since encapsulating the involved complexity is one of the main purposes for a keyword. In the second and third generation of automation frameworks, this complexity was an integral part of the test cases, leading to automation solutions that were inefficient to create, hard to read & understand and even harder to maintain.

Being a reusable, structured function, a keyword can also be made generic, by taking arguments (as briefly touched upon in the previous section). For example, ‘Log in’ takes arguments: ${user}, ${pwd} and perhaps ${language}. This adds to the already high levels of reusability of a keyword, since multiple input conditions can be tested through the same function. As a matter of fact, it is precisely this aspect of a keyword that enables so-called data-driven test designs.

Finally, a keyword may also have return values, e.g. ‘Get search results’ returns ${nrOfItems}. The return value can be used for a myriad of purposes, for instance to perform assertions, as input for decision-making, or to pass into another function as an argument. Some keywords will return nothing, but only perform an action (e.g. change the application state, insert a database record or create a customer).
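To illustrate the above: since the Robot Framework (the platform this series focuses on) is implemented in Python, a keyword library is, at its simplest, just a Python class. The sketch below is kept self-contained for the sake of the example - the 'shop' it drives is a stand-in dictionary, and all names and test data are hypothetical; a real library would delegate to lower-level keywords (e.g. Selenium WebDriver calls) instead:

# ShopKeywords.py - a minimal sketch of a keyword library in Python.
# Method names become keywords ('get_search_results' -> 'Get Search Results').
from robot.api.deco import keyword

class ShopKeywords:

    def __init__(self):
        self._users = {'Bob Plissken': 'Welcome123'}  # stand-in for the SUT
        self._catalogue = ['espresso beans', 'espresso machine', 'milk frother']
        self._current_user = None

    # Embedded arguments: callable as 'Log Bob Plissken in with password Welcome123'.
    @keyword('Log ${username} in with password ${password}')
    def log_in(self, username, password):
        if self._users.get(username) != password:
            raise AssertionError('Login failed for %s' % username)
        self._current_user = username

    # A keyword with a regular argument and a return value.
    def get_search_results(self, term):
        return [product for product in self._catalogue if term in product]

A suite imports this with 'Library    ShopKeywords.py', after which a scenario can state 'Log Bob Plissken in with password Welcome123' and, for instance, assert on the number of items returned by 'Get Search Results'.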

Risks involved

With great power comes great responsibility

The benefits of using keywords have been explicated above. Amongst other advantages, such as enhanced readability and maintainability, the keyword-driven approach provides a lot of power and flexibility to the test automation engineer. Quasi-paradoxically, it is in harnessing this power and flexibility that the primary risk of the keyword-driven approach is introduced. Why this risk should be of topical interest to us becomes clear from a short digression into the subject of 'the new testing'.

In many agile teams, both 'coders' and 'non-coders' are expected to contribute to the automation code base. The boundaries between these (and other) roles are blurring. Despite the current (and sometimes rather bitter) polemic surrounding this topic, it seems to be inevitable that the traditional developer role will have to move towards testing (code) and the traditional tester role will have to move towards coding (tests). Both will use testing frameworks and tools, whether it be unit testing frameworks (such as JUnit), keyword-driven functional test automation frameworks (such as RF or Cucumber) and/or non-functional testing frameworks (such as Gatling or Zed Attack Proxy).

To this end, the traditional developer will have to become knowledgeable and gain experience in the field of testing strategies. Test automation that is not based on a sound testing strategy (and attuned to the relevant business and technical risks), will only result in a faster and more frequent execution of ineffective test designs and will thus provide nothing but a false sense of security. The traditional developer must therefore make the transition from the typical tool-centric approach to a strategy-centric approach. Of course, since everyone needs to break out of the silo mentality, both developer and tester should also collaborate on making these tests meaningful, relevant and effective.

The challenge for the traditional tester may prove to be even greater and it is there that the aforementioned risks are introduced. As stated, the tester will have to contribute test automation code. Not only at the highest-level test designs or specifications, but also at the lowest-level-keyword (fixture/step) level, where most of the intelligence, power and, hence, complexity resides. Just as the developer needs to ascend to the 'higher plane' of test strategy and design, the tester needs to descend into the implementation details of turning a test strategy and design into something executable. More and more testers with a background in 'traditional', non-automated testing are therefore entering the process of acquiring enough coding skills to be able to make this contribution.

However, having (hitherto) inexperienced people author code introduces severe stability and maintainability risks. Although all current (i.e. keyword-driven) frameworks facilitate and support creating automation code that is reusable, maintainable, robust, reliable, stable and readable, code authors still have to actively realize these qualities, by designing for them and building them into their automation solutions. Non-coders, though, in my experience (at least initially) have quite some trouble understanding and (even more dangerously) appreciating the critical importance of applying design patterns and other best practices to their code. That is, most traditional testers seem to be able to learn how to code (at a sufficiently basic level) rather quickly, partially because writing automation code is generally less complex than writing product code. They also get a taste for it: they soon get passionate and ambitious, and become eager to apply their newly acquired skills and to create lots of code. Caught in this rush, they often forget to refactor their code, downplay the importance of doing so (and the dangers involved) or simply opt to postpone it until it becomes too large a task. Because of this, even testers who have been properly trained in applying design patterns may still deliver code that is monolithic, unstable/brittle, non-generic and hard to maintain. Depending on the level at which the contribution is to be made (lowest-level in code or mid-level in scripting), these risks apply to a greater or lesser extent. Moreover, this risky behaviour may be incited by uneducated stakeholders, as a consequence of them holding unrealistic goals, maintaining a short-term view and (to put it bluntly) being ignorant with regard to the pitfalls, limitations, disadvantages and risks that are inherent to all test automation projects.

Then take responsibility ... and get some help in doing so

Clearly then, the described risks are not so much inherent to the frameworks or to the approach to test automation, but rather flow from inexperience with these frameworks and, in particular, from inexperience with this approach. That is, to be able to (optimally) benefit from the specific advantages of this approach, applying design patterns is imperative. This is a critical factor for the long-term success of any keyword-driven test automation effort. Without applying patterns to the test code, solutions will not be cost-efficient, maintainable or transferable, amongst other disadvantages. The costs will simply outweigh the benefits in the long run. What's more, essentially the whole purpose and added value of using keyword-driven frameworks is lost, since these frameworks were devised precisely to counter the severe maintainability/reusability problems of the earlier generations of frameworks. Therefore, of all the approaches to test automation, the keyword-driven approach depends to the greatest extent on the disciplined and rigorous application of standard software development practices, such as modularization, abstraction and genericity of code.

This might seem a truism. However, since typically the traditional testers (and thus novice coders) are nowadays directed by their management towards using keyword-driven frameworks for automating their functional, black-box tests (at the service/API- or GUI-level), automation anti-patterns appear and thus the described risks emerge. To make matters worse, developers remain mostly uninvolved, since a lot of these testers are still working within siloed/compartmented organizational structures.

In our experience, a combination of a comprehensive set of explicit best practices, training and on-the-job coaching, and a disciplined review and testing regime (applied to the test code) is an effective way of mitigating these risks. Additionally, silos need to be broken down, so as to foster collaboration (and create synergy) on all testing efforts, as well as to be able to coordinate and orchestrate all of these efforts through a single, central, comprehensive and shared overall testing strategy.

Of course, the framework selected to implement a keyword-driven test automation solution is an important enabler as well. As will become apparent from this series of blog posts, the Robot Framework is the platform par excellence to facilitate, support and even stimulate these counter-measures and, consequently, to very swiftly enable and empower seasoned and beginning coders alike to contribute code that is efficient, robust, stable, reusable, generic and maintainable, as well as readable and transferable. That is not to say that it is the platform to use in any given situation, just that it has been designed with the intent of implementing the keyword-driven approach to its fullest extent. As mentioned in a previous post, the RF can be considered the epitome of the keyword-driven approach, bringing that approach to its logical conclusion. As such it optimally facilitates all of the mentioned preconditions for long-term success. Put differently, using the RF, it will be hard to fall into the pitfalls inherent to keyword-driven test automation.

Some examples of such enabling features (that we will also encounter in later posts):

  • A straightforward, fully keyword-oriented scripting syntax, that is both very powerful and yet very simple, to create low- and/or mid-level test functions.
  • The availability of dozens of keyword libraries out-of-the-box, holding both convenience functions (for instance to manipulate and perform assertions on xml) and specialized keywords for directly driving various interface types. Interfaces such as REST, SOAP or JDBC can thus be interacted with without having to write a single line of integration code.
  • Very easy, almost intuitive means to apply a broad range of design patterns, such as creating various types of abstraction layers.
  • And lots and lots of other great and unique features.
Summary

We now have an understanding of the characteristics and purpose of keywords, and of the advantages of structuring our test automation solution into (various layers of) keywords. At the same time, we have looked at the primary risk involved in the application of such a keyword-driven approach and at ways to deal with that risk.

Keyword-driven test automation is aimed at solving the problems that were instrumental in the failure of prior automation paradigms. However, for a large part it merely facilitates the involved solutions. That is, to actually reap the benefits that a keyword-driven framework has to offer, we need to use it in an informed, professional and disciplined manner, by actively designing our code for reusability, maintainability and all of the other qualities that make or break long-term success. The specific design as well as the unique richness of powerful features of the Robot Framework will give automators a head start when it comes to creating such code.

Of course, this 'adage' of intelligent and adept usage, is true for any kind of framework that may be used or applied in the course of a software product's life cycle.

Part 3 of this series will go into the specific implementation of the keyword-driven approach by the Robot Framework.

FitNesse in your IDE

Xebia Blog - Wed, 02/03/2016 - 17:10

FitNesse has been around for a while. The tool was created by Uncle Bob back in 2001. It’s centered around the idea of collaboration: collaboration within a (software) engineering team and with your non-programmer stakeholders. FitNesse tries to achieve that by making it easy for non-programmers to participate in the writing of specifications, examples and acceptance criteria. It can be launched as a wiki web server, which makes it accessible to basically everyone with a web browser.

The key feature of FitNesse is that it allows you to verify the specs against the actual application: the System Under Test (SUT). This means that you have to make the documentation executable. FitNesse considers tables to be executable. When you read ordinary documentation you’ll find that requirements and examples are often outlined in tables, which makes for a natural fit.

There is no such thing as magic, so the link between the documentation and the SUT has to be created. That’s where things become tricky. The documentation lives in our wiki server, but the code (which is what we need to connect documentation and SUT) lives on the file system, in an IDE. What to do? Read a wiki page, remember the class and method names, switch to the IDE, create classes and methods, compile, switch back to the browser, test, and repeat? Well, so much for fast feedback! When you talk to programmers, you’ll find this to be the biggest problem with FitNesse.

Imagine, as a programmer, you're about to implement an acceptance test defined in FitNesse. With a single click, a fixture class is created and adding fixture methods is just as easy. You can easily jump back and forth between the FitNesse page and the fixture code. Running the test page is as simple as hitting a key combination (Ctrl-Shift-R comes to mind). You can set breakpoints, step through code with ease. And all of this from within the comfort of your IDE.

Acceptance test and BDD tools, such as Cucumber and Concordion, have IDE plugins to cater for that, but for FitNesse this support was lacking. Was lacking! Such a plugin is finally available for IntelliJ.


Over the last couple of months, a lot of effort has been put in building this plugin. It’s available from the Jetbrains plugin repository, simply named FitNesse. The plugin is tailored for Slim test suites, but also works fine with Fit tables. All table types are supported. References between script, decision tables and scenarios work seamlessly. Running FitNesse test pages is as simple as running a unit test. The plugin automatically finds FitNesseRoot based on the default Run configuration.

The current version (1.4.3) even has (limited) refactoring support: renaming Java fixture classes and methods will automatically update the wiki pages.

Feel free to explore the new IntelliJ plugin for FitNesse and let me know what you think!

(GitHub: https://github.com/gshakhn/idea-fitnesse)

Leadership: Are Leaders Born? Or Are They Made?

Making the Complex Simple - John Sonmez - Wed, 02/03/2016 - 14:00

Leadership is one of those perennial topics in our society that we can’t stop talking, thinking, and writing about. A search on Google turns up over 400,000,000 results; a search on Amazon brings back over 70,000 books on the topic. Every day, more books are authored, more articles are written, more podcasts are recorded, and […]

The post Leadership: Are Leaders Born? Or Are They Made? appeared first on Simple Programmer.

Categories: Programming

The Scooter Computer

Coding Horror - Jeff Atwood - Wed, 02/03/2016 - 09:52

When we initially deployed our handbuilt colocated servers for Discourse in 2013, I needed a way to provide an isolated VPN channel in for secure remote access and troubleshooting. Rather than dedicate a whole server to this task, I purchased the inexpensive, open source firmware friendly Asus RT-N16 router, flashed it with the popular TomatoUSB open source firmware, removed the antennas, turned off the WiFi and dropped it off in our colocated rack to let it act as a dedicated VPN access point.


Asus RT-N16

And that box – which was $100 then and around $70 now – worked well enough until now. Although the version of OpenSSL in the 2012-era Tomato firmware we used is not vulnerable to Heartbleed, it's still getting out of date in terms of the encryption it supports and allows. And Tomato itself is updated sporadically, chaotically at best.

Let's face it: this is just a little box that runs a chopped up version of Linux, with a bit of specialized wireless hardware and multiple antennas tacked on … that we're not even using. So when it came time to upgrade, we wondered:

Why not just go with a small box that can run a real, full Linux distro? Wouldn't that be simpler and easier to keep up to date?

After doing some research and asking on Twitter, I discovered there are a ton of amazing little Broadwell "mini-PC" boxes available on AliExpress.

The specs are kind of amazing for the price. I paid ~$350 each for the ones I selected:

  • i5-5200U Broadwell 2 core / 4 thread CPU at 2.2 GHz - 2.7 GHz
  • 16GB DDR3 RAM
  • 128GB M.2 SSD
  • Dual gigabit Realtek 8168 ethernet
  • front 4 USB 3.0 ports / rear 4 USB 2.0 ports
  • Dual HDMI out

(There's also optical and analog audio connectors on the front, as well as a SD card reader, which I covered with a sticker since we had no need for audio. I also stripped the WiFi out since we didn't need it, but it was included for the price, too.)

Selecting the i5-4258u, 4GB RAM, and 64GB SSD pushes the price down to $270. That's still a solid CPU, only a single generation behind Intel's latest and greatest Skylake, and carrying the midrange i5 moniker; it's no pushover. There are also many, many variants of this box from other AliExpress sellers that have slightly older, cheaper CPUs that are still plenty powerful. You can easily spec a box similar to this one for $200.

That's not a whole lot more than the $200 you'd pay for a high end router these days, and as Ars Technica notes, the average x86 box is radically faster.

Note that in the above graphs, "homebrew" means an old, 1.8 GHz Ivy Bridge dual-core chip, 3 generations behind current CPUs, that doesn't even merit the i3 or i5 designation, and has no hyperthreading. Do bear that in mind as you keep reading.

Meet The Scooter Computer

This box may be small, and only 15 watt TDP, but it is mighty. I spun up a new Digital Ocean droplet and ran a quick benchmark:

sudo apt-get install sysbench
sysbench --test=cpu --cpu-max-prime=20000 run
Tie Shuttle 6

total time:           28.0707s
total num events:     10000
total time taken:     28.0629
per-request stats:
     min:             2.77ms
     avg:             2.81ms
     max:             3.99ms
     ~95 percentile:  3.00ms

Digital Ocean Droplet

total time:           35.9541s
total num events:     10000
total time taken:     35.9492
per-request stats:
     min:             3.50ms
     avg:             3.59ms
     max:             13.31ms
     ~95 percentile:  3.79ms

Results will of course vary by cloud provider, but rest assured this box is just as fast as and possibly even faster than the average cloud box you could spin up right now. Of course it is "only" 2 cores / 4 threads, but the more cores you need, the slower they tend to go because of the overall TDP limits of the core package.

One thing that's not immediately obvious in photos is that this thing is indeed small but hefty, like holding a solid chunk of aluminum in your hand. That's because the box is passively cooled — the whole case is the heatsink, as the CPU on the bottom of the motherboard mates with the finned top of the case.

Opening this box you realize just how simple things are inside it; it's barely more than a highly integrated motherboard strapped to an aluminum block. This isn't a Steve Jobs truck, a Mac Mini car, or even a motorcycle. This is a scooter.

Scooters are very primitive machines; it is both their greatest strength and their greatest weakness. It's arguably the simplest personal wheeled vehicle there is. In these short distance scenarios, scooters tend to win over, say, bicycles because there's less setup and teardown necessary – you don't have to lock up a scooter, nor do you have to wear a helmet. Just hop on and go! You get almost all the benefits of gravity and wheeled efficiency with a minimum of fuss and maintenance. And yes, it's fun, too!

Passively cooled computers are paragons of simplicity and reliability in consumer electronics, but passively cooling a "real" x86 PC is the holy grail. To get serious performance you usually need to feed the CPU at least 10 to 20 watts – and dissipating that kind of energy with zero fans and ambient airflow alone is not trivial. Let's see how our scooter does overnight running mprime, the Mersenne prime torture test, which is about the heaviest CPU load possible.

You can place your hand on the top of the box during this, but it's uncomfortable. And the whole box radiates heat, not just the top. Overall it was completely stable for me during overnight mprime torture testing with the 15w TDP CPU I chose, and I am comfortable with these boxes sitting in our rack in the datacenter, even under extended full load. However, I would be very careful putting a 28w TDP CPU in this box unless you are absolutely sure it won't be at full load very often. Passive cooling is hard.

Power consumption, as measured by my Kill-a-Watt, ranged from 7 watts at the Ubuntu Server 14.04 text login screen, to 8-10 watts at an idle Ubuntu 15.10 GUI login screen (the default OS it arrived with), to 14-18 watts in memory testing, to 26 watts in mprime.

I should also mention that even under extreme mprime load, both cores stayed at 2.5 GHz indefinitely, which is unusual in my experience. To reach the full 2.7 GHz you need a single-threaded load. Considering the base clock of the i5-5200U is 2.2 GHz, that's quite good! Many 4, 6, and 8 core CPUs drop all the way down to their base clock pretty fast once they have significant load, which makes the "turbo" moniker a bit of a lie.

(By the way, don't bother using burnP6, it generates way too little heat compared to mprime, which is an absolute monster. If your CPU can survive an overnight run of mprime, I can assure you it's ready for just about anything the real world can throw at it, ever.)

Disk

The machine has M.2 slots for two drives, as well as a SATA port and power cable (not pictured, but included in the box) if you want to mate a 2.5" drive with the drive mounting holes on the bottom of the case. So if you prefer a mirrored two-drive RAID array for reliability, or a giant honking 2TB 2.5" HDD slapped in there for media storage, all of that is possible!

Be careful, as the internal M.2 slots are 2242, meaning 42mm length. There seem to be mostly (only?) lower cost SSD drives available in this size for whatever reason.

Don't worry, though, the bundled 128GB Phison S9 M.2 SSD has decent performance, roughly equal to a good SSD from a few years ago:

dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync
hdparm -Tt /dev/sda

536870912 bytes (537 MB) copied, 1.52775 s, 351 MB/s
Timing cached reads:   11434 MB in  2.00 seconds = 5720.61 MB/sec
Timing buffered disk reads:  760 MB in  3.00 seconds = 253.09 MB/sec

That's respectable SSD performance and won't hold you back in most use cases, but it's not a barn-burning disk subsystem, either. I'm not entirely sure retrofitting, say, the state of the art Samsung 950 Pro M.2 2280 drive is possible due to length restrictions.

Of course the Samsung 850 Pro would fit fine as a traditional 2.5" SATA drive mounted to the case cover, and would perform like this:

536870912 bytes (537 MB) copied, 1.20895 s, 444 MB/s
Timing cached reads:   38608 MB in  2.00 seconds = 19330.61 MB/sec
Timing buffered disk reads: 1584 MB in  3.00 seconds = 527.92 MB/sec

RAM

Intel limits these Broadwell U class CPUs to 16GB RAM total, so maxing the box out is only going to set you back around $70. Still, that's a significant percentage of the ~$350 total cost, and you may not need that much RAM for what you have in mind.

However, do be careful to get dual-channel RAM in the lower RAM configurations; you don't want a single 4GB DIMM, you want two 2GB DIMMs. These boxes ship from the vendor with a single DIMM, so beware. It may not matter depending on the task, as noted by AnandTech, but our boxes will be used for OpenSSL, and memory is cheap, so why not?

The Versatile Scooter

When I began looking at this, I was shocked to discover just how low-end the x86 CPUs are in a lot of "dedicated" devices, such as the official pfSense hardware:

Sure, 2.4 GHz and 8 cores on that C2758 sounds reasonable – until you realize those are old Intel Bay Trail Atom cores. Even the current Cherry Trail Atom cores aren't so hot. Furthermore, those are probably the maximum "turbo" frequencies being quoted, which are unlikely to be sustained under any kind of real multi-core load. Also, did I mention this is being sold as a $1,400 device? Aside from having only two dedicated gigabit Ethernet ports, I'd put our scooter computer up against that C2758 any day of the week. And you know what? It'd win.

I think this logic applies to a lot of dedicated hardware these days — routers, switches, firewalls, and so on. You're often better off building up a modern high power, low TDP x86 box and slapping a regular Linux distro on there.

You can even kinda-sorta fit six of them in a 1U rack space.

(Well, except for the power bricks and cables. Vertical mounting on a 1U shelf works out a bit better, and each conveniently came with a stand for vertical operation.)

Now that I've worked with these boxes, I've become rather enamored of the Scooter Computer concept. Wherever we thought we had to run either:

  • A virtual machine on big iron for some small but important utility function in our rack.

  • Dedicated, purpose built hardware for networking, firewall, or switching with a custom OS.

… we can now take advantage of cheap, reliable, flexible, totally solid state commodity x86 hardware that's spread across many machines and running standard Linux distributions, like all the rest of our 1U servers.

Categories: Programming

Marshmallow and User Data

Android Developers Blog - Wed, 02/03/2016 - 08:24

Posted by Joanna Smith, Developer Advocate and Giles Hogben, Google Privacy Team

Marshmallow introduced several changes that were designed to help your app look after user data. The goal was to make it easier for developers to do the right thing. So as Android 6.0, Marshmallow, gains traction, we challenge you to do just that.

This post highlights the key considerations for user trust when it comes to runtime permissions and hardware identifiers, and points you to new best practices documentation to clarify what to aim for in your own app.

Permission Changes

With Marshmallow, permissions have moved from install-time to runtime. This is a mandatory change for SDK 23+, meaning it will affect all developers and all applications targeting Android 6.0. Your app will need to be updated anyway, so your challenge is to do so thoughtfully.

Runtime permissions mean that your app can now request access to sensitive information in the context that it will be used. This gives you a chance to explain the need for the permission, without scaring users with a long list of requests.

Permissions are also now organized into groups, so that users can make an informed decision without needing to understand technical jargon. And because users are free to make that decision, they may decline a permission, or revoke one they previously granted. Your app therefore needs to be thoughtful when handling API calls whose permissions may have been denied, and to build in graceful failure-handling so that users can still interact with the rest of your app.
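To make this concrete, here is a minimal sketch of the runtime flow inside an Activity, using the support library's compatibility helpers. The contacts permission is just an example, and loadContacts() / showContactsUnavailableUi() are hypothetical stand-ins for your feature code.

 private static final int REQUEST_READ_CONTACTS = 42;  // arbitrary request code

 private void loadContactsIfAllowed() {
     if (ContextCompat.checkSelfPermission(this, Manifest.permission.READ_CONTACTS)
             == PackageManager.PERMISSION_GRANTED) {
         loadContacts();  // hypothetical: the feature that needs the permission
     } else {
         // Ask in context, right when the feature is about to be used.
         ActivityCompat.requestPermissions(this,
                 new String[]{Manifest.permission.READ_CONTACTS},
                 REQUEST_READ_CONTACTS);
     }
 }

 @Override
 public void onRequestPermissionsResult(int requestCode, String[] permissions,
         int[] grantResults) {
     if (requestCode == REQUEST_READ_CONTACTS) {
         if (grantResults.length > 0
                 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
             loadContacts();
         } else {
             // Fail gracefully: disable just this feature, keep the app usable.
             showContactsUnavailableUi();  // hypothetical
         }
     }
 }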

Identifier Changes

The other aspect of user trust is doing the right thing with user data. With Marshmallow, we are turning off access to some kinds of data in order to direct developers down this path.

Most notably, local Wi-Fi and Bluetooth MAC addresses are no longer available. The getMacAddress() method of a WifiInfo object and the BluetoothAdapter.getDefaultAdapter().getAddress() method will both return 02:00:00:00:00:00 from now on.

However, Google Play Services now provides Instance IDs, which identify an application instance running on a device. Instance IDs are a reliable alternative to non-resettable, device-scoped hardware IDs: they do not persist across a factory reset and are scoped to a single app instance. See the “What is Instance ID?” help article on the Google Developers site for more information.
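Reading the identifier is a single call. A minimal sketch, assuming a context variable in scope and a background thread to be safe:

 new Thread(new Runnable() {
     @Override
     public void run() {
         // App-instance scoped; resets on reinstall or factory reset.
         String instanceId = InstanceID.getInstance(context).getId();
         // Use instanceId wherever you previously keyed data off a MAC address.
     }
 }).start();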

What’s Next

User trust depends largely on what users see and how they feel. Mishandling permissions and identifiers increases the risk of unwanted or unintended tracking, and can leave users feeling that your app doesn’t actually care about them. So to help you get it right, we’ve created new documentation that should enable you to be certain that your app is doing the right thing for its users.

So happy developing! May your apps make users happy, and may your reviews reflect that. :)

Categories: Programming

Nine Product Management lessons from the Dojo

Xebia Blog - Tue, 02/02/2016 - 23:00
Are you kidding? a chance to add the Matrix to a blogpost?

As I am gearing up for the belt exams next Saturday, I couldn’t help noticing the similarities between what we learn in the dojo (the hall where martial arts are taught) and how we should behave as Product Managers. Here are 9 lessons, straight from the dojo, ready for your day job:

1.) Some things are worth fighting for

In Judo we practice randori, or free-form sparring, much of it on the ground. You will find that some grips are worth fighting for, while others you should let go in search of a better path to victory.

In Product Management, we are the heat shield of the product, constantly caught between engineering striving for perfection, sales wanting something else, marketing pushing the launch date, and management hammering on the P&L.

You need to pick your battles: some you deflect, some you disarm, and some you accept, because you are maneuvering yourself so you can make the move that counts.

Good product managers are not those who win the most battles, but those who know which ones to win.

2.) Preserve your partners

It’s fun to send people flying through the air, but the best way to improve yourself is to improve your partner. You are on this journey together, just as in Product Management. Ask yourself this question today: “Whom do I need to train as my successor?”, and start doing so.

"I was delayed to the airport because of the taxi strike, but saved by the strike of the air traffic controllers"

3.) There is no such thing as fair

It’s a natural reaction when someone changes the rules of the game: we protest, we go on strike, we say it’s not fair. But in a market-driven environment, what is fair? Disruption, changing the rules of the game, has become the standard (24% of companies already experience it, 58% expect it, and 42% are still in denial). We can go on strike, or we can adapt.

The difference between kata and free sparring is that your opponents will not follow a prescribed path. Get over it.

4.) Behavior leads to outcome

I’m still debating the semantics with my colleague from South Africa (you know who you are), so it’s probably a matter of wording, but the gist of it is: if you want more of something, you should start doing it. Positive brand experiences will drive people to your products; hence one bad product affects all the other products of your brand.

It’s not easy to change your behavior, whether in sport, health, customer interaction or product philosophy, but a different outcome starts with different behavior.

Where did my product go?

5.) If it’s not working try something different

Part of Saturday’s exams will be what Jujitsu calls “indirect combinations”: being judged on your ability to move from one technique to another when the first one fails. Brute force is also an option, but not one that is likely to succeed, even if you are very strong.

Remember Microsoft pouring over a billion marketing dollars into Windows Phone? Brute-forcing its position by buying Nokia? BlackBerry doing something similar with QNX, and only now switching to Android? Indirect combinations are not a lack of perseverance but adaptability: achieving the result without brute force and with a higher chance of success.

This is where you tap out

6.) Failure is always an option

Tap out! Half of the techniques in Jujitsu were originally designed to break your bones, so tap out if your opponent has a solid grip. It’s not the end, it’s the beginning. Nobody gets better without failing.

Two thirds of all product innovations fail, and the remaining third takes about five iterations to get right. Test your idea thoroughly, but don’t be afraid to try something else too.

7.) Ask for help

There is no way you can know it all. Trust your team, peers and colleagues to help you out. Everyone has something to offer; they may not always have the solution, but in explaining your problem you will often find it yourself.

8.) The only way to get better is to show up

I’m a thinker. I like to get the big picture before I act. This means I can also overthink something that you just need to do. Though it is okay to study and listen, don’t forget to go out there and start doing. Short feedback loops are key to building the right product, even if the product is not yet built right. So talk to customers and show them what you are working on, even at an early stage. You will not get better at martial arts or product management if you wait too long to show up.

9.) Be in the moment

Don’t worry about what just happened, or what might happen. Worry about what is right in front of you. The technique you are forcing is probably not the one you want.

 

This blog is part of the Product Samurai series. Sign up here to stay informed of the upcoming book: The Product Manager's Guide to Continuous Innovation.


Create promo codes for your apps and in-app products in the Google Play Developer Console

Android Developers Blog - Mon, 02/01/2016 - 22:40

Posted by Yoshi Tamura, Product Manager, Google Play

Over the past six months, a number of new tools in the Google Play Developer Console have been added to help you grow your app or game business on Google Play. Our improved beta testing features help you gather more feedback and fix issues. Store Listing Experiments let you run A/B tests on your app’s Play Store listing. Universal App Campaigns and the User Acquisition performance report help you grow your audience and better understand your marketing.

Starting today, you can now generate and distribute promo codes to current and new users on Google Play to drive engagement. Under the Promotions tab in the Developer Console, you can set up promo codes for your apps, games, and in-app products to distribute in your own marketing campaigns (up to 500 codes per app, per quarter). Consider using promo codes to reward loyal users and attract new customers.

How to use promo codes
  1. Choose your app in the Developer Console.
  2. Under the Promotions tab choose Add new promotion.
  3. Review and accept the additional terms of service if you haven’t run a promotion before.
  4. Choose from the options available, then generate and download your promo codes.
  5. Distribute your promo codes via your marketing channels such as social networks, in email, on the web, to your app’s beta testers, or in your app or game itself.
  6. Users can redeem your promo codes in a number of ways, including:
  1. From Google Play, using the Redeem menu option.
  2. From your app. They’ll be directed to the Play checkout flow before being redirected back to your app.
  3. By following a link that embeds the promo code (see tips below).

For more details about running a promotion for your app or game, read this article on the Google Play Developer Help Center.

Tips for making the most of promo codes

Some things to keep in mind when running a successful promotion:

  • There’s a limit of 500 promo codes per app every quarter.
  • You can embed your code in a URL so that users don’t have to enter it themselves (for example, if you’re sending your codes in an email). You can use the URL: https://play.google.com/store?code={CODE} (where {CODE} is a generated promo code).
  • To use promo codes for in-app products, you should implement In-app Promotions in your app (see the sketch after this list). Note that promo codes can’t be used for subscriptions.
  • Review and adhere to the Promotional Code Terms Of Service.
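For the in-app redemption case mentioned above, here is a hedged sketch of the moving part: the In-app Promotions documentation describes a com.android.vending.billing.PURCHASES_UPDATED broadcast that Play sends while your app is running, after which your receiver can re-query owned items. refreshInventory() is a hypothetical helper standing in for your existing In-app Billing query/grant logic.

 public class PurchasesUpdatedReceiver extends BroadcastReceiver {
     @Override
     public void onReceive(Context context, Intent intent) {
         // Play granted a purchase (e.g. a redeemed promo code) while the app
         // is running; re-query owned items and grant anything new.
         refreshInventory(context);  // hypothetical helper around getPurchases()
     }

     private void refreshInventory(Context context) {
         // ... your existing In-app Billing v3 query/grant logic ...
     }
 }

 // In your Activity, register while in the foreground:
 registerReceiver(new PurchasesUpdatedReceiver(),
         new IntentFilter("com.android.vending.billing.PURCHASES_UPDATED"));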

We hope you find interesting ways to use promo codes to find new users and engage existing fans. To learn more about the many tools and best practices you can use to grow your business on Google Play, download our new developer playbook, “The Secrets to App Success on Google Play”.

Categories: Programming

Play Games Permissions are changing in 2016

Android Developers Blog - Mon, 02/01/2016 - 22:37

Posted by Wolff Dobson, Developer Advocate

We’re taking steps to reduce sign-in friction and unnecessary permission requests for players by moving the Games APIs to a new model. The new interaction is:

  • Players are prompted to sign-in once per account, rather than once per game
  • Players no longer need their account upgraded to Google+ to use Play Games services
  • Once players have signed-in for the first time, they will no longer need to sign in to any future games; they will be automatically signed in
  • Note: Players can turn off auto-sign-in through the Play Games App’s settings
Advantages:
  • Once a user signs in for first time, new games will generally be able to sign in without any user interaction
  • There is no consent screen required for signing in on any particular game. Sign-in will be automatic to each new game.

In order to respect users’ privacy and avoid revealing their real names, we also have to change the way player IDs work.

  • For existing players: Games will continue to get their Google+ ID (also called “player ID” in previous documentation) when they sign in.
  • For new players: Games will get a new player ID which is not the same as the previous IDs we’ve used.
Potential issues

Most games should see no interruption or change in service. There are a handful of cases, however, where some change is required.

Below are some issues, along with potential solutions.

These are:

  1. Asking for the Google+ scope unnecessarily
    • Issue: Your users will get unnecessary, potentially disturbing pop-up consent windows
    • Solution: Don’t request any additional scopes unless you absolutely need them
  2. Using the Play Games player ID for other Google APIs that are not games
    • Issue: You will not get valid data back from these other endpoints.
    • Solution: Don’t use player ID for other Google APIs.
  3. Using mobile/client access tokens on the server
    • Issue: Your access token may not contain the information you’re looking for
      • ...and this is not recommended in the first place.
    • Solution: Use the new GetServerAuthCode API instead.

Let’s cover each of these issues in detail.

Issue: Asking for unnecessary scopes

Early versions of our samples and documentation created a GoogleApiClient as follows:

 // Don’t do it this way!  
 GoogleApiClient gac = new GoogleApiClient.Builder(this, this, this)  
           .addApi(Games.API)  
           .addScope(Plus.SCOPE_PLUS_LOGIN) // The bad part  
           .build();  
 // Don’t do it this way!  

In this case, the developer is specifically requesting the plus.login scope. If you ask for plus.login, your users will get a consent dialog.

Solution: Ask only for the scopes you need

Remove any unneeded scopes from your GoogleApiClient construction along with any APIs you no longer use.

 // This way you won’t get a consent screen  
 GoogleApiClient gac = new GoogleApiClient.Builder(this, this, this)  
           .addApi(Games.API)  
           .build();  
 // This way you won’t get a consent screen  
For Google+ users

If your app uses specific Google+ features, such as requiring access to the player’s real-world Google+ social graph, be aware that new users will still be required to have a G+ profile to use your game. (Existing users who have already signed in won’t be asked to re-consent).

To require Google+ accounts to use your game, change your Games.API declaration to the following:

 .addApi(Games.API, new GamesOptions.Builder()  
                       .setRequireGooglePlus(true).build())  

This will ensure that your game continues to ask for the necessary permissions/scopes to continue using the player’s real-world social graph and real name profile.

Issue: Using the Player ID as another ID

If you call the Games.getCurrentPlayerId() API, the value returned here is the identifier that Games uses for this player.

Traditionally, this value could be passed into other APIs such as Plus.PeopleApi.load. In the new model, this is no longer the case. Player IDs are ONLY valid for use with Games APIs.

Solution - Don’t mix IDs

The Games APIs (those accessed from com.google.android.gms.games) all use the Player ID, and as long as you use only those, they are guaranteed to work with the new IDs.

Issue: Using mobile/client access tokens on the server

A common pattern we’ve seen is:

  • Use GoogleAuthUtil to obtain an access token
  • Send this token to a server
  • On the server, call Google to verify the authenticity. This is most commonly done by calling https://www.googleapis.com/oauth2/v1/tokeninfo and looking at the response

This is not recommended in the first place, and is even more not-recommended after the shift in scopes.

Reasons not to do this:

  • It requires your app to know the current account the user is using, which requires holding the GET_ACCOUNTS permission. On Android M, this will result in the user being asked to share their contacts with your app at runtime, which can be intimidating.
  • The tokeninfo endpoint isn’t really designed for this use case - it’s primarily designed as a debugging tool, not as a production API. This means that you may be rate limited in the future if you call this API.
  • The user_id returned by token info may no longer be present with the new model. And even if it is present, the value won’t be the same as the new player ID. (See problem 2 above)
  • The token could expire at any time (access token expiration times are not a guarantee).
  • Using client tokens on the server require extra validation checks to make sure the token is not granted to a different application.
Solution: Use the new GetServerAuthCode flow

Fortunately, the solution is known, and is basically the same as our server-side auth recommendations for web.

  1. Upgrade to the latest version of Google Play Services SDK - at least 8.4.87.

  2. Create a server client ID if you don’t already have one

    1. Go to the Google Developer Console, and select your project

    2. From the left nav, select API Manager, then select Credentials

    3. Select “New Credentials” and choose “OAuth Client ID”

    4. Select “Web Application” and name it something useful for your application

    5. The client id for this web application is now your server client id.

  3. In your game, connect your GoogleApiClient as normal.

  4. Once connected, call the following API:

    1. Games.getGamesServerAuthCode(googleApiClient, “your_server_client_id”)

    2. If you were using GoogleAuthUtil before, you were probably calling this on a background thread - in which case the code looks like this:


 // Good way
 {
     GetServerAuthCodeResult result =
         Games.getGamesServerAuthCode(gac, clientId).await();
     if (result.isSuccess()) {
         String authCode = result.getCode();
         // Send code to server.
     }
 }
 // Good way


  5. Send the auth code to your server, exactly the same as before.

  6. On your server, make an RPC to https://www.googleapis.com/oauth2/v4/token to exchange the auth code for an access token, probably using a Google APIs client library.

    1. You’ll have to provide the server client ID, server client secret (listed in the Developer Console when you created the server client ID), and the auth code.

    2. See more details here: https://developers.google.com/identity/protocols/OAuth2WebServer#handlingresponse

    3. No, really:  You should use a Google Apis Client Library to make this process easier.

  7. Once you have the access token, you can now call www.googleapis.com/games/v1/applications/<app_id>/verify/ using that access token.

    1. Pass the auth token in a header as follows:

      1. “Authorization: OAuth <access_token>”

    2. The response value will contain the player ID for the user. This is the correct player ID to use for this user.

    3. This access token can be used to make additional server-to-server calls as needed.

Note: This API will only return a 200 if the access token was actually issued to your web app.
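If you'd rather see the bare HTTP exchange in step 6 than reach straight for a client library, here is a rough server-side sketch in plain Java (inside a method that may throw IOException). SERVER_CLIENT_ID and SERVER_CLIENT_SECRET are the Developer Console values from step 2; the empty redirect_uri is an assumption that matches one-time codes issued to installed apps, and error handling is omitted.

 String body = "grant_type=authorization_code"
         + "&code=" + URLEncoder.encode(authCode, "UTF-8")
         + "&client_id=" + URLEncoder.encode(SERVER_CLIENT_ID, "UTF-8")
         + "&client_secret=" + URLEncoder.encode(SERVER_CLIENT_SECRET, "UTF-8")
         + "&redirect_uri=";  // assumption: empty for one-time mobile codes

 HttpURLConnection conn = (HttpURLConnection)
         new URL("https://www.googleapis.com/oauth2/v4/token").openConnection();
 conn.setDoOutput(true);
 conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
 conn.getOutputStream().write(body.getBytes("UTF-8"));

 // The JSON response contains "access_token"; parse it with your JSON library
 // of choice, then call games/v1/applications/<app_id>/verify/ with the header
 // "Authorization: OAuth <access_token>" as described above.

A real implementation should also check the HTTP status code and token expiry; the Google APIs client libraries handle all of this for you, which is why we recommend them.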

In summary

Let’s be very clear: If you do nothing, unless you are depending explicitly on Google+ features, you will see no change in functionality, and a smoother sign-in experience.

If you are:

  • Requesting Google+ scopes without using them: it’s a good idea to stop doing so from here on out.
  • Sending client access tokens to your server, we strongly suggest you use getGamesServerAuthCode() instead.

Thanks, and keep making awesome games!

Categories: Programming

New features to better understand player behavior with Player Analytics

Android Developers Blog - Mon, 02/01/2016 - 22:35

Posted by Lily Sheringham, Developer Marketing at Google Play

Google Play games services includes Player Analytics, a free reporting tool available in the Google Play Developer Console, to help you understand how players are progressing, spending, and churning. Now, you can see what Player Analytics looks like with an exemplary implementation of Play games services: try out the new sample game in the Google Play Developer Console, which we produced with help from Auxbrain, developer of Zombie Highway 2. The sample game uses randomized and anonymized data from a real game and will also let you try the new features we’re announcing today. Note: You need a Google Play Developer account in order to access the sample game.

Use predictive analytics to engage players before they might churn

To help you better understand your players’ behavior, we’ve extended the Player Stats API in Player Analytics with predictive functionality. The churn prediction method returns the probability that a player will churn, i.e., stop playing the game, so you can respond with new content to entice them to stay. Similarly, the spend prediction method returns the probability that a player will spend; you could, for example, offer discounted in-app purchases, or show ads, based on these insights.
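On Android, reading the new predictions is a single Player Stats call. Here is a minimal sketch; the 0.5 thresholds are purely illustrative, not tuning advice.

 Games.Stats.loadPlayerStats(googleApiClient, false /* forceReload */)
         .setResultCallback(new ResultCallback<Stats.LoadPlayerStatsResult>() {
             @Override
             public void onResult(Stats.LoadPlayerStatsResult result) {
                 if (result.getStatus().isSuccess()) {
                     PlayerStats stats = result.getPlayerStats();
                     if (stats.getChurnProbability() > 0.5f) {
                         // e.g. surface a retention offer or fresh content
                     }
                     if (stats.getSpendProbability() > 0.5f) {
                         // e.g. show a tailored in-app purchase instead of ads
                     }
                 }
             }
         });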

Create charts in the new funnels report to quickly visualize sequences of events

The funnels report enables you to create a funnel chart from any sequence of events, such as achievements, spend, and custom events. For example, you could log a custom event for each step in a tutorial flow (e.g., tutorial step 1, step 2, step 3), and then use the funnel report to visualize where players exit your tutorial.
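Logging those steps is a one-liner per event with the Play Games events API; in this sketch, TUTORIAL_STEP_1_EVENT_ID is a placeholder for an event ID you’d create in the Developer Console.

 // Call when the player completes the first tutorial step:
 Games.Events.increment(googleApiClient, TUTORIAL_STEP_1_EVENT_ID, 1);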


Measure and compare the effect of changes and cumulative values by new users with the cohorts report

The cohorts report allows you to take any event, such as sessions, cumulative spend, or custom events, and compare the cumulative event values across new-user cohorts, providing valuable insight into the impact of your decisions on your gaming model. For example, you can compare users who started the day before you made a change with users who started the day after. If you doubled the price of all the items in your in-game store, you can see whether cumulative sessions started after the change were lower or higher than for the users who started before the change.


Updated C++, iOS SDKs and Unity plug-in to support Player Stats API

We have updated the C++ and iOS SDKs, and the Unity plug-in, all of which now support the Player Stats API, which includes the basic player stats as well as spend and churn predictions. Be sure to check out the sample game and learn more about Play Games Services. You can also get top tips from game developer Auxbrain to help you find success with Google Play game services.

Categories: Programming

How Fabulous and Yummly grew with App Invites

Android Developers Blog - Mon, 02/01/2016 - 22:34

Posted by Laurence Moroney, Developer Advocate

Introduced in May 2015, App Invites is an out-of-the-box solution for conducting app referrals and encouraging sharing. So far, we’ve seen very positive results on how the feature improves app discovery: 52 percent of users discover apps by word of mouth, and 92 percent of users trust recommendations from family and friends. In this post, we’ll share some success stories from companies that have already used App Invites to grow their user base.

Fabulous is a research-based app incubated in Duke University's Center for Advanced Hindsight. The app helps users to embark on a journey to resetting poor habits, replacing them with healthy rituals, with the ultimate goal of improving health and well-being.

Users started taking advantage of App Invites within the app to share their experience with friends and family. App Invites now account for 60 percent of all Fabulous installs from referrals. Sharing clicks also increased by 10 percent once App Invites was adopted. Fabulous also noticed increased user retention, with twice the lifetime value for users who came to the app via App Invites. Finally, Fabulous simplified its user experience, combining SMS and email into a single interface, allowing users to focus on sharing.

Additionally, users acquired via App Invites were found to be twice as likely to stay with the app as users acquired through other channels.

CTO of Fabulous, Amine Laddhari, commented, “It took me only a few hours to implement App Invites versus several days of work when we built our own solution. It was straightforward!”

You can view the full case study from Fabulous here.

Yummly, a food discovery platform that views cooking a meal as a personalized, shareable experience, wanted to expand its user base and generate awareness on the Android platform. It added App Invites so that users could recommend the app to family and friends, with functionality to share specific recipes, dinner ideas or shopping lists.

With App Invites, Yummly found that installation rates were about 60 percent higher than through other sharing channels. Additionally, Yummly was able to take advantage of the seamless integration with Google Analytics. It’s the only share channel with this integration, allowing data such as the number of invites sent, invites accepted, and resulting installs to be tracked accurately.

Melissa Guyre, Product Manager at Yummly, commented, “The App Invites Integration process was seamless. A bonus feature is the excellent tracking tie-in with Google Analytics.”

You can view the full case study from Yummly here.

App Invites is available for Android or iOS, and you can learn how you can build it into your own apps at g.co/appinvites.
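If you want to try it, sending an invite takes only a few lines on Android; in this sketch the strings and the REQUEST_INVITE request code are placeholders.

 Intent intent = new AppInviteInvitation.IntentBuilder("Invite a friend")
         .setMessage("Try this app with me!")
         .setCallToActionText("Install now")
         .build();
 startActivityForResult(intent, REQUEST_INVITE);

 // In onActivityResult, AppInviteInvitation.getInvitationIds(resultCode, data)
 // returns the IDs of the invitations that were sent.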

Categories: Programming

Trends for 2016

Our world is changing faster than ever before.  It can be tough to keep up.  And what you don’t know can sometimes hurt you.

Especially if you get disrupted.

If you want to be a better disruptor vs. be the disrupted, it helps to know what’s going on around the world.  There are amazing people, amazing companies, and amazing discoveries changing the world every day.  Or at least giving it their best shot.

  • You know the Mega-Trends: Cloud, Mobile, Social, and Big Data.
  • You know the Nexus-Of-Forces, where the Mega-Trends (Cloud, Mobile, Social, Big Data) converge around business scenarios.
  • You know the Mega-Trend of Mega-Trends:  Internet-Of-Things (IoT)

But do you know how Virtual Reality is changing the game? 


Disruption is Everywhere

Are you aware of how the breadth and depth of diversity is changing our interactions with the world?  Do you know how “bi-modal” or “dual-speed IT” are really taking shape in the 3rd Era of IT or the 4th Industrial Revolution?

Do you know what you can print now with 3D printers?  (And have you seen the 3D-printed car that can actually drive?  Did you know we have a new land speed record with the help of the Cloud, IoT, and analytics?  And have you seen what driverless cars are up to?)

And what about all of the innovation that’s happening in and around cities? (and maybe a city near you.)

And what’s going on in banking, healthcare, retail, and just about every industry around the world?

Trends for Digital Business Transformation in a Mobile-First, Cloud-First World

Yes, the world is changing, and it’s changing fast.  But there are patterns.  I did my yearly trends post to capture and share some of these trends and insights:

Trends for 2016: The Year of the Bold

Let me warn you now – it’s epic.  It’s not a trivial little blog post of key trends for 2016.  It’s a mega-post, packed full with the ideas, terms, and concepts that are shaping Digital Transformation as we know it.

Even if you just scan the post, you will likely find something you haven’t seen or heard of before.  It’s a bird’s-eye view of many of the big ideas that are changing software and the tech industry as well as what’s changing other industries, and the world around us.

If you are in the game of Digital Business Transformation, you need to know the vocabulary and the big ideas that are influencing the CEOs, CIOs, CDOs (Chief Digital Officers), COOs, CFOs, CISOs (Chief Information Security Officers), CINOs (Chief Innovation Officers), and the business leaders that are funding and driving decisions as they make their Digital Business Transformations and learn how to adapt for our Mobile-First, Cloud-First world.

If you want to be a disruptor, Trends for 2016: The Year of the Bold is a fast way to learn the building blocks of next-generation business in a Digital Economy in a Mobile-First, Cloud-First world.

10 Key Trends for 2016

Here are the 10 key trends at a glance from Trends for 2016: The Year of the Bold to get you started:

  1. Age of the Customer
  2. Beyond Smart Cities
  3. City Innovation
  4. Context is King
  5. Culture is the Critical Path
  6. Cybersecurity
  7. Diversity Finds New Frontiers
  8. Reputation Capital
  9. Smarter Homes
  10. Virtual Reality Gets Real

Perhaps the most interesting trend is how culture is making or breaking companies, and cities, as they transition to a new era of work and life.  It stands out because it acts like a mega-trend: it’s the people-and-process part that goes along with the technology.  As many people are learning, Digital Transformation is a cultural shift, not a technology problem.

Get ready for an epic ride and read Trends for 2016: The Year of the Bold.

If you read nothing else, at least read the section up front titled, “The Year of the Bold” to get a quick taste of some of the amazing things happening to change the globe. 

Who knows, maybe we’ll team up on tackling some of the Global Goals and put a small dent in the universe.

You Might Also Like

10 Big Ideas from Getting Results the Agile Way

10 Personal Productivity Tools from Agile Results

Agile Results for 2016

How To Be a Visionary Leader

The Future of Jobs

The New Competitive Landscape

What Life is Like with Agile Results

Categories: Architecture, Programming

Ari Lamstein Creates Choropleth Maps Easily

Making the Complex Simple - John Sonmez - Mon, 02/01/2016 - 16:00

Ari Lamstein is a graduate of the FREE Blogging Course and one of the many who took all the knowledge and advice seriously. When Ari emailed me about the success of his new product, I knew I had to interview the guy. I’ve been thinking of creating new courses geared towards entrepreneurship and creating online […]

The post Ari Lamstein Creates Choropleth Maps Easily appeared first on Simple Programmer.

Categories: Programming