Software Development Blogs: Programming, Software Testing, Agile, Project Management

Our Answer To the Alert Storm: Introducing Team View Alerts

Xebia Blog - Thu, 07/21/2016 - 11:39
As a Dev or Ops professional it’s hard to focus on the things that really matter. Applications, systems, tools and other environments are generating notifications at a frequency and volume greater than you can cope with. It's a problem for every Dev and Ops professional. Alerts are used to identify trends, spikes or dips

Neo4j: Cypher – Detecting duplicates using relationships

Mark Needham - Wed, 07/20/2016 - 18:32

I’ve been building a graph of computer science papers on and off for a couple of months and now that I’ve got a few thousand loaded in I realised that there are quite a few duplicates.

They’re not duplicates in the sense that there are multiple entries with the same identifier; rather, they have different identifiers but seem to be the same paper!

e.g. there are a couple of papers titled ‘Authentication in the Taos operating system’:


[Screenshots of the two ‘Authentication in the Taos operating system’ entries]

This is the same paper published in two different journals as far as I can tell.

Now in this case it’s quite easy to just do a string similarity comparison of the titles of these papers and realise that they’re identical. I’ve previously used the excellent dedupe library to do this and there’s also an excellent talk from Berlin Buzzwords 2014 where the author uses locality-sensitive hashing to achieve a similar outcome.
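
If you just want a rough feel for how far plain string comparison gets you, here's a minimal sketch using Python's standard difflib (not the dedupe library itself):

from difflib import SequenceMatcher

def title_similarity(a, b):
    # Ratio between 0 and 1 based on the longest matching subsequences
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

print(title_similarity("Authentication in the Taos operating system",
                       "Authentication in the Taos operating system"))  # 1.0
print(title_similarity("Performance of Firefly RPC",
                       "Performance of the Firefly RPC"))               # ~0.93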

However, I was curious whether I could use any of the relationships these papers have to detect duplicates rather than just relying on string matching.

This is what the graph looks like:

[Graph visualisation of the papers and their relationships]

We’ll start by writing a query to see how many common references the different Taos papers have:

MATCH (r:Resource {id: "168640"})-[:REFERENCES]->(other)
WITH r, COLLECT(other) as myReferences
UNWIND myReferences AS reference
OPTIONAL MATCH path = (other)-[:REFERENCES]->(reference)
WITH other, COUNT(path) AS otherReferences, SIZE(myReferences) AS myReferences
WITH other, 1.0 * otherReferences / myReferences AS similarity WHERE similarity > 0.5
RETURN other.id, other.title, similarity
ORDER BY similarity DESC
│ other.id │ other.title                                 │ similarity │
│ 168640   │ Authentication in the Taos operating system │ 1          │
│ 174614   │ Authentication in the Taos operating system │ 1          │

This query:

  • picks one of the Taos papers and finds its references
  • finds other papers which reference those same papers
  • calculates a similarity score based on how many common references they have
  • returns papers that have more than 50% of the same references with the most similar ones at the top

I tried it with other papers to see how it fared:

Performance of Firefly RPC

│ other.id │ other.title                                                       │ similarity         │
│ 74859    │ Performance of Firefly RPC                                        │ 1                  │
│ 77653    │ Performance of the Firefly RPC                                    │ 0.8333333333333334 │
│ 110815   │ The X-Kernel: An Architecture for Implementing Network Protocols │ 0.6666666666666666 │
│ 96281    │ Experiences with the Amoeba distributed operating system         │ 0.6666666666666666 │
│ 74861    │ Lightweight remote procedure call                                 │ 0.6666666666666666 │
│ 106985   │ The interaction of architecture and operating system design      │ 0.6666666666666666 │
│ 77650    │ Lightweight remote procedure call                                 │ 0.6666666666666666 │

Authentication in distributed systems: theory and practice

│ other.id │ other.title                                                 │ similarity         │
│ 121160   │ Authentication in distributed systems: theory and practice │ 1                  │
│ 138874   │ Authentication in distributed systems: theory and practice │ 0.9090909090909091 │

Sadly it’s not as simple as finding 100% matches on references! I expect the later revisions of a paper added more content and therefore additional references.

What if we look for author similarity as well?

MATCH (r:Resource {id: "121160"})-[:REFERENCES]->(other)
WITH r, COLLECT(other) as myReferences
UNWIND myReferences AS reference
OPTIONAL MATCH path = (other)-[:REFERENCES]->(reference)
WITH r, other, COUNT(path) AS otherReferences, SIZE(myReferences) AS myReferences
WITH r, other, 1.0 * otherReferences / myReferences AS referenceSimilarity
WHERE referenceSimilarity > 0.5
MATCH (r)<-[:AUTHORED]-(author)
WITH r, other, referenceSimilarity, COLLECT(author) AS myAuthors
UNWIND myAuthors AS author
OPTIONAL MATCH path = (other)<-[:AUTHORED]-(author)
WITH other, referenceSimilarity, COUNT(path) AS otherAuthors, SIZE(myAuthors) AS myAuthors
WITH other, referenceSimilarity, 1.0 * otherAuthors / myAuthors AS authorSimilarity
WHERE authorSimilarity > 0.5
RETURN other.id, other.title, referenceSimilarity, authorSimilarity
ORDER BY (referenceSimilarity + authorSimilarity) DESC
│ other.id │ other.title                                                 │ referenceSimilarity │ authorSimilarity │
│ 121160   │ Authentication in distributed systems: theory and practice │ 1                   │ 1                │
│ 138874   │ Authentication in distributed systems: theory and practice │ 0.9090909090909091  │ 1                │

│ other.id │ other.title                    │ referenceSimilarity │ authorSimilarity │
│ 74859    │ Performance of Firefly RPC     │ 1                   │ 1                │
│ 77653    │ Performance of the Firefly RPC │ 0.8333333333333334  │ 1                │

I’m sure I could find some other papers where neither of these similarities worked well but it’s an interesting start.

I think the next step is to build up a training set of pairs of documents that are and aren’t similar to each other. We could then train a classifier to determine whether two documents are identical.
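
As a rough sketch of that idea (hypothetical training data and scikit-learn assumed; not what this post actually does), the two similarity scores could feed a simple classifier:

from sklearn.linear_model import LogisticRegression

# Hypothetical labelled pairs: (referenceSimilarity, authorSimilarity) -> duplicate?
X = [
    [1.00, 1.00],  # same paper published in two journals
    [0.91, 1.00],  # same paper, revised reference list
    [0.67, 0.00],  # related paper by different authors
    [0.10, 0.00],  # unrelated paper
]
y = [1, 1, 0, 0]

model = LogisticRegression()
model.fit(X, y)

# Probability that a new candidate pair is a duplicate
print(model.predict_proba([[0.83, 1.0]])[0][1])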

But that’s for another day!

Categories: Programming

Android Developer Story: StoryToys finds success in the β€˜Family’ section on Google Play

Android Developers Blog - Wed, 07/20/2016 - 17:21

Posted by Lily Sheringham, Google Play team

Based in Dublin, Ireland, StoryToys is a leading publisher of interactive books and games for children. Like most kids’ app developers, they faced the challenges of engaging with the right audiences to get their content discovered. Since the launch of the Family section on Google Play, StoryToys has experienced an uplift of 270% in revenue and an increase of 1300% in downloads.

Hear Emmet O’Neill, Chief Product Officer, and Gavin Barrett, Commercial Director, discuss how the Family section creates a trusted and creative space for families to find new content. Also hear how beta testing, localized pricing and more have allowed StoryToys’ flagship app, My Very Hungry Caterpillar, to significantly increase engagement and revenue.

Learn more about Google Play for Families and get the Playbook for Developers app to stay up-to-date with more features and best practices that will help you grow a successful business on Google Play.

Categories: Programming

Strictly Enforced Verified Boot with Error Correction

Android Developers Blog - Wed, 07/20/2016 - 01:51

Posted by Sami Tolvanen, Software Engineer


Android uses multiple layers of protection to keep users safe. One of these layers is verified boot, which improves security by using cryptographic integrity checking to detect changes to the operating system. Android has alerted users about system integrity since Marshmallow, but starting with devices first shipping with Android 7.0, we require verified boot to be strictly enforcing. This means that a device with a corrupt boot image or verified partition will not boot, or will boot in a limited capacity with user consent. Such strict checking, though, means that non-malicious data corruption, which previously would have been less visible, can now have a greater impact on functionality.

By default, Android verifies large partitions using the dm-verity kernel driver, which divides the partition into 4 KiB blocks and verifies each block against a signed hash tree as it is read. A single corrupted byte will therefore result in an entire block becoming inaccessible when dm-verity is in enforcing mode, leading to the kernel returning EIO errors to userspace on verified partition data access.
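
As a rough conceptual sketch (this is not the kernel implementation, and a flat list of per-block hashes stands in for the real signed hash tree), block-level verification amounts to hashing each 4 KiB block on read and comparing it with a trusted, precomputed hash:

import hashlib

BLOCK_SIZE = 4096  # dm-verity verifies data in 4 KiB blocks

def read_verified_block(partition, block_index, trusted_hashes):
    # 'partition' is an open binary file object; read one block and check its hash.
    partition.seek(block_index * BLOCK_SIZE)
    block = partition.read(BLOCK_SIZE)
    if hashlib.sha256(block).hexdigest() != trusted_hashes[block_index]:
        # In enforcing mode this is roughly where the kernel would return EIO to userspace.
        raise IOError("block %d failed verification" % block_index)
    return block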

This post describes our work in improving dm-verity robustness by introducing forward error correction (FEC), and explains how this allowed us to make the operating system more resistant to data corruption. These improvements are available to any device running Android 7.0 and this post reflects the default implementation in AOSP that we ship on our Nexus devices.

Error-correcting codes

Using forward error correction, we can detect and correct errors in source data by shipping redundant encoding data generated using an error-correcting code. The exact number of errors that can be corrected depends on the code used and the amount of space allocated for the encoding data.

Reed-Solomon is one of the most commonly used error-correcting code families, and is readily available in the Linux kernel, which makes it an obvious candidate for dm-verity. These codes can correct up to ⌊t/2⌋ unknown errors and up to t known errors, also called erasures, when t encoding symbols are added.

A typical RS(255, 223) code that generates 32 bytes of encoding data for every 223 bytes of source data can correct up to 16 unknown errors in each 255 byte block. However, using this code results in ~15% space overhead, which is unacceptable for mobile devices with limited storage. We can decrease the space overhead by sacrificing error correction capabilities. An RS(255, 253) code can correct only one unknown error, but also has an overhead of only 0.8%.
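
To make those overhead figures concrete, here is the arithmetic behind them (a quick sanity check, not AOSP code):

# Overhead of an RS(n, k) code relative to the source data:
# (n - k) parity bytes are added for every k source bytes.
def rs_overhead(n, k):
    return (n - k) / float(k)

print(rs_overhead(255, 223))  # ~0.143, i.e. roughly 15%; corrects up to 16 unknown errors
print(rs_overhead(255, 253))  # ~0.008, i.e. roughly 0.8%; corrects only 1 unknown error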

An additional complication is that block-based storage corruption often occurs for an entire block and sometimes spans multiple consecutive blocks. Because Reed-Solomon is only able to recover from a limited number of corrupted bytes within relatively short encoded blocks, a naive implementation is not going to be very effective without a huge space overhead.

Recovering from consecutive corrupted blocks

In the changes we made to dm-verity for Android 7.0, we used a technique called interleaving that allows us to recover not only from the loss of an entire 4 KiB source block, but also from the loss of several consecutive blocks, while significantly reducing the space overhead required to achieve usable error correction capabilities compared to the naive implementation.

Efficient interleaving means mapping each byte in a block to a separate Reed-Solomon code, with each code covering N bytes across the corresponding N source blocks. A trivial interleaving where each code covers a consecutive sequence of N blocks already makes it possible for us to recover from the corruption of up to (255 - N) / 2 blocks, which for RS(255, 223) would mean 64 KiB, for example.

An even better solution is to maximize the distance between the bytes covered by the same code by spreading each code over the entire partition, thereby increasing the maximum number of consecutive corrupted blocks an RS(255, N) code can handle on a partition consisting of T blocks to ⌈T/N⌉ × (255 - N) / 2.

Interleaving with distance D and block size B.

An additional benefit of interleaving, when combined with the integrity verification already performed by dm-verity, is that we can tell exactly where the errors are in each code. Because each byte of the code covers a different source block, and we can verify the integrity of each block using the existing dm-verity metadata, we know which of the bytes contain errors. Being able to pinpoint erasure locations allows us to effectively double our error correction performance to at most ⌈T/N⌉ × (255 - N) consecutive blocks.

For a ~2 GiB partition with 524256 4 KiB blocks and RS(255, 253), the maximum distance between the bytes of a single code is 2073 blocks. Because each code can recover from two erasures, using this method of interleaving allows us to recover from up to 4146 consecutive corrupted blocks (~16 MiB). Of course, if the encoding data itself gets corrupted or we lose more than two of the blocks covered by any single code, we cannot recover anymore.
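
The numbers in that example follow directly from the formula above; a quick back-of-the-envelope check (not AOSP code):

import math

T = 524256           # 4 KiB blocks on a ~2 GiB system partition
n, k = 255, 253      # RS(255, 253): k data bytes plus (n - k) parity bytes per codeword

distance = int(math.ceil(T / float(k)))   # blocks between bytes covered by the same code
erasures = n - k                          # correctable blocks per code when locations are known

print(distance)                           # 2073
print(distance * erasures)                # 4146 consecutive blocks
print(distance * erasures * 4 / 1024.0)   # ~16.2 MiB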

While making error correction feasible for block-based storage, interleaving does have the side effect of making decoding slower, because instead of reading a single block, we need to read multiple blocks spread across the partition to recover from an error. Fortunately, this is not a huge issue when combined with dm-verity and solid-state storage as we only need to resort to decoding if a block is actually corrupted, which still is rather rare, and random access reads are relatively fast even if we have to correct errors.


Strictly enforced verified boot improves security, but can also reduce reliability by increasing the impact of disk corruption that may occur on devices due to software bugs or hardware issues.

The new error correction feature we developed for dm-verity makes it possible for devices to recover from the loss of up to 16-24 MiB of consecutive blocks anywhere on a typical 2-3 GiB system partition with only 0.8% space overhead and no performance impact unless corruption is detected. This improves the security and reliability of devices running Android 7.0.

Categories: Programming

SE-Radio Episode 263: Camille Fournier on Real-World Distributed Systems

Stefan Tilkov talks to Camille Fournier about the challenges developers face when building distributed systems. Topics include the definition of a distributed system, whether developers can avoid building them at all, and what changes occur once they choose to. They also talk about the role distributed consensus tools such as Apache Zookeeper play, and whether […]
Categories: Programming

Final Developer Preview before Android 7.0 Nougat begins rolling out

Android Developers Blog - Mon, 07/18/2016 - 18:45

Posted by Dave Burke, VP of Engineering

As we close in on the public rollout of Android 7.0 Nougat to devices later this summer, today we’re releasing Developer Preview 5, the last milestone of this preview series. Last month’s Developer Preview included the final APIs for Nougat; this preview gives developers the near-final system updates for all of the supported preview devices, helping you get your app ready for consumers.

Here’s a quick rundown of what’s included in the final Developer Preview of Nougat:

  • System images for Nexus and other preview devices
  • An emulator that you can use for doing the final testing of your apps to make sure they’re ready
  • The final N APIs (API level 24) and latest system behaviors and UI
  • The latest bug fixes and optimizations across the system and in preinstalled apps

Working with this latest Developer Preview, you should make sure your app handles all of the system behavior changes in Android N, like Doze on the Go, background optimizations, screen zoom, permissions changes, and more. Plus, you can take advantage of new developer features in Android N such as Multi-window support, Direct Reply and other notifications enhancements, Direct boot, new emojis and more.

Publish your apps to alpha, beta or production channels in Google Play

After testing your apps with Developer Preview 5 you should publish the updates to Google Play soon. We recommend compiling against, and optionally targeting, API 24 and then publishing to your alpha, beta, or production channels in the Google Play Developer Console. A great strategy is to use Google Play’s beta testing feature to get early feedback from a small group of users, including Developer Preview users, and then do a staged rollout as you release the updated app to all users.

How to get Developer Preview 5

If you are already enrolled in the Android Beta program, your devices will get the Developer Preview 5 update right away; no action is needed on your part. If you aren’t yet enrolled in Android Beta, the easiest way to get started is by visiting android.com/beta and opting in your eligible Android phone or tablet -- you’ll soon receive this preview update over-the-air. As always, you can also download and flash this update manually. The Nougat Developer Preview is available for Nexus 6, Nexus 5X, Nexus 6P, Nexus 9, and Pixel C devices, as well as General Mobile 4G [Android One] devices.

Thanks so much for all of your feedback so far. Please continue to share feedback or requests either in the N Developer Preview issue tracker, N Preview Developer community, or Android Beta community as we work towards the consumer release later this summer. Android Nougat is almost here!

Also, the Android engineering team will host a Reddit AMA on r/androiddev to answer all your technical questions about the platform tomorrow, July 19 from 12-2 PM (Pacific Time). We look forward to addressing your questions!

Categories: Programming

Announcing the Google Play Indie Games Festival in San Francisco, Sept. 24

Android Developers Blog - Thu, 07/14/2016 - 17:01
Posted by Jamil Moledina, Google Play, Games Strategic Lead

If you’re an indie game developer, you know that games are a powerful medium of expression of art, whimsy, and delight. Being on Google Play can help you reach over a billion users and build a successful, global business. That’s why we recently introduced programs, like the Indie Corner, to help more gamers discover your works of art.

To further celebrate and showcase the passion and innovation of indie game developers, we’re hosting the Google Play Indie Games Festival at the Terra Gallery in San Francisco, on September 24.

This is a great opportunity for you to showcase your indie title to the public, increase your network, and compete to win great prizes, such as Tango devices, free tickets for Google I/O 2017, and Google ad campaign support. Admission will be free and players will get the chance to play and vote on their favorites.

If you’re interested in showcasing your game, we’re accepting submissions now through August 14. We’ll then select high-quality games that are both innovative and fun for the festival. Submissions are open to US and Canadian developers with 15 or fewer full-time staff. Only games published on or after January 1, 2016, or those to be published by December 31, 2016, are eligible. See complete rules.

We encourage virtual reality and augmented reality game submissions that use the Google VR SDK and the Tango Tablet Development Kit.

At the end of August, we’ll announce the group of indies to be featured at the festival.

You can learn more about the event here. We can’t wait to see what innovative and fun experiences you share with us!

Categories: Programming

Android Wear 2.0 Developer Preview 2

Android Developers Blog - Tue, 07/12/2016 - 18:26

Posted by Hoi Lam, Android Wear Developer Advocate

At Google I/O 2016, we launched the Android Wear 2.0 Developer Preview, which gives developers early access to the next major release of Android Wear. Since I/O, feedback from the developer community has helped us identify bugs and shape our product direction. Thank you!

Today, we are releasing the second developer preview with new functionalities and bug fixes. Prior to the consumer release, we plan to release additional updates, so please send us your feedback early and often. Please keep in mind that this preview is a work in progress, and is not yet intended for daily use.

What’s new?
  • Platform API 24 - We have incremented the Android Platform API version number to 24 to match Nougat. You can now update your Android Wear 2.0 Preview project’s compileSdkVersion to API 24, and we recommend that you also update targetSdkVersion to API 24.
  • Wearable Drawers Enhancements - We launched the wearable drawers as part of the Android Wear 2.0 Preview 1, along with UX guidelines on how to best integrate the navigation drawer and action drawer in your Android Wear app. In Preview 2, we have added additional support for wearable drawer peeking, to make it easier for users to access these drawers as they scroll. Other UI improvements include automatic peek view and navigation drawer closure and showing the first action in WearableActionDrawer’s peek view. For developers that want to make custom wearable drawers, we’ve added peek_view and drawer_content attributes to WearableDrawerView. And finally, navigation drawer contents can now be updated by calling notifyDataSetChanged.
  • Wrist Gestures: Since last year, users have been able to scroll through the notification stream via wrist gestures. We have now opened this system to developers to use within their applications. This helps improve single hand usage, for when your users need their other hand to hold onto their shopping or their kids. See the code sample below to get started with gestures in your app:
 public class MainActivity extends Activity {
   @Override /* KeyEvent.Callback */
   public boolean onKeyDown(int keyCode, KeyEvent event) {
     switch (keyCode) {
       case KeyEvent.KEYCODE_NAVIGATE_NEXT:
         // The user performed the "next" wrist gesture.
         Log.d(TAG, "Next");
         return true;
       case KeyEvent.KEYCODE_NAVIGATE_PREVIOUS:
         // The user performed the "previous" wrist gesture.
         Log.d(TAG, "Previous");
         return true;
       default:
         // If you did not handle, then let it be handled by the next possible element as deemed by
         // Activity.
         return super.onKeyDown(keyCode, event);
     }
   }
 }

Get started and give us feedback!

The Android Wear 2.0 Developer Preview includes an updated SDK with tools and system images for testing on the official Android emulator, the LG Watch Urbane 2nd Edition LTE, and the Huawei Watch.

To get started, follow these steps:

  1. Take a video tour of the Android Wear 2.0 developer preview
  2. Update to Android Studio v2.1.1 or later
  3. Visit the Android Wear 2.0 Developer Preview site for downloads and documentation
  4. Get the emulator system images through the SDK Manager or download the device system images from the developer preview downloads page
  5. Test your app with your supported device or emulator
  6. Give us feedback

We will update this developer preview over the next few months based on your feedback. The sooner we hear from you, the more we can include in the final release, so don't be shy!

Categories: Programming

SE Radio Episode 262: Software Quality with Bill Curtis

Sven Johann talks with Bill Curtis about Software Quality. They start with what software quality is, then discuss examples of systems which failed to achieve their quality goals (e.g. ObamaCare) and the consequences. They then move on to the role of architecture in the overall quality of the system and how to achieve it […]
Categories: Programming

Python: Scraping elements relative to each other with BeautifulSoup

Mark Needham - Mon, 07/11/2016 - 07:01

Last week we hosted a Game of Thrones based intro to Cypher at the Women Who Code London meetup and in preparation had to scrape the wiki to build a dataset.

I’ve built lots of datasets this way and it’s a painless experience as long as the pages make liberal use of CSS classes and/or IDs.

Unfortunately the Game of Thrones wiki doesn’t really do that, so I had to find another way to extract the data I wanted – extracting elements based on their position relative to more prominent elements on the page.

For example, I wanted to extract Arya Stark‘s allegiances which look like this on the page:

[Screenshot of the ‘Allegiance’ section of Arya Stark’s wiki page]

We don’t have a direct route to her allegiances but we do have an indirect path via the h3 element with the text ‘Allegiance’.

The following code gets us the ‘Allegiance’ element:

from bs4 import BeautifulSoup
file_name = "Arya_Stark"
wikia = BeautifulSoup(open("data/wikia/characters/{0}".format(file_name), "r"), "html.parser")
allegiance_element = [tag for tag in wikia.find_all('h3') if tag.text == "Allegiance"]
> print allegiance_element
[<h3 class="pi-data-label pi-secondary-font">Allegiance</h3>]

Now we need to work out the relative position of the div containing the houses. It’s inside the same parent div so I thought it’d probably be the next sibling:

next_element = allegiance_element[0].next_sibling
> print next_element

Nope. Nothing! Hmmm, wonder why:

> print next_element.name, type(next_element)
None <class 'bs4.element.NavigableString'>

Ah, empty string. Maybe it’s the one after that?

next_element = allegiance_element[0].next_sibling.next_sibling
> print next_element.contents
[<a href="/wiki/House_Stark" title="House Stark">House Stark</a>, <br/>, <a href="/wiki/Faceless_Men" title="Faceless Men">Faceless Men</a>, u' (Formerly)']

Hoorah! After this it became a case of working out how the text was structured and pulling out what I wanted.

The code I ended up with is on github if you want to recreate it yourself.

Categories: Programming

Neo4j 3.0 Drivers – Failed to save the server ID and the certificate received from the server

Mark Needham - Mon, 07/11/2016 - 06:21

I’ve been using the Neo4j Java Driver on various local databases over the past week and ran into the following certificate problem a few times:

org.neo4j.driver.v1.exceptions.ClientException: Unable to process request: General SSLEngine problem
	at org.neo4j.driver.internal.connector.socket.SocketClient.start(SocketClient.java:88)
	at org.neo4j.driver.internal.connector.socket.SocketConnection.<init>(SocketConnection.java:63)
	at org.neo4j.driver.internal.connector.socket.SocketConnector.connect(SocketConnector.java:52)
	at org.neo4j.driver.internal.pool.InternalConnectionPool.acquire(InternalConnectionPool.java:113)
	at org.neo4j.driver.internal.InternalDriver.session(InternalDriver.java:53)
Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
	at sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1431)
	at sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:535)
	at sun.security.ssl.SSLEngineImpl.writeAppRecord(SSLEngineImpl.java:1214)
	at sun.security.ssl.SSLEngineImpl.wrap(SSLEngineImpl.java:1186)
	at javax.net.ssl.SSLEngine.wrap(SSLEngine.java:469)
	at org.neo4j.driver.internal.connector.socket.TLSSocketChannel.wrap(TLSSocketChannel.java:270)
	at org.neo4j.driver.internal.connector.socket.TLSSocketChannel.runHandshake(TLSSocketChannel.java:131)
	at org.neo4j.driver.internal.connector.socket.TLSSocketChannel.<init>(TLSSocketChannel.java:95)
	at org.neo4j.driver.internal.connector.socket.TLSSocketChannel.<init>(TLSSocketChannel.java:77)
	at org.neo4j.driver.internal.connector.socket.TLSSocketChannel.<init>(TLSSocketChannel.java:70)
	at org.neo4j.driver.internal.connector.socket.SocketClient$ChannelFactory.create(SocketClient.java:251)
	at org.neo4j.driver.internal.connector.socket.SocketClient.start(SocketClient.java:75)
	... 14 more
Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
	at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
	at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1728)
	at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:304)
	at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:296)
	at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1497)
	at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:212)
	at sun.security.ssl.Handshaker.processLoop(Handshaker.java:979)
	at sun.security.ssl.Handshaker$1.run(Handshaker.java:919)
	at sun.security.ssl.Handshaker$1.run(Handshaker.java:916)
	at java.security.AccessController.doPrivileged(Native Method)
	at sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1369)
	at org.neo4j.driver.internal.connector.socket.TLSSocketChannel.runDelegatedTasks(TLSSocketChannel.java:142)
	at org.neo4j.driver.internal.connector.socket.TLSSocketChannel.unwrap(TLSSocketChannel.java:203)
	at org.neo4j.driver.internal.connector.socket.TLSSocketChannel.runHandshake(TLSSocketChannel.java:127)
	... 19 more
Caused by: java.security.cert.CertificateException: Unable to connect to neo4j at `localhost:10003`, because the certificate the server uses has changed. This is a security feature to protect against man-in-the-middle attacks.
If you trust the certificate the server uses now, simply remove the line that starts with `localhost:10003` in the file `/Users/markneedham/.neo4j/known_hosts`.
The old certificate saved in file is:
The New certificate received is:
	at org.neo4j.driver.internal.connector.socket.TrustOnFirstUseTrustManager.checkServerTrusted(TrustOnFirstUseTrustManager.java:153)
	at sun.security.ssl.AbstractTrustManagerWrapper.checkServerTrusted(SSLContextImpl.java:936)
	at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1484)
	... 28 more

I got a bit lazy and just nuked the file it mentions in the error message – /Users/markneedham/.neo4j/known_hosts – which led to this error the next time I called the driver in my application:

Failed to save the server ID and the certificate received from the server to file /Users/markneedham/.neo4j/known_hosts.
Server ID: localhost:10003
Received cert:

I recreated the file with no content and tried again and it worked fine. Alternatively we can choose to turn off encryption when working with local databases and avoid the issue:

Config config = Config.build().withEncryptionLevel( Config.EncryptionLevel.NONE ).toConfig();
try ( Driver driver = GraphDatabase.driver( "bolt://localhost:7687", config );
      Session session = driver.session() )
{
    // use the driver
}
Categories: Programming

R: Sentiment analysis of morning pages

Mark Needham - Sat, 07/09/2016 - 07:36

A couple of months ago I came across a cool blog post by Julia Silge where she runs a sentiment analysis algorithm over her tweet stream to see how her tweet sentiment has varied over time.

I wanted to give it a try but couldn’t figure out how to get a dump of my tweets so I decided to try it out on the text from my morning pages writing which I’ve been experimenting with for a few months.

Here’s an explanation of morning pages if you haven’t come across it before:

Morning Pages are three pages of longhand, stream of consciousness writing, done first thing in the morning.

*There is no wrong way to do Morning Pages* – they are not high art.

They are not even “writing.” They are about anything and everything that crosses your mind – and they are for your eyes only.

Morning Pages provoke, clarify, comfort, cajole, prioritize and synchronize the day at hand. Do not over-think Morning Pages: just put three pages of anything on the page…and then do three more pages tomorrow.

Most of my writing is complete gibberish but I thought it’d be fun to see how my mood changes over time and see if it identifies any peaks or troughs in sentiment that I could then look into further.

I’ve got one file per day so we’ll start by building a data frame containing the text, one row per day:

files = list.files(root)
df = data.frame(file = files, stringsAsFactors=FALSE)
df$fullPath = paste(root, df$file, sep = "/")
df$text = sapply(df$fullPath, get_text_as_string)

We end up with a data frame with 3 fields:

> names(df)
[1] "file"     "fullPath" "text"

Next we’ll run the sentiment analysis function – syuzhet#get_nrc_sentiment – over the data frame and get a score for each type of sentiment for each entry:

get_nrc_sentiment(df$text) %>% head()
  anger anticipation disgust fear joy sadness surprise trust negative positive
1     7           14       5    7   8       6        6    12       14       27
2    11           12       2   13   9      10        4    11       22       24
3     6           12       3    8   7       7        5    13       16       21
4     5           17       4    7  10       6        7    13       16       37
5     4           13       3    7   7       9        5    14       16       25
6     7           11       5    7   8       8        6    15       16       26

Now we’ll merge these columns into our original data frame:

df = cbind(df, get_nrc_sentiment(df$text))
df$date = ymd(sapply(df$file, function(file) unlist(strsplit(file, "[.]"))[1]))
df %>% select(-text, -fullPath, -file) %>% head()
  anger anticipation disgust fear joy sadness surprise trust negative positive       date
1     7           14       5    7   8       6        6    12       14       27 2016-01-02
2    11           12       2   13   9      10        4    11       22       24 2016-01-03
3     6           12       3    8   7       7        5    13       16       21 2016-01-04
4     5           17       4    7  10       6        7    13       16       37 2016-01-05
5     4           13       3    7   7       9        5    14       16       25 2016-01-06
6     7           11       5    7   8       8        6    15       16       26 2016-01-07

Finally we can build some ‘sentiment over time’ charts like Julia has in her post:

posnegtime <- df %>% 
  group_by(date = cut(date, breaks="1 week")) %>%
  summarise(negative = mean(negative), positive = mean(positive)) %>% 
  melt
names(posnegtime) <- c("date", "sentiment", "meanvalue")
posnegtime$sentiment = factor(posnegtime$sentiment,levels(posnegtime$sentiment)[c(2,1)])
ggplot(data = posnegtime, aes(x = as.Date(date), y = meanvalue, group = sentiment)) +
  geom_line(size = 2.5, alpha = 0.7, aes(color = sentiment)) +
  geom_point(size = 0.5) +
  ylim(0, NA) + 
  scale_colour_manual(values = c("springgreen4", "firebrick3")) +
  theme(legend.title=element_blank(), axis.title.x = element_blank()) +
  scale_x_date(breaks = date_breaks("1 month"), labels = date_format("%b %Y")) +
  ylab("Average sentiment score") + 
  ggtitle("Sentiment Over Time")

[Chart: average positive and negative sentiment scores over time]

So overall it seems like my writing displays more positive sentiment than negative which is nice to know. The chart shows a rolling one week average and there isn’t a single week where there’s more negative sentiment than positive.

I thought it’d be fun to drill into the highest negative and positive days to see what was going on there:

> df %>% filter(negative == max(negative)) %>% select(date)
1 2016-03-19
> df %>% filter(positive == max(positive)) %>% select(date)
1 2016-01-05
2 2016-06-20

On the 19th March I was really frustrated because my boiler had broken down and I had to buy a new one – I’d completely forgotten how annoyed I was, so thanks sentiment analysis for reminding me!

I couldn’t find anything particularly positive on the 5th January or 20th June. The 5th January was the day after my birthday so perhaps I was happy about that but I couldn’t see any particular evidence that was the case.

Playing around with the get_nrc_sentiment function it does seem to identify positive sentiment when I wouldn’t say there is any. For example, here are some example sentences from my writing today:

> get_nrc_sentiment("There was one section that I didn't quite understand so will have another go at reading that.")
  anger anticipation disgust fear joy sadness surprise trust negative positive
1     0            0       0    0   0       0        0     0        0        1
> get_nrc_sentiment("Bit of a delay in starting my writing for the day...for some reason was feeling wheezy again.")
  anger anticipation disgust fear joy sadness surprise trust negative positive
1     2            1       2    2   1       2        1     1        2        2

I don’t think there’s any positive sentiment in either of those sentences but the function claims 3 bits of positive sentiment! It would be interesting to see if I fare any better with Stanford’s sentiment extraction tool which you can use with syuzhet but requires a bit of setup first.

I’ll give that a try next but in terms of getting an overview of my mood I thought I might get a better picture if I looked for the difference between positive and negative sentiment rather than absolute values.

The following code does the trick:

difftime <- df %>% 
  group_by(date = cut(date, breaks="1 week")) %>%
  summarise(diff = mean(positive) - mean(negative))
ggplot(data = difftime, aes(x = as.Date(date), y = diff)) +
  geom_line(size = 2.5, alpha = 0.7) +
  geom_point(size = 0.5) +
  ylim(0, NA) + 
  scale_colour_manual(values = c("springgreen4", "firebrick3")) +
  theme(legend.title=element_blank(), axis.title.x = element_blank()) +
  scale_x_date(breaks = date_breaks("1 month"), labels = date_format("%b %Y")) +
  ylab("Average sentiment difference score") + 
  ggtitle("Sentiment Over Time")
[Chart: average sentiment difference score over time]

This one identifies peak happiness in mid January/February. We can find the peak day for this measure as well:

> df %>% mutate(diff = positive - negative) %>% filter(diff == max(diff)) %>% select(date)
1 2016-02-25

Or if we want to see the individual scores:

> df %>% mutate(diff = positive - negative) %>% filter(diff == max(diff)) %>% select(-text, -file, -fullPath)
  anger anticipation disgust fear joy sadness surprise trust negative positive       date diff
1     0           11       2    3   7       1        6     6        3       31 2016-02-25   28

After reading through the entry for this day I’m wondering if the individual pieces of sentiment might be more interesting than the positive/negative score.

On the 25th February I was:

  • quite excited about reading a distributed systems book I’d just bought (I know?!)
  • thinking about how to apply the tag clustering technique to meetup topics
  • preparing my submission to PyData London and thinking about what was gonna go in it
  • thinking about the soak testing we were about to start doing on our project

Each of those is a type of anticipation so it makes sense that this day scores highly. I looked through some other days which specifically rank highly for anticipation and couldn’t figure out what I was anticipating so even this is a bit hit and miss!

I have a few avenues to explore further but if you have any other ideas for what I can try next let me know in the comments.

Categories: Programming

Changes to Trusted Certificate Authorities in Android Nougat

Android Developers Blog - Fri, 07/08/2016 - 18:41

Posted by Chad Brubaker, Android Security team

In Android Nougat, we’ve changed how Android handles trusted certificate authorities (CAs) to provide safer defaults for secure app traffic. Most apps and users should not be affected by these changes or need to take any action. The changes include:

  • Safe and easy APIs to trust custom CAs.
  • Apps that target API Level 24 and above no longer trust user or admin-added CAs for secure connections, by default.
  • All devices running Android Nougat offer the same standardized set of system CAs, with no device-specific customizations.

For more details on these changes and what to do if you’re affected by them, read on.

Safe and easy APIs

Apps have always been able to customize which certificate authorities they trust. However, we saw apps making mistakes due to the complexities of the Java TLS APIs. To address this we improved the APIs for customizing trust.

User-added CAs

Protection of all application data is a key goal of the Android application sandbox. Android Nougat changes how applications interact with user- and admin-supplied CAs. By default, apps that target API level 24 will, by design, not honor such CAs unless the app explicitly opts in. This safe-by-default setting reduces application attack surface and encourages consistent handling of network and file-based application data.

Customizing trusted CAs

Customizing the CAs your app trusts on Android Nougat is easy using the Network Security Config. Trust can be specified across the whole app or only for connections to certain domains, as needed. Below are some examples for trusting a custom or user-added CA, in addition to the system CAs. For more examples and details, see the full documentation.

Trusting custom CAs for debugging

To allow your app to trust custom CAs only for local debugging, include something like this in your Network Security Config. The CAs will only be trusted while your app is marked as debuggable.

<debug-overrides>
    <trust-anchors>
        <!-- Trust user added CAs while debuggable only -->
        <certificates src="user" />
    </trust-anchors>
</debug-overrides>
Trusting custom CAs for a domain

To allow your app to trust custom CAs for a specific domain, include something like this in your Network Security Config.

<domain-config>
    <domain includeSubdomains="true">internal.example.com</domain>
    <trust-anchors>
        <!-- Only trust the CAs included with the app
             for connections to internal.example.com -->
        <certificates src="@raw/cas" />
    </trust-anchors>
</domain-config>
Trusting user-added CAs for some domains

To allow your app to trust user-added CAs for multiple domains, include something like this in your Network Security Config.

<domain-config>
    <domain includeSubdomains="true">userCaDomain.com</domain>
    <domain includeSubdomains="true">otherUserCaDomain.com</domain>
    <trust-anchors>
        <!-- Trust preinstalled CAs -->
        <certificates src="system" />
        <!-- Additionally trust user added CAs -->
        <certificates src="user" />
    </trust-anchors>
</domain-config>
Trusting user-added CAs for all domains except some

To allow your app to trust user-added CAs for all domains, except for those specified, include something like this in your Network Security Config.

<base-config>
    <trust-anchors>
        <!-- Trust preinstalled CAs -->
        <certificates src="system" />
        <!-- Additionally trust user added CAs -->
        <certificates src="user" />
    </trust-anchors>
</base-config>
<domain-config>
    <domain includeSubdomains="true">sensitive.example.com</domain>
    <trust-anchors>
        <!-- Only allow sensitive content to be exchanged
             with the real server and not any user or
             admin configured MiTMs -->
        <certificates src="system" />
    </trust-anchors>
</domain-config>
Trusting user-added CAs for all secure connections

To allow your app to trust user-added CAs for all secure connections, add this in your Network Security Config.

<base-config>
    <trust-anchors>
        <!-- Trust preinstalled CAs -->
        <certificates src="system" />
        <!-- Additionally trust user added CAs -->
        <certificates src="user" />
    </trust-anchors>
</base-config>
Standardized set of system-trusted CAs

To provide a more consistent and more secure experience across the Android ecosystem, beginning with Android Nougat, compatible devices trust only the standardized system CAs maintained in AOSP.

Previously, the set of preinstalled CAs bundled with the system could vary from device to device. This could lead to compatibility issues when some devices did not include CAs that apps needed for connections, as well as potential security issues if CAs that did not meet our security requirements were included on some devices.

What if I have a CA I believe should be included on Android?

First, be sure that your CA needs to be included in the system. The preinstalled CAs are only for CAs that meet our security requirements because they affect the secure connections of most apps on the device. If you need to add a CA for connecting to hosts that use that CA, you should instead customize your apps and services that connect to those hosts. For more information, see the Customizing trusted CAs section above.

If you operate a CA that you believe should be included in Android, first complete the Mozilla CA Inclusion Process and then file a feature request against Android to have the CA added to the standardized set of system CAs.

Categories: Programming

Verbal Aikido for Product Managers

Xebia Blog - Sun, 07/03/2016 - 13:41
"Well eh ok, I guess so" mumbled the student in the training exercise where he was practicing how to say no to feature gluttony. I decided to give the class an additional exercise to awaken their inner diplomat. β€œDiplomacy is the art of telling people to go to hell in such a way that they

SQLite and Android N

Eric.Weblog() - Eric Sink - Wed, 06/15/2016 - 19:00

The upcoming release of Android N is going to cause problems for many apps that use SQLite. In some cases, these problems include an increased risk of data corruption.


SQLite is an awesome and massively popular database library. It is used every day by billions of people. If you are keeping a list of the Top Ten Coolest Software Projects Ever, SQLite should be on the list.

Many mobile apps use SQLite in one fashion or another. Maybe the developers of the app used the SQLite library directly. Or maybe they used another component or library that builds on SQLite.

SQLite is a library, so the traditional way to use it is to just link it into your application. For example, on a platform like Windows Phone 8.1, the app developer simply bundles the SQLite library as part of their app.

But iOS and Android have a SQLite library built-in to the platform. This is convenient, because developers do not need to bundle a SQLite library with their software.


The SQLite library that comes with Android is actually not intended to be used except through the android.database.sqlite Java classes. If you are accessing this library directly, you are actually breaking the rules.

And the problem is

Beginning with Android N, these rules are going to be enforced.

If your app is using the system SQLite library without using the Java wrapper, it will not be compatible with Android N.

Does your app have this problem?

If your app is breaking the rules, you *probably* know it. But you might not.

I suppose most Android developers use Java. Any app which is only using android.database.sqlite should be fine.

But if you are using Xamarin, it is rather more likely that your app is breaking the rules. Many folks in the Xamarin community tend to assume that "SQLite is part of the platform, so you can just call it".

Xamarin.Android 6.1 includes a fix for this problem for Mono.Data.Sqlite (see their release notes).

However, that is not the only way of accessing SQLite in the .NET/Xamarin world. In fact, I daresay it is one of the less common ways.

Perhaps the most popular SQLite wrapper is sqlite-net (GitHub). If you are using this library on Android and not taking the extra steps to bundle a SQLite library, your app will break on Android N.

Are you using Akavache? Or Couchbase Lite? Both of these libraries use SQLite under the hood (by way of SQLitePCL.raw, which I maintain), so your app will need to be updated to work on Android N.

There are probably dozens of other examples. GitHub says the sqlite-net library has 857 forks. Are you using one of those? Do you use the MvvmCross SQLite plugin? Do any of the components or libraries in your app make use of SQLite without you being aware of it?

And the Xamarin community is obviously not the whole story. There are dozens of other ways to build mobile apps. I can think of PhoneGap/Cordova, Alpha Anywhere, Telerik NativeScript, and Corona, just off the top of my head. How many of these environments (or their surrounding ecosystems) provide (perhaps accidentally) a rule-breaking way to access the Android system SQLite? I don't know.

What I *do* know is that even Java developers might have a problem.

It's even worse than that

Above, I said: "Any app which is only using android.database.sqlite should be fine." The key word here is "only". If you are using the Java classes but also have other code (perhaps some other library) that accesses the system SQLite, then you have the problems described above. But you also have another problem.

To fix this, you are going to have to modify that "other code" to stop accessing the system SQLite library directly. One way to do this is to change the other code to call through android.database.sqlite. But that might be a lot of work. Or that other code might be a 3rd party library that you do not maintain. So you are probably interested in an easier solution.

Why not just bundle another instance of the SQLite library into your app? This is what people who use sqlite-net on Xamarin will need to do, so it should make sense in this case too, right? Unfortunately, no.

What will happen here is that your android.database.sqlite code will continue using the system SQLite library, and your "other code" will use the second instance of the SQLite library that you bundled with your app. So your app will have two instances of the SQLite library. And this is Very Bad.

The Multiple SQLite Problem

Basically, having multiple copies of SQLite linked into the same application can cause data corruption. For more info, see this page on sqlite.org. And also the related blog entry I wrote back in 2014.

You really, really do not want to have two instances of the SQLite library in your app.


One example of a library which is going to have this problem is our own Zumero Client SDK. The early versions of our sync library bundled a copy of the SQLite library, to follow the rules. But later, to avoid possible data corruption from The Multiple SQLite Problem, we changed it to call the system SQLite directly. So, although I might like to claim we did it for a decent reason, our library breaks the rules, and we did it knowingly. All Android apps using Zumero will need to be updated for Android N. A new release of the Zumero Client SDK, containing a solution to this problem, is under development and will be released soon-ish.

Informed consent?

I really cannot recommend that you have two instances of the SQLite library in your app. The possibility of corruption is quite real. One of our developers created an example project to demonstrate this.

But for the sake of completeness, I will mention that it might be possible to prevent the corruption by ensuring that only one instance of the SQLite library is accessing a SQLite file at any given time. In other words, you could build your own layer of locking on top of any code that uses SQLite.

Only you can decide if this risk is worth it. I cannot feel good about sending anyone down that path.

Stop using android.database.sqlite?

It also makes this blog entry somewhat more complete for me to mention that changing your "other code" to go through android.database.sqlite is not your only option. You might prefer to leave your "other code" unchanged and rewrite the stuff that uses android.database.sqlite, ending up with both sets of code using one single instance of SQLite that is bundled with your app.

A Lament

Life was better when there were two kinds of platforms, those that include SQLite, and those that do not. Instead, we now have this third category of platforms that "previously included SQLite, but now they don't, but they kinda still do, but not really".

An open letter to somebody at Google

It is so tempting to blame you for this, but that would be unfair. I fully admit that those of us who broke the rules have no moral high ground at all.

But it is also true that because of the multiple SQLite problem, and the sheer quantity of apps that use the Android system SQLite directly, enforcing the rules now is the best way to maximize the possibility of Android apps that break or experience data corruption.

Would it really be so bad to include libsqlite in the NDK?


Continuum of Design

Actively Lazy - Tue, 05/17/2016 - 06:45

How best to organise your code? At one end of the scale there is a god class – a single, massive entity that stores all possible functions; at the other end of the scale are hundreds of static methods, each in their own class. In general, these two extremes are both terrible designs. But there is a continuum of alternatives between these two extremes – choosing where along this scale your code should be is often difficult to judge and often changes over time.

Why structure code at all?

It’s a fair enough question – what is so bad about the two extremes? They are, really, more similar to each other than any of the points between. In one there is a single class with every method in it; in the other, I have static methods one per class or maybe I’m using a language which doesn’t require me to even group by classes. In both cases, there is no organisation. No structure larger than methods with which to help organise or group related code.

Organising code is about making it easier for human beings to understand. The compiler or runtime doesn’t care how your code is organised, it will run it just the same. It will work the same and look the same – from the outside there is no difference. But for a developer making changes, the difference is critical. How do I find what I need to change? How can I be sure what my change will impact? How can I find other similar things that might be affected? These are all questions we have to answer when making changes to code and they require us to be able to reason about the code.

Bounded contexts help contain understanding – by limiting the size of the problem I need to think about at any one time I can focus on a smaller portion of it, but even within that bounded context organisation helps. It is much easier for me to understand how 100 methods work when they are grouped into 40 classes, with relationships between the classes that make sense in the domain – than it is for me to understand a flat list of 100 methods.

An Example

Let’s imagine we’re writing software to manage a small library (for you kids too young to remember: a library is a place where you can borrow books and return them when you’re done reading them; a book is like a physically printed blog but without the comments). We can imagine the kinds of things (methods) this system might support:

  • Add new title to the catalog
  • Add new copy of title
  • Remove copy of a title
  • User borrows a copy
  • User returns a copy
  • Register a new user
  • Print barcode for new copy
  • Fine a user for a late return
  • Pay outstanding fines for a user
  • Find title by ISBN
  • Find title by name, author
  • Find copy by scanned id
  • Change user’s address
  • List copies user has already borrowed
  • Print overdue book letter for user

This toy example is small enough that, written as one class, it would probably be manageable; pretty horrible, but manageable. How else could we go about organising this?

Horizontal Split

We could split functionality horizontally: by technical concern. For example, by the database that data is stored in; or by the messaging system that’s used. This can work well in some cases, but can often lead to more god classes because your class will be as large as the surface area of the boundary layer. If this is small and likely to remain small it can be a good choice, but all too often the boundary is large or grows over time and you have another god class.

For example, functionality related to payments might be grouped simply into a PaymentsProvider – with only one or two methods we might decide it is unlikely to grow larger. Less obviously, we might group printing related functionality into a PrinterManager – while it might only have two methods now, if we later start printing marketing material or management reports the methods become less closely related and we have the beginnings of a god class.

Vertical Split

The other obvious way to organise functionality is vertically – group methods that relate to the same domain concept together. For example, we could group some of our methods into a LendingManager:

  • User borrows a copy
  • User returns a copy
  • Register a new user
  • Fine a user for a late return
  • Find copy by scanned id
  • List copies user has already borrowed

Even in our toy example this class already has six public methods. A coarse grained grouping like this often ends up being called a SomethingManager or TheOtherService. While this is sometimes a good way to group methods, the lack of clear boundary means new functionality is easily added over time and we grow ourselves a new god class.

A more fine-grained vertical grouping would organise methods into the domain objects they relate to – the recognisable nouns in the domain, where the methods are operations on those nouns. The nouns in our library example are obvious: Catalog, Title, Copy, User. Each of these has two or three public methods – but to understand the system at the outer level I only need to understand the four main domain objects and how they relate to each other, not all the individual methods they contain.
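
As a rough sketch of what that fine-grained grouping might look like (class and method names invented for illustration, shown in Python for brevity):

class Title:
    def __init__(self, isbn, name, author):
        self.isbn, self.name, self.author = isbn, name, author
        self.copies = []

    def add_copy(self, copy):
        self.copies.append(copy)

class Copy:
    def __init__(self, copy_id, title):
        self.copy_id, self.title = copy_id, title
        self.borrowed_by = None

    def borrow(self, user):
        self.borrowed_by = user

    def return_copy(self):
        self.borrowed_by = None

class User:
    def __init__(self, name, address):
        self.name, self.address = name, address
        self.fines = []

    def fine(self, amount):
        self.fines.append(amount)

class Catalog:
    def __init__(self):
        self.titles = {}

    def add_title(self, title):
        self.titles[title.isbn] = title

    def find_title_by_isbn(self, isbn):
        return self.titles.get(isbn)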

The fine-grained structure allows us to reason about a larger system than if it was unstructured. The extents of the domain objects should be relatively clear, if we add new methods to Title it is because there is some new behaviour in the domain that relates to Title – functionality should only grow slowly, we should avoid new god classes; instead new functionality will tend to add new classes, with new relationships to the existing ones.

What’s the Right Way?

Obviously there’s no right answer in all situations. Even in our toy example it’s clear to see that different structures make sense for different areas of functionality. There are a range of design choices and no right or wrong answers. Different designs will ultimately all solve the same problem, but humans will find some designs easier to understand and change than others. This is what makes design such a hard problem: the right answer isn’t always obvious and might not be known for some time. Even worse, the right design today can look like the wrong design tomorrow.

Categories: Programming, Testing & QA

Do You Need To Learn Math To Be A Programmer?

Making the Complex Simple - John Sonmez - Thu, 04/28/2016 - 13:00

This was a very interesting question I got from one of Simple Programmer's readers… Do you need to learn math to be a programmer? Is math really that necessary for programmers? Will you be a bad programmer if you don’t know math? In what ways can math help you as a programmer and developer? Watch this […]

The post Do You Need To Learn Math To Be A Programmer? appeared first on Simple Programmer.

Categories: Programming

Are You Making These Terrible Mistakes When Asking For Advice?

Making the Complex Simple - John Sonmez - Wed, 04/27/2016 - 13:00

Lots of freelancers and creatives adhere to an “I never dole out free advice” mantra when asked if someone can pick their brain for a few minutes. Think – when was the last time a person asked you for programming advice? Or if you could “just take a look” at their blog? Explain to your mother […]

The post Are You Making These Terrible Mistakes When Asking For Advice? appeared first on Simple Programmer.

Categories: Programming

How Budding Programmers Can Get Their Work Noticed

Making the Complex Simple - John Sonmez - Mon, 04/25/2016 - 13:00

In parallel with the rising number of online tech solutions, the need for programmers is constantly growing. Today, in order to boost information flow and workplace efficacy, almost every company needs a user-friendly website, an app or a highly optimized piece of software. In other words, never before was it easier and more lucrative to […]

The post How Budding Programmers Can Get Their Work Noticed appeared first on Simple Programmer.

Categories: Programming

Why Does Programming Suck?

Making the Complex Simple - John Sonmez - Thu, 04/21/2016 - 13:00

Today I’ve received a very interesting question from a reader. Why does programming suck? While it may seem a little bit controversial to have a programmer talking about why programming sucks, well… it does suck sometimes. One of the reasons why programming sucks is the technology. Technology changes at a […]

The post Why Does Programming Suck? appeared first on Simple Programmer.

Categories: Programming