Software Development Blogs: Programming, Software Testing, Agile, Project Management


Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Feed aggregator

Keeping Android safe: Security enhancements in Nougat

Android Developers Blog - Thu, 09/15/2016 - 22:31

Posted by Xiaowen Xin, Android Security Team

Over the course of the summer, we previewed a variety of security enhancements in Android 7.0 Nougat: an increased focus on security with our vulnerability rewards program, a new Direct Boot mode, re-architected mediaserver and hardened media stack, apps that are protected from accidental regressions to cleartext traffic, an update to the way Android handles trusted certificate authorities, strict enforcement of verified boot with error correction, and updates to the Linux kernel to reduce the attack surface and increase memory protection. Phew!

Now that Nougat has begun to roll out, we wanted to recap these updates in a single overview and highlight a few new improvements.

Direct Boot and encryption

In previous versions of Android, users with encrypted devices would have to enter their PIN/pattern/password by default during the boot process to decrypt their storage area and finish booting. With Android 7.0 Nougat, we’ve updated the underlying encryption scheme and streamlined the boot process to speed up rebooting your phone. Now your phone’s main features, like the phone app and your alarm clock, are ready right away before you even type your PIN, so people can call you and your alarm clock can wake you up. We call this feature Direct Boot.

Under the hood, file-based encryption enables this improved user experience. With this new encryption scheme, the system storage area and each user profile storage area are encrypted separately. Unlike full-disk encryption, where all data was encrypted as a single unit, per-profile encryption enables the system to reboot normally into a functional state using just device keys. Essential apps can opt in to run in a limited state after reboot, and when you enter your lock screen credential, these apps then get access to your user data to provide full functionality.

File-based encryption better isolates and protects individual users and profiles on a device by encrypting data at a finer granularity. Each profile is encrypted using a unique key that can only be unlocked by your PIN or password, so that your data can only be decrypted by you.

Encryption support is getting stronger across the Android ecosystem as well. Starting with Marshmallow, all capable devices were required to support encryption. Many devices, like the Nexus 5X and 6P, also use unique keys that are accessible only with trusted hardware, such as the ARM TrustZone. Now with 7.0 Nougat, all new capable Android devices must also have this kind of hardware support for key storage and provide brute-force protection while verifying your lock screen credential before these keys can be used. This way, all of your data can only be decrypted on that exact device and only by you.

The media stack and platform hardening

In Android Nougat, we’ve both hardened and re-architected mediaserver, one of the main system services that processes untrusted input. First, by incorporating integer overflow sanitization, part of Clang’s UndefinedBehaviorSanitizer, we prevent an entire class of vulnerabilities that comprises the majority of reported libstagefright bugs. As soon as an integer overflow is detected, we shut down the process so an attack is stopped. Second, we’ve modularized the media stack to put different components into individual sandboxes and tightened the privileges of each sandbox to the minimum required to perform its job. With this containment technique, a compromise in most parts of the stack grants the attacker significantly fewer permissions and a significantly reduced kernel attack surface.

In addition to hardening the mediaserver, we’ve added a large list of protections for the platform, including:

App security improvements

Android Nougat is the safest and easiest version of Android for application developers to use.

  • Apps that want to share data with other apps now must explicitly opt-in by offering their files through a Content Provider, like FileProvider. The application private directory (usually /data/data/) is now set to Linux permission 0700 for apps targeting API Level 24+.
  • To make it easier for apps to control access to their secure network traffic, user-installed certificate authorities and those installed through Device Admin APIs are no longer trusted by default for apps targeting API Level 24+. Additionally, all new Android devices must ship with the same trusted CA store.
  • With Network Security Config, developers can more easily configure network security policy through a declarative configuration file. This includes blocking cleartext traffic, configuring the set of trusted CAs and certificates, and setting up a separate debug configuration.
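As an illustrative sketch (the file name is an assumption, but the elements come from the Network Security Config format), a declarative res/xml/network_security_config.xml that blocks cleartext traffic, trusts only system CAs, and re-enables user-added CAs for debug builds might look like:

```xml
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <!-- App-wide policy: block cleartext (HTTP) traffic, trust only system CAs -->
    <base-config cleartextTrafficPermitted="false">
        <trust-anchors>
            <certificates src="system" />
        </trust-anchors>
    </base-config>
    <!-- Honored only in debuggable builds: also trust user-installed CAs -->
    <debug-overrides>
        <trust-anchors>
            <certificates src="user" />
        </trust-anchors>
    </debug-overrides>
</network-security-config>
```

The file is then referenced from the manifest via the android:networkSecurityConfig attribute on the <application> element.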

We’ve also continued to refine app permissions and capabilities to protect you from potentially harmful apps.

  • To improve device privacy, we have further restricted and removed access to persistent device identifiers such as MAC addresses.
  • User interface overlays can no longer be displayed on top of permissions dialogs. This “clickjacking” technique was used by some apps to attempt to gain permissions improperly.
  • We’ve reduced the power of device admin applications so they can no longer change your lockscreen if you have a lockscreen set, and device admin will no longer be notified of impending disable via onDisableRequested(). These were tactics used by some ransomware to gain control of a device.
System Updates

Lastly, we've made significant enhancements to the OTA update system to keep your device up-to-date much more easily with the latest system software and security patches. We've made the install time for OTAs faster, and the OTA size smaller for security updates. You no longer have to wait for the optimizing apps step, which was one of the slowest parts of the update process, because the new JIT compiler has been optimized to make installs and updates lightning fast.

The update experience is even faster for new Android devices running Nougat with updated firmware. As on Chromebooks, updates are applied in the background while the device continues to run normally. These updates are applied to a different system partition, and when you reboot, the device seamlessly switches to the new partition running the new system software version.


We’re constantly working to improve Android security and Android Nougat brings significant security improvements across all fronts. As always, we appreciate feedback on our work and welcome suggestions for how we can improve Android. Contact us at security@android.com.

Categories: Programming

Quote of the Day

Herding Cats - Glen Alleman - Thu, 09/15/2016 - 12:43

Thought interferes with the probability of events, and in the long run therefore, with entropy - David L. Watson (1930)

All project work is probabilistic, driven by underlying uncertainties, both reducible and irreducible. Willing an outcome in the presence of these uncertainties does not make it so. The only useful process to manage in the presence of uncertainty is to estimate the outcome of any decisions made by the participants of the processes by which the project operates.

Related articles Managing in the Presence of Uncertainty Estimating Processes in Support of Economic Analysis Herding Cats: Book of the Month Herding Cats: Making Decisions in the Presence of Uncertainty Making Decisions In The Presence of Uncertainty Herding Cats: The Problems with Schedules
Categories: Project Management

Quote of the Day

Herding Cats - Glen Alleman - Wed, 09/14/2016 - 12:39

It is probably dangerous to use this theory of information in fields for which it was not designed, but I think the danger will not keep people from using it - J. C. R. Licklider (1950)

Related articles Architecture -Center ERP Systems in the Manufacturing Domain IT Risk Management Why Guessing is not Estimating and Estimating is not Guessing
Categories: Project Management

Running Robot Framework's Remote Server as Java agent

Xebia Blog - Wed, 09/14/2016 - 07:48
Robot Framework is a great automated testing tool that uses a keyword-driven approach. When you want to run Robot Framework tests within the context of a running system-under-test you can load Robot Framework's RemoteServer as a java agent. This is not something that comes out of the box so we will explain how to do

scikit-learn: First steps with log_loss

Mark Needham - Wed, 09/14/2016 - 06:33

Over the last week I’ve spent a little bit of time playing around with the data in the Kaggle TalkingData Mobile User Demographics competition, and came across a notebook written by dune_dweller showing how to run a logistic regression algorithm on the dataset.

The metric used to evaluate the output in this competition is multi class logarithmic loss, which is implemented by the log_loss function in the scikit-learn library.

I’ve not used it before so I created a small example to get to grips with it.

Let’s say we have 3 rows to predict and we happen to know that they should be labelled ‘bam’, ‘spam’, and ‘ham’ respectively:

>>> actual_labels = ["bam", "ham", "spam"]


To work out the log loss score we need to make a prediction for what we think each label actually is. We do this by passing an array containing a probability between 0-1 for each label

e.g. if we think the first label is definitely ‘bam’ then we’d pass [1, 0, 0], whereas if we thought it had a 50-50 chance of being ‘bam’ or ‘spam’ then we might pass [0.5, 0, 0.5]. As far as I can tell the values get sorted into (alphabetical) order so we need to provide our predictions in the same order.

Let’s give it a try. First we’ll import the function:

>>> from sklearn.metrics import log_loss

Now let’s see what score we get if we make a perfect prediction:

>>> log_loss(actual_labels,  [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
2.1094237467877998e-15

What about if we make a completely wrong prediction?

>>> log_loss(actual_labels,  [[0, 0, 1], [1, 0, 0], [0, 1, 0]])
34.538776394910684

We can reverse engineer this score to work out the probability that we’ve predicted the correct class.

If we look at the case where the average log loss exceeds 1, it is when log(pij) < -1, where i is the true class. This means the predicted probability for that class would be less than exp(-1), or around 0.368. So a log loss greater than one can be expected when your model gives less than a ~37% probability estimate for the correct class.
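That ~0.368 threshold is just exp(-1), which we can check quickly:

```python
import math

# If the average log loss is exactly 1, the predicted probability
# for the true class sits at e^-1
print(math.exp(-1))  # 0.36787944117144233
```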

This is the formula of logloss:

logloss = -1/N * sum_{i=1..N} sum_{j=1..M} y_ij * log(p_ij)

In which yij is 1 for the correct class and 0 for other classes and pij is the probability assigned for that class.

The interesting thing about this formula is that we only care about the correct class. The yij value of 0 cancels out the wrong classes.
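The formula above can be sketched directly in Python (manual_log_loss is a hypothetical helper written for this post, not part of scikit-learn):

```python
import math

def manual_log_loss(actual_labels, predictions, eps=1e-15):
    """Average negative log-probability assigned to each true class.

    `predictions` holds one probability list per row, ordered by the
    alphabetically sorted set of labels (matching scikit-learn's ordering).
    """
    classes = sorted(set(actual_labels))
    total = 0.0
    for label, probs in zip(actual_labels, predictions):
        p = probs[classes.index(label)]
        p = min(max(p, eps), 1 - eps)  # clip so log(0) can't blow up
        total += math.log(p)
    return -total / len(actual_labels)

actual = ["bam", "ham", "spam"]
preds = [[0.8, 0.1, 0.1], [0.3, 0.6, 0.1], [0.15, 0.15, 0.7]]
print(manual_log_loss(actual, preds))  # ~0.363548
```

For well-formed inputs this should agree with sklearn.metrics.log_loss, since only the probability assigned to the true class contributes to each row.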

In our two examples so far we actually already know the probability estimate for the correct class – 100% in the first case and 0% in the second case, but we can plug in the numbers to check we end up with the same result.

First we need to work out what value would have been passed to the log function, which is easy in this case:

# every prediction exactly right
>>> import math
>>> math.log(1)
0.0
 
>>> math.exp(0)
1.0
# every prediction completely wrong
>>> math.log(0.000000001)
-20.72326583694641
 
>>> math.exp(-20.72326583694641)
1.0000000000000007e-09

I used a really small value instead of 0 in the second example because log(0) tends to negative infinity; in fact, math.log(0) raises a ValueError.

Let’s try another example where we have less certainty:

>>> print log_loss(actual_labels, [[0.8, 0.1, 0.1], [0.3, 0.6, 0.1], [0.15, 0.15, 0.7]])
0.363548039673

We’ll have to do a bit more work to figure out what value was being passed to the log function this time, but not too much. This is roughly the calculation being performed:

# 0.363548039673 = -1/3 * (log(0.8) + log(0.6) + log(0.7))
 
>>> print log_loss(actual_labels,  [[0.8, 0.1, 0.1], [0.3, 0.6, 0.1], [0.15, 0.15, 0.7]])
0.363548039673

In this case, on average our probability estimate would be:

# we put in the negative value since we multiplied by -1/N
>>> math.exp(-0.363548039673)
0.6952053289772744

We gave probabilities of 80%, 60%, and 70% to the correct classes for our 3 labels, so an overall estimate of 69.5% seems about right.

One more example. This time we’ll make one more very certain (90%) prediction for ‘spam’:

>>> print log_loss(["bam", "ham", "spam", "spam"], [[0.8, 0.1, 0.1], [0.3, 0.6, 0.1], [0.15, 0.15, 0.7], [0.05, 0.05, 0.9]])
0.299001158669
 
>>> math.exp(-0.299001158669)
0.741558550213609

74% accuracy overall, sounds about right!

Categories: Programming

Trust But Verify

Herding Cats - Glen Alleman - Wed, 09/14/2016 - 01:10

The mantra of Agile is Trust the team. In some domains, that is an admirable goal. In other domains it's a naïve path to disaster. I work in the latter domain, on mission critical, sometimes national asset programs, but always mission critical - can't fail, must work, must provide proper information when called upon to do so.

When we are called on to perform a Root Cause Analysis of why the system failed to do what it was supposed to do, we find the same thing that Dr. Bill Corcoran suggests is found in all root cause analysis processes.

An inescapable fact is that the competent investigation of every harmful event reveals that the causation of the harm includes the mistaken/ naïve/ unwarranted/ gullible/ imprudent trust and confidence in one or more erroneous/ untrustworthy theories, assumptions, standards, devices, procedures, processes, programs, people, institutions, agencies, contractors, and/or conditions. The functional alternatives include monitoring, curiosity, skepticism, and the “questioning attitude.”

Here are some quotes to apply when you hear that agile means trusting the team: why are you questioning our processes, our methods, our organizational models? (Thanks to Dr. Corcoran's news feed today):

  • You get what you inspect; not what you expect - An old U.S. Navy proverb
  • Trust, but verify - Quoted by President Ronald Reagan
  • A sucker is born every day -Attributed to P. T. Barnum
  • The world abounds in unrocked boats with holes just above the current waterline -Salty Wisdom
  • Faith is believing for sure what ain’t so -Mark Twain
  • In God we trust; all others please furnish evidence - Unknown for now

So, saying it again for clarity: you can't make a decision in the presence of uncertainty without estimating the outcome of that decision. If you don't, be prepared to conduct a Root Cause Analysis of why your project went in the ditch. Trust is necessary but far from sufficient when spending other people's money on non-trivial development efforts.

Related articles Architecture -Center ERP Systems in the Manufacturing Domain IT Risk Management Why Guessing is not Estimating and Estimating is not Guessing Root Cause Analysis Estimating and Making Decisions in Presence of Uncertainty Are Estimates Really The Smell of Dysfunction? Making Conjectures Without Testable Outcomes Good Project and Bad Project Estimates
Categories: Project Management


Internal and External Risks

Evacuation Plan

Having a plan mitigates risk.

Risk management is crucial to the success of all software development, enhancement and maintenance projects. Risk management at its most basic level is avoiding problems that can be avoided and recognizing those that can’t be. To recognize and avoid problems, every project must consciously look outward and forward. The act of risk management requires both introspection and extrospection. Extrospection, a word rarely used in everyday conversation, is even rarer in many Agile approaches. One important way to assess risk is to consider whether risks are internal or external.

Internal risks come from within the organization and arise during normal operation. Internal risks are often forecastable, and therefore can be avoided or mitigated. Internal risks are typically generated by one (or some combination) of human, technical or physical factors. Many Agile practices naturally address internal risk. Practices like short planning cycles, retrospectives, flexible backlogs, and small teams are geared toward delivering short-term value by addressing the risks that the team perceives to be controllable.

External risks come from outside the organization or project and outside of the team’s control. External risks tend to be forecastable only in retrospect, and therefore efforts need to be focused on recognition and reaction. Many external risks stem from legislative, environmental and political changes. The impact of a major earthquake on an organization’s supply chain is an external risk.

Recently, Steven Adams published an article on his blog titled Seven Risks in Software Development. In his article, Steven identifies the following seven risks:

  1.    Risk of delivering little or no value to the customer or organization.
  2.    Risk of missing the delivery schedule because of poor predictions.
  3.    Risk of unplanned work disrupting the work process and schedule.
  4.    Risk of poor quality in the delivery.
  5.    Risk of work item becoming an outlier … way off!
  6.    Risk of the team not working well together.
  7.    Risk of end-users not using or liking the product.

Steven’s list is a powerful tool for facilitating a discussion of risks that are controllable at a team level. Steven’s risks are all internal risks. A lean approach practiced by many teams to identify and manage (mostly) internal risks includes:

  1. Identify knowable risks.
  2. Build mitigation for common risks into the definition of done.
  3. Generate stories for less common risks and add them to the project backlog.
  4. Review risks when grooming stories.
  5. Carve out time during planning to identify emerging risks.

Agile techniques at a team level are designed to capture and manage internal risks. No one believes in not managing risk, because not managing risk puts the value a team delivers at risk, or at the very least puts their weekend at risk when a risk becomes an issue and has to be dealt with. Agile techniques tend to give teams a short-term, inside-the-boundary perspective that is very delivery-focused. External risks often lurk outside the short-term focus, which means our techniques need to be tailored to address both internal and external risks.

Next: Incorporating External Risks into an Agile Risk Approach


Categories: Process Management

If Traffic is an Iterated Prisoner's Dilemma Game Can Smart Cars Evolve Co-operative Behavior?

 

Can small tribes of cooperating smart cars improve overall traffic even if they are not in the majority? Sure, if every car was a self-driving car maybe traffic jams could dissolve like blood clots on anticoagulants, but what about that messy in-between period? It will be some time before smart cars rule the road. Until then can smart cars make traffic better?

Adoption is hard. This is a general problem in tech. You want people to join your social network yet people won't join until enough people have already joined. What you really want is that virtuous circle to develop, where as more people adopt a technology it causes even more people to adopt it. So startups spend their VC money fast and furiously in hopes of acquiring new customers betting the lifetime value of a customer will be worth the investment. VC money is the dead corpse that feeds the rest of the ecosystem.

Traffic is already an example of a vicious cycle. Horrendous traffic jams are now the norm and "good" traffic windows are just tall tales texted to children. And it keeps on getting worse and not in a worse is better sort of way. Yet the incentives are still not enough for people to self-organize and batch themselves into cars. Cars are more of a synchronous streaming model. Traffic problems will need to be solved at a different level of abstraction. Human drivers are just so hopelessly human.

In some ways traffic is like an iterated game of Prisoner's Dilemma. So in an Evolution of Cooperation sense can overall flows improve if groups of self-driving cars cooperate together within a stream of muggle cars? If smart cars on the road choose to gang up together will that improve commute times in such a way that it will encourage more and more cars to join the gang, becoming part of the solution instead of the problem?
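The iterated Prisoner's Dilemma framing can be sketched in a few lines of Python (the strategies and payoffs are the textbook ones, not a traffic model):

```python
# Payoffs for the standard Prisoner's Dilemma: (my_move, their_move) -> my score
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(history):
    """Cooperate first, then mirror the opponent's last move."""
    return history[-1] if history else "C"

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    """Total scores for two strategies over an iterated game."""
    hist_a, hist_b = [], []  # each records the *opponent's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

# Two cooperating "smart cars" vs. a smart car stuck behind a defector
print(play(tit_for_tat, tit_for_tat))    # (600, 600): mutual cooperation wins
print(play(tit_for_tat, always_defect))  # (199, 204): defection barely pays
```

As in Axelrod's tournaments, reciprocating cooperators score far better against each other than defectors do against anyone, which is the hope behind cooperating tribes of smart cars.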

But we have the social network problem. Cars currently are individual, kept in silos organized by manufacturer. Tesla, Uber, Google, etc. don't cooperate at a global traffic planning level. Even cars within a manufacturer don't yet have the ability to slave themselves together in a self-driving conga line of traffic goodness.

Historically, we know that after individual point solutions are created, the next step is to add a scheduling layer. After running one program on an entire CPU, we created an OS (Linux, Windows, etc.) to run multiple programs on the same CPU. After the container, we created an OS (Swarm, Kubernetes, Mesos, etc.) to run multiple containers on the same boxes.

We'll need a TrafficOS so all the cars that want to can cooperate together, you know like XMPP before the walls went up. Plus we'll need ecosystem incentives to help drive adoption. 

So many questions. Will drivers volunteer to be part of a smart car peloton even if it means their commute suffers in the short term? What's the tipping point? Will free riders ruin the whole thing? Like the fast lane, should incentives be created to encourage cooperating tribes of smart cars? Should traffic lights favor smart car trains? Should traffic laws allow bullet trains of smart cars to speed down the highway? Should insurance premiums be reduced for time spent protected in smart car convoys? Maybe smart car software should be seeded with altruism "genes" so they cooperate naturally? How can defectors be punished? Maybe we need a reputation system scoring for traffic reciprocity?

Unlike the weather traffic is something we can do something about. Let's just try to do a better job than we did with social networks and IM systems. Traffic is actually important.

Related Articles
Categories: Architecture

SE-Radio Episode 268: Kief Morris on Infrastructure as Code

Kief Morris, cloud specialist at ThoughtWorks and author of the recent book Infrastructure as Code, talks to Sven Johann about why this concept is becoming increasingly important due to cloud computing. They discuss best practices for writing infrastructure code, including why you should treat your servers as cattle, not pets, as well as how to […]
Categories: Programming

Sponsored Post: ScaleArc, Spotify, Aerospike, Scalyr, Gusto, VividCortex, MemSQL, InMemory.Net, Zohocorp

Who's Hiring?
  • Spotify is looking for individuals passionate in infrastructure to join our Site Reliability Engineering organization. Spotify SREs design, code, and operate tools and systems to reduce the amount of time and effort necessary for our engineers to scale the world’s best music streaming product to 40 million users. We are strong believers in engineering teams taking operational responsibility for their products and work hard to support them in this. We work closely with engineers to advocate sensible, scalable, systems design and share responsibility with them in diagnosing, resolving, and preventing production issues. We are looking for an SRE Engineering Manager in NYC and SREs in Boston and NYC.

  • IT Security Engineering. At Gusto we are on a mission to create a world where work empowers a better life. As Gusto's IT Security Engineer you'll shape the future of IT security and compliance. We're looking for a strong IT technical lead to manage security audits and write and implement controls. You'll also focus on our employee, network, and endpoint posture. As Gusto's first IT Security Engineer, you will be able to build the security organization with direct impact to protecting PII and ePHI. Read more and apply here.

Fun and Informative Events
  • Learn how Nielsen Marketing Cloud (NMC) leverages online machine learning and predictive personalization to drive its success in a live webinar on Tuesday, September 20 at 11 am PT / 2 pm ET. Hear from Nielsen’s Kevin Lyons, Senior VP of Data Science and Digital Technology, and Brent Keator, VP of Infrastructure, as well as from Brian Bulkowski, CTO and Co-Founder at Aerospike, as they describe the front-edge architecture and technical choices – including the Aerospike NoSQL database – that have led to NMC’s success. RSVP: https://goo.gl/xDQcu4
Cool Products and Services
  • ScaleArc's database load balancing software empowers you to “upgrade your apps” to consumer grade – the never down, always fast experience you get on Google or Amazon. Plus you need the ability to scale easily and anywhere. Find out how ScaleArc has helped companies like yours save thousands, even millions of dollars and valuable resources by eliminating downtime and avoiding app changes to scale. 

  • Scalyr is a lightning-fast log management and operational data platform.  It's a tool (actually, multiple tools) that your entire team will love.  Get visibility into your production issues without juggling multiple tabs and different services -- all of your logs, server metrics and alerts are in your browser and at your fingertips. Loved and used by teams at Codecademy, ReturnPath, Grab, and InsideSales. Learn more today or see why Scalyr is a great alternative to Splunk.

  • InMemory.Net provides a Dot Net native in memory database for analysing large amounts of data. It runs natively on .Net, and provides a native .Net, COM & ODBC apis for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex measures your database servers’ work (queries), not just global counters. If you’re not monitoring query performance at a deep level, you’re missing opportunities to boost availability, turbocharge performance, ship better code faster, and ultimately delight more customers. VividCortex is a next-generation SaaS platform that helps you find and eliminate database performance problems at scale.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network. 

If any of these items interest you there's a full description of each sponsor below...

Categories: Architecture

The Dollar Shave Club Architecture Unilever Bought for $1 Billion

This is a guest post by Jason Bosco, the Dollar Shave Club’s Director of Engineering, Core Platform & Infrastructure, on the infrastructure of its ecommerce technology.

With more than 3 million members, Dollar Shave Club will do over $200 million in revenue this year. Although most are familiar with the company’s marketing, this immense growth in just a few years since launch is largely due to its team of 45 engineers.

Dollar Shave Club engineering by the numbers:

Core Stats
Categories: Architecture

Software Development Linkopedia September 2016

From the Editor of Methods & Tools - Tue, 09/13/2016 - 15:11
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about being a better developer, software architecture, tech leadership, shrinking the product backlog, customer journey maps, using sprint data, distributed testing, domain driven design, continuous delivery and testing microservices. Blog: Finding […]

What Test Engineers do at Google

Google Testing Blog - Mon, 09/12/2016 - 16:00
by Matt Lowrie, Manjusha Parvathaneni, Benjamin Pick, and Jochen Wuttke

Test engineers (TEs) at Google are a dedicated group of engineers who use proven testing practices to foster excellence in our products. We orchestrate the rapid testing and releasing of products and features our users rely on. Achieving this velocity requires creative and diverse engineering skills that allow us to advocate for our users. By building testable user journeys into the process, we ensure reliable products. TEs are also the glue that brings together feature stakeholders (product managers, development teams, UX designers, release engineers, beta testers, end users, etc.) to confirm successful product launches. Essentially, every day we ask ourselves, “How can we make our software development process more efficient to deliver products that make our users happy?”

The TE role grew out of the desire to make Google’s early free products, like Search, Gmail and Docs, better than similar paid products on the market at the time. Early on in Google’s history, a small group of engineers believed that the company’s “launch and iterate” approach to software deployment could be improved with continuous automated testing. They took it upon themselves to promote good testing practices to every team throughout the company, via some programs you may have heard about: Testing on the Toilet, the Test Certified Program, and the Google Test Automation Conference (GTAC). These efforts resulted in every project taking ownership of all aspects of testing, such as code coverage and performance testing. Testing practices quickly became commonplace throughout the company and engineers writing tests for their own code became the standard. Today, TEs carry on this tradition of setting the standard of quality which all products should achieve.

Historically, Google has sustained two separate job titles related to product testing and test infrastructure, which has caused confusion. We often get asked what the difference is between the two. The rebranding of the Software engineer, tools and infrastructure (SETI) role, which now concentrates on engineering productivity, has been addressed in a previous blog post. What this means for test engineers at Google, is an enhanced responsibility of being the authority on product excellence. We are expected to uphold testing standards company-wide, both programmatically and persuasively.

Test engineer is a unique role at Google. As TEs, we define and organize our own engineering projects, bridging gaps between engineering output and end-user satisfaction. To give you an idea of what TEs do, here are some examples of challenges we need to solve on any particular day:
  • Automate a manual verification process for product release candidates so developers have more time to respond to potential release-blocking issues.
  • Design and implement an automated way to track and surface Android battery usage to developers, so that they know immediately when a new feature will drain users’ batteries.
  • Quantify whether a regenerated data set used by a product, which contains a billion entities, is of better quality than the data set currently live in production.
  • Write an automated test suite that validates if content presented to a user is of an acceptable quality level based on their interests.
  • Read an engineering design proposal for a new feature and provide suggestions about how and where to build in testability.
  • Investigate correlated stack traces submitted by users through our feedback tracking system, and search the code base to find the correct owner for escalation.
  • Collaborate on determining the root cause of a production outage, then pinpoint tests that need to be added to prevent similar outages in the future.
  • Organize a task force to advise teams across the company about best practices when testing for accessibility.
Over the next few weeks leading up to GTAC, we will also post vignettes of actual TEs working on different projects at Google, to showcase the diversity of the Google Test Engineer role. Stay tuned!
Categories: Testing & QA

The 3 Pillars of Successful Products or Why Project Ara was Cancelled

Xebia Blog - Mon, 09/12/2016 - 14:37
Google managed to surprise both the market and its fans by cancelling the Project Ara modular phone. But from a Product Owner point of view, it was no surprise. Ara phones lack a fundamental pillar that makes a product successful. Context: Ara what? In 2013 Google announced it would build the Ara phone. A

Quote of the Day

Herding Cats - Glen Alleman - Mon, 09/12/2016 - 12:25

The perfect symmetry of the whole apparatus - the wire in the middle, the two phones at the ends and the two gossips at the ends of the telephones - may be very fascinating to a mere mathematician - James Clerk Maxwell (1878)

Related articles: Architecture-Centered ERP Systems in the Manufacturing Domain; IT Risk Management; Why Guessing is not Estimating and Estimating is not Guessing
Categories: Project Management

SPaMCAST 410 – Jessica Long, Storytelling in Agile


http://www.spamcast.net

Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

In Software Process and Measurement Cast 410, we feature our interview with Jessica Long.  Jessica and I discussed storytelling. I find that storytelling is a useful tool to help individuals, teams, and organizations.  Projects can use stories to generate user stories and as a tool in retrospectives.  Stories are also a tool in generating a vision of the future in organizational transformations.  Those are just a few of the multitude of uses for storytelling in changing how value is delivered!

Jessica and I will both be presenting on using stories at the Agile Philly, Agile Tour 2016 on October 10th.  If you are in the Philadelphia area please register and attend!

Jessica’s bio:
Jess Long is an Agile Coach, a writer, a speaker and a mother with a passion for driving meaningful stories across multiple iterations in all facets of life. Transforming Corporate America and living to tell about it is no small feat. She keeps some level of sanity by finding humor in otherwise absurd situations.

Twitter: https://twitter.com/scrumandginger
Blog: https://scrumandginger.com/
LinkedIn: https://www.linkedin.com/in/jessica-long-pmi-acp-csp-cspo-87626614

Re-Read Saturday News

This week we reach the penultimate week in our re-read of Kent Beck’s XP Explained, Second Edition with a discussion of Chapters 24 and 25. Chapter 24 discusses the value and power in communities. Chapter 25 is Beck’s conclusion and reflection on the book: XP is about people!

Next week we’ll wrap this re-read up and get ready to read The Five Dysfunctions of a Team by Patrick Lencioni (published by Jossey-Bass).  This will be a new book for me, therefore an initial read, not a re-read!  Steven Adams suggested the book and it has been on my list for a few years. Click the link (The Five Dysfunctions of a Team), buy a copy, and in a few weeks we will begin to read the book together.

Next SPaMCAST

The Software Process and Measurement Cast 411 will be a big show featuring our thoughts on servant leadership. In SPaMCAST 411 we will have a visit from Kim Pries, the Software Sensei. We will have more from Steve Tendon on Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban, published by J. Ross (buy a copy here). And anchoring the cast will be Gene Hughson with an entry from his Form Follows Function blog.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.


Categories: Process Management

Single Factor Analysis and Reduction Reasoning

Herding Cats - Glen Alleman - Sun, 09/11/2016 - 22:14

Much of the discussion around project management processes, especially around agile, and most especially around the misconceptions of estimating espoused by the #NoEstimates advocates, starts with the misuse of reductive reasoning based on single factor analysis.

Here's how it goes.

  • Single Factor Analysis is a statistical method used to describe variability among observed, correlated variables in terms of a potentially lower number of unobserved variables called factors.
    • The first assessment using SFA is the claim that "estimates are the smell of dysfunction." There may be a correlation, but causation has yet to be determined.
    • There are others - "estimates are evil," "estimates are commitments," and similar conjectures around a claim that estimates, the making of estimates, and the use of estimates are somehow (it is never stated how) the cause of problems in the software development domain.
  • Reductionism is about connections between phenomena, or theories, "reducing" one to another, usually considered simpler or more basic.
    • This is seen in the quest to reduce complex issues to simple issues,
    • Or, worse, in the claim that non-simple systems are somehow undesirable and that if we only simplified everything, the problems we see in our real-world systems would somehow be removed.
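The single-factor idea above can be made concrete with a small numerical sketch (a hypothetical illustration - the variable names, loadings, and noise level are invented, and none of this comes from the post): several observed variables correlate with each other only because one latent factor drives them all. The correlation is real, but by itself it says nothing about which observable causes which.

```python
import numpy as np

# Hypothetical single-factor model: three observed variables are each
# driven by one unobserved (latent) factor, plus independent noise.
rng = np.random.default_rng(0)
n = 1000
latent = rng.normal(size=n)              # the single unobserved factor
loadings = np.array([0.9, 0.8, 0.7])     # how strongly each variable reflects it
noise = rng.normal(scale=0.4, size=(n, 3))
observed = latent[:, None] * loadings + noise

# The observed variables are strongly correlated, but the shared variance
# is explained almost entirely by the one factor: the largest eigenvalue
# of the correlation matrix dominates the others.
corr = np.corrcoef(observed, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
print(eigenvalues)
```

The dominant eigenvalue tells you the variables move together; it does not tell you that any one of them causes another - the correlation-versus-causation point made above.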

When we see these two concepts used together, we get things like the cartoon of the reductionist view of a single concept: If you did it this way, you'd be from 3X to 10X faster.

So here's the problem and the solution. Complex systems are part of the solution to all complex problems. Anyone claiming complex problems can be solved with simple systems needs to have a testable working system in that complex problem space. I work in a complex problem space - literally space flight, aircraft flight, and the ground systems that enable those systems to fly, as well as biopharma, electric utilities (nuclear and conventional fired), and complex enterprise IT systems (dozens to many dozens of interacting systems).

When you hear a simple, and many times simple-minded, solution to a complex problem - "Applying No Estimates will remove the dysfunction on software projects" (the ontological inverse of the statement "estimates are the smell of dysfunction") - we can be reminded of H. L. Mencken's quote:

For every complex problem there is an answer that is clear, simple, and wrong.

Categories: Project Management

SPaMCAST 410 - Jessica Long, Storytelling in Agile

Software Process and Measurement Cast - Sun, 09/11/2016 - 22:00

In Software Process and Measurement Cast 410, we feature our interview with Jessica Long.  Jessica and I discussed storytelling. I find that storytelling is a useful tool to help individuals, teams, and organizations.  Projects can use stories to generate user stories and as a tool in retrospectives.  Stories are also a tool in generating a vision of the future in organizational transformations.  Those are just a few of the multitude of uses for storytelling in changing how value is delivered!

Jessica and I will both be presenting on using stories at the Agile Philly, Agile Tour 2016 on October 10th.  If you are in the Philadelphia area please register and attend!

Jessica’s bio:
Jess Long is an Agile Coach, a writer, a speaker and a mother with a passion for driving meaningful stories across multiple iterations in all facets of life. Transforming Corporate America and living to tell about it is no small feat. She keeps some level of sanity by finding humor in otherwise absurd situations.

Twitter: https://twitter.com/scrumandginger
Blog: https://scrumandginger.com/
LinkedIn: https://www.linkedin.com/in/jessica-long-pmi-acp-csp-cspo-87626614

Re-Read Saturday News

This week we reach the penultimate week in our re-read of Kent Beck’s XP Explained, Second Edition with a discussion of Chapters 24 and 25. Chapter 24 discusses the value and power in communities. Chapter 25 is Beck’s conclusion and reflection on the book: XP is about people!

Next week we'll wrap this re-read up and get ready to read The Five Dysfunctions of a Team by Patrick Lencioni (published by Jossey-Bass).  This will be a new book for me, therefore an initial read, not a re-read!  Steven Adams suggested the book and it has been on my list for a few years. Click the link (The Five Dysfunctions of a Team), buy a copy, and in a few weeks we will begin to read the book together.

Next SPaMCAST

The Software Process and Measurement Cast 411 will be a big show featuring our thoughts on servant leadership. In SPaMCAST 411 we will have a visit from Kim Pries, the Software Sensei. We will have more from Steve Tendon on Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban, published by J. Ross (buy a copy here). And anchoring the cast will be Gene Hughson with an entry from his Form Follows Function blog.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.

Categories: Process Management

Extreme Programming Explained, Second Edition: Re-Read Week 13 (Chapters 24 – 25)

XP Explained Cover

We conclude the main portion of the re-read of Extreme Programming Explained, Second Edition (2005) with Chapters 24 and 25. Next week we will present a few final thoughts before we shift gears and start reading The Five Dysfunctions of a Team (if you do not own a copy, it is time to order one - use the link to support the blog and podcast).  This week in XP Explained: Chapter 24 discusses the value of community as an asset to support the adoption and use of XP. Chapter 25 is Beck and Andres’s concluding notes on XP Explained.

Chapter 24: Community and XP

A supportive community is a huge asset for XP practitioners (this is true for any profession or movement). Communities provide a mechanism for people interested in XP to connect, encourage each other, and share ideas and experiences. The power of a community is generated by the interchange between people in a way that helps both the individuals and the group to achieve their goals. Finding and getting involved in a community can be as easy as joining or forming a community of practice within your organization, or reaching out to one of the XP online communities.

Beck counsels that the role of a community member should be weighted towards listening rather than talking.  Listening is the combination of hearing and interpreting.  Listening helps new members learn how a community works before jumping in with both feet.  It also helps members understand who will be helpful and who just talks to hear their own voice. Listening is just as powerful a tool for community members as it is for those involved in coaching and facilitating learning. If you embrace the ‘listen first, talk second’ rule, communities will have value for any individual who participates.

One final note: communities provide a mechanism for enforcing accountability between members.  Promises to people you have close ties with are more difficult to break.  Mastermind groups are a common example of a community that builds in holding members accountable.

Chapter 25: Conclusion

Beck states that he created/documented XP to make life better for developers.  One main takeaway from the re-read is that there can be no improvement without first improving yourself (we will explore final thoughts more next week).  

Beck concludes that XP is more a statement about creating a balance of true values and integrity, and less about the practices and techniques that move you down the path of practicing XP. The book ends with the quote:

“XP is a way of thinking about and acting on your ideals.”

 

Previous installments of Extreme Programming Explained, Second Edition (2005) on Re-read Saturday:

Extreme Programming Explained: Embrace Change Second Edition Week 1, Preface and Chapter 1

Week 2, Chapters 2 – 3

Week 3, Chapters 4 – 5

Week 4, Chapters 6 – 7  

Week 5, Chapters 8 – 9

Week 6, Chapters 10 – 11

Week 7, Chapters 12 – 13

Week 8, Chapters 14 – 15

Week 9, Chapters 16 – 17

Week 10, Chapters 18 – 19

Week 11, Chapters 20 – 21

Week 12, Chapters 22 – 23

Remember we are going to read The Five Dysfunctions of a Team by Patrick Lencioni next.  This will be a new book for me; therefore an initial read, not a re-read!  Steven Adams suggested the book and it has been on my list for a few years. Click the link (The Five Dysfunctions of a Team), buy a copy, and in a few weeks, we will begin to read the book together.


Categories: Process Management