Software Development Blogs: Programming, Software Testing, Agile, Project Management



Architecture

Stuff The Internet Says On Scalability For April 29th, 2016

Hey, it's HighScalability time:


The Universe in one image (Pablo Budassi). Imagine an ancient being leaning over, desperately scrying to figure out what they have wrought.

 

If you like this sort of Stuff then please consider offering your support on Patreon.
  • 50 minutes: Facebook daily average use; 1.65 billion: Facebook monthly active users; 25PB: size of the Internet Archive; 7 years: speedup of encryption adoption from the Snowden revelations; 10 million: strands of DNA Microsoft is buying to store data; 300TB: open data from CERN; 2PB: data from PanSTARRS' imaging survey; 100 billion: words translated by Google per day; 204 million: Weather Channel views in March on Facebook.

  • Quotable Quotes:
    • @antevens: -> Describe your perfect date. ......<- YYYY-MM-DD HH:MM:SS.XXXXXX
    • @ValaAfshar: 1995: top 15 Internet companies worth $17 billion. 2015: top 15 Internet companies worth $2.4 trillion.
    • @BenedictEvans: The move to mobile took away Facebook's monopoly of social, but gave it much greater scale, engagement & revenue potential.
    • Sundar Pichai: We will move from mobile first to an AI first world.
    • Chris Sacca: We [Google] literally could feel a scale that had never been felt before on the planet. We had a globe where you could visualize searches in real-time. A dot would indicate every single search on the planet. In the middle of the night there would be a search in the Gobi desert.
    • @stack72: Just had a recruiter contact me about a role with "microservices on a servers architecture” - twice I’ve seen that now in 2 days #TheFuture?
    • Jason Waxman [Intel]: We see that the world is moving to scale computing in data centers. Our projection is between 70 and 80 percent of the compute, network, and storage will be going into what we call scale data centers by 2025.
    • @BenedictEvans: In 2009 only half of Facebook's MAUs were on it every day. Mobile has taken that to 2/3, at much greater scale.
    • Dan Rayburn: Amazon and Google Enticing Customers With Cheap Storage, But Beware Of Egress Charges
    • @manumarchal: CERN LHC computing challenge is more than 400k CPUs + 300PB of data. It's also global distribution. #dotScale
    • @bridgetkromhout: Decouple and segregate systems requiring different trust levels for faster iteration. @adrianco #craftconf
    • @dkalintsev: GE on stage at AWS Summit: “50% TCO saving compared to best what we could do in-house”
    • @etherealmind: How messed up was GE management to let their costs get this out of control ?
    • @Ellen_Friedman: #dotscale Oliver Keeble CERN - superb: computing is key. Collisions are transient; data is persisted at huge scale
    • @stratechery: Aggregation Theory leads to monopoly; expect more antitrust cases, but only in Europe
    • @kelseyhightower: Moving to microservices won't save you. Borrowing money in smaller chunks doesn't change the fact that you're broke.
    • @jrauser: 1/ Inspired by this HN comment …, I offer a story about software rewrites and Bezos as a technical leader.
    • aytekin: This is a story that has happened over and over again. When you rewrite software, you lose all those hundreds of tiny things which were added for really good reasons. Don't do it blindly.
    • @BWJones: The F-35 program, which at $1.5 T would fund the entire NIH biomedical research portfolio for 41 years.
    • @balinski: "Centralization is a disease" #dotScale #scalability #cloudcomputing
    • Tony Bain: So despite the noise surrounding NoSQL, in a head to head comparison of volume of use, NoSQL use seems so very small.  At a guess, I would predict that for every NoSQL database in existence there would be at least 1000 relational databases.  Probably more.  You would be forgiven for thinking NoSQL use was almost insignificant. 
    • @jaksprats: NVM is gonna put big data on a single machine, very interesting for non-BulkSynchronousParallel GraphDBs like Neo4j
    • @frontofstore: US department stores' sales per sq ft down 26% in last ten years - many closures forecast, anchors killing malls.
    • There are even more Quotable Quotes in the full article. See you there.

  • If you thought HyperCard was a trip you were correct. In a fascinating two-part Triangulation interview (parts 1 and 2), Bill Atkinson shared that HyperCard was inspired by an LSD trip. It's a far-ranging interview that covers Steve Jobs, why the movies about Jobs sucked, Apple's early days, the web's HyperCard inspiration, photography, spirituality, color theory, philosophy, learning, and lots more.

  • In case you were wondering (I certainly was): Pied Piper compression (Silicon Valley HBO). This is the Pied Piper code shown on Silicon Valley HBO Season 3 Episode 1. Worth a deca-unicorn or two.

  • Design Details has a fun podcast with Facebookers talking about Facebook bots and the Facebook design process in general: 124: Dazzle (feat. Jeremy Goldberg). Are bots useful? (Yes, but not a convincing argument.) Do we have to be nice to bots? (To a point, because you never know if you are talking to a person.) Bots aren't all automated; they can be a combination of automated and human interactions. Bots should use strategies to help convince people they are talking with another human, like playing with typing indicator delays to simulate typing. Same for simulating reading. Animation and delays should speed up over time. Regressive design is the idea that over time parts of the UI remove themselves as users use the application more. Fight for designs you believe in. Understand, identify, execute: truly understand what you are doing at a deep level, then identify the things you can be the most impactful on. Facebook measures you on impact. Lots of talk about design crits and pillars and pillar-centered design crits. We often think of ourselves as problem solvers, but our job isn't so much problem solving as communicating proposed solutions to problems.
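One of those strategies, a typing-indicator delay proportional to reply length, can be sketched as a toy heuristic. This is entirely invented for illustration (the 200 ms per word rate and the 3-second cap are assumptions) and does not reflect Facebook's actual implementation:

```java
public class TypingDelay {
    // Invented heuristic: roughly 200 ms per word, capped at 3 seconds,
    // so long replies don't stall the conversation indefinitely.
    static long delayMillis(String reply) {
        String trimmed = reply.trim();
        int words = trimmed.isEmpty() ? 0 : trimmed.split("\\s+").length;
        return Math.min(words * 200L, 3000L);
    }

    public static void main(String[] args) {
        // 5 words -> 1000 ms of simulated typing before the reply appears.
        System.out.println(delayMillis("Sure, your order has shipped!"));
    }
}
```

A real bot would show the typing indicator for `delayMillis` milliseconds before posting the reply, and (as the podcast suggests) shrink the delay over time as the user gets used to the bot.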

Don't miss all that the Internet has to say on Scalability. Click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read, so please keep on reading)...

Categories: Architecture

The Product Samurai Strategy Canvas

Xebia Blog - Fri, 04/29/2016 - 16:00
"May you live in interesting times," said Feng Menglong in 1627, and it has never been a more fitting expression than today, with companies leapfrogging in the age of disruption to change the way they work and the business models that they use. Scrum has brought us autonomous, hyper-productive teams that can quadruple your output,

Digital Transformation Defined

This post is a walkthrough of multiple definitions of digital transformation from multiple sources.

Digital transformation can be elusive if you can’t define it.

Lucky for us, there is no shortage of definitions for digital transformation.

I find that rather than use one single definition for digital transformation, it’s actually more helpful to look at a range of definitions to really internalize what digital transformation means from multiple angles.

Before you walk through the definitions, be sure to review Satya’s pillars for Digital Transformation so you have a simple mental model to work with.

Wikipedia on Digital Transformation

Wikipedia has a simple explanation:

“Digital transformation refers to the changes associated with the application of digital technology in all aspects of human society.”

What I like about that definition is that it goes beyond pure business and includes all impact on society, whether it’s education, government, sports, arts, leisure, etc.

Altimeter on Digital Transformation

Altimeter defined digital transformation from a customer-focused lens in their online report, The 2014 State of Digital Transformation:

“The realignment of, or new investment in, technology and business models to more effectively engage digital customers at every touchpoint in the customer experience lifecycle.”

What I like about Altimeter’s definition is that it’s outside in vs. inside out.  The big idea is to leverage technology to adapt to your customer’s changing preferences.  So if you “transform”, but there is no visible impact to your customers or to the market, then you didn’t really transform.

Capgemini and MIT Center for Digital Business on Digital Transformation

Capgemini and MIT Center for Digital Business define Digital Transformation in Digital Transformation: A Roadmap for Billion-Dollar Organizations like this:

“Digital transformation – the use of technology to radically improve performance or reach of enterprises.”

While their definition may look simplistic, the power is in the data behind the definition.  It’s a global study of how 157 executives in 50 large traditional companies are managing – and benefiting from – digital transformation.

Agile Elephant on Digital Transformation

Agile Elephant defines digital transformation like this:

“Digital transformation is the process of shifting your organisation from a legacy approach to new ways of working and thinking using digital, social, mobile and emerging technologies.  It involves a change in leadership, different thinking, the encouragement of innovation and new business models, incorporating digitisation of assets and an increased use of technology to improve the experience of your organisation’s employees, customers, suppliers, partners and stakeholders.”

While this definition may seem more elaborate, I find this elaboration can really help get somebody’s head into the digital transformation game.

MIT Sloan’s 9 Elements of Digital Transformation

In The Nine Elements of Digital Transformation, George Westerman, Didier Bonnet and Andrew McAfee identify the key attributes of digital transformation:

Transforming Customer Experience
  1. Customer Understanding
  2. Top-Line Growth
  3. Customer Touch Points

Transforming Operational Processes
  1. Process Digitization
  2. Worker Enablement
  3. Performance Management

Transforming Business Models
  1. Digitally Modified Businesses
  2. New Digital Businesses
  3. Digital Globalization

 

The nine elements are excerpted from their digital report, Digital Transformation: A Roadmap for Billion-Dollar Organizations.  Here are quick summaries of each:

  1. Customer Understanding – Customer Understanding is where “Companies are starting to take advantage of previous investments in systems to gain an in-depth understanding of specific geographies and market segments.”
  2. Top-Line Growth – Top-Line Growth is where “Companies are using technology to enhance in-person sales conversations.”
  3. Customer Touch Points – Customer Touch Points are where “Customer service can be enhanced significantly by digital initiatives.”
  4. Process Digitization – Process Digitization is where “Automation can enable companies to refocus their people on more strategic tasks.”
  5. Worker Enablement – Worker Enablement is where “Individual-level work has, in essence, been virtualized — separating the work process from the location of the work.”
  6. Performance Management – Performance Management is where “Transactional systems give executives deeper insights into products, regions and customers, allowing decisions to be made on real data and not on assumptions.”
  7. Digitally Modified Businesses – Digitally Modified Businesses is “finding ways to augment physical with digital offerings and to use digital to share content across organizational silos.”
  8. New Digital Businesses – New Digital businesses is where “companies are introducing digital products that complement traditional products.”
  9. Digital Globalization – Digital Globalization is where “Companies are increasingly transforming from multinational to truly global operations.”

Sidenote – George, Didier, and Andrew sum up the power of digital transformation when they say, “Whether it is in the way individuals work and collaborate, the way business processes are executed within and across organizational boundaries, or in the way a company understands and serves customers, digital technology provides a wealth of opportunity.”

Digital Business Transformation

I think it’s worth pointing out the distinction between Digital Transformation and Digital “Business” Transformation.

Digital Business Transformation is specifically about transforming the business with digital technologies.

There are many lenses to look through, but in particular it helps to view it through the lens of business model innovation.  So you can think of it as innovating in your business models through digital technologies.  Your business model is simply the WHO (customers), the WHAT (value prop), the HOW (value chain), and the WHY (profit model).

An exec from SAP at Davos said it well when he said “new business models are driven by different interactions with companies and their customers.”

In pragmatic terms, that means evolving your business model and interaction patterns to meet the changing demands of your customers all along your value chain.  For example, consider how millennials want to interact with a business in today’s world.  They want to learn about a company or brand through their friends and family on social networks and through real stories from authentic people, and they want access to services anytime, anywhere, from any device.

Another way to think about this is how many companies are learning how to wrap their engineering teams around their customer’s end-to-end journey to directly address the customer’s pains, needs, and desired outcomes.

Hopefully, this helps give you a good enough understanding to get going with your Digital Transformation and to understand the difference between Digital Transformation and Digital Business Transformation so that you can pave your path forward.

If nothing else, go back to the Altimeter Group’s definition of Digital Transformation, “The realignment of, or new investment in, technology and business models to more effectively engage digital customers at every touchpoint in the customer experience lifecycle,” and use Satya’s pillars for Digital Transformation as a guide to stay grounded.

Additional Resources

Digital Transformation: A Roadmap for Billion-Dollar Organizations, by Capgemini and MIT Center for Digital Business

The 2014 State of Digital Transformation, by Altimeter

The Nine Elements of Digital Transformation, by George Westerman, Didier Bonnet and Andrew McAfee

You Might Also Like

All Digital Transformation

Microsoft Stories of Digital Business Transformation

Re-Imagine Customer Experience

Re-Imagine Operations

Satya Nadella on Digital Transformation

Categories: Architecture, Programming

How Walmart Canada’s responsive redesign boosted conversions by 20%: a case study

With conversion optimization on the rise, it is a great idea to look into case studies that help you learn and adapt their lessons to your own needs. To find the material on conversion optimization you need from case studies, here are a few pointers:

  • Find out which case studies reflect your current business situation and your future aspirations.
  • Find out why a certain aspect of a case study worked and how to adapt it to specifically address your website’s needs.
  • Keep the references for your case studies so you can return to them when you need them.

Responsive web design is time consuming and requires monetary resources to implement. It took Walmart Canada almost a year of work to refine the site and make it fully responsive. However, their results show that it provided a return on investment within months, owing to the increased revenue from mobile devices.

Research

Categories: Architecture


SWAYAM: India’s First MOOCs Platform

It’s always cool to see the work our team is doing around the world to help hack a better world.

Our Digital Advisory Services team is helping the Government of India's Ministry of Human Resource Development (HRD) to reimagine the student experience and to develop India's first MOOCs (Massive Open Online Courses) platform.

Apparently, the presentation went so well that the honorable HRD minister, Smriti Irani tweeted our Student Experience Journey Map that helps show the vision and the Digital Transformation opportunities.

Way to go!

[Image: Student Experience Journey Map]

Categories: Architecture, Programming

The Platform Advantage of Amazon, Facebook, and Google

Where’s the magic? [Amazon] The databasing and streaming and syncing infrastructure we build on is pretty slick, but that’s not the secret. The management tools are nifty, too; but that’s not it either. It’s the tribal knowledge: How to build Cloud infrastructure that works in a fallible, messy, unstable world.

Tim Bray, Senior Principal Engineer at Amazon, in Cloud Eventing

Ben Thompson makes the case in Apple's Organizational Crossroads and in a recent episode of Exponent that Apple has a services problem. Having reached peak iPhone, Apple naturally wants to turn to services as a way to expand revenues. The problem is that Apple has a mixed history of delivering services at scale, and Ben suggests that the strength of Apple, its functional organization, is a weakness when it comes to making services. The skill set you need to create great devices is not the one you need to create great services. He suggests: “Apple’s services need to be separated from the devices that are core to the company, and the managers of those services need to be held accountable via dollars and cents.”

If Apple has this problem, it is not the only one. Only a few companies seem to have crossed the chasm of learning how to deliver a stream of new features at worldwide scale: Amazon, Facebook, and Google. And of these, Amazon is the clear winner.

This is the Amazon Web Services console. It shows the amazing number of services Amazon produces, and it doesn’t even include whole new platforms like the Echo:

Categories: Architecture

Sponsored Post: Aerospike, TrueSight Pulse, Redis Labs, InMemory.Net, VividCortex, MemSQL, Scalyr, AiScaler, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Software Engineer (DevOps). You are one of those rare engineers who loves to tinker with distributed systems at high scale. You know how to build these from scratch, and how to take a system that has reached a scalability limit and break through that barrier to new heights. You are a hands on doer, a code doctor, who loves to get something done the right way. You love designing clean APIs, data models, code structures and system architectures, but retain the humility to learn from others who see things differently. Apply to AppDynamics

  • Software Engineer (C++). You will be responsible for building everything from proof-of-concepts and usability prototypes to deployment-quality code. You should have at least one year of experience developing C++ libraries and APIs, and be comfortable with daily code submissions, delivering projects in short time frames, multi-tasking, handling interrupts, and collaborating with team members. Apply to AppDynamics
Fun and Informative Events
  • Discover the secrets of scalability in IT. The cream of the Amsterdam and Berlin tech scene are coming together during TechSummit, hosted by LeaseWeb for a great day of tech talk. Find out how to build systems that will cope with constant change and create agile, successful businesses. Speakers from SoundCloud, Fugue, Google, Docker and other leading tech companies will share tips, techniques and the latest trends in a day of interactive presentations. But hurry. Tickets are limited and going fast! No wonder, since they are only €25 including lunch and beer.

  • How can your business stand out from the crowd? Bringing to market an innovative differentiation – without too many technical challenges – can be the key. The most forward-looking organizations are coding business logic using the fastest, most agile and scalable technologies available today. In a webinar on May 11 entitled “Exposing Differentiation: A New Era of Scalable Infrastructure Arrives”, Data Scientist Dez Blanchfield and Chief Analyst Dr. Robin Bloor will explain how a nexus of innovations has transformed what’s possible. They’ll be briefed by Brian Bulkowski, CTO and Co-Founder of Aerospike (the high-performance NoSQL database), who will discuss how leading companies are changing their infrastructure to meet the new demands of customized digital experiences, fraud prevention, risk analysis, and other application and data uses. Sign up here to reserve your seat!
Cool Products and Services
  • TrueSight Pulse is SaaS IT performance monitoring with one-second resolution, visualization and alerting. Monitor on-prem, cloud, VMs and containers with custom dashboards and alert on any metric. Start your free trial with no code or credit card.

  • Turn chaotic logs and metrics into actionable data. Scalyr is a tool your entire team will love. Get visibility into your production issues without juggling multiple tools and tabs. Loved and used by teams at Codecademy, ReturnPath, and InsideSales. Learn more today or see why Scalyr is a great alternative to Splunk.

  • InMemory.Net provides a .NET-native in-memory database for analysing large amounts of data. It runs natively on .NET and provides native .NET, COM & ODBC APIs for integration. It also has an easy-to-use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex measures your database servers’ work (queries), not just global counters. If you’re not monitoring query performance at a deep level, you’re missing opportunities to boost availability, turbocharge performance, ship better code faster, and ultimately delight more customers. VividCortex is a next-generation SaaS platform that helps you find and eliminate database performance problems at scale.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required. http://aiscaler.com

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below...

Categories: Architecture

Layers, hexagons, features and components

Coding the Architecture - Simon Brown - Mon, 04/25/2016 - 21:02

This blog post is a follow-up to the discussions I've had with people after my recent Modular Monoliths talks. I've been enthusiastically told that the "ports & adapters" (hexagonal) architectural style is "vastly", "radically" and "hugely" different to a traditional layered architecture. I remain unconvinced, hence this blog post, which has a Java spin, but I'm also interested in how the concepts map to other programming languages. I'm also interested in exploring how we can better structure our code to prevent applications becoming big balls of mud. Layers are not the only option.

Setting the scene

Imagine you're building a simple web application where users interact with a web page and information is stored in a database. The UML class diagrams that follow illustrate some of the typical ways that the source code elements might be organised.

Some approaches to organising code in a simple Java web app

Let's first list out the types in the leftmost diagram:

  • CustomerController: A web controller, something like a Spring MVC controller, which adapts requests from the web.
  • CustomerService: An interface that defines the "business logic" related to customers, sometimes referred to in DDD terms as a "domain service". This may or may not be needed, depending on the complexity of the domain.
  • CustomerServiceImpl: The implementation of the above service.
  • CustomerDao: An interface that defines how customer information will be persisted.
  • JdbcCustomerDao: An implementation of the above data access object.
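To make that list concrete, here is a minimal, hypothetical Java sketch of the slice; the type names follow the diagram, but the in-memory DAO (standing in for JdbcCustomerDao), the `findName`/`handle` methods, and the sample data are invented purely so the example can run:

```java
import java.util.HashMap;
import java.util.Map;

// Persistence abstraction: the data access interface from the diagram.
interface CustomerDao {
    String findName(long id);
}

// Stand-in for JdbcCustomerDao: backed by a map instead of a real database.
class InMemoryCustomerDao implements CustomerDao {
    private final Map<Long, String> rows = new HashMap<>();
    InMemoryCustomerDao() { rows.put(1L, "Ada"); }
    public String findName(long id) { return rows.get(id); }
}

// "Business logic" abstraction (the domain service).
interface CustomerService {
    String customerName(long id);
}

// Depends only on the CustomerDao interface, wired via constructor injection.
class CustomerServiceImpl implements CustomerService {
    private final CustomerDao dao;
    CustomerServiceImpl(CustomerDao dao) { this.dao = dao; }
    public String customerName(long id) { return dao.findName(id); }
}

// Web controller stand-in: adapts an incoming "request" into a service call.
public class CustomerController {
    private final CustomerService service;
    CustomerController(CustomerService service) { this.service = service; }
    String handle(long id) { return "200 OK: " + service.customerName(id); }

    public static void main(String[] args) {
        CustomerController controller =
            new CustomerController(new CustomerServiceImpl(new InMemoryCustomerDao()));
        // Each layer depends only on the layer directly below it.
        System.out.println(controller.handle(1L));
    }
}
```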

I'll talk about the use of interfaces later, but let's assume we're going to use interfaces for the purposes of dependency injection, substitution, testing, etc. Now let's look at the four UML class diagrams, from left to right.

  1. Layers: This is what a typical layered architecture looks like. Code is sliced horizontally into layers, which are used as a way to group similar types of things. In a "strict layered architecture", layers should only depend on lower layers. In Java, layers are typically implemented as packages. As you can see from the diagram, all layer (inter-package) dependencies point downwards.
  2. Hexagonal (ports & adapters): Thomas Pierrain has a great blog post that describes the hexagonal architecture, as does Alistair Cockburn of course. The essence is that the application is broken up into two regions: inside and outside. The inside region contains all of the domain concepts, whereas the outside region contains the interactions with the outside world (UIs, databases, third-party integrations, etc). One rule is that the outside depends on the inside; never the other way around. From a static perspective, you can see that the JdbcCustomerRepository depends on the domain package. Particularly when coupled with DDD, another rule is that everything on the inside is expressed in the ubiquitous language, so you'll see terms like "Repository" rather than "Data Access Object".
  3. Feature packages: This is a vertical slicing, based upon related features, business concepts or aggregate roots. In typical Java implementations, all of the types are placed into a single package, which is named to reflect the concept that is being grouped. Mark Needham has a blog post about this, and the discussion comments are definitely worth reading.
  4. Components: This is what I refer to as "package by component". It's similar to packaging by feature, with the exception that the application (the UI) is separate from the component. The goal is to bundle all of the functionality related to a single component into a single Java package. It's akin to taking a service-centric view of an application, which is something we're seeing with microservice architectures.
How different are these architectural styles?

On the face of it, these do all look like different ways to organise code and, therefore, different architectural styles. This starts to unravel very quickly once you start looking at code examples though. Take a look at the following example implementations of the ports & adapters style.

Spot anything? Yes, the interface (port) and implementation class (adapter) are both public. Most of the code examples I've found on the web have liberal usage of the public access modifier. And the same is true for examples of layered architectures. Marking all types as public means you're not taking advantage of the facilities that Java provides with regards to encapsulation. In some cases there's nothing preventing somebody writing some code to instantiate the concrete repository implementation, violating the architecture style. Coaching, discipline, code reviews and automated architecture violation checks in the build pipeline would catch this, assuming you have them. My experience suggests otherwise, especially when budgets and deadlines start to become tight. If left unchecked, this is what can turn a codebase into a big ball of mud.
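As a hedged illustration of that last point (all names and the return value here are invented): because the adapter is public, nothing stops a class in the web layer from instantiating the concrete repository directly, bypassing the port entirely, and the compiler accepts it without complaint.

```java
// The port, notionally in the domain package.
interface CustomerRepository {
    int count();
}

// The adapter, notionally in the infrastructure package. Marked public, as in
// most examples found online -- which is exactly the loophole described above.
class JdbcCustomerRepository implements CustomerRepository {
    public int count() { return 42; }  // invented stand-in for a real JDBC query
}

public class ReportScreen {
    public static void main(String[] args) {
        // An architecture violation the compiler happily accepts: a UI class
        // reaching past the port and instantiating the concrete adapter itself.
        JdbcCustomerRepository repository = new JdbcCustomerRepository();
        System.out.println("customers: " + repository.count());
    }
}
```

Only a code review or an automated architecture check would flag this; the language itself doesn't, as long as everything is public.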

Organisation vs encapsulation

Looking at this another way, when you make all types in your application public, the packages are simply an organisation mechanism (a grouping, like folders) rather than being used for encapsulation. Since public types can be used from anywhere in a codebase, you can effectively ignore the packages. The net result is that if you ignore the packages (because they don't provide any means of encapsulation and hiding), a ports & adapters architecture is really just a layered architecture with some different naming. In fact, if all types are public, all four options presented before are exactly the same.

Approaches without packages

Conceptually ports & adapters is different from a traditional layered architecture, but syntactically it's really the same, especially if all types are marked as public. It's a well implemented n-layer architecture, where n is the number of layers through a slice of the application (e.g. 3; web-domain-database).

Utilising Java's access modifiers

The way Java types are placed into packages can actually make a huge difference to how accessible (or inaccessible) those types can be when Java's access modifiers are applied appropriately. Ignoring the controllers ... if I bring the packages back and mark (by fading) those types where the access modifier can be made more restrictive, the picture becomes pretty interesting.

Access modifiers made more restrictive

The use of Java's access modifiers does provide a degree of differentiation between a layered architecture and a ports & adapters architecture, but I still wouldn't say they are "vastly" different. Bundling the types into a smaller number of packages (options 3 & 4) allows for something a little more radical. Since there are fewer inter-package dependencies, you can start to restrict the access modifiers. Java does allow interfaces to be marked as package protected (the default modifier) although if you do this you'll notice that the methods must still be marked as public. Having public methods on a type that's inaccessible outside of the package is a little odd, but it's not the end of the world.
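A quick sketch of that oddity (hypothetical names; interface methods are implicitly public whether or not you write the keyword):

```java
// The interface has the default (package-protected) modifier, so it is
// invisible outside its own package...
interface CustomerService {
    // ...yet its methods are still implicitly public.
    String greet(String name);
}

class DefaultCustomerService implements CustomerService {
    // The implementing method must therefore be declared public too.
    public String greet(String name) {
        return "Hello, " + name;
    }
}
```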

With option 3, "vertical slicing", you can take this to the extreme and make all types package protected. The caveat here is that no other code (e.g. web controllers) outside of the package will be able to easily reuse functionality provided by the CustomerService. This is not good or bad, it's just a trade-off of the approach. I don't often see interfaces being marked as package protected, but you can use this to your advantage with frameworks like Spring. Here's an example from Oliver Gierke that does just this (the implementation is created by the framework). Actually, Oliver's blog post titled Whoops! Where did my architecture go, which is about reducing the number of public types in a codebase, is a recommended read.

I'm not keen on how the presentation tier (CustomerController) is coupled in option 3, so I tend to use option 4. Re-introducing an inter-package dependency forces you to make the CustomerComponent interface public again, but I like this because it provides a single API into the functionality contained within the package. This means I can easily reuse that functionality across other web controllers, other UIs, APIs, etc. Provided you're not cheating and using reflection, the smaller number of public types results in a smaller number of possible dependencies. Options 3 & 4 don't allow callers to go behind the service, directly to the DAO. Again, I like this because it provides an additional degree of encapsulation and modularity. The architecture rules are also simpler and easier to enforce, because the compiler can do some of this work for you. This echoes the very same design principles and approach to modularity that you'll find in a modern microservices architecture: a remotable service interface with a private implementation. This is no coincidence. Caveats apply (e.g. don't have all of your components share a single database schema) but a well-structured modular monolith will be easier to transform into a microservices architecture.
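A sketch of option 4 under those assumptions (hypothetical names, collapsed into one file for illustration): a single component interface as the package's API, with the implementation and DAO kept package-private. In the real package, CustomerComponent would be the one public type; it is shown without the keyword here only so the sketch compiles as a single file.

```java
// In a real codebase these types would all sit in one package, e.g.
// com.mycompany.customer, with only CustomerComponent visible outside it.
interface CustomerComponent {
    String summaryFor(String customerId);
}

// Package-private implementation: callers cannot go behind the interface...
class CustomerComponentImpl implements CustomerComponent {
    private final CustomerDao dao = new CustomerDao();

    public String summaryFor(String customerId) {
        return "Summary for " + dao.load(customerId);
    }
}

// ...and the DAO is completely invisible outside the package, so the
// compiler itself enforces the "don't bypass the service" rule.
class CustomerDao {
    String load(String customerId) {
        // a real DAO would hit the database; stubbed for the sketch
        return customerId;
    }
}
```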

Testing

In the spirit of YAGNI, you might realise that some of those package protected DAO interfaces in options 3 and 4 aren't really necessary because there is only a single implementation. This post isn't about testing, so I'm just going to point you to Unit and integration are ambiguous names for tests. As I mention in my "Modular Monoliths" talk though, I think there's an interesting relationship between the architecture, the organisation of the code and the tests. I would like to see a much more architecturally-aligned approach to testing.

Conclusions?

I've had the same discussion about layers vs ports & adapters with a number of different people and opinions differ wildly as to how different the two approaches really are. A Google search will reveal the same thing, with numerous blog posts and questions on Stack Overflow about the topic. In my mind, a well implemented layered architecture isn't that different to a hexagonal architecture. They are certainly conceptually different but this isn't necessarily apparent from the typical implementations that I see. And that raises another interesting question: is there a canonical ports & adapters example out there? Of course, module systems (OSGi, Java 9, etc) change the landscape because they allow us to differentiate between public and published types. I wonder how this will affect the code we write and, in particular, whether it will allow us to build more modular monoliths. Feel free to leave a comment or tweet me @simonbrown with any thoughts.

Categories: Architecture

The Joy of Deploying Apache Storm on Docker Swarm

This is a guest repost from Baqend Tech on deploying and redeploying an Apache Storm cluster on top of Docker Swarm instead of on VMs. It's an interesting topic because Wolfram Wingerath called the experience "a real joy", which is not a phrase you hear often in tech. Curious, I asked what made using containers such a good experience compared to VMs. Here's his reply:

Being pretty new to Docker and Docker Swarm, I'm sure there are many good and bad sides I am not aware of, yet. From my point of view, however, the thing that makes deployment (and operation in general) on top of Docker way more fun than on VMs or even on bare metal is that Docker abstracts from heterogeneity and many issues. Once you have Docker running, you can start something like a MongoDB or a Redis server with a single-line statement. If you have a Docker Swarm cluster, you can do the same, but Docker takes care of distributing the thing you just started to some server in your cluster. Docker even takes care of downloading the correct image in case you don't have it on your machine right now. You also don't have to fight as much with connectivity issues, because every machine can reach every other machine as long as they are in the same Docker network. As demonstrated in the tutorial, this even goes for distributed setups, as long as you have an _overlay_ network.

 

When I wrote the lines you were quoting in your email, I had a situation in the back of my head that had occurred a few months back when I had to set up and operate an Apache Storm cluster with 16+ nodes. There were several issues such as my inexperience with AWS (coming from OpenStack) and strange connectivity problems relating to Netty (used by Storm) and AWS hostname resolution that had not occurred in my OpenStack setup and eventually cost us several days and several hundred bucks to fix. I really think that you can shield from problems like that by using Docker, simply because your environment remains the same: Docker.

On to the tutorial...
Categories: Architecture

Solving Agile portfolio planning for Lawns 'R' Us

Xebia Blog - Mon, 04/25/2016 - 09:28
Agile portfolio planning is a great (chief) product owner tool to plan and trace initiatives across various teams. Implementing it can be difficult and cumbersome at times. This post explores the number one critical success factor for doing Agile portfolio planning right: outcome-oriented decision making. Outcome goals are valuable for streamlining your Agile portfolio

Satya Nadella on Digital Transformation

Satya posted his mental model for Digital Transformation:

image

I like the simplicity.

What I like is that there are four clear pillars or areas to look at for driving Digital Transformation:

  1. Customers
  2. Employees
  3. Operations
  4. Products

Collectively, these four Digital Transformation pillars set the stage for transforming your business model.

What I also like is that this matches what I learned while driving Digital Business Transformation with our field with customers, and as part of Satya’s innovation team.

Effectively, to generate new sources of revenue, organizations re-imagine their customer experience along their value chain.  As they connect and engage with their customers in new ways, this transforms their employee experience, and their operations.  As they gain new insight into their customers behavior and needs, they transform their products and services.

In a world of infinite compute and infinite storage…how would you re-imagine your business for a mobile-first, cloud-first world?

You Might Also Like

Digital Transformation is Business Transformation

How Leaders are Building Digital Skills

Microsoft Stories of Digital Business Transformation

Re-imagine Your Customer Experience

Re-imagine Your Operations

Categories: Architecture, Programming

Stuff The Internet Says On Scalability For April 22nd, 2016

Hey, it's HighScalability time:


A perfect 10. Really stuck that landing. Nadia Comaneci approves.

 

If you like this sort of Stuff then please consider offering your support on Patreon.
  • $1B: Supercell’s Clash Royale projected annual haul; 3x: Messenger and WhatsApp send more messages than SMS; 20%: of big companies pay zero corporate taxes; Tens of TB's RAM: Netflix's Container Runtime; 1 Million: People use Facebook over Tor; $10.0 billion: Microsoft raining money in the cloud; 

  • Quotable Quotes:
    • @nehanarkhede: @LinkedIn's use of @apachekafka:1.4 trillion msg/day, 1400 brokers. Powers database replication, change capture etc
    • @kenkeiter~ Full-duplex on a *single antenna* -- this is huge.  (single chip, too -- that's the other huge part, obviously) 
    • John Langford: In the next few years, I expect machine learning to solve no important world issues.
    • Dan Rayburn: By My Estimate, Apple’s Internal CDN Now Delivers 75% Of Their Own Content
    • @BenedictEvans: If Google sees the device as dumb glass, Apple sees the cloud as dumb pipes & dumb storage. Both views could lead to weakness
    • @JordanRinke: We need less hackathons, more apprenticeships. Less bootcamps, more classes. Less rockstars, more mentors. Develop people instead of product
    • @alicegoldfuss: Nagios screaming / Data center ablaze? No / Cable was unplugged
    • Mark Bates: As I was working on the software part time, I was keen to minimise the [cognitive] scope required when making changes. A RoR monolith was the best choice in this case.
    • Google: Our tests have shown that AMP documents load an average of four times faster and use 10 times less data than the equivalent non-amp’ed result.
    • @stevesi: In earning's call @sundarpichai says going “from mobile-first to AI-first world" emphasizing AI and machine learning across services.
    • Rex Sorgatz: Unfortunately, the entire thesis of my story is that having the history of recorded music in your pocket dictates that you will develop tastes outside “the usual.”
    • Newzoo: Clash Royale has rocketed to such quick success because of its strong core gameplay elements combined with some serious pressure to spend real money to keep up with your friends
    • vgt:  I'm going to plug Google Cloud's Preemptible VMs as a simpler alternative to Spot Instances: - Preemptible VMs are sold at a fixed 70% off discount, removing pricing volatility entirely
    • @mfdii~ "Cloud Native" is code words for "rewrite the entire f*cking app"
    • There are so many great Quotable Quotes this week they wouldn't all fit in the summary. Please see the full post to read them all.

  • Imperfection as a strategy. Why a Chip That’s Bad at Math Can Help Computers Tackle Harder Problems: In a simulated test using software that tracks objects such as cars in video, Singular’s approach [computer chips are hardwired to be incapable of performing mathematical calculations correctly] was  capable of processing frames almost 100 times faster than a conventional processor restricted to doing correct math—while using less than 2 percent as much power.

  • You have to fight magic with magic, super-villains with super-heroes, and algorithms with algorithms. How I Investigated Uber Surge Pricing in D.C. Also, Investigating the algorithms that govern our lives.

  • Mitchell Hashimoto in The Cloudcast #246  on some cloud trends. Seeing a lot of interest in non-Amazon clouds right now. A lot of interest in Azure is coming from more boring successful companies, not hot Silicon Valley startups.  This is not a clean market segmentation, but there are three flavors of cloud: Google Compute for the green field startup crowd, Amazon for enterprise, and Azure for super-enterprise. One enterprise attractor for Azure is Azure Stack, an on-premises solution. Mitchell is seeing a broad adoption of the cloud across industries you may not expect to be using the cloud. Also seeing a transition to a multi-cloud strategy to create pricing leverage. The idea seems to be to rehearse and plan to move to another cloud, though they may not actually do it, but when pricing negotiations come up there's a lot of leverage saying you can move to a completely different platform. The cloud is not so much a pay as you go model for this use case, it's more about trying to lock-in long term cost savings. International companies are interested in price, but also what features are available in different regions and when they become available.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

How Twitter Handles 3,000 Images Per Second

Today Twitter is creating and persisting 3,000 (200 GB) images per second. Even better, in 2015 Twitter was able to save $6 million due to improved media storage policies.

It was not always so. Twitter in 2012 was primarily text based. A Hogwarts without all the cool moving pictures hanging on the wall. It’s now 2016 and Twitter has moved into a media-rich future. Twitter has made the transition through the development of a new Media Platform capable of supporting photos with previews, multi-photos, gifs, vines, and inline video.

Henna Kermani, a Software Development Engineer at Twitter, tells the story of the Media Platform in an interesting talk she gave at Mobile @Scale London: 3,000 images per second. The talk focuses primarily on the image pipeline, but she says most of the details also apply to the other forms of media as well.

Some of the most interesting lessons from the talk:

  • Doing the simplest thing that can possibly work can really screw you. The simple method of uploading a tweet with an image as an all or nothing operation was a form of lock-in. It didn’t scale well, especially on poor networks, which made it difficult for Twitter to add new features.

  • Decouple. By decoupling media upload from tweeting Twitter was able independently optimize each pathway and gain a lot of operational flexibility. 

  • Move handles not blobs. Don’t move big chunks of data through your system. It eats bandwidth and causes performance problems for every service that has to touch the data. Instead, store the data and refer to it with a handle.

  • Moving to segmented resumable uploads resulted in big decreases in media upload failure rates.

  • Experiment and research. Twitter found through research that a 20 day TTL (time to live) on image variants (thumbnails, small, large, etc) was a sweet spot, a good balance between storage and computation. Images had a low probability of being accessed after 20 days so they could be deleted, which saves nearly 4TB of data storage per day, almost halves the number of compute servers needed, and saves millions of dollars a year.

  • On demand. Old image variants could be deleted because they could be recreated on the fly rather than precomputed. Performing services on demand increases flexibility, lets you be a lot smarter about how tasks are performed, and gives a central point of control.

  • Progressive JPEG is a real winner as a standard image format. It has great frontend and backend support and performs very well on slower networks.
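The "move handles not blobs" point above can be sketched as a tiny blob store (a hypothetical API, not Twitter's actual code): the blob is stored once, and services pass around the small returned handle instead of the bytes.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical handle-based storage: upload the blob once, then move only
// the cheap handle between services.
class BlobStore {
    private final Map<String, byte[]> blobs = new HashMap<>();

    // Store the blob and return a small, cheap-to-copy handle.
    String put(byte[] data) {
        String handle = UUID.randomUUID().toString();
        blobs.put(handle, data);
        return handle;
    }

    // Only the service that actually needs the bytes dereferences the handle.
    byte[] get(String handle) {
        return blobs.get(handle);
    }
}
```

A tweet record would then carry only the handle; the image bytes move through the system once, at upload time.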

Lots of good things happened on Twitter’s journey to a media rich future, let’s learn how they did it...

The Old Way - Twitter in 2012
Categories: Architecture

Testing Promises with Mocha

Xebia Blog - Tue, 04/19/2016 - 21:03
If you test Javascript promises with Mocha, there are several styles you can use to write your tests. If you follow the Mocha docs on testing asynchronous code you risk writing 'evergreen' tests. Evergreen tests never fail, even if your code is broken. That is something you clearly do not want to happen. So what


You can learn to experiment

Xebia Blog - Mon, 04/18/2016 - 17:04

Validated learning: learning by executing an initial idea and then measuring the results. This way of experimenting is the primary philosophy behind Lean Startup and much of the Agile mindset as it is applied today.

In agile organisations you have to experiment in order to keep up with changing market needs. A good experiment can be incredibly valuable, provided it is executed well. And this is where a common problem sits: the experiment is never properly completed. A trial is run, but often without a sound hypothesis behind it, and the lessons learned are barely, if at all, carried forward. I have noticed that, in order to raise the learning capability of an organisation, it helps to stick to a fixed structure for experiments.

There are many structures that work well. I am personally very fond of Toyota (or Kanban) Kata, but the "plain" Plan-Do-Check-Act cycle also works very well. That structure is explained below with a simple example:

  1. Hypothesis

Which problem are you going to solve? And how do you intend to do that?

If the whole team dials in from home for the stand-up, we will be no less effective than when everyone is present, and we will be better able to handle work-from-home days.

  2. Prediction of the outcomes

What do you expect the outcomes to be? What will you see?

No drop in productivity, and a higher score on team happiness because people can work from home.

  3. Experiment

How will you test whether you can solve the problem? Is this experiment also safe to fail?

For the next six weeks everyone dials in from home for the stand-up. In the retrospective we score productivity and happiness. After that, we do the stand-up together at the office for six weeks.

  4. Observations

Collect as much data as possible during your experiment. What do you see happening?

Setting up the call takes a long time (10-15 minutes). It is hard to give everyone a chance to speak. When dialing in, we cannot use the regular board because nobody can move the post-its.

A well designed experiment is as likely to fail as it is to succeed – freely adapted from Don Reinertsen

This is probably not the best experiment that could ever be formulated. But that is not the point. The point is that learning happens in the differences between the prediction and the observations. It is therefore important to carry out both of these steps and to consciously reflect on the learning process that follows. Based on your observations you can then formulate a new experiment for further improvements.

How do you run your experiments? I am curious to hear what works well in your organisation.

 

Hadoop and Salesforce Integration: the Ultimate Successful Database Merger

How can we transfer Salesforce data to Hadoop? It is a big challenge for everyday users. What are the different features of data transfer tools?

Categories: Architecture

LDAP server setup and client authentication

Agile Testing - Grig Gheorghiu - Fri, 04/15/2016 - 19:24
We recently bought at work a CloudBees Jenkins Enterprise license and I wanted to tie the user accounts to a directory service. I first tried to set up Jenkins authentication via the AWS Directory Service, hoping it will be pretty much like talking to an Active Directory server. That proved to be impossible to set up, at least for me. I also tried to have an LDAP proxy server talking to the AWS Directory Service and have Jenkins authenticate against the LDAP proxy. No dice. I ended up setting up a good old-fashioned LDAP server and managed to get Jenkins working with it. Here are some of my notes.

OpenLDAP server setup
I followed this excellent guide from Digital Ocean. The server was an Ubuntu 14.04 EC2 instance in my case. What follows in terms of the server setup is taken almost verbatim from the DO guide.

Set the hostname

# hostnamectl set-hostname my-ldap-server

Edit /etc/hosts and make sure this entry exists:
LOCAL_IP_ADDRESS my-ldap-server.mycompany.com my-ldap-server
(it makes a difference that the FQDN is the first entry in the line above!)

Make sure the following types of names are returned when you run hostname with different options:


# hostname
my-ldap-server
# hostname -f
my-ldap-server.mycompany.com
# hostname -d
mycompany.com

Install slapd

# apt-get install slapd ldap-utils
# dpkg-reconfigure slapd
(here you specify the LDAP admin password)

Install the SSL Components

# apt-get install gnutls-bin ssl-cert

Create the CA Template
# mkdir /etc/ssl/templates
# vi /etc/ssl/templates/ca_server.conf
# cat /etc/ssl/templates/ca_server.conf
cn = LDAP Server CA
ca
cert_signing_key

Create the LDAP Service Template

# vi /etc/ssl/templates/ldap_server.conf
# cat /etc/ssl/templates/ldap_server.conf
organization = "My Company"
cn = my-ldap-server.mycompany.com
tls_www_server
encryption_key
signing_key
expiration_days = 3650

Create the CA Key and Certificate

# certtool -p --outfile /etc/ssl/private/ca_server.key
# certtool -s --load-privkey /etc/ssl/private/ca_server.key --template /etc/ssl/templates/ca_server.conf --outfile /etc/ssl/certs/ca_server.pem
Create the LDAP Service Key and Certificate

# certtool -p --sec-param high --outfile /etc/ssl/private/ldap_server.key
# certtool -c --load-privkey /etc/ssl/private/ldap_server.key --load-ca-certificate /etc/ssl/certs/ca_server.pem --load-ca-privkey /etc/ssl/private/ca_server.key --template /etc/ssl/templates/ldap_server.conf --outfile /etc/ssl/certs/ldap_server.pem

Give OpenLDAP Access to the LDAP Server Key

# usermod -aG ssl-cert openldap
# chown :ssl-cert /etc/ssl/private/ldap_server.key
# chmod 640 /etc/ssl/private/ldap_server.key

Configure OpenLDAP to Use the Certificate and Keys
IMPORTANT NOTE: in modern versions of slapd, configuring the server is not done via slapd.conf anymore. Instead, you put together ldif files and run LDAP client utilities such as ldapmodify against the local server. The Distinguished Name of the entity you want to modify in terms of configuration is generally dn: cn=config but it can also be the LDAP database dn: olcDatabase={1}hdb,cn=config.
# vi addcerts.ldif
# cat addcerts.ldif
dn: cn=config
changetype: modify
add: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/ssl/certs/ca_server.pem
-
add: olcTLSCertificateFile
olcTLSCertificateFile: /etc/ssl/certs/ldap_server.pem
-
add: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/ssl/private/ldap_server.key

# ldapmodify -H ldapi:// -Y EXTERNAL -f addcerts.ldif
# service slapd force-reload
# cp /etc/ssl/certs/ca_server.pem /etc/ldap/ca_certs.pem
# vi /etc/ldap/ldap.conf
* set TLS_CACERT to the following:
TLS_CACERT /etc/ldap/ca_certs.pem
# ldapwhoami -H ldap:// -x -ZZ
Anonymous

Force Connections to Use TLS
Change olcSecurity attribute to include 'tls=1':

# vi forcetls.ldif
# cat forcetls.ldif
dn: olcDatabase={1}hdb,cn=config
changetype: modify
add: olcSecurity
olcSecurity: tls=1

# ldapmodify -H ldapi:// -Y EXTERNAL -f forcetls.ldif
# service slapd force-reload
# ldapsearch -H ldap:// -x -b "dc=mycompany,dc=com" -LLL dn
(shouldn’t work)
# ldapsearch -H ldap:// -x -b "dc=mycompany,dc=com" -LLL -Z dn
(should work)

Disallow anonymous bind
Create user binduser to be used for LDAP searches:


# vi binduser.ldif
# cat binduser.ldif
dn: cn=binduser,dc=mycompany,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: binduser
uid: binduser
uidNumber: 2000
gidNumber: 200
homeDirectory: /home/binduser
loginShell: /bin/bash
gecos: suser
userPassword: {crypt}x
shadowLastChange: -1
shadowMax: -1
shadowWarning: -1

# ldapadd -x -W -D "cn=admin,dc=mycompany,dc=com" -Z -f binduser.ldif
Enter LDAP Password:
adding new entry "cn=binduser,dc=mycompany,dc=com"
Change the olcDisallows attribute to include bind_anon:


# vi disallow_anon_bind.ldif
# cat disallow_anon_bind.ldif
dn: cn=config
changetype: modify
add: olcDisallows
olcDisallows: bind_anon

# ldapmodify -H ldapi:// -Y EXTERNAL -ZZ -f disallow_anon_bind.ldif
# service slapd force-reload
Also disable anonymous access to frontend:

# vi disable_anon_frontend.ldif
# cat disable_anon_frontend.ldif
dn: olcDatabase={-1}frontend,cn=config
changetype: modify
add: olcRequires
olcRequires: authc

# ldapmodify -H ldapi:// -Y EXTERNAL -f disable_anon_frontend.ldif
# service slapd force-reload

Create organizational units and users
Create helper scripts:

# cat add_ldap_ldif.sh
#!/bin/bash

LDIF=$1

ldapadd -x -w adminpassword -D "cn=admin,dc=mycompany,dc=com" -Z -f $LDIF

# cat modify_ldap_ldif.sh
#!/bin/bash

LDIF=$1

ldapmodify -x -w adminpassword -D "cn=admin,dc=mycompany,dc=com" -Z -f $LDIF

# cat set_ldap_pass.sh
#!/bin/bash

USER=$1
PASS=$2

ldappasswd -s $PASS -w adminpassword -D "cn=admin,dc=mycompany,dc=com" -x "uid=$USER,ou=mypeople,dc=mycompany,dc=com" -Z
Create ‘mypeople’ organizational unit:


# cat add_ou_mypeople.ldif
dn: ou=mypeople,dc=mycompany,dc=com
objectclass: organizationalunit
ou: mypeople
description: all users
# ./add_ldap_ldif.sh add_ou_mypeople.ldif
Create 'groups' organizational unit:


# cat add_ou_groups.ldif
dn: ou=groups,dc=mycompany,dc=com
objectclass: organizationalunit
ou: groups
description: all groups

# ./add_ldap_ldif.sh add_ou_groups.ldif
Create users (note the shadow attributes set to -1, which means they will be ignored):


# cat add_user_myuser.ldif
dn: uid=myuser,ou=mypeople,dc=mycompany,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: myuser
uid: myuser
uidNumber: 2001
gidNumber: 201
homeDirectory: /home/myuser
loginShell: /bin/bash
gecos: myuser
userPassword: {crypt}x
shadowLastChange: -1
shadowMax: -1
shadowWarning: -1
# ./add_ldap_ldif.sh add_user_myuser.ldif
# ./set_ldap_pass.sh myuser MYPASS

Enable LDAPS
In /etc/default/slapd set:

SLAPD_SERVICES="ldap:/// ldaps:/// ldapi:///"

Enable debugging
This was a life saver when it came to troubleshooting connection issues from clients such as Jenkins or other Linux boxes. To enable full debug output, set olcLogLevel to -1:

# cat enable_debugging.ldif
dn: cn=config
changetype: modify
add: olcLogLevel
olcLogLevel: -1
# ldapadd -H ldapi:// -Y EXTERNAL -f enable_debugging.ldif
# service slapd force-reload

Configuring Jenkins LDAP authentication
Verify LDAPS connectivity from Jenkins to LDAP server
In my case, the Jenkins server is in the same VPC and subnet as the LDAP server, so I added an /etc/hosts entry on the Jenkins box pointing to the FQDN of the LDAP server so it can hit its internal IP address:

IP_ADDRESS_OF_LDAP_SERVER my-ldap-server.mycompany.com
I verified that port 636 (used by LDAPS) on the LDAP server is reachable from the Jenkins server:
# telnet my-ldap-server.mycompany.com 636
Trying IP_ADDRESS_OF_LDAP_SERVER...
Connected to my-ldap-server.mycompany.com.
Escape character is '^]'.
Set up the LDAPS client on the Jenkins server (StartTLS does not work with the Jenkins LDAP plugin!)
# apt-get install ldap-utils
IMPORTANT: Copy over /etc/ssl/certs/ca_server.pem from LDAP server as /etc/ldap/ca_certs.pem on Jenkins server and then:
# vi /etc/ldap/ldap.conf
set:
TLS_CACERT /etc/ldap/ca_certs.pem
Add LDAP certificates to Java keystore used by Jenkins
As user jenkins:
$ mkdir .keystore
$ cp /usr/lib/jvm/java-7-openjdk-amd64/jre/lib/security/cacerts .keystore/
(you may need to customize the above line in terms of the path to the cacerts file -- it is the one under your JAVA_HOME)

$ keytool --keystore /var/lib/jenkins/.keystore/cacerts --import --alias my-ldap-server.mycompany.com:636 --file /etc/ldap/ca_certs.pem
Enter keystore password: changeit
Owner: CN=LDAP Server CA
Issuer: CN=LDAP Server CA
Serial number: 570bddb0
Valid from: Mon Apr 11 17:24:00 UTC 2016 until: Tue Apr 11 17:24:00 UTC 2017
Certificate fingerprints:
....
Extensions:
....
Trust this certificate? [no]:  yes
Certificate was added to keystore
In /etc/default/jenkins, set JAVA_ARGS to:
JAVA_ARGS="-Djava.awt.headless=true -Djavax.net.ssl.trustStore=/var/lib/jenkins/.keystore/cacerts -Djavax.net.ssl.trustStorePassword=changeit"  
As root, restart jenkins:

# service jenkins restart
Jenkins settings for LDAP plugin
This took me a while to get right. The trick was to set the rootDN to dc=mycompany, dc=com and the userSearchBase to ou=mypeople (or to whatever name you gave to your users' organizational unit). I also tried to get LDAP groups to work but wasn't very successful.
Here is the LDAP section in /var/lib/jenkins/config.xml:
 <securityRealm class="hudson.security.LDAPSecurityRealm" plugin="ldap@1.11">
   <server>ldaps://my-ldap-server.mycompany.com:636</server>
   <rootDN>dc=mycompany,dc=com</rootDN>
   <inhibitInferRootDN>true</inhibitInferRootDN>
   <userSearchBase>ou=mypeople</userSearchBase>
   <userSearch>uid={0}</userSearch>
   <groupSearchBase>ou=groups</groupSearchBase>
   <groupMembershipStrategy class="jenkins.security.plugins.ldap.FromGroupSearchLDAPGroupMembershipStrategy">
     <filter>member={0}</filter>
   </groupMembershipStrategy>
   <managerDN>cn=binduser,dc=mycompany,dc=com</managerDN>
   <managerPasswordSecret>JGeIGFZwjipl6hJNefTzCwClRcLqYWEUNmnXlC3AOXI=</managerPasswordSecret>
   <disableMailAddressResolver>false</disableMailAddressResolver>
   <displayNameAttributeName>displayname</displayNameAttributeName>
   <mailAddressAttributeName>mail</mailAddressAttributeName>
   <userIdStrategy class="jenkins.model.IdStrategy$CaseInsensitive"/>
   <groupIdStrategy class="jenkins.model.IdStrategy$CaseInsensitive"/>
 </securityRealm>

At this point, I was able to create users on the LDAP server and have them log in to Jenkins. With CloudBees Jenkins Enterprise, I was also able to use the Role-Based Access Control and Folder plugins in order to create project-specific folders and folder-specific groups specifying various roles. For example, a folder MyProjectNumber1 would have a Developers group defined inside it, as well as an Administrators group and a Readers group. These groups would be associated with fine-grained roles that only allow certain Jenkins operations for each group.
I tried to have these groups read by Jenkins from the LDAP server, but was unsuccessful. Instead, I had to populate the folder-specific groups in Jenkins with user names that were at least still defined in LDAP.  So that was half a win. Still waiting to see if I can define the groups in LDAP, but for now this is a workaround that works for me.
Allowing users to change their LDAP password
This was again a seemingly easy task but turned out to be pretty complicated. I set up another small EC2 instance to act as a jumpbox for users who want to change their LDAP password.
The jumpbox is in the same VPC and subnet as the LDAP server, so I added an /etc/hosts entry on the jumpbox pointing to the FQDN of the LDAP server so it can hit its internal IP address:

IP_ADDRESS_OF_LDAP_SERVER my-ldap-server.mycompany.com
I verified that port 636 (used by LDAPS) on the LDAP server is reachable from the jumpbox:
# telnet my-ldap-server.mycompany.com 636
Trying IP_ADDRESS_OF_LDAP_SERVER...
Connected to my-ldap-server.mycompany.com.
Escape character is '^]'.
# apt-get install ldap-utils
IMPORTANT: Copy over /etc/ssl/certs/ca_server.pem from the LDAP server as /etc/ldap/ca_certs.pem on the jumpbox, and then:
# vi /etc/ldap/ldap.conf
Set:
TLS_CACERT /etc/ldap/ca_certs.pem
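To confirm the copied CA certificate actually validates the server's LDAPS certificate, an openssl check along these lines (host name is the placeholder used throughout) can save debugging later:

```shell
# Handshake with the LDAPS port and verify the chain against the copied CA.
# Look for "Verify return code: 0 (ok)" in the output.
openssl s_client -connect my-ldap-server.mycompany.com:636 \
  -CAfile /etc/ldap/ca_certs.pem </dev/null
```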
Next, I followed this LDAP Client Authentication guide from the Ubuntu documentation.
# apt-get install ldap-auth-client nscd
Here I had to answer the setup questions about the LDAP server FQDN, the admin DN and password, and the bind-user DN and password.
# auth-client-config -t nss -p lac_ldap
I edited /etc/auth-client-config/profile.d/ldap-auth-config and set:
[lac_ldap]
nss_passwd=passwd: ldap files
nss_group=group: ldap files
nss_shadow=shadow: ldap files
nss_netgroup=netgroup: nis
I edited /etc/ldap.conf and made sure the following entries were there:
base dc=mycompany,dc=com
uri ldaps://my-ldap-server.mycompany.com
binddn cn=binduser,dc=mycompany,dc=com
bindpw BINDUSERPASS
rootbinddn cn=admin,dc=mycompany,dc=com
port 636
ssl on
tls_cacertfile /etc/ldap/ca_certs.pem
tls_cacertdir /etc/ssl/certs
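A quick way to check that the NSS/LDAP client wiring works end to end is to resolve an LDAP account through the normal Unix interfaces; "someuser" is a placeholder for any uid defined under ou=mypeople:

```shell
# If nsswitch is consulting LDAP correctly, an LDAP-only user resolves
# just like a local one.
getent passwd someuser
id someuser
```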
I allowed password-based ssh logins to the jumpbox by editing /etc/ssh/sshd_config and setting:
PasswordAuthentication yes
# service ssh restart

IMPORTANT: On the LDAP server, I had to allow users to change their own password by adding this ACL:
# cat set_userpassword_acl.ldif
dn: olcDatabase={1}hdb,cn=config
changetype: modify
add: olcAccess
olcAccess: {0}to attrs=userPassword by dn="cn=admin,dc=mycompany,dc=com" write by self write by anonymous auth by users none
Then:
# ldapmodify -H ldapi:/// -Y EXTERNAL -f set_userpassword_acl.ldif
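The "by self write" clause in the ACL can also be exercised directly, without going through the jumpbox's passwd command. A sketch, again using the placeholder DNs from above:

```shell
# Bind as a regular user and change that user's own password.
# -W prompts for the current (bind) password, -S prompts for the new one.
ldappasswd -H ldaps://my-ldap-server.mycompany.com:636 \
  -D "uid=someuser,ou=mypeople,dc=mycompany,dc=com" -W -S
```

If this succeeds for a non-admin user, the ACL is doing its job.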

At this point, users were able to log in via ssh to the jumpbox using a pre-set LDAP password, and change their LDAP password by using the regular Unix 'passwd' command.
I am still fine-tuning the LDAP setup on all fronts: the LDAP server, the LDAP client jumpbox and the Jenkins server. The setup I have so far gives users a single sign-on account for logging in to Jenkins. One of my next steps is to use the same LDAP user accounts for authentication and access control in MySQL and other services.

Stuff The Internet Says On Scalability For April 15th, 2016

Hey, it's HighScalability time:


What happens when Beyoncé meets eCommerce? Ring the alarm.

 

If you like this sort of Stuff then please consider offering your support on Patreon.
  • $14 billion: one day of purchases on Alibaba; 47 megawatts: Microsoft's new data center space for its MegaCloud; 50%: do not speak English on Facebook; 70-80%: of all Intel servers shipped will be deployed in large scale datacenters by 2025; 1024 TB: of storage for 3D imagery currently in Google Earth; $7: WeChat average revenue per user; 1 trillion: new trees; 

  • Quotable Quotes:
    • @PicardTips: Picard management tip: Know your audience. Display strength to Klingons, logic to Vulcans, and opportunity to Ferengi.
    • Mark Burgess: Microservices cannot be a panacea. What we see clearly from cities is that they can be semantically valuable, but they can be economically expensive, scaling with superlinear cost. 
    • ethanpil: I'm crying. Remember when messaging was built on open platforms and standards like XMPP and IRC? The golden year(s?) when Google Talk worked with AIM and anyone could choose whatever client they preferred?
    • @acmurthy: @raghurwi from @Microsoft talking about scaling Hadoop YARN to 100K+ clusters. Yes, 100,000 
    • @ryanbigg: Took a Rails view rendering time from ~300ms to 50ms. Rewrote it in Elixir: it’s now 6-7ms. #MyElixirStatus
    • Dmitriy Samovskiy: In the past, our [Operations] primary purpose in life was to build and babysit production. Today operations teams focus on scale.
    • @Agandrau: Sir Tim Berners-Lee thinks that if we can predict what the internet will look like in 20 years, than we are not creative enough. #www2016
    • @EconBizFin: Apple and Tesla are today’s most talked-about companies, and the most vertically integrated
    • Kevin Fishner: Nomad was able to schedule one million containers across 5,000 hosts in Google Cloud in under five minutes.
    • David Rosenthal: The Web we have is a huge success disaster. Whatever replaces it will be at least as big a success disaster. Lets not have the causes of the disaster be things we knew about all along.
    • Kurt Marko: The days of homogeneous server farms with racks and racks of largely identical systems are over.
    • Jonathan Eisen: This is humbling, we know virtually nothing right now about the biology of most of the tree of life.
    • @adrianco: Google has a global network IP model (more convenient), AWS regional (more resilient). Choices...
    • @jason_kint: Stupid scary stats in this. Ad tech responsible for 70% of server calls and 50% of your mobile data plan.
    • apy: I found myself agreeing with many of Pike’s statements but then not understanding how he wound up at Go. 
    • @TomBirtwhistle: The head of Apple Music claims YouTube accounts for 40% of music consumption yet only 4% of online industry revenue 
    • @0x604: Immutable Laws of Software: Anyone slower than you is over-engineering, anyone faster is introducing technical debt
    • surrealvortex: I'm currently using flame graphs at work. If your application hasn't been profiled recently, you'll usually get lots of improvement for very little effort. Some 15 minutes of work improved CPU usage of my team's biggest fleet by ~40%. Considering we scaled up to 1500 c3.4xlarge hosts at peak in NA alone on that fleet, those 15 minutes kinda made my month :)
    • @cleverdevil: Did you know that Virtual Machines spin up in the opposite direction in the southern hemisphere? Little known fact.
    • ksec: Yes, and I think Intel is not certain to win, just much more likely. The Power9 is here is targeting 2H 2017 release. Which is actually up against Intel Skylake/Kabylake Xeon Purley Platform in similar timeframe.
    • @jon_moore: Platforms make promises; constraints are the contracts that allow platforms to do their jobs. #oreillysacon
    • @CBILIEN: Scaling data platforms:compute and storage have to be scaled independently #HS16Dublin

  • A morning reverie. Chopped for programmers. Call it Crashed. You have three rounds with four competitors. Each round is an hour. The competitors must create a certain kind of program, say a game, or a productivity tool, anything really, using a basket of three selected technologies, say Google Cloud, wit.ai, and Twilio. Plus the programmer can choose to use any other technologies from the pantry that is the Internet. The program can take any form the programmer chooses. It could be a web app, iOS or Android app, an Alexa skill, a Slack bot, anything, it's up to the creativity of the programmer. The program is judged by an esteemed panel based on creativity, quality, and how well the basket technologies are highlighted. When a programmer loses a round they have been Crashed. The winner becomes the Crashed Champion. Sound fun?

  • Jeff Dean, when talking about deep learning at Google, makes it clear that a big part of their secret sauce is being able to train neural nets at scale using their bespoke distributed infrastructure. Now Google has released TensorFlow with distributed computing support. It's not clear if this is the same infrastructure Google uses internally, but it seems to work: using the distributed trainer, we trained the Inception network to 78% accuracy in less than 65 hours using 100 GPUs. Also, the TensorFlow playground is a cool way to visualize what's going on inside.

  • Christopher Meiklejohn with an interesting history of the Remote Procedure Call. It started way back in 1974: RFC 674, “Procedure Call Protocol Documents, Version 2”. RFC 674 attempts to define a general way to share resources across all 70 nodes of the Internet.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture