
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Architecture

Sponsored Post: Apple, InMemory.Net, Sentient, Couchbase, VividCortex, Internap, Transversal, MemSQL, Scalyr, FoundationDB, AiScaler, Aerospike, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Apple is hiring a Software Engineer for Maps Services. The Maps team is looking for a developer to support and grow some of the core backend services that support Apple Maps' front-end services. The ideal candidate will have experience with system architecture as well as the design, implementation, and testing of individual components, and will also be comfortable with multiple scripting languages. Please apply here.

  • Sentient Technologies is hiring several Senior Distributed Systems Engineers and a Senior Distributed Systems QA Engineer. Sentient Technologies is a privately held company seeking to solve the world’s most complex problems through massively scaled artificial intelligence running on one of the largest distributed compute resources in the world. Help us expand our existing million+ distributed cores to many, many more. Please apply here.

  • Linux Web Server Systems Engineer - Transversal. We are seeking an experienced and motivated Linux System Engineer to join our Engineering team. This new role is to design, test, install, and provide ongoing daily support of our information technology systems infrastructure. As an experienced Engineer you will have comprehensive capabilities for understanding hardware/software configurations that comprise system, security, and library management, backup/recovery, operating computer systems in different operating environments, sizing, performance tuning, hardware/software troubleshooting and resource allocation. Apply here.

  • UI Engineer - AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data - AppDynamics, a leader in next-generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • Rise of the Multi-Model Database. FoundationDB Webinar: March 10th at 1pm EST. Do you want a SQL, JSON, Graph, Time Series, or Key Value database? Or maybe it’s all of them? Not all NoSQL databases are created equal. The latest development in this space is the multi-model database. Please join FoundationDB for an interactive webinar as we discuss the rise of the multi-model database and what to consider when choosing the right tool for the job.
Cool Products and Services
  • InMemory.Net provides a native .NET in-memory database for analysing large amounts of data. It runs natively on .NET and provides native .NET, COM & ODBC APIs for integration. It also has an easy-to-use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • Top Enterprise Use Cases for NoSQL. Discover how the largest enterprises in the world are leveraging NoSQL in mission-critical applications with real-world success stories. Get the Guide.
    http://info.couchbase.com/HS_SO_Top_10_Enterprise_NoSQL_Use_Cases.html

  • VividCortex Developer edition delivers a groundbreaking performance management solution to startups, open-source projects, nonprofits, and other organizations free of charge. It integrates high-resolution metrics on queries, processes, databases, and the OS and hardware to deliver an unprecedented level of visibility into production database activity.

  • SQL for Big Data: Price-performance Advantages of Bare Metal. When building your big data infrastructure, price-performance is a critical factor to evaluate. Data-intensive workloads with the capacity to rapidly scale to hundreds of servers can escalate costs beyond your expectations. The inevitable growth of the Internet of Things (IoT) and fast big data will only lead to larger datasets, and a high-performance infrastructure and database platform will be essential to extracting business value while keeping costs under control. Read more.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • Aerospike demonstrates RAM-like performance with Google Compute Engine Local SSDs. After scaling to 1 M Writes/Second with 6x fewer servers than Cassandra on Google Compute Engine, we certified Google’s new Local SSDs using the Aerospike Certification Tool for SSDs (ACT) and found RAM-like performance and 15x storage cost savings. Read more.

  • Diagnose server issues from a single tab. The Scalyr log management tool replaces all your monitoring and analysis services with one, so you can pinpoint and resolve issues without juggling multiple tools and tabs. It's a universal tool for visibility into your production systems. Log aggregation, server metrics, monitoring, alerting, dashboards, and more. Not just “hosted grep” or “hosted graphs,” but enterprise-grade functionality with sane pricing and insane performance. Trusted by in-the-know companies like Codecademy – try it free! (See how Scalyr is different if you're looking for a Splunk alternative.)

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required. http://aiscaler.com

  • ManageEngine Applications Manager: Monitor physical, virtual and cloud applications.

  • www.site24x7.com: Monitor end-user experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

101 Proven Practices for Focus

“Lack of direction, not lack of time, is the problem. We all have twenty-four hour days.” -- Zig Ziglar

Here is my collection of 101 Proven Practices for Focus.   It still needs work to improve it, but I wanted to share it, as is, because focus is one of the most important skills we can develop for work and life.

Focus is the backbone of personal effectiveness, personal development, productivity, time management, leadership skills, and just about anything that matters.   Focus is a key ingredient to helping us achieve the things we set out to do, and to learn the things we need to learn.

Without focus, we can’t achieve great results.

I have a very healthy respect for the power of focus to amplify impact, to create amazing breakthroughs, and to make things happen.

The Power of Focus

Long ago one of my most impactful mentors said that focus is what separates the best from the rest.  In all of his experience, what exceptional people had, that others did not, was focus.

Here are a few relevant definitions of focus:
A main purpose or interest.
A center of interest or activity.
Close or narrow attention; concentration.

I think of focus simply as  the skill or ability to direct and hold our attention.

Focus is a Skill

Too many people think of focus as something either you are good at, or you are not.  It’s just like delayed gratification.

Focus is a skill you can build.

Focus is actually a skill and you can develop it.   In fact, you can develop it quite a bit.  For example, I helped a colleague get themselves off of their ADD medication by learning some new ways to retrain their brain.   It turned out that the medication only helped so much, the side effects sucked, and in the end, what they really needed was coping mechanisms for their mind, to better direct and hold their attention.

Here’s the surprise, though.  You can actually learn how to direct your attention very quickly.  Simply ask new questions.  You can direct your attention by asking questions.   If you want to change your focus, change the question.

101 Proven Practices at a Glance

Here is a list of the 101 Proven Practices for Focus:

  1. Align  your focus and your values
  2. Ask new questions to change your focus
  3. Ask yourself, “What are you rushing through for?”
  4. Beware of random, intermittent rewards
  5. Bite off what you can chew
  6. Breathe
  7. Capture all of your ideas in one place
  8. Capture all of your To-Dos all in one place
  9. Carry the good forward
  10. Change your environment
  11. Change your physiology
  12. Choose one project or one thing to focus on
  13. Choose to do it
  14. Clear away all distractions
  15. Clear away external distractions
  16. Clear away internal distractions
  17. Close your distractions
  18. Consolidate and batch your tasks
  19. Create routines to help you focus
  20. Decide to finish it
  21. Delay gratification
  22. Develop a routine
  23. Develop an effective startup routine
  24. Develop an effective shutdown routine
  25. Develop effective email routines
  26. Develop effective renewal activities
  27. Develop effective social media routines
  28. Direct your attention with skill
  29. Do less, focus more
  30. Do now what you could put off until later
  31. Do things you enjoy focusing on
  32. Do worst things first
  33. Don’t chase every interesting idea
  34. Edit later
  35. Exercise your body
  36. Exercise your mind
  37. Expand your attention span
  38. Find a way to refocus
  39. Find the best time to do your routine tasks
  40. Find your flow
  41. Finish what you started
  42. Focus on what you control
  43. Force yourself to focus
  44. Get clear on what you want
  45. Give it the time and attention it deserves
  46. Have a time and place for things
  47. Hold a clear picture in your mind of what you want to accomplish
  48. Keep it simple
  49. Keep your energy up
  50. Know the tests for success
  51. Know what’s on your plate
  52. Know your limits
  53. Know your personal patterns
  54. Know your priorities
  55. Learn to say no – to yourself and others
  56. Limit your starts and stops
  57. Limit your task switching
  58. Link it to good feelings
  59. Make it easy to pick back up where you left off
  60. Make it relentless
  61. Make it work, then make it right
  62. Master your mindset
  63. Multi-Task with skill
  64. Music everywhere
  65. Narrow your focus
  66. Pair up
  67. Pick up where you left off
  68. Practice meditation
  69. Put the focus on something bigger than yourself
  70. Rate your focus each day
  71. Reduce friction
  72. Reduce open work
  73. Reward yourself along the way
  74. See it, do it
  75. Set a time frame for focus 
  76. Set goals
  77. Set goals with hard deadlines
  78. Set mini-goals
  79. Set quantity limits
  80. Set time limits
  81. Shelve things you aren’t actively working on
  82. Single Task
  83. Spend your attention with skill
  84. Start with WHY
  85. Stop starting new projects
  86. Take breaks
  87. Take care of the basics
  88. Use lists to avoid getting overwhelmed or overloaded
  89. Use metaphors
  90. Use Sprints to scope your focus
  91. Use the Rule of Three
  92. Use verbal cues
  93. Use visual cues
  94. Visualize your performance
  95. Wake up at the same time each day
  96. Wiggle your toes – it’s a fast way to bring yourself back to the present
  97. Write down your goals
  98. Write down your steps
  99. Write down your tasks
  100. Write down your thoughts
  101. Work when you are most comfortable

When you go through the 101 Proven Practices for Focus, don’t expect it to be perfect.  It’s a work in progress.   Some of the practices for focus need to be fleshed out better.   There is also some duplication and overlap, as I re-organize the list and find better ways to group and label ideas.

In the future, I’m going to revamp this collection to have some more precision, better naming, and some links to relevant quotes, and some science where possible.   There is a lot more relevant science that explains why some of these techniques work, and why some work so well.

What’s important is that you find the practices that resonate for you, and the things that you can actually practice.

Getting Started

You might find that from all the practices, only one or two really resonate, or help you change your game.   And, that’s great.   The idea behind having a large list is that you have more to choose from.  The bigger your toolbox, the more you can choose the right tool for the job.  If you only have a hammer, then everything looks like a nail.

If you don’t consider yourself an expert in focus, that’s fine.  Everybody has to start somewhere.  In fact, you might even use one of the practices to help you get better:  Rate your focus each day.

Simply rate yourself, on a scale of 1-10, where 10 is awesome and 1 means you’re a squirrel with a sugar high, dazed and confused, and chasing all the shiny objects that come into sight.   And then see if your focus improves over the course of a week.

If you adopt just one practice, try either Align  your focus and your values or Ask new questions to change your focus.  

Feel Free to Share It With Friends

At the bottom of the 101 Proven Practices for Focus, you’ll find the standard sharing buttons for social media to make it easier to share.

Share it with friends, family, your world, the world.

The ability to focus is really a challenge for a lot of people.   The answer to improve your attention and focus is through proven practices, techniques, and skill building.  Too many people hope the answer lies in a pill, but pills don’t teach you skills.

Even if you struggle a bit in the beginning, remind yourself that growth feels awkward.   You will get better with practice.  Practice deliberately.  In fact, the side benefit of focusing on improving your focus, is, well, you guessed it … you’ll improve your focus.

What we focus on expands, and the more we focus our attention, and apply deliberate practice, the deeper our ability to focus will grow.

Grow your focus with skill.

You Might Also Like

The Great Inspirational Quotes Revamped

The Great Happiness Quotes Collection Revamped

The Great Leadership Quotes Collection Revamped

The Great Love Quotes Collection Revamped

The Great Motivational Quotes Revamped

The Great Personal Development Quotes Collection Revamped

The Great Positive Thinking Quotes Collection

The Great Productivity Quotes Collection Revamped

Categories: Architecture, Programming

A product manager's perfection....

Xebia Blog - Tue, 03/03/2015 - 15:59

is achieved not when there are no more features to add, but when there are no more features to take away. -- Antoine de Saint Exupéry

Not only was Antoine a brilliant writer, philosopher and pilot (well, arguably, since he crashed in the Mediterranean), but most of all he had a sharp mind for engineering, and I frequently quote him when I train product owners, product managers or product companies in general about what makes a good product great. I also tell them that the most important word in their vocabulary is "no". But the question then becomes: what are the criteria to say "yes"?

Typically we will look at the value of a feature and use different ways to prioritise and rank features, break them down to their minimal proposition and get the team going. But what if you already have a product, and its rate of development is slowing? Features have been stacked on each other for years or even decades, and it has become more and more difficult for the teams to wade through the proverbial swamp the code has become.

Too many features

Turns out there are a number of criteria that you can follow:

1.) Working software means it’s actually being used.

Though it may sound obvious, it’s not that easy to figure out. I was once part of a team that had to rebuild a rather large (read: huge) piece of software for an air traffic control system. The managers assured us that every piece of functionality was a must-keep, but the cost would have been prohibitively high.

One of the functions of the system was a record-and-replay mode for legal purposes. It basically registers all events throughout the system to serve as evidence, so that picture compilers could be held accountable, or at least verified. One of our engineers had the bright insight that we could catalogue this data anonymously to figure out which functions were used and which were not.

It turned out the Standish Group was pretty much right in its claim that 80% of software is never used. Carving that out was met with fierce resistance, but it was easier to convince management (and users) with data than with gut feeling.

Another upside? We also knew which functions they were using a lot, and figured out how to improve those substantially.

2.) The cost of platforms

Yippee, we got it running on a gazillion platforms! And boy do we have reach; the marketing guys are going into a frenzy. Even if it was the right choice at the time, you need to revisit this assumption regularly, and be prepared to clean up! This is often looked upon as a disinvestment: “we spent so much money on finally getting Blackberry working” or “it’s so cost effective that we can also offer it on platform XYZ”.

In the web world it’s often the number of browsers we support, but for larger software systems it is more often operating systems, database versions or even hardware. For one customer we would refurbish hardware systems, simply because it was cheaper than moving on to a more modern machine.

Key takeaway: if the platform is deprecated, remove it entirely from the codebase. It will bog the team down, and you need the team's speed to respond to an ever-increasing pace of new platforms.

3.) Old strategies

Every market and every product company pivots at least every few years (or dies). Focus shifts from consumer groups to types of clients, types of delivery, a shift to services, or something else that is novel, hip and, most of all, profitable. Code bases tend to have a certain inertia. The larger the product, the bigger the inertia, and before you know it there are tons of features in there that are far less valuable in the new situation. Cutting away perfectly good features is always painful, but at some point you end up with the toolbars of Microsoft Word: nice features, but complete overkill for the average user.

4.) The cause and effect trap

When people are faced with an issue they tend to focus on fixing the issue as it manifests itself. It's hard for our brain to think in problems; it tries to think in solutions. There is an excellent blog post here that provides a powerful method to overcome this phenomenon by asking "why" five times.

  • "We need the system to automatically export account details at the end of the day."
  • "Why?"
  • "So we can enter the records into the finance system"
  • "So it sounds like the real problem is getting the data into the finance system, not exporting it. Exporting just complicates the issue. Let's implement a data feed that automatically feeds the data to the finance system"

The hard job is to keep evaluating your features continuously, and remove those that are no longer valuable. It may seem like you're throwing away good code, but ultimately it is not the largest product that survives, but the one that is able to adapt fast enough to the changing market. (Freely after Darwin)

 

Tutum, first impressions

Xebia Blog - Mon, 03/02/2015 - 16:40

Tutum is a platform to build, run and manage your Docker containers. After playing with it briefly some time ago, I decided to take a more serious look at it this time. This article describes my first impressions of the platform, looking at it specifically from a continuous delivery perspective.

The web interface

First thing to notice is the clean and simple web interface. Basically there are two main sections, which are services and nodes. The services view lists the services or containers you have deployed with status information and two buttons, one to stop (or start) and one to terminate the container, which means to throw it away.

You can drill down to a specific service, which provides you with more detailed information per service. The detail page provides information about the containers, a slider to scale up and down, endpoints, logging, some metrics for monitoring, and more.

Screen Shot 2015-02-23 at 22.49.33

The second view is a list of nodes. The list contains the VMs on which containers can be deployed. Again with two simple buttons, one to start/stop and one to terminate the node. For each node it displays useful information about the current status, where it runs, and how many containers are deployed on it.

The node page also allows you to drill down to get more information on a specific node.  The screenshot below shows some metrics in fancy graphs for a node, which can potentially be used to impress your boss.

Screen Shot 2015-02-23 at 23.07.30

 

Creating a new node

You’ll need a node to deploy containers on. In the node view you see two big green buttons. One states: “Launch new node cluster”. This will bring up a form with (currently) four popular providers: Amazon, Digital Ocean, Microsoft Azure and SoftLayer. If you have linked your account(s) in the settings you can select that provider from a dropdown box. It only takes a few clicks to get a node up and running. In fact you create a node cluster, which allows you to easily scale up or down by adding or removing nodes from the cluster.

You also have the option to ‘Bring your own node’. This allows you to add your own Ubuntu Linux systems as nodes to Tutum. You need to install an agent onto your system and open up a firewall port to make your node available to Tutum. Again, very easy and straightforward.

Creating a new service

Once you have created a node, you probably want to do something with it. Tutum provides jumpstart images with popular types of services for storage, caching, queueing and more, for example MongoDB, Elasticsearch or Tomcat. Using a wizard it takes only four steps to get a particular service up and running.

Besides the jumpstart images that Tutum provides, you can also search public repositories for your image of choice. Eventually you would like to have your own images running your homegrown software. You can upload your image to a Tutum private registry. You can either pull it from Docker Hub or upload your local images directly to Tutum.

Automating

We all know real (wo)men (and automated processes) don’t use GUIs. Tutum provides a nice and extensive command line interface for both Linux and Mac. I installed it using brew on my MBP and seconds later I was logged in and doing all kinds of cool stuff with the command line.

Screen Shot 2015-02-24 at 22.23.30

The CLI is actually making REST calls, so you can skip the CLI altogether and talk HTTP directly to the REST API, or, if it pleases you, use the Python API to create scripts that are actually maintainable. You can pretty much automate all management of your nodes, containers, and services using the API, which is a must-have in this era of continuous everything.

A simple deployment example

So let's say we've built a new version of our software on our build server. Now we want to get this software deployed to do some integration testing, or, if you're feeling lucky, just drop it straight into production.

Build the Docker image:

tutum build -t test/myimage .

Upload the image to the Tutum registry:

tutum image push <image_id>

Create the service:

tutum service create <image_id>

Run it on a node:

tutum service run -p <port> -n <name> <image_id>

That's it. Of course there are lots of options to play with, for example deployment strategy, memory settings, auto-starting, etc. But the above steps are enough to get your image built, deployed and running. Most of the time I spent was waiting for my image to upload over the flaky-but-expensive hotel wifi.

Conclusion for now

Tutum is clean, simple and just works. I’m impressed with the ease and speed with which you can get your containers up and running. It takes only minutes to get from zero to running using the jumpstart services, or even your own containers. Although they still call it beta, everything I did just worked, without the need to read through lots of complex documentation. The web interface is self-explanatory, and the REST API or CLI provides everything you need to integrate Tutum into your build pipeline, so you can get your new features into production at automation speed.

I'm wondering how challenging management would be at a scale of hundreds of nodes and even more containers when using the web interface. You'd need a meta-overview or aggregate view or something. But then again, you have a very nice API to build that yourself.

The Great Positive Thinking Quotes Collection

I revamped my positive thinking quotes collection on Sources of Insight to help you amp up your ability to generate more positive thoughts. 

It’s a powerful one.

Why positive thinking?

Maybe Zig Ziglar said it best:

“Positive thinking will let you do everything better than negative thinking will.”

Positive Thinking Can Help You Defeat Learned Helplessness

Actually, there’s a more important reason for positive thinking: 

It’s how you avoid learned helplessness.

Learned helplessness is where you give up, because you don’t think you have any control over the situation, or what happens in your life.  You explain negative situations as permanent, personal, and pervasive, instead of temporary, situational, and specific.

That’s a big deal.

If you fall into the learned helplessness trap, you spiral down.  You stop taking action.  After all, why take action, if it won’t matter.  And, this can lead to depression.

But that’s a tale of woe for others, not you.   Because you know how to defeat learned helplessness and how to build the skill of learned optimism.

You can do it by reducing negative thinking, and by practicing positive thinking.   And what better way to improve your positive thinking, than through positive thinking quotes.

Keep a Few Favorite Positive Thinking Quotes Handy

Always keep a few positive thinking quotes at your fingertips so that they are there when you need them.

Here is a quick taste of a few of my favorites from the positive thinking quotes collection:

“A positive attitude may not solve all your problems, but it will annoy enough people to make it worth the effort.” –  Herm Albright

"Attitudes are contagious.  Are yours worth catching?" — Dennis and Wendy Mannering

"Be enthusiastic.  Remember the placebo effect – 30% of medicine is showbiz." — Ronald Spark

"I will not let anyone walk through my mind with their dirty feet." — Mahatma Gandhi

"If the sky falls, hold up your hands." – Author Unknown

Think Deeper About Positivity By Using Categories for Positive Thinking

But this positive thinking quotes collection is so much more.  I’ve organized the positive thinking quotes into a set of categories to chunk it up, and to make it more insightful:

Actions
Adaptability and Flexibility
Anger and Frustration
Appreciation and Gratitude
Attitude, Disposition, and Character
Defeat, Setbacks, and Failures
Expectations
Focus and Perspective
Hope and Fear
Humor
Letting Things Go and Forgiveness
Love and Liking
Opportunity and Possibility
Positive Thinking (General)

The more distinctions you can add to your mental repertoire, the more powerful a positive thinker you will be.

You can think of each positive thinking quote as a distinction that can add more depth.

Draw from Wisdom of the Ages and Modern Sages on the Art and Science of Positive Thinking

I've included positive thinking quotes from a wide range of people including  Anne Frank, Epictetus, Johann Wolfgang von Goethe, Napoleon, Oscar Wilde, Ralph Waldo Emerson, Robert Frost, Voltaire, Winston Churchill, and many, many more.

You might even find useful positive thinking mantras from the people that you work with, or the people right around you. 

For example, here are a few positive thinking thoughts from Satya Nadella:

“The future we're going to invent together, express ourselves in the most creative ways.”

“I want to work in a place where everybody gets more meaning out of their work on an everyday basis.

I want each of us to give ourselves permission to be able to move things forward.  Each of us sometimes overestimate the power others have to do things vs. our own ability to make things happen.”

Challenge Yourself to New Levels of Positive Thinking through Positive Quotes

As you explore the positive thinking quotes collection, try to find the quotes that challenge you the most, that really make you think, and give you a new way to generate more positive thoughts in your worst situations. 

In the words of Friedrich Nietzsche, "That which does not kill us makes us stronger."

You can use your daily trials and tribulations in the workplace as your personal dojo to practice and build your positive thinking skills.

The more positivity you can bring to the table, the more you’ll empower yourself in ways you never thought possible.

As you get tested by your worst scenarios, it’s good to keep in mind, the words of F. Scott Fitzgerald:

"The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.  One should, for example, be able to see that things are hopeless and yet be determined to make them otherwise."

I’ll also point out that as you grow your toolbox of positive thinking quotes and you build your positive thinking skills, you need to also focus on taking positive action.

Don’t just imagine a better garden, get out and actually weed it.

Don’t Just Imagine Everything Going Well – Imagine How You’ll Deal with the Challenges

Here’s another important tip about positivity and positive thinking …

If you use visualization as part of your approach to getting results, it’s important to include dealing with setbacks and challenges.   It’s actually more effective to imagine the most likely challenges coming up, and walking through how you’ll deal with them, if they occur.   This is way more effective than just picturing the perfect plan where everything goes without a hitch.

The reality is things happen, stuff comes up, and setbacks occur.

But your ability to mentally prepare for the setbacks, and have a plan of action, will make you much more effective in dealing with the challenges that actually do occur.  This will help you respond vs. react in more situations, and to stay in a better place mentally while you evaluate options, and decide a course of action.  (Winging it under stress doesn’t work very well because we shut down our prefrontal cortex – the smart part of our brain – when we go into fight-or-flight mode.)

If I missed any of your favorite positive thinking quotes in my positive thinking quotes collection, please let me know.

In closing, please keep handy one of the most powerful positive thinking quotes of all time:

“May the force be with you.”

Always.

You Might Also Like

The Great Inspirational Quotes Revamped

The Great Happiness Quotes Collection Revamped

The Great Leadership Quotes Collection Revamped

The Great Love Quotes Collection Revamped

The Great Motivational Quotes Revamped

The Great Personal Development Quotes Collection Revamped

The Great Productivity Quotes Collection Revamped

Categories: Architecture, Programming

Stuff The Internet Says On Scalability For February 27th, 2015

Hey, it's HighScalability time:


Hear ye puny mortal. 1.3 million Earths doth fill our Sun. Whence comes this monster black hole with a mass 12 billion times that of the Sun?

 

  • 1 Terabit of Data per Second: 5G; 1.9 Terabytes:  customer in stadium data usage during the Super Bowl; 1 TB: free each month on Big Query; 100x: reduced power consumption in radio chip

  • Quotable Quotes:
    • Robin Harris: But now that non-volatile memory technology - flash today, plus RRAM tomorrow - has been widely accepted, it is time to build systems that use flash directly instead of through our antique storage stacks. 
    • Sundar Pichai: That’s the essence of what we [Google] get excited about – working on problems for people at scale, which make a big difference in [people’s] lives. 
    • @timoreilly: Facebook is hacked 600,000 times a day. @futurecrimes First thing to do to protect yourself, turn on 2-factor authentication
    • @architectclippy: I see you have a poorly structured monolith. Would you like me to convert it into a poorly structured set of microservices?
    • Poppy Crum: Your brain wants as much as possible to come up with a robust actionable perception of the world and of the information and data that is coming in.
    • @BenedictEvans: Both Google and Facebook killing XMPP.  IM being euthanized just at the time messaging could become a third run-time for the internet
    • @dhh: 4-tier / micro-service architectures are organizational scaling patterns far more than they're tech. 1st rule of distributed systems: Don't.
    • @amcafee: Ex-Etsy seller: "In practical terms, scaling the handmade economy is an impossibility."
    • kurin: If you're behind a LB you can just drain the traffic to the hosts you're about to upgrade. Also, if you're above your SLA... I mean, some dropped queries aren't the end of the world.
    • @WSJ: Facebook’s 5,000+ staff generate $1.36 million each in annual revenue. The key to productivity is custom-built software tools
    • @jaykreps: Software is mostly human capital (in people's heads): losing the team is usually worse than losing the code.
    • Dylan Tweney: Mobile growth is huge, and could surge at least 3x in the next two years
    • Joe Davison: I learned that there is often more to business than meets the eye, and the only way to succeed is to plan ahead and anticipate all contingencies.
    • @etherealmind: Google published 30000 configuration changes to its network in 1 month 

  • What's different about AI this time around? Less hype, more data, more computation. The Believers: It was a stunning result. These neural nets were little different from what existed in the 1980s. This was simple supervised learning. It didn’t even require Hinton’s 2006 breakthrough. It just turned out that no other algorithm scaled up like these nets. "Retrospectively, it was a just a question of the amount of data and the amount of computations," Hinton says.

  • What lesson did Ozgun Erdogan learn while working on a database at Amazon that never saw the light of day? How to Build Your Distributed Database (1/2): This optimized plan has many computations pushed down in the query tree, and only collects a small amount of data. This enables scalability. Much more importantly, this logical plan formalizes how relational algebra operators scale in distributed systems, and why. That's one key takeaway I had from building a distributed database before. In the land of distributed systems, commutativity is king. Model your queries with respect to the king, and they will scale.

  • Replication for resiliency? Nature thought of that. Nibbled? No Problem: Champaign first observed in the 1980s, some plants respond by making more seeds, ultimately benefiting from injury in a phenomenon called overcompensation. More recently, Paige and postdoc Daniel Scholes suspected a role for endoreduplication, in which a cell makes extra copies of its genome without dividing, multiplying its number of chromosome sets, or “ploidy.”

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Sending Windows logs to Papertrail with nxlog

Agile Testing - Grig Gheorghiu - Thu, 02/26/2015 - 01:04
I am revisiting Papertrail as a log aggregation tool. It's really easy to send Linux logs to Papertrail via syslog or rsyslog or syslog-ng (see this article on how to configure syslog with TLS) but to send Windows logs you need to jump through some hoops.

Papertrail recommends nxlog as their Windows log management tool of choice, so that's what I used. This Papertrail article explains how to install and configure nxlog on Windows (I recommend enabling TLS).  The nxlog.conf template file provided by Papertrail will send Windows Event logs over. I also wanted to send application-specific logs, so here's what I did:

1) Add an Input section to nxlog.conf for each directory containing the files you want to send to Papertrail. For example, if one of your applications logs to C:\MyApp1\logs and your log files end with .log, you could have this input section:

# Monitor MyApp1 log files 
<Input MyApp1>
 Module im_file
 File 'C:\\MyApp1\\logs\\*.log' 
 Exec $Message = $raw_event; 
 Exec if $Message =~ /GET \/ping/ drop(); 
 Exec if file_name() =~ /.*\\(.*)/ $SourceName = $1; 
 SavePos TRUE 
 Recursive TRUE 
</Input>

Some observations:

  • The name MyApp1 is the name of this Input section
  • The File statement points to the location and name of the log files
  • The first Exec statement saves the log line under consideration as the variable $Message
  • The second Exec statement drops messages that contain a specific regular expression, in my case just 'GET /ping' -- which happens to be health checks from the load balancer that pollute the logs; you can replace this with any regular expression that will filter out log lines you don't want sent to Papertrail
  • The next few statements were in the sample Input stanza from the template nxlog.conf file so I just left them there
2) Add more Input sections, one for each log location (i.e. multiple log files under a given directory) that you want to send to Papertrail. You need to give each Input section a unique name (e.g. MyApp1 above).
3) Add a Route section for the Input sections defined previously. If you defined 2 Input sections MyApp1 and MyApp2, your Route section would look something like:

<Route 2>
Path MyApp1, MyApp2 => filewatcher_transformer => syslogout
</Route>
The filewatcher_transformer section was already included in the sample nxlog.conf file from Papertrail. The Route section above says that the files processed by the 2 Input paths MyApp1 and MyApp2 will be processed through the statements defined in the filewatcher_transformer section, then will be sent to Papertrail by virtue of being processed through the statements defined in the syslogout section.
At this point, if you restart the nxlog service on your Windows box, you should start seeing log entries from your application(s) flowing into the Papertrail console.

Deep Learning without Deep Pockets

Now that you’ve transformed your system through successive evolutions of architecture goodness...you've made it cloud native, you now treat a fistful of datacenters as a single computer, you’ve microservicized it, you’ve containerized it, you’re continuously releasing and improving it, you’ve made it reactive, you’ve socialized it, you’ve mobilized it, you’ve Hadoop’ed it, you’ve made it DevOps friendly, and you have real-time dashboards that would make NORAD jealous...what’s next?

Deep learning is what’s next. Making machines that learn. The problem is how?

All the other transformations have been changes good programmers can learn to do. Deep learning is still deep magic. We are waiting for the Hadoop of deep learning to be built.

Until then, if you aren’t Google with Google sized clusters and cloisters of PhDs, what can you do? Greg Corrado, Senior Research Scientist at Google, gave a great presentation at the RE.WORK Deep Learning Summit 2015 (videos) that has some useful suggestions:

Categories: Architecture

The Microsoft Story for the Cloud

How has the Cloud changed your world?

One of the ways we challenge people is to ask, do you want to move to the Cloud, use the Cloud, or be the Cloud?

But to answer that well, you need to really be grounded in your vision for the future, and the role you want to play.

The Cloud creates a brave new world.  It enables and powers the Digital Economy.

Businesses need to cross the Cloud chasm (and some don’t make it) in an effort to stay relevant and to be what’s next.

Businesses need to re-imagine themselves and explore the art of the possible.

Business leaders and IT leaders need to help others forge their way forward in the Digital Frontier.

And it all starts with a story.

A story that inspires hearts and minds so people can wrap their heads around the challenge and the change.

I think Satya tells the Microsoft story for the Cloud in a very simple and compelling way:

"We will reinvent productivity to empower every person and every organization on the planet to do more and achieve more." -- Satya Nadella, Microsoft CEO

That’s a pretty simple and yet pretty powerful and compelling story of why we do what we do.

It’s a great way to re-imagine and inspire our transformation to a productivity and platform company in a Mobile-first, Cloud-first world.   And, it’s a very simple story around productivity and empowerment that inspires and drives people in various roles and responsibilities to co-create the future in a profound way.

What is your simple story for how you re-imagine you or your business in a Mobile-First, Cloud-First world?

You Might Also Like

Business Scenarios for the Cloud

If You Want to Thrive at Microsoft

Microsoft Explained: Making Sense of the Microsoft Platform Story

Satya Nadella is All About Customer Focus, Employee Engagement, and Changing the World

Satya Nadella on The Future is Software

Satya Nadella on Everyone Has to Be a Leader

The Microsoft Story

Categories: Architecture, Programming

Why We Are Moving to the Cloud: Agility, Economics, and Innovation

I was reading the IT Showcase’s page on the Cloud platform.

I really liked the simple little story around why we are moving to the Cloud:

“Three words: Agility, economics and innovation. Cloud technology satisfies the CEO's desire for greater business agility, the CFO's desire to streamline operations, and the CMO's desire for a more innovative way to engage customers.”

Some people move to the Cloud because they see an ROI play.  Others move because they see opportunity cost.  Others move simply because they don’t want to be left behind.

The most common reason I see is business agility and to stay relevant in today’s world.

People are using the Cloud to re-imagine the customer experience, transform the workforce and employee productivity, and to transform operations and back-office activities.

In all cases, these transformations lead to business-model innovation and new opportunities to create and capture value.

Value is a moving target and the Cloud can help you stay in the game.

Are you in the game?

You Might Also Like

10 High-Value Activities in the Enterprise

Cloud Changes the Game from Deployment to Adoption

The Future of Jobs

Management Innovation is at the Top of the Innovation Stack

McKinsey on Unleashing the Value of Big Data

Reenvision Your Customer Experience

Reenvision Your Operations

Categories: Architecture, Programming

Introducing Structurizr

Coding the Architecture - Simon Brown - Tue, 02/24/2015 - 16:36

I've mentioned Structurizr in passing, but I've never actually written a post that explains what it is and why I've built it. First, some background.

"What tool do you use to draw software architecture diagrams?"

I get asked this question almost every time I run one of my workshops, usually just after the section where I introduce the C4 model and show some example diagrams. My answer to date has been "just OmniGraffle or Visio", and recommending that people use a drawing tool to create software architecture diagrams has always bugged me. My Simple Sketches for Diagramming Your Software Architecture article provides an introduction to the C4 model and my thoughts on UML.

Once you have a simple way to think about and describe the architecture of a software system (and this is what the C4 model provides), you realise that the options for communicating it are relatively limited. And this is where the idea for a simple diagramming tool was born. In essence, I wanted to build a tool where the data is sourced from an underlying model and all I need to do is move the boxes around on the diagram canvas.

Part 1: Software architecture as code

Structurizr initially started out as a web application where you would build up the underlying model (the software systems, people, containers and components) by entering information about them through a number of HTML forms. Diagrams were then created by selecting which type of diagram you wanted (system context, container or component) and then by specifying which elements you wanted to see on the diagram. This did work but the user experience, particularly related to data entry, was awful, even for small systems.

Behind the scenes of the web application was a simple collection of domain classes that I used to represent software systems, containers and components. Creating a software architecture model using these classes was really succinct, and it struck me that perhaps this was a better option. The trade-off here is that you need to write code in order to create a software architecture model but, since software architects should code, this isn't a problem. ;-)

These classes have become what is now Structurizr for Java, an open source library for creating software architecture models as code. Having the software architecture model as code opens a number of opportunities for creating the model (e.g. extracting components automatically from a codebase) and communicating it (e.g. you can slice and dice the model to produce a number of different views as necessary). Since the models are code, they are also versionable alongside your codebase and can be integrated with your build system to keep your models up to date. The models themselves can then be output to another tool for visualisation.

Part 2: Web-based software architecture diagrams

structurizr.com is the other half of the story. It's a web application that takes a software architecture model (via an API) and provides a way to visualise it. Aside from changing the colour, size and position of the boxes, the graphical representation is relatively fixed. This in turn frees you up from messing around with creating static diagrams in drawing tools such as Visio.

Structurizr screenshot
A screenshot of Structurizr.

As far as features go, the list currently includes an API for getting/putting models, making models public/private, embedding diagrams into web pages, creating diagrams based upon different page sizes (paper and presentation slide sizes), exporting diagrams to a 300dpi PNG file (for printing or inclusion in a slide deck), automatic generation of a key/legend and a fullscreen presentation mode for showing diagrams directly from the tool. The recent webinar I did with JetBrains includes more information and a demo. Pricing is still to be confirmed, but there will be a free tier for individual use and probably some paid tiers for teams and organisations (e.g. for sharing private models).


An embedded software architecture diagram from structurizr.com (you can move the boxes).

It's worth pointing out that structurizr.com is my vision of what I want from a simple software architecture diagramming tool, but you're free to take the output from the open source library and create your own tooling to visualise the model. Examples include an export to DOT format (for importing into something like Graphviz), XMI format (for importing into UML tools), a desktop app, IDE plugins, etc.

That's a quick introduction to Structurizr and, although it's still a work in progress, I'm slowly adding more users via a closed beta, with the goal of opening up registration next month. It definitely scratches an itch that I have, and I hope other people will find it useful too.

Categories: Architecture

Introducing ASP.NET 5

ScottGu's Blog - Scott Guthrie - Mon, 02/23/2015 - 21:41

The first preview release of ASP.NET 1.0 came out almost 15 years ago.  Since then millions of developers have used it to build and run great web applications, and over the years we have added and evolved many, many capabilities to it. 

I'm excited today to post about a new release of ASP.NET that we are working on that we are calling ASP.NET 5.  This new release is one of the most significant architectural updates we've done to ASP.NET.  As part of this release we are making ASP.NET leaner, more modular, cross-platform, and cloud optimized.  ASP.NET 5 is now available as a preview release, and you can start using it today by downloading the latest CTP of Visual Studio 2015 which we just made available.

ASP.NET 5 is an open source web framework for building modern web applications that can be developed and run on Windows, Linux and the Mac. It includes the MVC 6 framework, which now combines the features of MVC and Web API into a single web programming framework.  ASP.NET 5 will also be the basis for SignalR 3 - enabling you to add real time functionality to cloud connected applications. ASP.NET 5 is built on the .NET Core runtime, but it can also be run on the full .NET Framework for maximum compatibility.

With ASP.NET 5 we are making a number of architectural changes that make the core web framework much leaner (it no longer requires System.Web.dll) and more modular (almost all features are now implemented as NuGet modules - allowing you to optimize your app to have just what you need).  With ASP.NET 5 you gain the following foundational improvements:

  • Build and run cross-platform ASP.NET apps on Windows, Mac and Linux
  • Built on .NET Core, which supports true side-by-side app versioning
  • New tooling that simplifies modern Web development
  • Single aligned web stack for Web UI and Web APIs
  • Cloud-ready environment-based configuration
  • Integrated support for creating and using NuGet packages
  • Built-in support for dependency injection
  • Ability to host on IIS or self-host in your own process

The end result is an ASP.NET that you'll feel very familiar with, and which is also now even more tuned for modern web development.

Flexible, Cross-Platform Runtime

ASP.NET 5 works with two runtime environments to give you greater flexibility when hosting your app. The two runtime choices are:

.NET Core – a new, modular, cross-platform runtime with a smaller footprint.  When you target .NET Core, you’ll be able to take advantage of some exciting new benefits:

1) You can deploy the .NET Core runtime with your app which means your app will run with this deployed version of the runtime rather than the version of the runtime that is installed on the host operating system. Your version of the runtime runs side-by-side with versions for other apps. You can update that runtime, if needed, without affecting other apps, or you can continue running on the same version even though other apps on the system have been updated.  This makes app deployment and framework updates much easier and less impactful to other apps running on a system.

2) Your app is only dependent on features it really needs. Therefore, you are never prompted to update/service the runtime for features that are not relevant to your app. You will spend less time testing and deploying updates that are perhaps unrelated to the functionality of your app.

3) Your app can now be run cross-platform. We will provide a cross-platform version of .NET Core for Windows, Linux and Mac OS X systems.  Regardless of which operating system you use for development or which operating system you target for deployment, you will be able to use .NET. The cross-platform version of the runtime has not been released yet, but we are working on it on GitHub and plan to have an official preview of it out soon.

.NET Framework – The API for .NET Core is currently more limited than the full .NET Framework, so you may need to modify existing apps to target .NET Core. If you don't want to have to update your app you can instead run ASP.NET 5 applications on the full .NET Framework (version 4.5.2 and above).  When doing this you have access to the complete set of .NET Framework APIs. Your existing applications and libraries will work without modification on this runtime.

MVC 6 - a unified programming model

MVC, Web API and Web Pages provide complementary functionality and are frequently used together when developing a solution. However, in past ASP.NET releases, these programming frameworks were implemented separately and therefore contained some duplication and inconsistencies. With MVC 6, we are merging those models into a single programming model. Now, you can create a single web application that handles the Web UI and data services without needing to reconcile differences in these programming frameworks. You will also be able to seamlessly transition a simple site first developed with Web Pages into a more robust MVC application.

You can now return Razor views and content-negotiated data from the same controller and using the same MVC filter pipeline.
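To make this concrete, here is a minimal sketch of what such a unified controller could look like. The controller, action, and route names are purely illustrative, and the namespaces and attributes follow the beta-era previews, so the details may differ in later releases:

using System.Collections.Generic;
using Microsoft.AspNet.Mvc;

// Hypothetical controller for illustration only
public class ProductsController : Controller
{
    // Returns a Razor view for the Web UI
    public IActionResult Index()
    {
        return View();
    }

    // Returns content-negotiated data (JSON by default) from the same
    // controller, flowing through the same MVC filter pipeline
    [HttpGet("api/products")]
    public IEnumerable<string> GetProducts()
    {
        return new[] { "Widget", "Gadget" };
    }
}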

In addition to unifying the existing frameworks we are also adding new features to make server-side Web development easier, like the new tag helpers feature. Tag helpers let you use HTML helpers in your views by simply extending the semantics of tags in your markup.

So instead of writing this:

@Html.ValidationSummary(true, "", new { @class = "text-danger" })

<div class="form-group">

    @Html.LabelFor(m => m.UserName, new { @class = "col-md-2 control-label" })

    <div class="col-md-10">

        @Html.TextBoxFor(m => m.UserName, new { @class = "form-control" })

        @Html.ValidationMessageFor(m => m.UserName, "", new { @class = "text-danger" })

    </div>

</div>

You can instead write this:

<div asp-validation-summary="ModelOnly" class="text-danger"></div>

<div class="form-group">

    <label asp-for="UserName" class="col-md-2 control-label"></label>

    <div class="col-md-10">

        <input asp-for="UserName" class="form-control" />

        <span asp-validation-for="UserName" class="text-danger"></span>

    </div>

</div>

Tag helpers make authoring your views more natural and readable. They also simplify customizing the output of HTML helpers with additional markup while letting you take full advantage of the HTML editor.

For more examples of creating MVC 6 apps, see these tutorials.

Modern web development

This week's ASP.NET 5 preview also includes a number of other great development features that enable you to build even better web applications:

Dynamic Development

In Visual Studio 2015, we take advantage of dynamic compilation to provide a streamlined developer experience. You no longer have to compile your application every time you want to see a change. Instead, just (1) edit the code, (2) save your changes, (3) refresh the browser, and then (4) see your change automatically appear.

image

You enjoy a development experience that is similar to working with an interpreted language without sacrificing the benefits of a compiled language.

You can also optionally use other code editors to work on your ASP.NET 5 projects. Every function within the Visual Studio user interface is matched with cross-platform command-line operations.

Integration with Popular Web Development Tools (Bower, Grunt and Gulp)

Another exciting feature in Visual Studio 2015 is built-in support for Bower, Grunt, and Gulp - popular open source tools that we think should be in every Web developer’s toolkit.

  • Bower is a package manager for client-side libraries, including both JavaScript and CSS libraries.
  • Grunt and Gulp are task runners, which help you to automate your web development workflow. You can use Grunt or Gulp for tasks like compiling LESS, CoffeeScript, or TypeScript files, running JSLint, or minifying JavaScript files.

Bower: To add a JavaScript library to your ASP.NET project add it directly in the bower.json config file:

image

Notice that Visual Studio gives you IntelliSense with a list of available packages. The next time you open the solution, Visual Studio automatically restores any missing packages, so you don’t need to check the packages into source control.

For server-side packages, you’ll still use NuGet Package Manager.

Grunt: In modern web development, you can find yourself managing a lot of tasks, just to build your app: Compiling LESS, TypeScript, or CoffeeScript files, linting, JavaScript minification, running JS unit tests, and so on. Every team will have its own set of requirements, depending on the particular tools that you use. Task runners make it easier to manage and coordinate these tasks. Visual Studio 2015 will support two popular task runners, Grunt and Gulp.

For example, let’s say you want to use Grunt to compile LESS files. Just go into package.json and add the grunt-contrib-less package, which is a third-party Grunt plugin.

image

Use the new Task Runner Explorer in Visual Studio 2015 to bind the task to a build step (pre-build, post-build, clean, or when the solution is opened).

image

This makes it incredibly easy to automate common tasks within your projects - and have them work both for you and across a team-wide project.

Simplified dependency management

In ASP.NET 5 you manage dependencies by adding NuGet packages. You can use the NuGet Package Manager or simply edit the JSON file (project.json) that lists the NuGet packages and versions used in your project. The project.json file is easy to work with and you can edit it with any text editor, which enables you to update dependencies even when the app has been deployed to the cloud.

The project.json file looks like:

image

In Visual Studio 2015, IntelliSense assists you with finding the available NuGet packages that you can add as dependencies.

image

And, Intellisense can even help you with the available versions:

image

Cloud-ready configuration

In ASP.NET 5, we eliminated the need to use the Web.config file for configuration values. We wanted to make it easier for you to deploy your app to the cloud and have the app automatically read the correct configuration values for that environment. The new system enables you to request named values from a variety of sources (such as JSON, XML, or environment variables). You can decide which formats work best in your situation.

In the Startup.cs file, you can now add or remove the sources for configuration values.

image
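
As a rough sketch of what that Startup wiring might look like (assuming the preview-era Microsoft.Framework.ConfigurationModel API; not a verbatim copy of the screenshot):

using Microsoft.Framework.ConfigurationModel;

public class Startup
{
    public IConfiguration Configuration { get; set; }

    public Startup()
    {
        // Values are read from config.json first, then overridden
        // by any matching environment variables.
        Configuration = new Configuration()
            .AddJsonFile("config.json")
            .AddEnvironmentVariables();
    }
}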

The above code snippet shows a project that is set up to retrieve configuration values from a JSON file and environment variables. You can change this code if you need to specify other sources. In the specified config.json file, you then provide the values.

image

In your host environment, such as Azure, you can set the environment variables and those values are automatically used instead of local configuration values after the application is deployed. You can deploy your application without worrying about publishing test values.

Dependency injection (DI)

Dependency Injection (DI) is supported in existing ASP.NET frameworks, like MVC, Web API and SignalR, but not in a consistent and holistic way. ASP.NET 5 provides a built-in DI abstraction that is available in a consistent way throughout the entire web stack. You can access services at startup, in middleware, in filters, in controllers, in model binding, and in virtually any part of the pipeline where you want to use your services. ASP.NET 5 includes a minimalistic DI container to bootstrap the system, but you can easily replace the default container with your container of choice (Autofac, Ninject, etc.). Services can be singleton, scoped to the request, or transient.
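
For instance, registering services with each of the three lifetimes might look like this (the interfaces and implementations here are hypothetical, used only to illustrate the registration calls):

public void ConfigureServices(IServiceCollection services)
{
    // One shared instance for the lifetime of the application.
    services.AddSingleton<ICacheService, MemoryCacheService>();

    // One instance per HTTP request.
    services.AddScoped<IUnitOfWork, UnitOfWork>();

    // A new instance every time the service is requested.
    services.AddTransient<IEmailSender, SmtpEmailSender>();
}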

For example, to see how to use constructor injection with ASP.NET MVC 6, create a new ASP.NET 5 Starter Web project and add a simple time service:

using System;

 

namespace WebApplication1

{

    public class TimeService

    {

        public TimeService()

        {

            Ticks = DateTime.Now.Ticks.ToString();

        }

        public String Ticks { get; set; }

    }

}

The simple service class sets the current Ticks when the constructor is called.

Next, register the time service as a transient service in the ConfigureServices method of the Startup class:

public void ConfigureServices(IServiceCollection services)

{

    services.AddMvc();

    services.AddTransient<TimeService>();

}

Then, update the HomeController to use constructor injection and to write the Ticks value captured when the TimeService object was created.

public class HomeController : Controller

{

    public TimeService TimeService { get; set; }

 

    public HomeController(TimeService timeService)

    {

        TimeService = timeService;

    }

 

    public IActionResult About()

    {

        ViewBag.Message = TimeService.Ticks + " From Controller";

        System.Threading.Thread.Sleep(1);

        return View();

    }

 

    // Code removed for brevity

}

Notice the controller doesn't create a TimeService. It's injected when the controller is instantiated.

In MVC 6 you can use the [Activate] attribute to inject services via properties. You can use [Activate] not just on controllers but also on filters and view components. This means you can simplify your controller code like this:

public class HomeController : Controller

{

    [Activate]

    public TimeService TimeService { get; set; }

 

    // Code removed for brevity

}

MVC 6 also supports DI into Razor views via the @inject keyword. In the code below, I’ve injected the time service into the about view directly and defined a TimeSvc property by which it can be accessed:

@using WebApplication1

@inject TimeService TimeSvc

 

<h3>@ViewBag.Message</h3>

 

<h3>

    @TimeSvc.Ticks From Razor

</h3>

When you run the app, you can see different ticks values from the controller and the view.

image

Fast HTTP performance

ASP.NET 5 introduces a new HTTP request pipeline that is modular so you can add only the components that you need. The pipeline is also no longer dependent on System.Web. By reducing the overhead in the pipeline, your app can experience better throughput and a more tuned HTTP stack. The new pipeline is based on many of the learnings from the Katana project and also supports OWIN.

To customize which components are used in the pipeline, use the Configure method in your Startup class. The Configure method is used to specify which middleware you want to “use” in your request pipeline. ASP.NET 5 already includes ported versions of many of the middleware from the Katana project, like middleware for static files, authentication and diagnostics. The following code shows some of the features you can add to or remove from the pipeline for your project.

public void Configure(IApplicationBuilder app)

{

    // Add static files to the request pipeline.

    app.UseStaticFiles();

 

    // Add cookie-based authentication to the request pipeline.

    app.UseIdentity();

 

    // Add MVC and routing to the request pipeline.

    app.UseMvc(routes =>

    {

        routes.MapRoute(

            name: "default",

            template: "{controller}/{action}/{id?}",

            defaults: new { controller = "Home", action = "Index" });

    });

}

You can also write your own middleware components and add them to the pipeline, as sketched below.
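
As a rough illustration (not from the post), an inline middleware component registered with the app.Use extension might look something like this:

public void Configure(IApplicationBuilder app)
{
    // A minimal inline middleware: log the request path, then hand
    // control to the next component in the pipeline.
    // Console output is used here purely for illustration.
    app.Use(async (context, next) =>
    {
        Console.WriteLine("Handling request: " + context.Request.Path);
        await next();
        Console.WriteLine("Finished handling: " + context.Request.Path);
    });

    app.UseMvc();
}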

Open source

We are developing ASP.NET 5 as an open source project on GitHub. You can view the code, see when changes were made, download the code, and submit changes. We believe making ASP.NET 5 open source will make it easier for you to understand the code, understand our intended direction, and contribute to the project.

image

Docs and tutorials

To get started with ASP.NET 5 you can find docs and tutorials on the ASP.NET site at http://asp.net/vnext. The following tutorials will guide you through the steps of creating your first ASP.NET 5 project.

Also read this article for even more ASP.NET and Web Development improvements coming this week.

Hope this helps,

Scott

Categories: Architecture, Programming

HappyPancake: a Retrospective on Building a Simple and Scalable Foundation

This is a guest repost by Rinat Abdullin, who worked on HappyPancake, the largest free dating site in Sweden. Initially written in ASP.NET and MS SQL Database server, it eventually became overly complex and expensive to scale. This is the last post in a nearly two year long series of engaging articles on the evolution of the project. For the complete list please see the end of this article.

Our project at HappyPancake completed this week. We delivered a simple and scalable foundation for the next version of the largest free dating web site in Sweden (with presence in Norway and Finland).

Journey

Below is a short map of that journey. It lists technologies and approaches that we evaluated for this project. Yellow regions highlight items which made their way into the final design.

Project Deliverables
Categories: Architecture

The Great Motivational Quotes Revamped

When you need to make things happen, motivational quotes can help you dig deep and get going.

I put together a very comprehensive collection of the world’s best motivational quotes a while back.

It was time for a refresh.  Here it is:

Motivational Quotes – The Great Motivational Quotes Collection

Imagine motivational wisdom of the ages and modern sages right at your fingertips all on one page.   I included motivational quotes from Bruce Lee, Tony Robbins, Winston Churchill, Waldo Emerson, Jim Rohn, and more.

See if you can find at least three motivational quotes that you can take with you on the road of life, to help you deal with setbacks and challenges, and to unleash your inner-awesome.

Getting Started with Motivational Quotes

I’ll start you off.   If you don’t already have these in your personal motivational quotes collection, here are a few that I draw from often:

“If you’re going through hell, keep going.” — Winston Churchill

“When it’s time to die, let us not discover that we have never lived.” -Henry David Thoreau

“Don’t ask yourself what the world needs, ask yourself what makes you come alive. And then go do that. Because what the world needs is people who have come alive.”— Howard Thurman

How’s that for a starter set?

Build Better Motivational Thought Habits

You can train your brain with motivational mantras.     Our thoughts are habits.   If you want to build better thought habits, then feed on some of the best motivational quotes of all time.

“An ounce of action is worth a ton of theory.” – Ralph Waldo Emerson

“Positive thinking won’t let you do anything but it will let you do everything better than negative thinking will.” – Zig Ziglar

“The only person you are destined to become is the person you decide to be.” – Ralph Waldo Emerson

If you train yourself well, you won’t entirely eliminate motivational setbacks, but you’ll be able to defeat procrastination, and you’ll be able to bounce back faster when you find yourself in a slump.   Motivation is a skill you can build, and it will serve you well, in work and life.

You Create Your Future

The most important motivational concept to hold on to is the idea that you create your future.  Or, as Wayne Dyer puts it:

“Go for it now. The future is promised to no one.”

So go for the bold, and get your game face on.

If you need some help kick-starting your fire, stroll through the motivational quotes a few times until something really sinks in or clicks for you.  Life’s better with the right words, and there are just the right words already out there, just waiting to be found.

Enjoy and take your time sifting through the Motivational Quotes – The Great Motivational Quotes Collection.

Also, if you have a favorite motivational quote that I don’t have listed, let me know.

You Might Also Like

The Great Inspirational Quotes Revamped

The Great Happiness Quotes Collection Revamped

The Great Leadership Quotes Collection Revamped

The Great Love Quotes Collection Revamped

The Great Personal Development Quotes Collection Revamped

The Great Productivity Quotes Collection Revamped

Categories: Architecture, Programming

The Great Inspirational Quotes Collection Revamped

I think of inspiration simply as “breathe life into.”

Whether you're shipping code, designing the next big thing, or simply making things happen, inspirational quotes can help keep you going.

In the spirit of helping people find their Eye of the Tiger or get their mojo on, I’ve put together a hand-crafted collection of the ultimate inspirational quotes:

The Inspirational Quotes Collection

If you’ve seen my collection of inspirational quotes before, it’s completely revamped.   It should be much easier to browse all of the inspirational quotes now so you can see some old familiar quotes that you may have heard of long ago, as well as many inspirational quotes you have never heard of before.

Dive in, explore the collection of inspirational quotes, and see if you can find at least three inspirational quotes that breathe new life into your moment, your day, your work, or anything you do.

The Power of Inspirational Quotes

Inspirational quotes can help us move mountains.   The right inspirational words and ideas can help us boldly go where we have not gone before, as well as conquer our fears and soar to new heights.

Or, the right inspirational quote can simply help us roar a little louder inside, when we need it most.

Life isn’t always a bowl of cherries.  And work can be an incredible challenge.    And sometimes even our best-laid plans go up in flames.

So having a repertoire of inspirational quotes and inspiring mantras at your mental fingertips can help you roll with the punches and keep going.

One of the most important inspirational ideas I learned early on goes like this:

Whatever doesn’t kill you makes you stronger.

It helped me turn trials into triumphs, and eventually learn to take on big challenges as a way to grow.

Another inspirational idea that really helped me find my way forward is by Ralph Waldo Emerson, and, it goes like this:

“Do not follow where the path may lead. Go, instead, where there is no path and leave a trail.”

Whenever I went on a new journey, down an unfamiliar path, it helped remind me that I don’t always need a trail, and that many times, it’s about blazing my own trail.

The power of inspirational quotes is their power to light a fire inside and fan the flames until we go and blaze our trail that leaves our self, and others, in awe.

What Lies Within Us

Perhaps, the greatest inspirational quote of all time is another amazing quote by Emerson:

“What lies behind us and what lies before us are tiny matters compared to what lies within us.”

It’s an awe-inspiring reminder to not only do what makes us come alive, but to realize our potential and unleash what we are capable of.

It’s Better to Burn Out than Fade Away

So many inspirational quotes remind us that life is short and that we have to go for it.   But maybe George Bernard Shaw said it best:

“I want to be all used up when I die.”

One quote that I think about often is by Seth Godin:

“Life is like skiing.  Just like skiing, the goal is not to get to the bottom of the hill. It’s to have a bunch of good runs before the sun sets.”

It’s all about making the journey worth it.

When It’s Over

What do you do when it’s over.  It all depends.   Dr. Seuss has an interesting twist:

“Don’t cry because it’s over. Smile because it happened.”

But the one that I find has true wisdom is from Dave Weinbaum:

“The secret to a rich life is to have more beginnings than endings.”

Here’s to many more new beginnings in your life.

Enjoy and be sure to explore The Inspirational Quotes Collection to soar or roar in your own personal way.

You Might Also Like

The Great Happiness Quotes Collection Revamped

The Great Leadership Quotes Collection Revamped

The Great Love Quotes Collection Revamped

The Great Personal Development Quotes Collection Revamped

The Great Productivity Quotes Collection Revamped

Categories: Architecture, Programming

Stuff The Internet Says On Scalability For February 20th, 2015

Hey, it's HighScalability time:


Networks are everywhere, they can even help reveal disease connections.
  • trillions: number of photons constantly hitting your eyes; $19 billion: Snapchat valuation;  8.5K: average number of questions asked on Stack Overflow per day
  • Quotable Quotes:
    • @BenedictEvans: End of 2014: 3.75-4bn mobiles ~1.5bn PCs  7-800m consumer PCs 1.2-1.3bn closed Android 4-500m open Android 650-675m iOS 80m Macs, ~75m Linux
    • @JeremiahLee: “Humans only use 10% of their internet.” —@nvcexploder #NodeSummit
    • beguiledfoil: Javavu - The feeling experienced when you see new languages make the same mistakes Java made 20 years ago and momentarily mistake said language for Java.
    • @ewolff: If Conway's Law is so important - are #Microservices more an organizational approach than an architecture?
    • @KentLangley: "Apache Spark Continues to Spread Beyond Hadoop." I would say supplant. 
    • Database Soup: An in-memory database is one which lacks the capability of spilling to disk.
    • Matthew Dillon: 1-2 year SSD wear on build boxes has been minimal.
    • @gwenshap: Except there is one writer and many readers - so schema and validation must be done on ingest. Anywhere else is just shifting responsibility
    • @jaykreps: Startup style of engineering (fail fast & iterate) doesn't work for every domain, esp. databases & financial systems
    • Taulant Ramabaja: Decentralization is not a goal in and of itself, it is a strategy
    • Eli Reisman: Etsy runs more Hadoop jobs by 7am than most companies do all day.
    • Dormando: We're [memcached] not sponsored like redis is. I've only ever lost money on this venture.
    • The Trust Engineers: There are more Facebook users than Catholics.

  • Exponent...The new integration is hardware + software + services. Not services like disk storage, but services like HomeKit, HealthKit, Siri, Car Play, Apple Pay. Services that touch every part of our lives. Apple doesn't build cars, stores, or information services, it wraps them with an Apple layer that provides the customer with an integrated experience while taking full advantage of modularity. Modularity wrapped with integration. Owning the hardware is a better profit model than services in the cloud.

  • Quite a response to You Don't Like Google's Go Because You Are Small on reddit. A vigorous 500+ comments were written. Golang isn't perfect. How disappointing, so many things are.

  • After making Linux work well on multiple cores, the next bump in performance comes from Improving Linux networking performance. It's a hard job. For a 100Gb adapter on a 3GHz CPU there are only about 200 CPU cycles to process each packet. Good break down of time budgets for various instructions. The approach is improved batching at multiple layers of the stack and better memory management, which leads directly into Toward a more efficient slab allocator.

  • The process behind creating a Google Doodle for Alessandro Volta’s 270th Birthday reminds me a lot of the process of making old style illustrations as described in Cartographies of Time: A History of the Timeline. The idea is to encode symbolically as much of the important information as possible in a single diagram. The coded icon of a tiny skull could mean, for example, a king died while on the throne. A single flame could stand for the fall of man. This art is not completely lost with today's need to convey a lot of information on small screens. This sort of compression has advantages: Strass believed that a graphic representation of history held manifold advantages over a textual one: it revealed order, scale, and synchronism simply and without the trouble of memorization and calculation.
Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...
Categories: Architecture

Exploring container platforms: StackEngine

Xebia Blog - Fri, 02/20/2015 - 15:51

Docker has been around for more than a year already, and there are a lot of container platforms popping up. In this series of blogposts I will explore these platforms and share some insights. This blogpost is about StackEngine.

TL;DR: StackEngine is (for now) just a nice frontend to the Docker binary. Nothing...

Free Book: Is Parallel Programming Hard, And, If So, What Can You Do About It?

“The trouble ain’t that people are ignorant: it’s that they know so much that ain’t so.” -- Josh Billings

 

Is Parallel Programming Hard? Yes. What Can You Do About It? To answer that, Paul McKenney, Distinguished Engineer at IBM Linux Technology Center, veteran of parallel powerhouses SRI and Sequent, has written an epic 400+ page book: Is Parallel Programming Hard, And, If So, What Can You Do About It?

The goal of the book? "To help you understand how to design shared-memory parallel programs to perform and scale well with minimal risk to your sanity."

So it's not a book about parallelism in the sense of getting the most out of a distributed system, it's a book in the mechanical-sympathy sense of getting the most out of a single machine.

Some example section titles: Introduction, Alternatives to Parallel Programming, What Makes Parallel Programming Hard, Hardware and its Habits, Tools of the Trade, Counting, Partitioning and Synchronization Design, Locking, Data Ownership, Deferred Processing, Data Structures, Validation, Formal Verification, Putting It All Together, Advanced Synchronization, Parallel Real-Time Computing, Ease of Use, Conflicting Visions of the Future.

To get a feel for the kind of things you'll learn in the book, here's an interview where Paul talks about what in parallel programming is the hardest to master:

Categories: Architecture

Azure: Machine Learning Service, Hadoop Storm, Cluster Scaling, Linux Support, Site Recovery and More

ScottGu's Blog - Scott Guthrie - Wed, 02/18/2015 - 17:06

Today we released a number of great enhancements to Microsoft Azure. These include:

  • Machine Learning: General Availability of the Azure Machine Learning Service
  • Hadoop: General Availability of Apache Storm Support, Hadoop 2.6 support, Cluster Scaling, Node Size Selection and preview of next Linux OS support
  • Site Recovery: General Availability of DR capabilities with SAN arrays

I've also included details in this blog post of other great Azure features that went live earlier this month:

  • SQL Database: General Availability of SQL Database (V12)
  • Web Sites: Support for Slot Settings
  • API Management: New Premium Tier
  • DocumentDB: New Asia and US Regions, SQL Parameterization and Increased Account Limits
  • Search: Portal Enhancements, Suggestions & Scoring, New Regions
  • Media: General Availability of Content Protection Service for Azure Media Services
  • Management: General Availability of the Azure Resource Manager

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

Machine Learning: General Availability of Azure ML Service

Today, I’m excited to announce the General Availability of our Azure Machine Learning service.  The Azure Machine Learning Service is a powerful cloud-based predictive analytics service that makes it possible to quickly create analytics solutions.  It is a fully managed service - which means you do not need to buy any hardware nor manage VMs manually.

Data Scientists and Developers can use our innovative browser-based machine learning IDE to quickly create and automate machine learning workflows.  You can literally drag/drop hundreds of existing ML libraries to jump-start your predictive analytics solutions, and then optionally add your own custom R and Python scripts to extend them.  Our Machine Learning IDE works in any browser and enables you to rapidly develop and iterate on solutions:

image

With today's General Availability release you can easily discover and create web services, train/retrain your models through APIs, manage endpoints and scale web services on a per customer basis, and configure diagnostics for service monitoring and debugging. Additional new capabilities with today's release include:

  • The ability to create a configurable custom R module, incorporate your own train/predict R-scripts, and add python scripts using a large ecosystem of libraries such as numpy, scipy, pandas, scikit-learn etc. You can now train on terabytes of data using “Learning with Counts”, use PCA or one-class SVM for anomaly detection, and easily modify, filter, and clean data using familiar SQLite.
  • Azure ML Community Gallery that allows you to discover & learn experiments, and share through Twitter and LinkedIn. You can purchase marketplace apps through an Azure subscription and consume finished web services for Recommendation, Text Analytics, and Anomaly Detection directly from the Azure Marketplace.
  • A step-by-step guide for the Data Science journey from raw data to a consumable web service to ease the path for cloud-based data science. We have added the ability to use popular tools such as iPython Notebook and Python Tools for Visual Studio along with Azure ML.

Get Started

You can learn the basics of predictive analytics and machine learning using our step-by-step data science guide and tutorials.  No sign-up or credit card is required to get started using Azure Machine Learning (you can use the machine learning IDE and try experiments for free):

image

Also browse our machine learning gallery to run existing machine learning experiments others have already built - and optionally publish your own experiments for others to learn from:

image

Machine Learning and predictive analytics will fundamentally change the way all applications are built in the future.  The new Azure Machine Learning service provides an incredibly powerful and easy way to achieve this.  Start using it for production apps today!

HDInsight: General Availability of Apache Storm, Cluster Scaling, Hadoop 2.6, Node Sizes, and Preview of HDInsight on Linux

Today I’m happy to also announce several major enhancements to HDInsight, our managed Hadoop service for powering Big Data workloads in Azure.

General Availability of Apache Storm support

With today's release, we are making it easy for you to do real-time streaming analytics using Hadoop by providing Apache Storm as a fully managed Service and making it generally available on HDInsight. This makes it incredibly easy to stand up and manage Storm clusters. As part of the Storm service on HDInsight we have improved productivity by enabling some key features:

  • Integration with our Azure Event Hubs service - which allows you to easily process any data that is collected via Event Hubs
  • First class .NET experience on top of Apache Storm giving you the option to use both Java and .NET with it
  • A library of spouts and bolts lets you easily integrate other Azure services like SQL, HBase and DocumentDB
  • Visual Studio integration that makes it easy for developers to do full project management from within the Visual Studio environment

Creating a Storm cluster and running a sample topology

You can easily spin up a new Storm cluster from the Azure management portal. The Storm Dashboard allows you to either upload an existing Storm topology or pick one of the sample topologies from the dropdown.  Topologies can be authored in code, or higher level programming models like Trident can be used. You can also monitor and manage all the topologies that are currently on your cluster via the Storm Dashboard.

image

.NET Topologies and a Visual Studio Experience

One of the big improvements we have made on top of Storm is enabling developers to write Storm topologies in .NET. One of the things I am particularly excited about with the Storm release is the Visual Studio experience that we have enabled for Storm on HDInsight. With the latest version of the Azure SDK, you will get Storm project templates under HDInsight. This will quickly get you started with writing Storm topologies without having to worry about setting up the right references or writing the skeleton code that is needed for every Storm topology.

Since Storm is available as part of the HDInsight service, all HDInsight features also apply to Storm clusters. For example, you can easily scale up or scale down a Storm cluster with no impact to the existing running topologies. This will enable you to easily grow or shrink Storm clusters depending on data ingest speed and latency requirements, with no impact on the data which is being processed.  At the time of cluster creation you have the choice to pick from a long list of available VMs to use for your Storm cluster on HDInsight.

HDInsight 3.2 Support

I’m pleased to announce the availability of the next major version of Hadoop in HDInsight clusters for Windows and Linux. This includes Hadoop 2.6, Hive 0.14, and substantial updates to all of the components in the stack.  Hive 0.14 contains work to improve performance and scalability through Tez, adds a powerful cost based optimizer, and introduces capabilities for handling UPDATE, INSERT and DELETE SQL statements, temporary tables which live for the duration of a development session and more. You can find more details on the Hive 0.14 release here.   Pig 0.14 adds support for ORC, allowing a single high performance format to be leveraged across Pig and Hive.  Additionally Pig can now target Tez instead of Map/Reduce, resulting in substantial performance improvements by changing the execution engine. Details on the Pig 0.14 release are here.  These bring the latest improvements in the open source ecosystem to HDInsight. 

To get started with a 3.2 cluster, use the Azure Management portal or the command-line. In addition to the VS tools for Storm, we've also updated the VS tools to include Hive query authoring.  We've also added improved statement completion, local validation, access in Visual Studio to the YARN task logs, and support for HDInsight clusters on Linux. In order to get these, you just need to install the Azure SDK for Visual Studio which contains the latest HDInsight tooling.

Cluster Scaling

Many of our customers have asked for the ability to change HDInsight cluster sizes on the fly.  This capability is now accessible in both the Azure portal, as well as through the command line and SDK's.  You can grow or shrink a Hadoop cluster to fit your workload by simply dragging the sizing slider.  We'll add more nodes to your cluster while it is processing and when your larger jobs are done, you can reduce the size of the cluster.  If you need more cores available in your subscription, you can open a Billing support ticket to request a larger quota. 

Node Size Selection

Finally, you can also now specify the VM sizes for the nodes within your HDInsight cluster.  This lets you optimize your cluster's resources to fit your workload.  We've made the entire A and D series of VM sizes available.  For each of the different types of roles within a cluster, we'll let you specify the machine type.  This allows you to tune the amount of CPU, RAM and SSD available to your jobs. 

HDInsight on Linux

Today we are also releasing a preview version of our HDInsight service that allows you to deploy HDInsight clusters using Ubuntu Linux containers.  This expands the operating system options you can use when running managed Hadoop workloads on Azure (previously HDInsight only supported Windows Server containers).

The new Linux support enables you to easily use familiar tools like SSH and Ambari to build Big Data workloads in Azure.  HDInsight on Linux clusters are built on the same Hadoop distribution as the Windows clusters, are fully integrated with Azure storage, and make it easy for customers leveraging Hadoop to take advantage of the SLA, management and support that HDInsight offers.  To get started, sign up for the preview here.  You can then easily create Linux clusters using the Azure Management Portal or via our command-line interfaces.

SSH connectivity to your HDInsight clusters is enabled by default for all HDInsight on Linux clusters. You can use an SSH client of your choice to connect to the cluster.  Additionally, SSH tunneling can be leveraged for forwarding traffic from your browser to all of the Hadoop web applications.

Learn More

For more information about Azure HDInsight, check out the following resources:

Site Recovery: General Availability of Enterprise DR with SANs

With today’s Azure release, we are also adding another significant capability to Azure Site Recovery’s disaster recovery and replication portfolio. Enterprises that seek to leverage their Storage Area Network (SAN) Arrays to enable high performance synchronous and asynchronous replication across their on-premises Hyper-V private clouds can now orchestrate end-to-end storage array-based replication and disaster recovery with Azure Site Recovery and System Center Virtual Machine Manager (SCVMM).

The addition of SAN as a replication channel enables key scenarios such as Synchronous Replication, Multi-VM Consistency, and support for Guest Clusters with Azure Site Recovery. With support for Shared VHDX and iSCSI Target LUNs, ASR will now be able to better meet the needs of enterprise-class applications such as SQL Server, SharePoint, and SAP.

To enable SAN Replication, in the Azure Management Portal select SAN when configuring SCVMM clouds in ASR. ASR in turn validates that the cloud being configured has host clusters that have been correctly zoned to a Storage Array, either via Fibre Channel or iSCSI. Once the cloud configuration is complete and the storage pools have been mapped, Replication Groups (group of storage LUNs that replicate together and thereby enable multi-VM replication consistency) can be enabled for replication. ASR automates the creation of target LUNs, target Replication Groups, and starts the array-based replication. 

Here’s an example of a Recovery Plan that can failover a SQL Guest Cluster deployed on a Replication Group:

image

Learn More

Visit the Azure Site Recovery forum on MSDN for additional information.

Getting started with Azure Site Recovery is easy - all you need to do is sign up for a free Microsoft Azure trial.

SQL Database: General Availability of SQL Database (V12)

Earlier this month we released the general availability version of our SQL Database (V12) service version.  We introduced a preview of this new release last December, and it includes a ton of new capabilities. These include:

  • Better management of large databases. We now support heavier database workload management with parallel queries, table partitioning, online indexing, worry-free large index rebuilds with the previous 2GB size limit removed, and more alter database commands.

  • Support for more programmability capabilities: You can now build even more robust applications with CLR, T-SQL Windows functions, XML index, and change tracking support.

  • Up to 100x performance improvements with support for In-memory columnstore queries for data mart and analytic workloads.

  • Improved monitoring and troubleshooting: Extended Events (XEvents) and visibility into over 100 new table views via an expanded set of Database Management Views (DMVs).

  • New S3 performance level: Today's preview introduces a new pricing option for SQL Databases. The new "S3" performance tier delivers 100 DTU of performance (twice the DTU level of the existing S2 tier) and all of the features available in the Standard tier. It enables an even more cost effective way to run applications with higher performance needs.

You can now take advantage of all of these features in general availability - with all databases backed by an enterprise grade SLA.

Upcoming Security Features

I'm also excited to announce a number of new security features that will start rolling out this month and this spring.  These features will help customers better protect their cloud data and help further meet corporate and industry compliance policies. These security enhancements include:

  • Row-Level Security
  • Dynamic Data Masking
  • Transparent Data Encryption

Available in preview today, Row-Level Security lets customers implement fine-grained access control over rows in a database table, giving greater control over which users can access which data.

Coming soon, SQL Database will introduce Dynamic Data Masking which is a policy-based security feature that helps limit the exposure of data in a database by returning masked data to non-privileged users who run queries over designated database fields, like credit card numbers, without changing data on the database. Finally, Transparent Data Encryption is coming soon to SQL Database V12 databases for encryption at rest on all databases.

Stay tuned over the coming months for details as we continue to roll out the V12 service general availability and upcoming security features.

Web Sites: Support for Slot Settings

The Azure Web Sites service has always provided the ability to store application settings and connection strings as a part of your Web Site’s metadata.  Those settings become available at runtime via environment variables and, if you use .NET, the standard configuration manager API.  This feature has now been updated to work better with another Web Sites feature: deployment slots. 

Deployment slots provide an easy way for you to safely deploy and test new releases of your web applications prior to swapping them live into production.  Let’s say you have a website called mysite.azurewebsites.net with a deployment slot at mysite-staging.azurewebsites.net.  You can swap these slots at any given time, and with no downtime. This provides a nice infrastructure for upgrading your website. Until now, when you swapped the staging slot with the production site, all settings and connection strings would swap as well. Sometimes that’s exactly what you want and it works great. 

But what if, for testing purposes, your site uses a database and you explicitly want each slot to have its own database (e.g. a production database and a testing database)?  Prior to this month's release that would have been difficult to automate since the swap operation would move the staging connection string to the production site and vice versa. You would have to do something unnatural like going to the staging slot and manually updating the settings to the production values before performing the swap operation. Then, you would execute the swap, and finally manually update the staging settings to point to the staging database. That workflow is very complicated and error prone.  

New Slot Settings Support

Slot specific settings are the solution to this problem.  Simply go to the Azure Preview Portal, navigate to your Web Site’s Settings page, and you’ll see a new checkbox next to each app setting and connection string.  Check the boxes next to each app setting and/or connection string that should not participate in swap operations.  Each deployment slot has its own version of this settings page where you can go and enter the slot specific setting values.  You now have a lot more flexibility when it comes to managing deployment slots and flowing configuration between them during swaps:

image 

API Management: New Premium Tier

Earlier this month we released a preview of our new Premium Tier for our API Management Service.  The Azure API Management Service provides a great offering that helps customers expose web-based APIs to customers - and provides support for API protection via rate-limiting, quotas and keys, detailed analytics, easy developer on-boarding and much more.

As the strategic value of APIs increases, customers are demanding even more performance, higher availability and more enterprise-grade features. And in response we're delighted to introduce a new Premium tier of API Management which will offer a 99.95% SLA after preview and includes a number of key new features:

Multiple Geography Deployment

Until now each API Management service resided in a single region selected when the service is created. I’m pleased to announce the introduction of a new multi-region deployment feature that allows API publishers to easily distribute a single API Management service across any number of Azure regions. Customers who want to reduce latency for distributed API consumers and require extremely high availability can now enable multi-geo with minimal configuration.

image

Premium tier customers will now see an updated capacity section on the scale tab of the Azure Management portal. Additional units and regions can be added with a few clicks of the relevant dropdown controls and API Management will provision additional proxies beyond the primary region in a matter of minutes.

Multi-geo is particularly effective when combined with the API Management caching policy, which can provide a CDN-like capability for your mission critical and performance sensitive APIs. For more information on multiple-geography deployment, check out the documentation.

Azure Virtual Network / VPN integration

Many customers are already managing their on-premises APIs using API Management's mutual certificate authentication to secure their backend. The new Premium offering introduces a great new capability for organizations that prefer to use a VPN solution or want to leverage their Azure ExpressRoute connection. Available in the Premium Tier, VPN connectivity settings are available on the configure tab of the Azure Management Portal and can even be combined with multi-geo, with a separate VPN for each region. More information is available in the documentation.

image

Active Directory Integration

Prior to today’s release, API Management's developer portal allowed developers to self-serve sign up using a custom account created with their e-mail address or using popular social identity providers like Facebook, Twitter, Google and Microsoft account. Sometimes businesses and enterprises want more control and would like to restrict sign in options, often preferring Azure Active Directory.

With our latest release, we now allow you to configure Azure Active Directory as an identity provider for Azure API Management. Administrators can disable all other identity providers and restrict access to APIs and documentation based on AD group membership. What's more, access can be extended to allow multiple AAD tenants to access your developer portal, making it even easier to share your APIs with business partners.

image

Learning More

Check out the Azure Active Directory documentation for more information on the integration, and the pricing page for more information on the new premium tier.

DocumentDB: New Asia and US Regions, SQL Parameterization and Increased Account Limits

Earlier this month we released the following new features and capabilities in our Azure DocumentDB service - which provides a fully managed NoSQL JSON database service:

  • New regional availability
  • Larger accounts and documents: Increased the number of capacity units per account and doubled the supported document size
  • SQL parameterization: Support for handling and escaping user input, preventing accidental exposure of data

New Regions

We have added new support for provisioning DocumentDB accounts in the East Asia, Southeast Asia, and US East Azure regions (in addition to our existing US West, East Europe and West Europe regions). We’ll continue to invest in regional expansion in order to give you the flexibility and choice you need when deciding where to locate your DocumentDB data.

Larger Accounts and Documents

Throughout the preview process we’ve steadily increased the maximum document and database sizes.  With this month's release we've increased the maximum size of an individual document from 256Kb to 512Kb. The Capacity Unit (CU) limit per DocumentDB Account has also been raised from 5 to 50 which means you can now scale a single DocumentDB account to 500GB of storage and 100,000 Request Units of provisioned throughput. As always, our preview quotas can be adjusted on a per account basis - contact us if you have a need for increased capacity.

SQL Parameterization

Instead of inventing a new query language, DocumentDB supports querying documents using SQL (Structured Query Language) over hierarchical JSON documents. We are pleased to announce that we have extended our SQL query capabilities by adding support for parameterized SQL queries in the Azure DocumentDB REST API and SDKs. Using this feature, you can now write parameterized SQL queries. Parameterized SQL provides robust handling and escaping of user input, preventing accidental exposure of data through “SQL injection”.

Let’s take a look at a sample using the .NET SDK. In addition to plain SQL strings and LINQ expressions, we’ve added a new SqlQuerySpec class that can be used to build parameterized queries.  Here’s a sample that queries a “Books” collection with a single user supplied parameter for author name:

IQueryable<Book> queryable = client.CreateDocumentQuery<Book>(

    collectionSelfLink,

    new SqlQuerySpec {

        QueryText = "SELECT * FROM books b WHERE (b.Author.Name = @name)",

        Parameters = new SqlParameterCollection() {

            new SqlParameter("@name", "Herman Melville")

        }

    });

Note:

  • SQL parameters in DocumentDB use the familiar @ notation borrowed from T-SQL
  • Parameter values can be any valid JSON (strings, numbers, Booleans, null, even arrays or nested JSON)
  • Since DocumentDB is schema-less, parameters are not validated against any type
  • You could just as easily supply additional parameters by adding additional SqlParameters to the SqlParameterCollection
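
For example (a hypothetical second parameter, not from the original sample), the same query could take multiple parameters:

// Builds on the Books sample above; "PublishYear" is a made-up property
// used only to show a second parameter.
var querySpec = new SqlQuerySpec
{
    QueryText = "SELECT * FROM books b WHERE (b.Author.Name = @name AND b.PublishYear > @year)",
    Parameters = new SqlParameterCollection
    {
        new SqlParameter("@name", "Herman Melville"),
        new SqlParameter("@year", 1850)
    }
};

IQueryable<Book> queryable = client.CreateDocumentQuery<Book>(collectionSelfLink, querySpec);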

The DocumentDB REST API also natively supports parameterization. The .NET sample shown above translates to the following REST API call. To use parameterized queries, you need to specify the Content-Type Header as application/query+json and the query as JSON in the body, as shown below.

POST https://contosomarketing.documents.azure.com/dbs/XP0mAA==/colls/XP0mAJ3H-AA=/docs HTTP/1.1

x-ms-documentdb-isquery: True

x-ms-date: Mon, 18 Aug 2014 13:05:49 GMT

authorization: type%3dmaster%26ver%3d1.0%26sig%3dkOU%2bBn2vkvIlHypfE8AA5fulpn8zKjLwdrxBqyg0YGQ%3d

x-ms-version: 2014-08-21

Accept: application/json

Content-Type: application/query+json

Host: contosomarketing.documents.azure.com

Content-Length: 50

{     

    "query": "SELECT * FROM books b WHERE (b.Author.Name = @name)",    

    "parameters": [         

        {"name": "@name", "value": "Herman Melville"}        

    ]

}

Queries can be issued against document collections, as well as system metadata collections like Databases, DocumentCollections, and Attachments using the approach shown above. To try this out, download the latest build of the DocumentDB SDK on any of the supported platforms (.NET, Java, Node.js, JavaScript, or Python).

As always, we’d love to hear from you about the DocumentDB features and experiences you would find most valuable. Submit your suggestions on the Microsoft Azure DocumentDB feedback forum.

Search: Portal Enhancements, Suggestions & Scoring, New Regions

Earlier this month we released a bunch of great enhancements to our Azure Search service.  Azure Search provides developers with all of the features needed to build out search experiences for web and mobile applications without having to deal with the typical complexities that come with managing, tuning and scaling a large search service.

Azure Portal Enhancements

Last month we added the ability to create and manage your search indexes from the Azure Preview Portal. Since then, you have told us that this has really helped to speed up development as it greatly reduced the amount of code required, but we also heard that you needed more. As a result, we extended the portal by adding the ability to add Scoring Profiles as well as configure Cross Origin Resource Sharing from the portal.

Portal Support of Scoring Profiles

Scoring Profiles boost items up in the search results based on different factors that you control. For example, below, I have a hotels index and, all other things being equal, I want highly rated hotels close to the user’s current location to appear at the top of the user’s search results. To do this, in the Azure Preview Portal, choose Add Scoring Profile and provide a name for it. In this case I am going to call it “closeToUser”. You can create one or more scoring profiles and name them as needed in the search request, allowing you to provide different search results based on different use cases.

image

Once closeToUser has been created, I can start adding weights and functions. For example, in this scoring profile, I chose to add:

  • Weighting: Use hotelName as a weighted field, such that if the search term is found in the hotelName, it gets a weighted boost
  • Distance: Leverage the spatial capabilities of Azure Search to boost a hotel if it is found to be closer to the user’s specified location
  • Magnitude: Provide a boost to the hotels that have higher ratings

All of these functions and weights are then combined into a final score that is used to rank documents.

image

Scoring profiles can often be tricky, and they tend to be mixed in with the rest of the query. With Azure Search, the scoring profile experience has been simplified and profiles are separated from search queries, so the scoring model stays outside of application code and can be updated independently. In addition, these scoring profiles are modeled as a set of high-level scoring functions combined with a way to do the typical field weights, making editing and maintenance of scoring much simpler.

As demonstrated above, this user experience requires no coding and you can simply choose the fields that are important and apply the function or weight that makes the most sense. It is important to note that scoring profiles are a method of boosting the relevance of a document and should not be confused with sorting. There are a number of other functions available which you can learn more about in the MSDN documentation.

Cross Origin Resource Sharing (CORS)

Web Browsers commonly apply a same-origin restriction policy to network requests, preventing client-side web applications from issuing requests to another domain for security reasons. For example, JavaScript code that came from http://www.contoso.com could not issue a request to another domain such as http://www.northwindtraders.com. For Azure Search developers, this is important in cases where all the data is already publicly accessible and they want to save on latency by going straight to the search index from mobile devices or a browser.

CORS is a method that allows you to relax this restriction in a controlled way so you don’t compromise security. Azure Search uses CORS to allow JavaScript code inside browsers to make search requests directly to the Azure Search service and eliminate the need to proxy all requests through the originating server. We now offer the ability to configure CORS from the Azure Preview Portal, allowing you to easily enable cross-domain access and limit it to specific origins. This can be done from the index management portion of your search service as shown below.

image

Tag Boosting

As discussed with Scoring Profiles, there are many examples of where you may want to boost certain relevant items. To this end, we have also introduced a new and highly requested function to our set of scoring profile functions called Tag Boosting. This feature is currently part of our experimental API version, made available to you so you can test and provide feedback on these potential new features.

Tag Boosting allows you to boost documents that have tags in common with the search query. The tags for the search query are provided as a scoring parameter in each search request and then any document that contains these terms gets a boost. This capability can not only be helpful to enable search result customization, but could also be used for cases where you have specific items you want to promote. As an example, during a sporting event, a retailer might want to promote items that are related to the teams participating in that sporting event.

Improved Suggestions

Suggestions (auto-complete) is a feature that allows you to provide type-ahead suggestions as the user types. Just like scoring profiles, this is a great way to allow your users to find the content they are looking for quickly. When we first implemented search suggestions in Azure Search, we heard a number of requests to extend the capabilities of this feature to better suit your requirements. As a result, we have an entirely new implementation of suggestions to address these items. In particular, it will do infix matching for suggestions and if fuzzy matching is enabled, it’ll show more flexibility for spelling mistakes. It also allows up to 100 suggestions per result, has no limit in length other than field limits and doesn’t have the 3-character minimum length.

This enhancement is still under the experimental API version as we are continuing to gather feedback. For more information on this and to see a more detailed example of suggestions, please see the post on the Suggestions in the Azure Blog.

New Regions

As a final note, I wanted to point out that we are continuing to expand the global footprint of Azure Search. With the addition of East Asia and West Europe you can now provision Azure Search services in 8 regions across the globe.

Media: General Availability of Content Protection Service

Earlier this month we released the general availability of our new Content Protection service for Azure Media Services. This is backed by an enterprise grade SLA for all customers.

We understand the importance of protecting your premium media content, and our robust new DRM offering features both static and dynamic encryption with first party PlayReady license delivery and an AES 128-bit key delivery service. You can either dynamically encrypt during delivery of your media or statically encrypt during the content processing workflow, and our content protection options are available for both live and on-demand workflows.

For more information on functionality and pricing, visit the Media Services Content Protection blog post, the Media Services Pricing webpage, or this Securing Media article.

Management: General Availability of the Azure Resource Manager

Earlier this month we reached general availability of the new Azure Resource Manager, and now provide a worldwide SLA for the service. The Azure Resource Manager provides a core set of management capabilities that are fundamental to the Microsoft Azure Platform and form the basis of our new deployment and management model for all Azure services.  You can use the Azure Resource Manager to deploy and manage your Azure solutions at no cost.

The Azure Resource Manager provides a simple, and customizable experience to manage your applications running in Azure along with enterprise grade authentication and authorization capabilities. Benefits include:

Application Lifecycle Boundaries: Azure Resource Manager provides a deployment container called a Resource Group that serves as the lifecycle boundary of resources/services deployed in it - making it easy for you to deploy, manage and visualize services that are contained within it. You no longer have to deploy parts of your application à la carte and then stitch them together manually. A Resource Group container supports one-click deployment and tear down of the entire application in a single operation.

Enterprise Grade Access Control: OAuth and Role-Based Access Control (RBAC) are now natively integrated into Azure Management and consistently apply to all services supported by the Resource Manager. Access and operations performed on these services are also logged automatically to enable you to audit them later. You can now use a rich set of platform and resource specific roles that can be applied at the subscription, resource group, or resource level - giving you granular control over who has access to what operation within your organization.

Rich Tagging and Categorization: The Azure Resource Manager supports metadata tagging of resource groups and contained resources, and you can use this tagging support to group objects in ways suitable to your own needs such as management, billing or monitoring. For example, you could mark certain resources or resource groups as being "Dev/Test" and use that to help filter your resources or charge back their bills differently to internal groups in your organization.  This provides the power needed to manage and monitor departmental applications, subscriptions, and billing data in a more streamlined fashion, especially for larger organizations.

Declarative Deployment Templates: The new Azure Resource Manager supports both an imperative API and a declarative template model that you can use to deploy rich multi-tier applications on Azure. These applications can be composed from multiple Azure services (including both IaaS and PaaS based services) and support the ability for you to pass parameters and connection-strings across them. For example, you could declaratively create a SQL DB, Web Site and VM using a single template and automatically wire up the connection-string details between them.

Learn More

Check out the following resources to learn more about the Azure Resource Manager, and start using it today:

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

Categories: Architecture, Programming

Try, Option or Either?

Xebia Blog - Wed, 02/18/2015 - 09:45

Scala has a lot of different options for handling and reporting errors, which can make it hard to decide which one is best suited for your situation. In Scala and other functional programming languages it is common to make the errors that can occur explicit in the function's signature (i.e. return type), in contrast with the common practice in other programming languages where either special values are used (-1 for a failed lookup anyone?) or an exception is thrown.

Let's go through the main options you have as a Scala developer and see when to use what!

Option
A special type of error that can occur is the absence of some value. For example, when looking up a value in a database or a List, you can use the find method. When implementing this in Java, the common solution (at least until Java 7) would be to return null when a value cannot be found or to throw some version of a NotFound exception. In Scala you will typically use the Option[T] type, returning Some(value) when the value is found and None when the value is absent.
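
For instance, the standard library's find method already returns an Option (a minimal illustration):

// find returns an Option: Some(...) when an element matches, None otherwise
List(1, 2, 3).find(_ > 2)   // Some(3)
List(1, 2, 3).find(_ > 5)   // None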

So instead of having to look at the Javadoc or Scaladoc, you only need to look at the type of the function to know how a missing value is represented. Moreover, you don't need to litter your code with null checks or try/catch blocks.

Another use case is in parsing input data: user input, JSON, XML, etc. Instead of throwing an exception for invalid input, you simply return None to indicate parsing failed. The disadvantage of using Option for this situation is that you hide the type of error from the user of your function which, depending on the use case, may or may not be a problem. If that information is important, keep reading the next sections.

An example that ensures that a name is non-empty:

def validateName(name: String): Option[String] = {
  if (name.isEmpty) None
  else Some(name)
}

You can use the validateName method in several ways in your code:

// Use a default value
validateName(inputName).getOrElse("Default name")

// Apply some other function to the result
validateName(inputName).map(_.toUpperCase)

// Combine with other validations, short-circuiting on the first error
// returning a new Option[Person]
for {
  name <- validateName(inputName)
  age <- validateAge(inputAge)
} yield Person(name, age)

Either
Option is nice to indicate failure, but if you need to provide some more information about the failure, Option is not powerful enough. In that case Either[L,R] can be used. It has two implementations, Left and Right. Both can wrap a custom type, respectively type L and type R. By convention Right is right, so it contains the successful result and Left contains the error. Rewriting the validateName method to return an error message would give:

def validateName(name: String): Either[String, String] = {
  if (name.isEmpty) Left("Name cannot be empty")
  else Right(name)
}

Similar to Option, Either can be used in several ways. It differs from Option because you always have to specify the so-called projection you want to work with via the left or right method:

// Apply some function to the successful result
validateName(inputName).right.map(_.toUpperCase)

// Combine with other validations, short-circuiting on the first error
// returning a new Either[Person]
for {
 name <- validateName(inputName).right
 age <- validateAge(inputAge).right
} yield Person(name, age)

// Handle both the Left and Right case
validateName(inputName).fold(
  error => s"Validation failed: $error",
  result => s"Validation succeeded: $result"
)

// And of course pattern matching also works
validateName(inputName) match {
  case Left(error) => s"Validation failed: $error"
  case Right(result) => s"Validation succeeded: $result"
}

// Convert to an option:
validateName(inputName).right.toOption

This projection is kind of clumsy and can lead to several convoluted compiler error messages in for expressions. See for example the excellent and detailed discussion of the Either type in The Neophyte's Guide to Scala Part 7. Due to these issues several alternative implementations of a kind of Either have been created; the most well known are the \/ type in Scalaz and the Or type in Scalactic. Both avoid the projection issues of the Scala Either and, at the same time, add additional functionality for aggregating multiple validation errors into a single result type.
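
As a rough illustration of the difference, a right-biased type such as Scalaz's \/ can be used in a for expression without projections. This is a minimal sketch assuming Scalaz 7 is on the classpath; validateAge, inputName, inputAge and Person are the same hypothetical helpers used in the examples above:

import scalaz.{\/, -\/, \/-}

// \/ is right-biased: map and flatMap operate on the success side,
// so no .right projection is needed in the for expression
def validateName(name: String): String \/ String =
  if (name.isEmpty) -\/("Name cannot be empty") else \/-(name)

for {
  name <- validateName(inputName)
  age <- validateAge(inputAge)
} yield Person(name, age)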

Try

Try[T] is similar to Either. It also has two cases: Success[T], which wraps the result value of type T, and Failure, which can only wrap a Throwable. You can use it instead of a try/catch block to postpone exception handling. Another way to look at it is to consider it as Scala's version of checked exceptions.

Compare these 2 methods that parse an integer:

// Throws a NumberFormatException when the integer cannot be parsed
def parseIntException(value: String): Int = value.toInt

// Catches the NumberFormatException and returns a Failure containing that exception
// OR returns a Success with the parsed integer value
def parseInt(value: String): Try[Int] = Try(value.toInt)

The first function needs documentation describing that an exception can be thrown. The second function describes in its signature what can be expected and requires the user of the function to take the failure case into account. Try is typically used when exceptions need to be propagated; if the exception itself is not needed, prefer any of the other options discussed.

Try offers similar combinators as Option[T] and Either[L,R]:

// Apply some function to the successful result
parseInt(input).map(_ * 2)

// Combine with other validations, short-circuiting on the first Failure
// returning a new Try[Stats]
for {
  age <- parseInt(inputAge)
  height <- parseDouble(inputHeight)
} yield Stats(age, height)

// Use a default value
parseAge(inputAge).getOrElse(0)

// Convert to an option
parseAge(inputAge).toOption

// And of course pattern matching also works
parseAge(inputAge) match {
  case Failure(exception) => s"Validation failed: ${exception.getMessage}"
  case Success(result) => s"Validation succeeded: $result"
}

Note that Try is not needed when working with Futures! Futures combine asynchronous processing with the Exception handling capabilities of Try! See also Try is free in the Future.
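
For example, a minimal sketch using the standard library's Future (inputAge is the same hypothetical String input used above):

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// An exception thrown inside the Future is captured as a failed Future,
// so there is no need to wrap the computation in a Try first
val age: Future[Int] = Future(inputAge.toInt)

// The failure can then be handled with the usual combinators
val safeAge: Future[Int] = age.recover { case _: NumberFormatException => 0 }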

Exceptions
Since Scala runs on the JVM, all low-level error handling is still based on exceptions. In Scala you rarely see exceptions used directly, though; more common is to convert them to any of the types mentioned above. Also note that, contrary to Java, all exceptions in Scala are unchecked. Throwing an exception will break your functional composition and probably result in unexpected behaviour for the caller of your function, so it should be reserved as a last resort, for when the other options don't make sense.
If you are on the receiving end of the exceptions, you need to catch them. In Scala syntax:

try {
  dangerousCode()
} catch {
  case e: Exception => println("Oops")
} finally {
  cleanup
}

What is often done wrong in Scala is that all Throwables are caught, including the Java system errors. You should never catch Errors because they indicate a critical system error like the OutOfMemoryError. So never do this:

try {
  dangerousCode()
} catch {
  case _ => println("Oops. Also caught OutOfMemoryError here!")
}

But instead do this:

import scala.util.control.NonFatal

try {
  dangerousCode()
} catch {
  case NonFatal(_) => println("Oops. Much better, only the non-fatal exceptions end up here.")
}

To convert exceptions to Option or Either types you can use the methods provided in scala.util.control.Exception (scaladoc):

import scala.util.control.Exception._

val i = 0
val resultOption: Option[Int] = catching(classOf[ArithmeticException]) opt { 1 / i }
val resultEither: Either[Throwable, Int] = catching(classOf[ArithmeticException]) either { 1 / i }

Finally remember you can always convert an exception into a Try as discussed in the previous section.

TL;DR

  • Option[T], use it when a value can be absent or some validation can fail and you don't care about the exact cause. Typically in data retrieval and validation logic.
  • Either[L,R], a similar use case to Option, but for when you do need to provide some information about the error.
  • Try[T], use when something exceptional can happen that you cannot handle in the function. This, in general, excludes validation logic and data retrieval failures but can be used to report unexpected failures.
  • Exceptions, use them only as a last resort. When catching exceptions, use the utility methods Scala provides and never write catch { case _ => }; instead use catch { case NonFatal(_) => }.

One final piece of advice: read through the Scaladoc for all the types discussed here. There are plenty of useful combinators available that are worth using.