
Architecture

Linking Animations to Scroll Position in React Native

Xebia Blog - Thu, 12/15/2016 - 11:37
When you want to link a custom animation to the scroll position in a ScrollView, like in the card example below, you are in for some bad performance on low-end devices. Let’s figure out why and learn how to make it buttery smooth.

Ask High Scalability: How to build anonymous blockchain communication?

This question came in over the Internets. If you have any ideas, please consider sharing them if you have the time...

I am building a two-way subscription model. I am working on a blockchain project where I have to build an information/data portal with two types of users, data providers and data receivers, such that there is anonymity between them.

Please guide me on how I can leverage blockchain (I think Ethereum would be useful in this context, but I am not sure) so that data providers in my system can send messages to data receivers anonymously and, vice versa, data receivers can request data from data providers through my system.

I believe it would work if we can create a system where, if a user has data, they send a description of it to the server. The system will host this description of the data without revealing the data provider's details.

Simultaneously, the server will store which user has the data. When a data receiver logs in to the system, sees the description of the data, and wants to analyze it, they send a request to the server for that data. This request is stored on the server, which allows access to the data without the provider knowing who wants to access it, but it triggers a message to the data provider that an anonymous user wants to access the data.

Can you please guide me on how to build the architecture of this system and how to proceed with a proof of concept?

Categories: Architecture

A Scalable Alternative to RESTful Communication: Mimicking Google’s Search Autocomplete with a Single MigratoryData Server

This is a guest post by Mihai Rotaru, CTO of MigratoryData.

Using the RESTful HTTP request-response approach can become very inefficient for websites requiring real-time communication. We propose a new approach and exemplify it with a well-known feature that requires real-time communication and is found on most websites: search box autocomplete.

Google, which is one of the most demanding web search environments, seems to handle about 40,000 searches per second according to an estimate made by Internet Live Stats. Supposing that each search generates 6 autocomplete requests, that adds up to roughly 240,000 autocomplete requests per second; we show that MigratoryData can handle this load using a single 1U server.

More precisely, we show that a single MigratoryData server running on a 1U machine can handle 240,000 autocomplete requests per second from 1 million concurrent users with a mean round-trip latency of 11.82 milliseconds.

The Current Approach and Its Limitations
Categories: Architecture

Microservices, not so much news after all?

Xebia Blog - Sun, 12/11/2016 - 18:07
A while ago at Xebia we tried to streamline our microservices effort. In a kick-off session, we got quite badly sidetracked (as is often the case) by a meta-discussion about what would be the appropriate context and process for developing microservices. After an hour of back-and-forth, we reached a consensus that might be helpful

Stuff The Internet Says On Scalability For December 9th, 2016

Hey, it's HighScalability time:

 

Here's a 1 TB hard drive in 1937. Twenty workers operated the largest vertical letter file in the world. 4000 SqFt. 3000 drawers, 10 feet long. (from @BrianRoemmele)
If you like this sort of Stuff then please support me on Patreon.
  • ~98% savings in greenhouse gases using Gmail versus local servers; 2x: time spent online compared to 5 years ago; 125 million: most hours of video streamed by Netflix in one day; 707.5 trillion: value of trade in one region of Eve Online; $1 billion: YouTube's advertisement pay-out to the music industry; 1 billion: Step Functions predecessor state machines run per week in AWS retail; 15.6 million: jobs added over last 81 months;

  • Quotable Quotes:
    • Gerry Sussman~ in the 80s and 90s, engineers built complex systems by combining simple and well-understood parts. The goal of SICP was to provide the abstraction language for reasoning about such systems...programming today is more like science. You grab this piece of library and you poke at it. You write programs that poke it and see what it does. And you say, ‘Can I tweak it to do the thing I want?’
    • @themoah: Last year Black Friday weekend: 800 Windows servers with .NET. This year: 12 Linux servers with Scala/Akka. #HighScalability #Linux #Scala
    • @swardley: If you're panicking over can't find AWS skills / need to go public cloud - STOP! You missed the boat. Focus now on going serverless in 5yrs.
    • @jbeda: Nordstrom is running multitenant Kubernetes cluster with namespace per team. Using RBAC for security.
    • Tim Harford: What Brailsford says is, he is not interested in team harmony. What he wants is goal harmony. He wants everyone to be focused on the same goal. He doesn’t care if they like each other and indeed there are some pretty famous examples of people absolutely hating each other. 
    • @brianhatfield: SUB. MILLISECOND. PAUSE. TIME. ON. AN. 18. GIG. HEAP. (Trying out Go 1.8 beta 1!)
    • haberman: If you can make your system lock-free, it will have a bunch of nice properties: deadlock-free; obstruction-free (one thread getting scheduled out in the middle of a critical section doesn't block the whole system); and OS-independent, so the same code can run in kernel space or user space, regardless of OS or lack thereof 
    • Neil Gunther: The world of performance is curved, just like the real world, even though we may not always be aware of it. What you see depends on where your window is positioned relative to the rest of the world. Often, the performance world looks flat to people who always tend to work with clocked (i.e., deterministic) systems, e.g., packet networks or deep-space networks.
    • @yoz: I liked Westworld, but if I wanted hours of watching tech debt and no automated QA destroy a virtual world, I’d go back to Linden Lab
    • @adrianco: I think we are seeing the usual evolution to utility services, and new higher order (open source) functionality emerges /cc @swardley
    • Neil Gunther: a buffer is just a queue and queues grow nonlinearly with increasing load. It's queueing that causes the throughput (X) and latency (R) profiles to be nonlinear.
    • Juho Snellman: I think [QUIC] encrypting the L4 headers is a step too far. If these protocols get deployed widely enough (a distinct possibility with standardization), the operational pain will be significant.
    • @Tobarja: "anyone who is doing microservices is spending about 25% of their engineering effort on their platform" @jedberg 
    • @cdixon: 2016 League of Legends finals: 43M viewers; 2016 NBA finals: 30.8M viewers 
    • @mikeolson: 7 billion people on earth; 3 billion images shared on social media every day. @setlinger at #StrataHadoop
    • @swardley: When you think about AWS Lambda, AWS Step Functions et al then you need to view this through the lens of automating basic doctrine i.e. not just saying it and codifying in maps and related systems but embedding it everywhere. At scale and at the speed of competition that I expect us to reach then this is going to be essential.
    • Jakob Engblom: hardware accelerators for particular  common expensive tasks seems to be the right way to add performance at the smallest cost in silicon area and power consumption.
    • Joe Duffy: The future for our industry is a massively distributed one, however, where you want simple individual components composed into a larger fabric. In this world, individual nodes are less “precious”, and arguably the correctness of the overall orchestration will become far more important. I do think this points to a more Go-like approach, with a focus on the RPC mechanisms connecting disparate pieces
    • @cmeik: AWS Lambda is cool if you never had to worry about consistency, availability and basically all of the tradeoffs of distributed systems.
    • prions: As a Civil Engineer myself, I feel like people don't realize the amount of underlying stuff that goes into even basic infrastructure projects. There's layers of planning, design, permitting, regulations and bidding involved. It usually takes years to finally get to construction and even then there's a whole host of issues that arise that can delay even a simple project. 
    • Netflix: If you can cache everything in a very efficient way, you can often change the game. 
    • The Attention Merchants: One [school] board in Florida cut a deal to put the McDonald’s logo on its report cards (good grades qualified you for a free Happy Meal). In recent years, many have installed large screens in their hallways that pair school announcements with commercials. “Take your school to the digital age” is the motto of one screen provider: “everyone benefits.” What is perhaps most shocking about the introduction of advertising into public schools is just how uncontroversial and indeed logical it has seemed to those involved.

  • Just how big is Netflix? The story of the tape is told in Another Day in the Life of a Netflix Engineer. Netflix runs way more than 100K EC2 instances and more than 80,000 CPU cores. They use both predictive and reactive autoscaling, aiming for not too much or too little, just the right amount. Of those 100K+ instances they will autoscale up and down 20% of that capacity every day. More than 50 Gbps of ELB traffic per region. More than 25 Gbps of that is telemetry data from devices sending back customer experience data. At peak Netflix is responsible for over 37% of Internet traffic. The monthly billing file for Netflix is hundreds of megabytes with over 800 million lines of information. There's a Hadoop cluster at Amazon whose only purpose is to load Netflix's bill. Netflix considers speed of innovation to be a strategic advantage. About 4K code changes are put into production per day. At peak over 125 million hours of video were streamed in a day. Support for 130 countries was added in one day. That last one is the kicker. Reading about Netflix over all these years you may have got the idea Netflix was over-engineered, but going global in one day was what it was all about. Try that if you are racking and stacking. 

  • Oh how I miss stories that began Once upon a time. The start of so many stories these days is The attack sequence begins with a simple phishing scheme. This particular cautionary tale is from Technical Analysis of Pegasus Spyware, a very, almost lovingly, detailed account of the total ownage of the "secure" iPhone. The exploit made use of three zero-day vulnerabilities: CVE-2016-4657: Memory Corruption in WebKit, CVE-2016-4655: Kernel Information Leak, CVE-2016-4656: Kernel Memory corruption leads to Jailbreak. Do not read if you would like to keep your Security Illusion cherry intact. 

  • Composing RPC calls gets harder as the graph of calls and dependencies explodes. Here's how Twitter handles it: Simplify Service Dependencies with Nodes. Here's their library on GitHub. It's basically just a way to set up a dependency graph in code and have all the RPCs executed according to that plan. It's interesting how parallel this is to setting up distributed services in the first place. They like it: We have saved thousands of lines of code, improved our test coverage and ended up with code that’s more readable and friendly for newcomers. Also, AWS Step Functions.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

PlantUML and Structurizr

Coding the Architecture - Simon Brown - Thu, 12/08/2016 - 10:28

Despite how unpopular UML seems to be these days, it continues to surprise me how many software teams tell me that they use PlantUML. If you've not seen it, PlantUML is basically a tool that allows you to create UML diagrams using text. While the use of text is very developer friendly, PlantUML isn't a modelling tool. By this I mean you're not creating a single consistent model of a software system and creating different UML diagrams (views) based upon that model. Instead, you're simply creating a single UML diagram at a time, which means that you need to take responsibility for the consistent naming of elements across diagrams.

Some teams I've worked with have solved this problem by writing small applications that generate the PlantUML diagram definitions based upon a model of their software systems, but these tend to be bespoke solutions that are never shared outside of the team that created them. Something I've done recently is create a PlantUML exporter for Structurizr. Using PlantUML in conjunction with the Structurizr open source library allows you to create a model of your software system and have the resulting PlantUML diagrams be consistent with that model. If you use Structurizr's component finder, which uses reflection to identify components in your codebase, you can create component-level UML diagrams automatically.

A PlantUML component diagram

Even if you're not interested in using my Structurizr software as a service, I would encourage you to at least use the open source library to create a model of your software system, extracting components from your code where possible. Once you have a model, you can visualise that model in a number of different ways.

Categories: Architecture

Using Helm to install Traefik as an Ingress Controller in Kubernetes

Agile Testing - Grig Gheorghiu - Tue, 12/06/2016 - 23:15
That was a mouthful of a title...Hope this post lives up to it :)

First of all, just a bit of theory. If you want to expose your application running on Kubernetes to the outside world, you have several choices.

One choice you have is to expose the pods running your application via a Service of type NodePort or LoadBalancer. If you run your service as a NodePort, Kubernetes will allocate a random high port on every node in the cluster, and it will proxy traffic arriving on that port to your service. Services of type LoadBalancer are only supported if you run your Kubernetes cluster using certain specific cloud providers such as AWS and GCE. In this case, the cloud provider will create a specific load balancer resource, for example an Elastic Load Balancer in AWS, which will then forward traffic to the pods comprising your service. Either way, the load balancing you get by exposing a service is fairly crude, at the TCP layer and using a round-robin algorithm.
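
As a quick illustration (this is just a minimal sketch, not part of the setup described below; the port numbers and the app label are made up, while the service and namespace names mirror the ones used later in this post), a NodePort Service manifest looks roughly like this:

apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: tenant1
spec:
  type: NodePort
  selector:
    app: web          # must match the labels on the pods running your application
  ports:
  - port: 80          # port exposed on the cluster-internal service IP
    targetPort: 8080  # port your container actually listens on
    nodePort: 30080   # optional; if omitted, Kubernetes picks a random high port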

A better choice for exposing your Kubernetes application is to use Ingress resources together with Ingress Controllers. An ingress resource is a fancy name for a set of layer 7 load balancing rules, as you might be familiar with if you use HAProxy or Pound as a software load balancer (a bare-bones example is shown after the quoted fragment below). An Ingress Controller is a piece of software that actually implements those rules by watching the Kubernetes API for requests to Ingress resources. Here is a fragment from the Ingress Controller documentation on GitHub:

What is an Ingress Controller?

An Ingress Controller is a daemon, deployed as a Kubernetes Pod, that watches the ApiServer's /ingresses endpoint for updates to the Ingress resource. Its job is to satisfy requests for ingress.
Writing an Ingress Controller

Writing an Ingress controller is simple. By way of example, the nginx controller does the following:
  • Poll until apiserver reports a new Ingress
  • Write the nginx config file based on a go text/template
  • Reload nginx
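
To make this more concrete, here is a bare-bones Ingress manifest (the hostname below is a hypothetical placeholder; the Traefik-specific version I actually used appears later in this post as a Helm template):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  namespace: tenant1
spec:
  rules:
  - host: www.example.com     # hypothetical hostname for the layer 7 routing rule
    http:
      paths:
      - path: /
        backend:
          serviceName: web    # the Service that should receive the traffic
          servicePort: 80
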
As I mentioned in a previous post, I warmly recommend watching a KubeCon presentation from Gerred Dillon on "Kubernetes Ingress: Your Router, Your Rules" if you want to further delve into the advantages of using Ingress Controllers as opposed to plain Services.
While nginx is the only software currently included in the Kubernetes source code as an Ingress Controller, I wanted to experiment with a full-fledged HTTP reverse proxy such as Traefik. I should add from the beginning that only nginx offers the TLS feature of Ingress resources. Traefik can terminate SSL of course, and I'll show how you can do that, but it is outside of the Ingress resource spec.

I've also been looking at Helm, the Kubernetes package manager, and I noticed that Traefik is one of the 'stable' packages (or Charts as they are called) currently offered by Helm, so I went the Helm route in order to install Traefik. In the following instructions I will assume that you are already running a Kubernetes cluster in AWS and that your local kubectl environment is configured to talk to that cluster.

Install Helm

This is pretty easy. Follow the instructions on GitHub to download or install a binary for your OS.

Initialize Helm

Run helm init in order to install the server component of Helm, called tiller, which will be run as a Kubernetes Deployment in the kube-system namespace of your cluster.

Get the Traefik Helm chart from GitHub

I git cloned the entire kubernetes/charts repo, then copied the traefik directory locally under my own source code repo which contains the rest of the yaml files for my Kubernetes resource manifests.

# git clone https://github.com/kubernetes/charts.git helmcharts
# cp -r helmcharts/stable/traefik traefik-helm-chart
It is instructive to look at the contents of a Helm chart. The main advantage of a chart in my view is the bundling together of all the Kubernetes resources necessary to run a specific set of services. The other advantage is that you can use Go-style templates for the resource manifests, and the variables in those template files can be passed to helm via a values.yaml file or via the command line.
For more details on Helm charts and templates, I recommend this linux.com article.
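
As an aside, the command-line route mentioned above uses Helm's --set flag, which lets you override individual chart values without editing values.yaml. A hypothetical one-liner (not the workflow I used below, and assuming your Helm version supports --set) would look like this:

# helm install --name tenant1-lb --namespace tenant1 --set dashboard.domain=tenant1-lb.dev.mydomain.com traefik-helm-chart/
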
Create an Ingress resource for your application service
I copied the dashboard-ingress.yaml template file from the Traefik chart and customized it so as to refer to my application's web service, which is running in a Kubernetes namespace called tenant1.

# cd traefik-helm-chart/templates
# cp dashboard-ingress.yaml web-ingress.yaml
# cat web-ingress.yaml
{{- if .Values.tenant1.enabled }}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: {{ .Values.tenant1.namespace }}
  name: {{ template "fullname" . }}-web-ingress
  labels:
    app: {{ template "fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
spec:
  rules:
  - host: {{ .Values.tenant1.domain }}
    http:
      paths:
      - path: /
        backend:
          serviceName: {{ .Values.tenant1.serviceName }}
          servicePort: {{ .Values.tenant1.servicePort }}
{{- end }}
The variables referenced in the template above are defined in the values.yaml file in the Helm chart. I started with the variables in the values.yaml file that came with the Traefik chart and added my own customizations:
# vi traefik-helm-chart/values.yaml
ssl:
  enabled: true
acme:
  enabled: true
  email: admin@mydomain.com
  staging: false
  # Save ACME certs to a persistent volume. WARNING: If you do not do this, you will re-request
  # certs every time a pod (re-)starts and you WILL be rate limited!
  persistence:
    enabled: true
    storageClass: kubernetes.io/aws-ebs
    accessMode: ReadWriteOnce
    size: 1Gi
dashboard:
  enabled: true
  domain: tenant1-lb.dev.mydomain.com
gzip:
  enabled: false
tenant1:
  enabled: true
  namespace: tenant1
  domain: tenant1.dev.mydomain.com
  serviceName: web
  servicePort: http
Note that I added a section called tenant1, where I defined the variables referenced in the web-ingress.yaml template above. I also enabled the ssl and acme sections, so that Traefik can automatically install SSL certificates from Let's Encrypt via the ACME protocol.
Install your customized Helm chart for Traefik
With these modifications done, I ran 'helm install' to actually deploy the various Kubernetes resources included in the Traefik chart. 
I specified the directory containing my Traefik chart files (traefik-helm-chart) as the last argument passed to helm install:
# helm install --name tenant1-lb --namespace tenant1 traefik-helm-chart/
NAME: tenant1-lb
LAST DEPLOYED: Tue Nov 29 09:51:12 2016
NAMESPACE: tenant1
STATUS: DEPLOYED

RESOURCES:
==> extensions/Ingress
NAME                             HOSTS                         ADDRESS   PORTS     AGE
tenant1-lb-traefik-web-ingress   tenant1.dev.mydomain.com                80        1s
tenant1-lb-traefik-dashboard     tenant1-lb.dev.mydomain.com             80        0s

==> v1/PersistentVolumeClaim
NAME                      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
tenant1-lb-traefik-acme   Pending                                      0s

==> v1/Secret
NAME                              TYPE      DATA      AGE
tenant1-lb-traefik-default-cert   Opaque    2         1s

==> v1/ConfigMap
NAME                 DATA      AGE
tenant1-lb-traefik   1         1s

==> v1/Service
NAME                           CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
tenant1-lb-traefik-dashboard   10.3.0.15    <none>        80/TCP           1s
tenant1-lb-traefik             10.3.0.215   <pending>     80/TCP,443/TCP   1s

==> extensions/Deployment
NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
tenant1-lb-traefik   1         1         1            0           1s

NOTES:
1. Get Traefik's load balancer IP/hostname:
    NOTE: It may take a few minutes for this to become available.
    You can watch the status by running:
        $ kubectl get svc tenant1-lb-traefik --namespace tenant1 -w
    Once 'EXTERNAL-IP' is no longer '<pending>':
        $ kubectl describe svc tenant1-lb-traefik --namespace tenant1 | grep Ingress | awk '{print $3}'
2. Configure DNS records corresponding to Kubernetes ingress resources to point to the load balancer IP/hostname found in step 1
At this point you should see two Ingress resources, one for the Traefik dashboard and one for the custom web ingress resource:
# kubectl --namespace tenant1 get ingress
NAME                             HOSTS                         ADDRESS   PORTS     AGE
tenant1-lb-traefik-dashboard     tenant1-lb.dev.mydomain.com             80        50s
tenant1-lb-traefik-web-ingress   tenant1.dev.mydomain.com                80        51s
As per the Helm notes above (shown as part of the output of helm install), run this command to figure out the CNAME of the AWS ELB created by Kubernetes during the creation of the tenant1-lb-traefik service of type LoadBalancer:
# kubectl describe svc tenant1-lb-traefik --namespace tenant1 | grep Ingress | awk '{print $3}'
a5be275d8b65c11e685a402e9ec69178-91587212.us-west-2.elb.amazonaws.com
Create tenant1.dev.mydomain.com and tenant1-lb.dev.mydomain.com as DNS CNAME records pointing to a5be275d8b65c11e685a402e9ec69178-91587212.us-west-2.elb.amazonaws.com.

Now, if you hit http://tenant1-lb.dev.mydomain.com you should see the Traefik dashboard showing the frontends on the left and the backends on the right:
If you hit http://tenant1.dev.mydomain.com you should see your web service in action.
You can also inspect the logs of the tenant1-lb-traefik pod to see what's going on under the covers when Traefik is launched and to verify that the Let's Encrypt SSL certificates were properly downloaded via ACME:
# kubectl --namespace tenant1 logs tenant1-lb-traefik-3710322105-o2887
time="2016-11-29T00:03:51Z" level=info msg="Traefik version v1.1.0 built on 2016-11-18_09:20:46AM"
time="2016-11-29T00:03:51Z" level=info msg="Using TOML configuration file /config/traefik.toml"
time="2016-11-29T00:03:51Z" level=info msg="Preparing server http &{Network: Address::80 TLS:<nil> Redirect:<nil> Auth:<nil> Compress:false}"
time="2016-11-29T00:03:51Z" level=info msg="Preparing server https &{Network: Address::443 TLS:0xc4201b1800 Redirect:<nil> Auth:<nil> Compress:false}"
time="2016-11-29T00:03:51Z" level=info msg="Starting server on :80"
time="2016-11-29T00:03:58Z" level=info msg="Loading ACME Account..."
time="2016-11-29T00:03:59Z" level=info msg="Loaded ACME config from store /acme/acme.json"
time="2016-11-29T00:04:01Z" level=info msg="Starting provider *main.WebProvider {\"Address\":\":8080\",\"CertFile\":\"\",\"KeyFile\":\"\",\"ReadOnly\":false,\"Auth\":null}"
time="2016-11-29T00:04:01Z" level=info msg="Starting provider *provider.Kubernetes {\"Watch\":true,\"Filename\":\"\",\"Constraints\":[],\"Endpoint\":\"\",\"DisablePassHostHeaders\":false,\"Namespaces\":null,\"LabelSelector\":\"\"}"
time="2016-11-29T00:04:01Z" level=info msg="Retrieving ACME certificates..."
time="2016-11-29T00:04:01Z" level=info msg="Retrieved ACME certificates"
time="2016-11-29T00:04:01Z" level=info msg="Starting server on :443"
time="2016-11-29T00:04:01Z" level=info msg="Server configuration reloaded on :80"
time="2016-11-29T00:04:01Z" level=info msg="Server configuration reloaded on :443"
To get an even better warm and fuzzy feeling about the SSL certificates installed via ACME, you can run this command against the live endpoint tenant1.dev.mydomain.com:
# echo | openssl s_client -showcerts -servername tenant1.dev.mydomain.com -connect tenant1.dev.mydomain.com:443 2>/dev/null
CONNECTED(00000003)
---
Certificate chain
 0 s:/CN=tenant1.dev.mydomain.com
   i:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
-----BEGIN CERTIFICATE-----
MIIGEDCCBPigAwIBAgISAwNwBNVU7ZHlRtPxBBOPPVXkMA0GCSqGSIb3DQEBCwUA
-----END CERTIFICATE-----
 1 s:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
   i:/O=Digital Signature Trust Co./CN=DST Root CA X3
-----BEGIN CERTIFICATE-----
uM2VcGfl96S8TihRzZvoroed6ti6WqEBmtzw3Wodatg+VyOeph4EYpr/1wXKtx8/KOqkqm57TH2H3eDJAkSnh6/DNFu0Qg==
-----END CERTIFICATE-----
---
Server certificate
subject=/CN=tenant1.dev.mydomain.com
issuer=/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
---
No client certificate CA names sent
---
SSL handshake has read 3009 bytes and written 713 bytes
---
New, TLSv1/SSLv3, Cipher is AES128-SHA
Server public key is 4096 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1
    Cipher    : AES128-SHA
    Start Time: 1480456552
    Timeout   : 300 (sec)
    Verify return code: 0 (ok)
etc.
Other helm commands
You can list the Helm releases that are currently running (a Helm release is a particular versioned instance of a Helm chart) with helm list:
# helm list
NAME         REVISION   UPDATED                    STATUS     CHART
tenant1-lb   1          Tue Nov 29 10:13:47 2016   DEPLOYED   traefik-1.1.0-a

If you change any files or values in a Helm chart, you can apply the changes by means of the 'helm upgrade' command:

# helm upgrade tenant1-lb traefik-helm-chart
You can see the status of a release with helm status:
# helm status tenant1-lb
LAST DEPLOYED: Tue Nov 29 10:13:47 2016
NAMESPACE: tenant1
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME                           CLUSTER-IP   EXTERNAL-IP        PORT(S)          AGE
tenant1-lb-traefik             10.3.0.76    a92601b47b65f...   80/TCP,443/TCP   35m
tenant1-lb-traefik-dashboard   10.3.0.36    <none>             80/TCP           35m

==> extensions/Deployment
NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
tenant1-lb-traefik   1         1         1            1           35m

==> extensions/Ingress
NAME                             HOSTS                         ADDRESS   PORTS     AGE
tenant1-lb-traefik-web-ingress   tenant1.dev.mydomain.com                80        35m
tenant1-lb-traefik-dashboard     tenant1-lb.dev.mydomain.com             80        35m

==> v1/PersistentVolumeClaim
NAME                      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
tenant1-lb-traefik-acme   Bound     pvc-927df794-b65f-11e6-85a4-02e9ec69178b   1Gi        RWO           35m

==> v1/Secret
NAME                              TYPE      DATA      AGE
tenant1-lb-traefik-default-cert   Opaque    2         35m

==> v1/ConfigMap
NAME                 DATA      AGE
tenant1-lb-traefik   1         35m




Sponsored Post: Loupe, New York Times, ScaleArc, Aerospike, Scalyr, Gusto, VividCortex, MemSQL, InMemory.Net, Zohocorp

Who's Hiring?
  • The New York Times is looking for a Software Engineer for its Delivery/Site Reliability Engineering team. You will also be a part of a team responsible for building the tools that ensure that the various systems at The New York Times continue to operate in a reliable and efficient manner. Some of the tech we use: Go, Ruby, Bash, AWS, GCP, Terraform, Packer, Docker, Kubernetes, Vault, Consul, Jenkins, Drone. Please send resumes to: technicaljobs@nytimes.com

  • IT Security Engineering. At Gusto we are on a mission to create a world where work empowers a better life. As Gusto's IT Security Engineer you'll shape the future of IT security and compliance. We're looking for a strong IT technical lead to manage security audits and write and implement controls. You'll also focus on our employee, network, and endpoint posture. As Gusto's first IT Security Engineer, you will be able to build the security organization with direct impact to protecting PII and ePHI. Read more and apply here.
Fun and Informative Events
  • Your event here!
Cool Products and Services
  • A note for .NET developers: You know the pain of troubleshooting errors with limited time, limited information, and limited tools. Log management, exception tracking, and monitoring solutions can help, but many of them treat the .NET platform as an afterthought. You should learn about Loupe...Loupe is a .NET logging and monitoring solution made for the .NET platform from day one. It helps you find and fix problems fast by tracking performance metrics, capturing errors in your .NET software, identifying which errors are causing the greatest impact, and pinpointing root causes. Learn more and try it free today.

  • ScaleArc's database load balancing software empowers you to “upgrade your apps” to consumer grade – the never down, always fast experience you get on Google or Amazon. Plus you need the ability to scale easily and anywhere. Find out how ScaleArc has helped companies like yours save thousands, even millions of dollars and valuable resources by eliminating downtime and avoiding app changes to scale. 

  • Scalyr is a lightning-fast log management and operational data platform.  It's a tool (actually, multiple tools) that your entire team will love.  Get visibility into your production issues without juggling multiple tabs and different services -- all of your logs, server metrics and alerts are in your browser and at your fingertips. Loved and used by teams at Codecademy, ReturnPath, Grab, and InsideSales. Learn more today or see why Scalyr is a great alternative to Splunk.

  • InMemory.Net provides a Dot Net native in memory database for analysing large amounts of data. It runs natively on .Net, and provides native .Net, COM & ODBC APIs for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex measures your database servers’ work (queries), not just global counters. If you’re not monitoring query performance at a deep level, you’re missing opportunities to boost availability, turbocharge performance, ship better code faster, and ultimately delight more customers. VividCortex is a next-generation SaaS platform that helps you find and eliminate database performance problems at scale.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • ManageEngine Applications Manager: Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com: Monitor End User Experience from a global monitoring network. 

If any of these items interest you there's a full description of each sponsor below...

Categories: Architecture

The Tech that Turns Each of Us Into a Walled Garden



How we treat each other is based on empathy. Empathy is based on shared experience. What happens when we have nothing in common?

Systems are now being constructed so we’ll never see certain kinds of information. Each of us lives in our own algorithmically created Skinner box/silo/walled garden, fed only information AIs think will be simultaneously most rewarding to us and to their creators (Facebook, Google, etc).

We are always being manipulated, granted, but how we are being manipulated has taken a sharp technology-driven change and we should be aware of it. This is different. Scary different. And the technology behind it all is absolutely fascinating.

Divided We Are Exploitable
Categories: Architecture

AzureCon Keynote Announcements: India Regions, GPU Support, IoT Suite, Container Service, and Security Center

ScottGu's Blog - Scott Guthrie - Thu, 10/01/2015 - 06:43

Yesterday we held our AzureCon event and were fortunate to have tens of thousands of developers around the world participate.  During the event we announced several great new enhancements to Microsoft Azure including:

  • General Availability of 3 new Azure regions in India
  • Announcing new N-series of Virtual Machines with GPU capabilities
  • Announcing Azure IoT Suite available to purchase
  • Announcing Azure Container Service
  • Announcing Azure Security Center

We were also fortunate to be joined on stage by several great Azure customers who talked about their experiences using Azure, including Jet.com, Nascar, Alaska Airlines, Walmart, and ThyssenKrupp.

Watching the Videos

All of the talks presented at AzureCon (including the 60 breakout talks) are now available to watch online.  You can browse and watch all of the sessions here.


My keynote to kick off the event was an hour long and provided an end-to-end look at Azure and some of the big new announcements of the day.  You can watch it here.

Below are some more details of some of the highlights:

Announcing General Availability of 3 new Azure regions in India

Yesterday we announced the general availability of our new India regions: Mumbai (West), Chennai (South) and Pune (Central).  They are now available for you to deploy solutions into.

This brings our worldwide presence of Azure regions up to 24 regions, more than AWS and Google combined. Over 125 customers and partners have been participating in the private preview of our new India regions. We are seeing tremendous interest from industry sectors like the public sector, banking and financial services, insurance, and healthcare, whose cloud adoption has been restricted by data residency requirements. You can all now deploy your solutions too.

Announcing N-series of Virtual Machines with GPU Support

This week we announced our new N-series family of Azure Virtual Machines that enable GPU capabilities.  Featuring NVidia’s best of breed Tesla GPUs, these Virtual Machines will help you run a variety of workloads ranging from remote visualization to machine learning to analytics.

The N-series VMs feature NVidia’s flagship GPU, the K80, which is well supported by NVidia’s CUDA development community. N-series will also have VM configurations featuring the latest M60, which was recently announced by NVidia. With support for the M60, Azure becomes the first hyperscale cloud provider to bring the capabilities of NVidia’s Quadro High End Graphics Support to the cloud. In addition, N-series combines GPU capabilities with the superfast RDMA interconnect so you can run multi-machine, multi-GPU workloads such as Deep Learning and Skype Translator Training.

Announcing Azure Security Center

This week we announced the new Azure Security Center—a new Azure service that gives you visibility and control of the security of your Azure resources, and helps you stay ahead of threats and attacks.  Azure is the first cloud platform to provide unified security management with capabilities that help you prevent, detect, and respond to threats.


The Azure Security Center provides a unified view of your security state, so your team and/or your organization’s security specialists can get the information they need to evaluate risk across the workloads they run in the cloud.  Based on customizable policy, the service can provide recommendations. For example, the policy might be that all web applications should be protected by a web application firewall. If so, the Azure Security Center will automatically detect when web apps you host in Azure don’t have a web application firewall configured, and provide a quick and direct workflow to get a firewall from one of our partners deployed and configured:


Of course, even with the best possible protection in place, attackers will still try to compromise systems. To address this problem and adopt an “assume breach” mindset, the Azure Security Center uses advanced analytics, including machine learning, along with Microsoft’s global threat intelligence network to look for and alert on attacks. Signals are automatically collected from your Azure resources, the network, and integrated security partner solutions and analyzed to identify cyber-attacks that might otherwise go undetected. Should an incident occur, security alerts offer insights into the attack and suggest ways to remediate and recover quickly. Security data and alerts can also be piped to existing Security Information and Events Management (SIEM) systems your organization has already purchased and is using on-premises.


No other cloud vendor provides the depth and breadth of these capabilities, and they are going to enable you to build even more secure applications in the cloud.

Announcing Azure IoT Suite Available to Purchase

The Internet of Things (IoT) provides tremendous new opportunities for organizations to improve operations, become more efficient at what they do, and create new revenue streams.  We have had a huge interest in our Azure IoT Suite which until this week has been in public preview.  Our customers like Rockwell Automation and ThyssenKrupp Elevators are already connecting data and devices to solve business problems and improve their operations. Many more businesses are poised to benefit from IoT by connecting their devices to collect and analyze untapped data with remote monitoring or predictive maintenance solutions.

In working with customers, we have seen that getting started on IoT projects can be a daunting task starting with connecting existing devices, determining the right technology partner to work with and scaling an IoT project from proof of concept to broad deployment. Capturing and analyzing untapped data is complex, particularly when a business tries to integrate this new data with existing data and systems they already have. 

The Microsoft Azure IoT Suite helps address many of these challenges.  The Microsoft Azure IoT Suite helps you connect and integrate with devices more easily, and to capture and analyze untapped device data by using our preconfigured solutions, which are engineered to help you move quickly from proof of concept and testing to broader deployment. Today we support remote monitoring, and soon we will be delivering support for predictive maintenance and asset management solutions.

These solutions reliably capture data in the cloud and analyze the data both in real-time and in batch processing. Once your devices are connected, Azure IoT Suite provides real time information in an intuitive format that helps you take action from insights. Our advanced analytics then enables you to easily process data—even when it comes from a variety of sources, including devices, line of business assets, sensors and other systems and provide rich built-in dashboards and analytics tools for access to the data and insights you need. User permissions can be set to control reporting and share information with the right people in your organization.

Below is an example of the types of built-in dashboard views that you can leverage without having to write any code:


To support adoption of the Azure IoT Suite, we are also announcing the new Microsoft Azure Certified for IoT program, an ecosystem of partners whose offerings have been tested and certified to help businesses with their IoT device and platform needs. The first set of partners include Beaglebone, Freescale, Intel, Raspberry Pi, Resin.io, Seeed and Texas Instruments. These partners, along with experienced global solution providers are helping businesses harness the power of the Internet of Things today.  

You can learn more about our approach and the Azure IoT Suite at www.InternetofYourThings.com, and partners can learn more at www.azure.com/iotdev.

Announcing Azure IoT Hub

This week we also announced the public preview of our new Azure IoT Hub service which is a fully managed service that enables reliable and secure bi-directional communications between millions of IoT devices and an application back end. Azure IoT Hub offers reliable device-to-cloud and cloud-to-device hyper-scale messaging, enables secure communications using per-device security credentials and access control, and includes device libraries for the most popular languages and platforms.

Providing secure, scalable bi-directional communication from heterogeneous devices to the cloud is a cornerstone of any IoT solution, which Azure IoT Hub addresses in the following ways:

  • Per-device authentication and secure connectivity: Each device uses its own security key to connect to IoT Hub. The application back end is then able to individually whitelist and blacklist each device, enabling complete control over device access.
  • Extensive set of device libraries: Azure IoT device SDKs are available and supported for a variety of languages and platforms such as C, C#, Java, and JavaScript.
  • IoT protocols and extensibility: Azure IoT Hub provides native support of the HTTP 1.1 and AMQP 1.0 protocols for device connectivity. Azure IoT Hub can also be extended via the Azure IoT protocol gateway open source framework to provide support for MQTT v3.1.1.
  • Scale: Azure IoT Hub scales to millions of simultaneously connected devices, and millions of events per second.

Getting started with Azure IoT Hub is easy. Simply navigate to the Azure Preview portal and choose Internet of Things -> Azure IoT Hub. Choose the name, pricing tier, number of units and location, and select Create to provision and deploy your IoT Hub:


Once the IoT hub is created, you can navigate to Settings and create new shared access policies and modify other messaging settings for granular control.

The bi-directional communication enabled with an IoT Hub provides powerful capabilities in a real world IoT solution such as the control of individual device security credentials and access through the use of a device identity registry.  Once a device identity is in the registry, the device can connect, send device-to-cloud messages to the hub, and receive cloud-to-device messages from backend applications with just a few lines of code in a secure way.

Learn more about Azure IoT Hub and get started with your own real-world IoT solutions.

Announcing the new Azure Container Service

We’ve been working with Docker to integrate Docker containers with both Azure and Windows Server for some time. This week we announced the new Azure Container Service which leverages the popular Apache Mesos project to deliver a customer-proven orchestration solution for applications delivered as Docker containers.


The Azure Container Service enables users to easily create and manage a Docker enabled Apache Mesos cluster. The container management software running on these clusters is open source, and in addition to the application portability offered by tooling such as Docker and Docker Compose, you will be able to leverage portable container orchestration and management tooling such as Marathon, Chronos and Docker Swarm.

When utilizing the Azure Container Service, you will be able to take advantage of the tight integration with Azure infrastructure management features such as tagging of resources, Role Based Access Control (RBAC), Virtual Machine Scale Sets (VMSS) and the fully integrated user experience in the Azure portal. By coupling the enterprise-class Azure cloud with key open source build, deploy and orchestration software, we maximize customer choice when it comes to containerizing workloads.

The service will be available for preview by the end of the year.

Learn More

Watch the AzureCon sessions online to learn more about all of the above announcements – plus a lot more that was covered during the day.  We are looking forward to seeing what you build with what you learn!

Hope this helps,

Scott

Categories: Architecture, Programming

Announcing General Availability of HDInsight on Linux + new Data Lake Services and Language

ScottGu's Blog - Scott Guthrie - Mon, 09/28/2015 - 21:54

Today, I’m happy to announce several key additions to our big data services in Azure, including the General Availability of HDInsight on Linux, as well as the introduction of our new Azure Data Lake and Language services.

General Availability of HDInsight on Linux

Today we are announcing general availability of our HDInsight service on Ubuntu Linux.  HDInsight enables you to easily run managed Hadoop clusters in the cloud.  With today’s release we now allow you to configure these clusters to run using either a Windows Server operating system or an Ubuntu-based Linux operating system.

HDInsight on Linux enables even broader support for Hadoop ecosystem partners to run in HDInsight, providing you with an even greater choice of preferred tools and applications for running Hadoop workloads. Both Linux and Windows clusters in HDInsight are built on the same standard Hadoop distribution and offer the same set of rich capabilities.

Today’s new release also enables additional capabilities, such as cluster scaling, virtual network integration and script action support. Furthermore, in addition to the Hadoop cluster type, you can now create HBase and Storm clusters on Linux for your NoSQL and real-time processing needs, such as building an IoT application.

Create a cluster

HDInsight clusters running on Linux can now be easily created from the Azure Management portal under the Data + Analytics section.  Simply select Ubuntu from the cluster operating system drop-down, as well as optionally choose the cluster type you wish to create (we support base Hadoop as well as clusters pre-configured for workloads like Storm, Spark, HBase, etc).


All HDInsight Linux clusters can be managed by Apache Ambari. Ambari provides the ability to customize configuration settings of your Hadoop cluster while giving you a unified view of the performance and state of your cluster and providing monitoring and alerting within the HDInsight cluster.


Installing additional applications and Hadoop components

Similar to HDInsight Windows clusters, you can now customize your Linux cluster by installing additional applications or Hadoop components that are not part of default HDInsight deployment. This can be accomplished using Bash scripts with script action capability.  As an example, you can now install Hue on an HDInsight Linux cluster and easily use it with your workloads:


Develop using Familiar Tools

All HDInsight Linux clusters come with SSH connectivity enabled by default. You can connect to the cluster via an SSH client of your choice. Moreover, SSH tunneling can be leveraged to remotely access all of the Hadoop web applications from the browser.

New Azure Data Lake Services and Language

We continue to see customers enabling amazing scenarios with big data in Azure including analyzing social graphs to increase charitable giving, analyzing radiation exposure and using the signals from thousands of devices to simulate ways for utility customers to optimize their monthly bills. These and other use cases are resulting in even more data being collected in Azure. In order to be able to dive deep into all of this data, and process it in different ways, you can now use our Azure Data Lake capabilities – which are 3 services that make big data easy.

The first service in the family is available today: Azure HDInsight, our managed Hadoop service that lets you focus on finding insights, and not spend your time having to manage clusters. HDInsight lets you deploy Hadoop, Spark, Storm and HBase clusters, running on Linux or Windows, managed, monitored and supported by Microsoft with a 99.9% SLA.

The other two services, Azure Data Lake Store and Azure Data Lake Analytics, introduced below, are available in private preview today and will be available broadly for public usage shortly.

Azure Data Lake Store

Azure Data Lake Store is a hyper-scale HDFS repository designed specifically for big data analytics workloads in the cloud. Azure Data Lake Store solves the big data challenges of volume, variety, and velocity by enabling you to store data of any type, at any size, and process it at any scale. Azure Data Lake Store can support near real-time scenarios such as the Internet of Things (IoT) as well as throughput-intensive analytics on huge data volumes. The Azure Data Lake Store also supports a variety of computation workloads by removing many of the restrictions constraining traditional analytics infrastructure like the pre-definition of schema and the creation of multiple data silos. Once located in the Azure Data Lake Store, Hadoop-based engines such as Azure HDInsight can easily mine the data to discover new insights.

Some of the key capabilities of Azure Data Lake Store include:

  • Any Data: A distributed file store that allows you to store data in its native format, Azure Data Lake Store eliminates the need to transform or pre-define schema in order to store data.
  • Any Size: With no fixed limits to file or account sizes, Azure Data Lake Store enables you to store kilobytes to exabytes with immediate read/write access.
  • At Any Scale: You can scale throughput to meet the demands of your analytic systems including the high throughput needed to analyze exabytes of data. In addition, it is built to handle high volumes of small writes at low latency making it optimal for near real-time scenarios like website analytics, and Internet of Things (IoT).
  • HDFS Compatible: It works out-of-the-box with the Hadoop ecosystem including other Azure Data Lake services such as HDInsight.
  • Fully Integrated with Azure Active Directory: Azure Data Lake Store is integrated with Azure Active Directory for identity and access management over all of your data.
Azure Data Lake Analytics with U-SQL

The new Azure Data Lake Analytics service makes it much easier to create and manage big data jobs. Built on YARN and years of experience running analytics pipelines for Office 365, XBox Live, Windows and Bing, the Azure Data Lake Analytics service is the most productive way to get insights from big data. You can get started in the Azure management portal, querying across data in blobs, Azure Data Lake Store, and Azure SQL DB. By simply moving a slider, you can scale up as much computing power as you’d like to run your data transformation jobs.


Today we are introducing a new U-SQL offering in the analytics service, an evolution of the familiar syntax of SQL.  U-SQL allows you to write declarative big data jobs, as well as easily include your own user code as part of those jobs. Inside Microsoft, developers have been using this combination in order to be productive operating on massive data sets of many exabytes of scale, processing mission critical data pipelines. In addition to providing an easy to use experience in the Azure management portal, we are delivering a rich set of tools in Visual Studio for debugging and optimizing your U-SQL jobs. This lets you play back and analyze your big data jobs, understanding bottlenecks and opportunities to improve both performance and efficiency, so that you can pay only for the resources you need and continually tune your operations.

Learn More

For more information and to get started, check out the following links:

Hope this helps,

Scott

Categories: Architecture, Programming

Online AzureCon Conference this Tuesday

ScottGu's Blog - Scott Guthrie - Mon, 09/28/2015 - 04:35

This Tuesday, Sept 29th, we are hosting our online AzureCon event – which is a free online event with 60 technical sessions on Azure presented by both the Azure engineering team as well as MVPs and customers who use Azure today and will share their best practices.

I’ll be kicking off the event with a keynote at 9am PDT.  Watch it to learn the latest on Azure, and hear about a lot of exciting new announcements.  We’ll then have some fantastic sessions that you can watch throughout the day to learn even more.


Hope to see you there!

Scott

Categories: Architecture, Programming

Better Density and Lower Prices for Azure’s SQL Elastic Database Pools

ScottGu's Blog - Scott Guthrie - Wed, 09/23/2015 - 21:41

A few weeks ago, we announced the preview availability of the new Basic and Premium Elastic Database Pools Tiers with our Azure SQL Database service.  Elastic Database Pools enable you to run multiple, isolated and independent databases that can be auto-scaled across a private pool of resources dedicated to just you and your apps.  This provides a great way for software-as-a-service (SaaS) developers to better isolate their individual customers in an economical way.

Today, we are announcing some nice changes to the pricing structure of Elastic Database Pools as well as changes to the density of elastic databases within a pool.  These changes make it even more attractive to use Elastic Database Pools to build your applications.

Specifically, we are making the following changes:

  • Finalizing the eDTU price – With Elastic Database Pools you purchase units of capacity that we call eDTUs, which you can then use to run multiple databases within a pool.  We have decided to not increase the price of eDTUs as we go from preview to GA.  This means that you’ll be able to pay a much lower price (about 50% less) for eDTUs than many developers expected.
  • Eliminating the per-database fee – In addition to lower eDTU prices, we are also eliminating the fee per database that we have had with the preview. This means you no longer need to pay a per-database charge to use an Elastic Database Pool, and makes the pricing much more attractive for scenarios where you want to have lots of small databases.
  • Pool density – We are announcing increased density limits that enable you to run many more databases per Elastic Database Pool. See the chart below under “Maximum databases per pool” for specifics. This change will take effect at the time of general availability, but you can design your apps around these numbers.  The increased pool density limits will make Elastic Database Pools even more attractive.


Below are the updated parameters for each of the Elastic Database Pool options with these new changes:


For more information about Azure SQL Database Elastic Database Pools and management tools, go to the technical overview here.

Hope this helps,

Scott

Categories: Architecture, Programming

Announcing the Biggest VM Sizes Available in the Cloud: New Azure GS-VM Series

ScottGu's Blog - Scott Guthrie - Wed, 09/02/2015 - 18:51

Today, we’re announcing the release of the new Azure GS-series of Virtual Machine sizes, which enable Azure Premium Storage to be used with Azure G-series VM sizes. These VM sizes are now available to use in both our US and Europe regions.

Earlier this year we released the G-series of Azure Virtual Machines – which provide the largest VM sizes offered by any public cloud provider.  They provide up to 32 cores of CPU, 448 GB of memory and 6.59 TB of local SSD-based storage.  Today’s release of the GS-series of Azure Virtual Machines enables you to use these large VMs with Azure Premium Storage – and enables you to perform up to 2,000 MB/sec of storage throughput, more than double any other public cloud provider.  Using the G5/GS5 VM size now also offers more than 20 Gbps of network bandwidth, also more than double the network throughput provided by any other public cloud provider.

These new VM offerings provide an ideal solution for your most demanding cloud-based workloads, and are great for relational databases like SQL Server, MySQL, Postgres and other large data warehouse solutions. You can also use the GS-series to significantly scale up the performance of enterprise applications like Dynamics AX.

The G and GS-series of VM sizes are available to use now in our West US, East US 2, and West Europe Azure regions.  You’ll see us continue to expand availability around the world in more regions in the coming months.

GS Series Size Details

The below table provides more details on the exact capabilities of the new GS-series of VM sizes:

Size           Cores   Memory (GB)   Max Disk IOPS   Max Disk Bandwidth (MB/sec)
Standard_GS1   2       28            5,000           125
Standard_GS2   4       56            10,000          250
Standard_GS3   8       112           20,000          500
Standard_GS4   16      224           40,000          1,000
Standard_GS5   32      448           80,000          2,000

Creating a GS-Series Virtual Machine

Creating a new GS series VM is very easy.  Simply navigate to the Azure Preview Portal, select New(+) and choose your favorite OS or VM image type:


Click the Create button, and then click the pricing tier option and select “View All” to see the full list of VM sizes. Make sure your region is West US, East US 2, or West Europe to select the G-series or the GS-Series:


When choosing a GS-series VM size, the portal will create a storage account using Premium Azure Storage. You can select an existing Premium Storage account, as well, to use for the OS disk of the VM:


Hitting Create will launch and provision the VM.
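
If you prefer scripting the deployment to clicking through the portal, a rough sketch using the classic (service management) Azure PowerShell cmdlets is shown below. The VM, cloud service and credential names are placeholders, the gallery image name will differ in your subscription, and it assumes the GS sizes are available to your subscription and deployment model in the chosen region:

    # Pick a Windows Server image from the gallery (the exact image name varies over time).
    $image = Get-AzureVMImage |
        Where-Object { $_.Label -like "Windows Server 2012 R2*" } |
        Select-Object -First 1

    # Build a Standard_GS5 VM configuration and provision it into a new cloud service in West US.
    New-AzureVMConfig -Name "gs5-demo" -InstanceSize "Standard_GS5" -ImageName $image.ImageName |
        Add-AzureProvisioningConfig -Windows -AdminUsername "azureadmin" -Password "<your-password>" |
        New-AzureVM -ServiceName "gs5-demo-svc" -Location "West US"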

Learn More

If you would like more information on the GS-Series VM sizes as well as other Azure VM Sizes then please visit the following page for additional details: Virtual Machine Sizes for Azure.

For more information on Premium Storage, please see: Premium Storage overview. Also, refer to Using Linux VMs with Premium Storage for more details on Linux deployments on Premium Storage.

Hope this helps,

Scott

Categories: Architecture, Programming

Announcing Great New SQL Database Capabilities in Azure

ScottGu's Blog - Scott Guthrie - Thu, 08/27/2015 - 17:13

Today we are making available several new SQL Database capabilities in Azure that enable you to build even better cloud applications.  In particular:

  • We are introducing two new pricing tiers for our  Elastic Database Pool capability.  Elastic Database Pools enable you to run multiple, isolated and independent databases on a private pool of resources dedicated to just you and your apps.  This provides a great way for software-as-a-service (SaaS) developers to better isolate their individual customers in an economical way.
  • We are also introducing new higher-end scale options for SQL Databases that enable you to run even larger databases with significantly more compute + storage + networking resources.

Both of these additions are available to start using immediately.

Elastic Database Pools

If you are a SaaS developer with tens, hundreds, or even thousands of databases, an elastic database pool dramatically simplifies the process of creating, maintaining, and managing performance across these databases within a budget that you control. 


A common SaaS application pattern (especially for B2B SaaS apps) is for the SaaS app to use a different database to store data for each customer.  This has the benefit of isolating the data for each customer separately (and enables each customer’s data to be encrypted separately, backed up separately, etc).  While this pattern is great from an isolation and security perspective, each database can end up having varying and unpredictable resource consumption (CPU/IO/memory patterns), and because the peaks and valleys for each customer can be difficult to predict, it is hard to know how many resources to provision.  Developers were previously faced with two options: either over-provision database resources based on peak usage and overpay, or under-provision to save cost at the expense of performance and customer satisfaction during peaks.

Microsoft created Elastic Database Pools specifically to help developers solve this problem.  With Elastic Database Pools you can allocate a shared pool of database resources (CPU/IO/memory), and then create and run multiple isolated databases on top of this pool.  You can set minimum and maximum performance SLA limits of your choosing for each database you add to the pool (ensuring that none of the databases unfairly impacts other databases in your pool).  Our management APIs also make it much easier to script and manage these multiple databases together, as well as optionally execute queries that span across them (useful for a variety of operations).  And best of all, when you add multiple databases to an Elastic Database Pool, you are able to average out the typical utilization load (because your customers tend to have different peaks and valleys) and end up requiring far fewer database resources (and spending less money as a result) than you would if you ran each database separately.

The chart below shows a typical example of what we see when SaaS developers take advantage of the Elastic Pool capability.  Each individual database has different peaks and valleys in terms of utilization.  As you combine multiple of these databases into an Elastic Pool, the peaks and valleys tend to even out (since they often happen at different times), requiring far fewer overall resources than you would need if each database were resourced separately:

databases sharing eDTUs

Because Elastic Database Pools are built on our SQL Database service, you also get to take advantage of all of the underlying database-as-a-service capabilities that are built into it: a 99.99% SLA, multiple high-availability replicas built in at no extra charge, no downtime during patching, geo-replication, point-in-time recovery, TDE encryption of data, row-level security, full-text search, and much more.  The end result is a really nice database platform that provides a lot of flexibility, as well as the ability to save money.

New Basic and Premium Tiers for Elastic Database Pools

Earlier this year at the //Build conference we announced our new Elastic Database Pool support in Azure and entered public preview with the Standard Tier edition of it.  The Standard Tier allows individual databases within the elastic pool to burst up to 100 eDTUs (a DTU represents a combination of compute + IO + storage performance).

Today we are adding additional Basic and Premium Elastic Database Pools to the preview to enable a wider range of performance and cost options.

  • Basic Elastic Database Pools are great for light-usage SaaS scenarios.  Basic Elastic Database Pools allow individual databases to burst up to 5 eDTUs.
  • Premium Elastic Database Pools are designed for databases that require the highest performance per database. Premium Elastic Database Pools allow individual databases to burst up to 1,000 eDTUs.

Collectively we think these three Elastic Database Pool pricing tier options provide a tremendous amount of flexibility and optionality for SaaS developers to take advantage of, and are designed to enable a wide variety of different scenarios.

Easily Migrate Databases Between Pricing Tiers

One of the cool capabilities we support is the ability to easily migrate an individual database between different Elastic Database Pools (including ones with different pricing tiers).  For example, if you were a SaaS developer you could start a customer out with a trial edition of your application – and choose to run the database that backs it within a Basic Elastic Database Pool to run it super cost effectively.  As the customer’s usage grows you could then auto-migrate them to a Standard database pool without customer downtime.  If the customer grows up to require a tremendous amount of resources you could then migrate them to a Premium Database Pool or run their database as a standalone SQL Database with a huge amount of resource capacity.

This provides a tremendous amount of flexibility and capability, and enables you to build even better applications.

Managing Elastic Database Pools

One of the other nice things about Elastic Database Pools is that the service provides the management capabilities to easily manage large collections of databases without you having to worry about the infrastructure that runs them.

You can create and manage Elastic Database Pools using our Azure Management Portal or via our command-line tools or REST management APIs.  With today’s update we are also adding support so that you can use T-SQL to add/remove databases to/from an elastic pool.  Today’s update also adds T-SQL support for measuring resource utilization of databases within an elastic pool – making it even easier to monitor and track utilization by database.
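
As a hedged illustration of that T-SQL support (the database and pool names below are placeholders), moving a database into or out of a pool is a single ALTER DATABASE statement:

    -- Move an existing database into an elastic pool.
    ALTER DATABASE [CustomerDb42]
    MODIFY (SERVICE_OBJECTIVE = ELASTIC_POOL(name = [BasicTrialPool]));

    -- Later, move it back out to a standalone performance level, e.g. Standard S2.
    ALTER DATABASE [CustomerDb42]
    MODIFY (SERVICE_OBJECTIVE = 'S2');

The same statement is what makes the tier-migration scenario described above easy to script as a customer grows.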

Elastic Database Pool Tier Capabilities

We have been tuning, and will continue to tune, a number of parameters that control the density of Elastic Database Pools as we progress through the preview.

In particular, the current limits for the number of databases per pool and the number of pool eDTUs is something we plan to steadily increase as we march towards the general availability release.  Our plan is to provide the highest possible density per pool, largest pool sizes, and the best Elastic Database Pool economics while at the same time keeping our 99.99 availability SLA.

Below are the current performance parameters for each of the Elastic Database Pool Tier options in preview today:

 

                                          Basic Elastic                 Standard Elastic              Premium Elastic

Elastic Database Pool
  eDTU range per pool (preview limits)    100-1200 eDTUs                100-1200 eDTUs                125-1500 eDTUs
  Storage range per pool                  10-120 GB                     100-1200 GB                   63-750 GB
  Maximum databases per pool (preview)    200                           200                           50
  Estimated pool cost (preview prices)    from $0.2/hr (~$149/pool/mo)  from $0.3/hr (~$223/pool/mo)  from $0.937/hr (~$697/pool/mo)
  Each additional eDTU (preview prices)   $0.002/hr (~$1.49/mo)         $0.003/hr (~$2.23/mo)         $0.0075/hr (~$5.58/mo)
  Storage per eDTU                        0.1 GB per eDTU               1 GB per eDTU                 0.5 GB per eDTU

Elastic Databases
  eDTU max per database (preview limits)  0-5                           0-100                         0-1000
  Storage max per database                2 GB                          250 GB                        500 GB
  Per-database cost (preview prices)      $0.0003/hr (~$0.22/mo)        $0.0017/hr (~$1.26/mo)        $0.0084/hr (~$6.25/mo)

We’ll continue to iterate on the above parameters and increase the maximum number of databases per pool as we progress through the preview, and would love your feedback as we do so.

New Higher-Scale SQL Database Performance Tiers

In addition to the enhancements for Elastic Database Pools, we are also today releasing new SQL Database Premium performance tier options for standalone databases. 

Today we are adding a new P4 (500 DTU) and a P11 (1750 DTU) level which provide even higher performance database options for SQL Databases that want to scale-up. The new P11 edition also now supports databases up to 1TB in size.

Developers can now choose from 10 different SQL Database performance levels.  You can easily scale up or scale down as needed at any point without database downtime or interruption.  Each database performance tier supports a 99.99% SLA, multiple high-availability replicas built in at no extra charge (meaning you don’t need to buy multiple instances to get an SLA; this is built into each database), no downtime during patching, point-in-time recovery options (restore without needing a backup), TDE encryption of data, row-level security, and full-text search.
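
As a hedged illustration (the database name is a placeholder), scaling an existing standalone database up to the new P11 level is a single online T-SQL statement:

    -- Scale a standalone database up to Premium P11 with the new 1 TB max size.
    ALTER DATABASE [MyAppDb]
    MODIFY (EDITION = 'Premium', SERVICE_OBJECTIVE = 'P11', MAXSIZE = 1024 GB);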


Learn More

You can learn more about SQL Databases by visiting the http://azure.microsoft.com web-site.  Check out the SQL Database product page to learn more about the capabilities SQL Databases provide, as well as read the technical documentation to learn more how to build great applications using it.

Summary

Today’s database updates enable developers to build even better cloud applications, and to use data to make them even richer and more intelligent.  We are really looking forward to seeing the solutions you build.

Hope this helps,

Scott

Categories: Architecture, Programming

Announcing Windows Server 2016 Containers Preview

ScottGu's Blog - Scott Guthrie - Wed, 08/19/2015 - 17:01

At DockerCon this year, Mark Russinovich, CTO of Microsoft Azure, demonstrated the first ever application built using code running in both a Windows Server Container and a Linux container connected together. This demo helped demonstrate Microsoft's vision that in partnership with Docker, we can help bring the Windows and Linux ecosystems together by enabling developers to build container-based distributed applications using the tools and platforms of their choice.

Today we are excited to release the first preview of Windows Server Containers as part of our Windows Server 2016 Technical Preview 3 release. We’re also announcing great updates from our close collaboration with Docker, including enabling support for the Windows platform in the Docker Engine and a preview of the Docker Engine for Windows. Our Visual Studio Tools for Docker, which we previewed earlier this year, have also been updated to support Windows Server Containers, providing you a seamless end-to-end experience straight from Visual Studio to develop and deploy code to both Windows Server and Linux containers. Last but not least, we’ve made it easy to get started with Windows Server Containers in Azure via a dedicated virtual machine image.

Windows Server Containers

Windows Server Containers create a highly agile Windows Server environment, enabling you to accelerate the DevOps process to efficiently build and deploy modern applications. With today’s preview release, millions of Windows developers will be able to experience the benefits of containers for the first time using the languages of their choice – whether .NET, ASP.NET, PowerShell, Python, Ruby on Rails, Java and many others.

Today’s announcement delivers on the promise we made in partnership with Docker, the fast-growing open platform for distributed applications, to offer container and DevOps benefits to Linux and Windows Server users alike. Windows Server Containers are now part of the Docker open source project, and Microsoft is a founding member of the Open Container Initiative. Windows Server Containers can be deployed and managed either using the Docker client or PowerShell.

Getting Started using Visual Studio

The preview of our Visual Studio Tools for Docker, which enables developers to build and publish ASP.NET 5 Web Apps or console applications directly to a Docker container, has been updated to include support for today’s preview of Windows Server Containers. The extension automates creating and configuring your container host in Azure, building a container image which includes your application, and publishing it directly to your container host. You can download and install this extension, and read more about it, at the Visual Studio Gallery here: http://aka.ms/vslovesdocker.

Once installed, developers can right-click on their projects within Visual Studio and select “Publish”:


Doing so will display a Publish dialog which will now include the ability to deploy to a Docker Container (on either a Windows Server or Linux machine):


You can choose to deploy to any existing Docker host you already have running:


Or use the dialog to create a new Virtual Machine running either Windows Server or Linux with containers enabled.  The below screenshot shows how easy it is to create a new VM hosted on Azure that runs today’s Windows Server 2016 TP3 preview that supports Containers – you can do all of this (and deploy your apps to it) easily without ever having to leave the Visual Studio IDE:

Getting Started Using Azure

In June of last year, at the first DockerCon, we enabled a streamlined Azure experience for creating and managing Docker hosts in the cloud. Up until now these hosts have only run on Linux. With the new preview of Windows Server 2016 supporting Windows Server Containers, we have enabled a parallel experience for Windows users.

Directly from the Azure Marketplace, users can now deploy a Windows Server 2016 virtual machine pre-configured with the container feature enabled and Docker Engine installed. Our quick start guide has all of the details, including screenshots and a walkthrough video, so take a look here: https://msdn.microsoft.com/en-us/virtualization/windowscontainers/quick_start/azure_setup.


Once your container host is up and running, the quick start guide includes step by step guides for creating and managing containers using both Docker and PowerShell.

Getting Started Locally Using Hyper-V

Creating a virtual machine on your local machine using Hyper-V to act as your container host is now really easy. We’ve published some PowerShell scripts to GitHub that automate nearly the whole process so that you can get started experimenting with Windows Server Containers as quickly as possible. The quick start guide has all of the details at https://msdn.microsoft.com/en-us/virtualization/windowscontainers/quick_start/container_setup.

Once your container host is up and running the quick start guide includes step by step guides for creating and managing containers using both Docker and PowerShell.
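
To give a flavor of the Docker side, here is a rough sketch of a first session on a TP3 container host; the windowsservercore image name reflects the base image that ships in this preview and may change in later releases:

    # List the base container images available on the TP3 host
    docker images

    # Create and start an interactive Windows Server Core container
    docker run -it --name demo windowsservercore cmd

    # After exiting the container, capture its changes as a new image
    docker commit demo demo-image

The equivalent PowerShell cmdlets are covered in the quick start guides linked above.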

Additional Information and Resources

A great list of resources including links to past presentations on containers, blogs and samples can be found in the community section of our documentation. We have also set up a dedicated Windows containers forum where you can provide feedback, ask questions and report bugs. If you want to learn more about the technology behind containers I would highly recommend reading Mark Russinovich’s blog on “Containers: Docker, Windows and Trends” that was published earlier this week.

Summary

At the //Build conference earlier this year we talked about our plan to make containers a fundamental part of our application platform, and today’s releases are a set of significant steps in making this a reality. The decision we made to embrace Docker and the Docker ecosystem to enable this in both Azure and Windows Server has generated a lot of positive feedback and we are just getting started.

While there is still more work to be done, users in the Windows Server ecosystem can now begin experiencing the world of containers. I highly recommend you download the Visual Studio Tools for Docker, create a Windows Container host in Azure or locally, and try out our PowerShell and Docker support. Most importantly, we look forward to hearing feedback on your experience.

Hope this helps,

Scott

Categories: Architecture, Programming

Another elasticsearch blog post, now about Shield

Gridshore - Thu, 01/29/2015 - 21:13

I just wrote another piece of text on my other blog. This time I wrote about the recently released elasticsearch plugin called Shield. If you want to learn more about securing your elasticsearch cluster, please head over to my other blog and start reading:

http://amsterdam.luminis.eu/2015/01/29/elasticsearch-shield-first-steps-using-java/

The post Another elasticsearch blog post, now about Shield appeared first on Gridshore.

Categories: Architecture, Programming

New blog posts about bower, grunt and elasticsearch

Gridshore - Mon, 12/15/2014 - 08:45

Two new blog posts I want to point out to you all. I wrote these blog posts on my employer’s blog:

The first post is about creating backups of your elasticsearch cluster. Some time ago they introduced the snapshot/restore functionality. Of course you can use the REST endpoint to use the functionality, but it is even easier if you use a plugin to handle the snapshots, or maybe even better, integrate the functionality into your own Java application. That is what this blog post is about: integrating snapshot/restore functionality in your Java application. As a bonus there are screenshots of my elasticsearch gui project showing the snapshot/restore functionality.

Creating elasticsearch backups with snapshot/restore
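
As a rough sketch of what that kind of integration can look like with the elasticsearch 1.x Java client (the repository name, snapshot name and filesystem path below are placeholder assumptions):

    import org.elasticsearch.client.Client;
    import org.elasticsearch.client.transport.TransportClient;
    import org.elasticsearch.common.settings.ImmutableSettings;
    import org.elasticsearch.common.transport.InetSocketTransportAddress;

    public class SnapshotExample {
        public static void main(String[] args) {
            // Connect to a local node with the 1.x transport client.
            Client client = new TransportClient()
                    .addTransportAddress(new InetSocketTransportAddress("localhost", 9300));

            // Register a shared filesystem repository to hold the snapshots.
            client.admin().cluster().preparePutRepository("my_backup")
                    .setType("fs")
                    .setSettings(ImmutableSettings.settingsBuilder()
                            .put("location", "/mnt/es-backups")
                            .build())
                    .get();

            // Take a snapshot of the cluster and wait for it to finish.
            client.admin().cluster().prepareCreateSnapshot("my_backup", "snapshot_1")
                    .setWaitForCompletion(true)
                    .get();

            client.close();
        }
    }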

The second blog post I want to bring to your attention is front-end oriented. I already mentioned my elasticsearch gui project. This is an AngularJS application. I have been working on the plugin for a long time and the amount of JavaScript code is increasing. Therefore I wanted to introduce grunt and bower to my project. That is what this blog post is about.

Improve my AngularJS project with grunt

The post New blog posts about bower, grunt and elasticsearch appeared first on Gridshore.

Categories: Architecture, Programming

New blogpost on kibana 4 beta

Gridshore - Tue, 12/02/2014 - 12:55

If you are, like me, interested in elasticsearch and kibana, then you might be interested in a blog post I wrote on my employer’s blog about the new Kibana 4 beta. If so, head over to my employer’s blog:

http://amsterdam.luminis.eu/2014/12/01/experiment-with-the-kibana-4-beta/

The post New blogpost on kibana 4 beta appeared first on Gridshore.

Categories: Architecture, Programming