
Developing Java on Azure


Part 2! In a previous post, I discussed the options for running Java workloads on Azure. Now it is time to discover some of the choices Azure offers to our software engineers!

Developing Java applications

Source code

Applications consist of source code that needs to be centrally stored and managed. Azure has many options for storing source code, but also for working with external repositories. Although not specific to Java projects, it is still good to know Azure supports your favourite repository flavour!

Azure DevOps Git Repos

Azure DevOps (yes, this is the new name for VSTS, f.k.a. Visual Studio Online) supports Git for your project’s code repositories. Upon creation of a new Azure DevOps project, you are presented with some options for configuring the included Git repo.

Integration with GitHub, GitLab and others

Besides its own repositories, Azure has great support for other flavours. If you configure a build pipeline, for instance, you can pick from popular choices:

MongoDB

Our favourite document-based database is also available on Azure. You can of course run your own VM or Docker image, but Azure also offers MongoDB as a service. This service is called Azure Cosmos DB and is Microsoft’s globally distributed, multi-model database service. Azure Cosmos DB provides global distribution, elastic scaling and very low latencies with very high availability. And it comes with 5 different APIs, including a MongoDB API.
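
Because Cosmos DB speaks the MongoDB wire protocol, the standard MongoDB Java driver can be used unchanged. A minimal sketch (the account name, key and database below are placeholders; take the real connection string from the portal):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;

public class CosmosMongoDemo {
    public static void main(String[] args) {
        // Cosmos DB's MongoDB API endpoint; copy the full connection string
        // from the "Connection String" blade of your Cosmos DB account.
        String uri = "mongodb://<account>:<key>@<account>.documents.azure.com:10255/?ssl=true&replicaSet=globaldb";
        try (MongoClient client = MongoClients.create(uri)) {
            MongoDatabase db = client.getDatabase("demo");
            MongoCollection<Document> items = db.getCollection("items");
            items.insertOne(new Document("message", "Hello from Cosmos DB"));
            System.out.println("Documents in collection: " + items.countDocuments());
        }
    }
}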

MySQL

The popular relational database. Again, we can run this on our own VM or Docker image, but it is also available as a managed service.
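
From Java, connecting to the managed service (Azure Database for MySQL) is plain JDBC; only the host name and user format differ from a self-hosted server. A minimal sketch with placeholder values (note the user@servername convention the service expects):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class AzureMySqlDemo {
    public static void main(String[] args) throws Exception {
        // Azure Database for MySQL endpoints look like <server>.mysql.database.azure.com
        // and require SSL by default.
        String url = "jdbc:mysql://<server>.mysql.database.azure.com:3306/<database>?useSSL=true";
        try (Connection con = DriverManager.getConnection(url, "<user>@<server>", "<password>");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT VERSION()")) {
            if (rs.next()) {
                System.out.println("Connected to MySQL " + rs.getString(1));
            }
        }
    }
}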

Azure DevOps

Azure DevOps is Microsoft’s collection of developer tools, cloud services and CI/CD pipelines:

  • Azure Boards: Work Item Tracking
  • Azure Repos: the Git repos mentioned above
  • Azure Pipelines: build, test, deploy with CI/CD
  • Azure Test Plans: test tools
  • Azure Artifacts: package, share and ship your software

Although (again) not specific to Java projects, it is perfectly suitable: it supports continuous integration (CI) and continuous delivery (CD) pipelines for our Java apps in Azure out of the box.

More info: https://azure.microsoft.com/en-us/services/devops/

Jenkins

If Jenkins is your CI tool of choice, you can easily integrate it with Azure DevOps and still gain a lot of benefits from the Azure tools. This way you can keep Jenkins for your CI builds and use Azure for the (continuous) deployments!

More info: https://docs.microsoft.com/en-us/azure/devops/pipelines/release/integrate-jenkins-vsts-cicd?view=vsts 

Maven

Microsoft provides a Maven plugin for the Azure App Service. It provides seamless integration into Maven projects and makes it easier for developers to deploy to different kinds of Azure Web Apps.

More info: https://github.com/Microsoft/azure-maven-plugins/tree/develop/azure-webapp-maven-plugin

Azure Pipelines

Azure Pipelines is part of the Azure DevOps tools, but also available standalone if that is all you need. Azure Pipelines offers cloud-hosted pipelines for Linux, macOS, and Windows with 10 free parallel jobs and unlimited minutes for open source projects.

If your source code resides on GitHub, it is even easier to get started with CI/CD! If you browse the GitHub CI Marketplace, you will find a plan to integrate with Azure in just a few clicks!

More info: https://github.com/marketplace/azure-pipelines 

Plugins

There are several plugins available for IntelliJ and Eclipse that can really boost your Azure workflow: 

  • Azure Toolkit for IntelliJ provides templates and functionality that allow you to easily create, develop, test, and deploy cloud applications to Azure.
  • Azure Toolkit for Eclipse provides the same functionality, but for the Eclipse IDE.

See https://github.com/microsoft/azure-tools-for-java for more information on how to set this up.

Azure Functions

And to end this post, an Azure service that natively supports Java: Azure Functions. Azure Functions is an easy way to run small pieces of code, or “functions”, in the cloud (think “AWS Lambda”). Functions is a solution for processing data, integrating systems, working with the Internet of Things (IoT), and building simple APIs and microservices. You can run them through triggers or on a schedule. It includes support for many languages, including Java!

More info: https://docs.microsoft.com/en-us/azure/azure-functions/functions-reference-java
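
To give an impression of what this looks like in Java, here is a minimal HTTP-triggered function sketch based on the azure-functions-java-library (the class and function names are just for illustration):

import java.util.Optional;

import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.HttpMethod;
import com.microsoft.azure.functions.HttpRequestMessage;
import com.microsoft.azure.functions.HttpResponseMessage;
import com.microsoft.azure.functions.HttpStatus;
import com.microsoft.azure.functions.annotation.AuthorizationLevel;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.HttpTrigger;

public class HelloFunction {
    // Responds to GET /api/hello?name=Java
    @FunctionName("hello")
    public HttpResponseMessage run(
            @HttpTrigger(name = "req", methods = {HttpMethod.GET},
                         authLevel = AuthorizationLevel.ANONYMOUS)
            HttpRequestMessage<Optional<String>> request,
            final ExecutionContext context) {
        context.getLogger().info("Java HTTP trigger processed a request.");
        String name = request.getQueryParameters().getOrDefault("name", "Azure");
        return request.createResponseBuilder(HttpStatus.OK)
                      .body("Hello, " + name + "!")
                      .build();
    }
}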

But wait, there’s still more!

Microsoft has a lot of options for running and developing Java-based workloads on Azure, but this is still not the entire story ;). There are many, many more services that support Java in one way or another, but we will leave those for another time. In one more follow-up post I will be reviewing options for operating your custom Java solutions on Azure.

/Y.


LEAP 2019 – day 1


LEAP is an event aimed at architects working with Microsoft’s product portfolio. It focuses on gaining in-depth knowledge about current products and future evolutions. The program includes five days of guided, architecture-focused topics that enable you to deliver tomorrow’s solutions and allows for questions and dialogue with engineers and program managers from Microsoft.

The VX Company Software Development team @ LEAP 2019

This year we (four colleagues from VX Company) are attending this event, and these blog posts summarize the topics.

The keynote

Welcome @ LEAP: Be Inspired, Connect, Learn (and take at least 3 new “things” home)

After event registration and the formal welcome by our host, Scott Guthrie kicks off with the keynote. After a high-level overview of Azure with statistics backed by customer stories, Scott mentions his team is pushing hard on hybrid this year. They aim to provide a consistent hybrid cloud experience with:

  • Azure SQL Database Managed Instance, now generally available: delivers SQL Server as a service, compatible with the full feature set but without requiring any changes to your app or solution.
  • Azure Migrate (analysis and automated migration to Azure).
  • Azure Stack: run Azure everywhere, at the edge and even disconnected to meet regulatory requirements.

Next-generation IoT services are another focus this year. You can now connect thousands of Azure certified IoT devices and control them through Azure IoT Central for device and security management. Other news on IoT:

  • Azure Digital Twins: build next-generation IoT (Internet of Things) solutions to model the relationships and interactions among people, places, and devices.
  • You can now run Azure Cognitive Services inside your own containers and even on your IoT devices.

We also look at increased developer support with GitHub Actions, one of the serverless options within GitHub. Other news on dev support:

  • Keep GitHub as a separate company and make sure it remains open and the unique thing it is 🙂
  • Azure Artifacts (maven, NPM, NuGet) for package management as a part of Azure DevOps.
  • Microsoft Learn: a new way for hands-on and interactive learning.

Microservices Reference Architectures

From the Patterns & Practices team we get insight into creating a microservices architecture with both Azure Kubernetes Service and Azure Service Fabric. The Azure Architecture Center also has documentation on this topic: https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/microservices/aks.

After the basics (what is a microservice and why would we build them?) we start with a “Bad example” based on the 3-tier model.

The solution to this nightmare: create a microservices architecture with Azure Kubernetes Service:

There is no single right way of doing this. Options change and microservices are still considered a new concept. Also worth noting: Vertical Pod Autoscaler is now in preview as another option for autoscaling (besides Pod Autoscaling and AKS Cluster Autoscaling).

Another solution to the 3-tier model could be Azure Service Fabric:

A reference architecture for microservices based on Service Fabric is being developed, so that is coming soon.

Data + Cloud + ML

A talk about two decades of data innovation, finishing with Azure SQL DB Hyperscale.

  • Data Warehouse to Data Lake
  • The growing importance of Data Governance and how to handle GDPR scenarios. Important topics are Data Compliance and Data Lineage
  • Azure Databricks is a Spark ecosystem, integrated with Azure Data Lake
  • Azure Data Factory (a cloud-based ETL, SSIS as a service)

Because of regulations like GDPR, we expect to see deeper integration: increased data security, serverless, and a unified higher-level experience. Responsible data use and data governance will require RBAC data access.

In-database analytics: we can use R code inside SQL Server. This avoids getting data out of SQL and putting it back in, and SQL RBAC still applies. More on this: In-server analytics.

Azure SQL DB Hyperscale is a highly scalable service tier for single databases that adapts on demand to your workload’s needs and scales automatically up to 100 TB (and can grow even beyond that). It solves a lot of the scale and performance issues of other DB models, but it also solves important parts of the data compliance, lineage and governance issues. Currently in preview.

Cosmos DB

  • Guaranteed low latency (<10ms for both read and write in a single region) at the 99th percentile, worldwide.
  • Multi model, multi master database service.
  • Reserved Capacity is also available for Request Units, saving up to 65% on Azure Costs.
  • You can use Azure Functions (or the SDK) to control throughput limits.

Cosmos DB relies heavily on pre-provisioning, so calculating (estimating required throughput) upfront is important.

Tip: use the new SDK (.NET SDK 3.0) for calculating document and throughput costs.
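
As a rough back-of-the-envelope example (assuming documents of around 1 KB, where a point read costs about 1 RU and a write roughly 5 RU; these numbers are assumptions, check the current documentation for exact figures):

required RU/s ≈ (point reads per second × 1 RU) + (writes per second × 5 RU)

So a workload of 500 reads/s and 100 writes/s would need in the order of 500 × 1 + 100 × 5 = 1,000 RU/s of provisioned throughput.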

Full Stack Azure Monitoring

The final session of the first day covers Azure Monitoring. Application Insights and Log Analytics are now part of a single product offering: Azure Monitor. It provides:

  • Unified Monitoring
  • Data Driven Insights
  • Workflow Integrations

Azure Monitor is used to monitor cloud-native applications. It provides monitoring at scale, integrates with your DevOps (or similar) pipelines, and works across tools, languages and platforms. When used in conjunction with Azure, it relates to Azure Resource Groups and components.

It can be used for Artificial Intelligence for IT operations (AIOps): big data analytics, machine learning and other artificial intelligence technologies that automate the identification and resolution of issues.

A recent feature is Azure Monitor Insights for Resource Groups, currently in preview. It provides a single entry point for diagnosing and triaging issues from an Azure Resource Group perspective.

Coming soon? Troubleshooting Java-based application issues using Debug Snapshots. For .NET applications this is currently available in preview. For more information: Snapshot Debugger.

That rounded up day 1, more to come.

LEAP 2019 – day 2


Recap of day 2 of the LEAP event. You can find the recap of day 1 here.

Power BI – developer extensibility

Day 2 kicks off with a session on Power BI. Not how to use it, but how to extend it; the session aims to cover common developer scenarios.

Important takeaway from this session: developer life for Power BI starts at http://dev.powerbi.com. You will find:

  • documentation
  • developer scenarios
  • source code

“The app owns the data”, meaning users of the dashboards do not need Power BI Pro licenses to use them. With anonymous embedding you do not get the benefits of filtering data based on user access or roles. With true Power BI Embedding you can filter and present data based on identity or role.

Power BI embedding through code is done with “powerbi.js”. It is as easy as grabbing an embed token and URL, and picking a dataset and a view mode.

Different SKUs are available and licensing can be tricky. There is a comparison matrix available to assist you with that.

Explore APIs and experiment with the Power BI Embedded Playground:

For non-Power BI User Embedding you currently still need a Master App Account. An App Only Token Account is coming soon, so things like MFA will not get in your way!

Power BI supports streaming and real-time dashboards with a number of data types.

The team added PowerShell capabilities for Power BI admins to answer questions like: which datasets are published in the cloud, are identities and roles required, who created the datasets, and which connections are used?

Identity

No slides, just the speaker (Stuart Kwan) and some whiteboard diagrams. The session provides the tools to think about federated identity. The first scenario is based on a “three body diagram”: Client – Secure Token Service – Server. With every identity scenario you should always consider the three basic mechanics:

  • First: Trust Graph (who trusts who)
  • Second: Protocol Sequence (how we communicate)
  • Third: Claims Transformation (convert to new token with the claims from the trusted STS)

Manually craft a login sequence:

https://login.microsoftonline.com/[tenant]/oauth2/authorize?client_id=[clientid]&response_type=id_token&scope=openid&nonce=randomstring

In a passive scenario the JWT token is consumed and not used again. It is replaced with a session cookie.

The second scenario is based on the “Four body diagram”: Client – STS1 – STS2 – Server.

Stuart finishes up by talking us through a full protocol sequence diagram for an active client. In this case Outlook (OL) uses the Azure AD Authentication Library (ADAL) and a Web View to connect to Exchange Online (EXO), configured in a federated scenario with Azure AD (AAD) and an on-premises Active Directory Federation Services server (ADFS). It exchanges both an Access Token (AT) and a Refresh Token (RT):

The Code Behind The Vulnerability

This talk starts with an explanation of the Microsoft Bulletin Process and continues with a detailed view on 10 different security bulletins.

The slides from this session will be posted on: https://idunno.org/

Diagnostics in the cloud with Azure Monitor APM

Monitoring has become more complex over the last few years: different languages, different ways of logging, different systems. Azure Monitor aims to make it easier, all the way down to the application, through observability.

Monitoring tells you whether the system works. Observability lets you ask why it’s not working.

Baron Schwartz

Azure Monitor is now the consolidated result of a lot of products and services, including Application Insights and Log Analytics:

  • Unified Monitoring
  • Data Driven Insights
  • Partner Integrations

Demo 1: Different Application Insights Dashboards.

See the Azure documentation for more information on the different types of dashboards.

Demo 2: Live Stream Analytics.

It allows you to select and filter metrics and performance counters to watch in real time, without any disturbance to your services:

  • Inspect stack traces
  • Profiler
  • Snapshot debugger
  • Live performance testing

Demo 3: Application Insights Profiler

It is intended to capture data from production in two-minute samples per hour, capturing requests to show what the code is doing over time.

The demo covers: using the Application Insights Profiler to run on-demand performance tests. You can run the load test directly from the Azure portal: see the docs for more information.

Demo 4: Snapshot debugging from the Application Insights blades.

It allows you to collect a debug snapshot from your live web application and take it to Visual Studio for analysis (and link up the correct symbols with source code if available).

AI, Microsoft and Partners

What is AI and what can it do? Which toolchains does Microsoft support?


The difference between ML and AI: “If it is written in Python, then it is machine learning, but if it is written in PowerPoint, it is AI.”

The Big Bang in AI: the SuperVision algorithm

  • Beginner: start with the blog post “Machine Learning is fun!”
  • Medium: Coursera courses on Machine Learning
  • Advanced: reddit.com/r/machinelearning

You need a lot of labeled data to train your models. Label yourself or acquire pre-labeled data.

DevOps

Accelerated State of DevOps: https://cloudplatformonline.com/2018-state-of-devops.html

Demo: getting started with Azure DevOps

You get 10 Parallel jobs with unlimited minutes per month for free:

Azure Pipelines is part of the Azure DevOps tools, but also available standalone if that is all you need. Azure Pipelines offers cloud-hosted pipelines for Linux, macOS, and Windows with 10 free parallel jobs and unlimited minutes for open source projects.

If your source code resides on GitHub, it is even easier to get started with CI/CD! If you browse the GitHub CI Marketplace, you will find a plan to integrate with Azure in just a few clicks!

Demo: setting up diverse pipelines with Azure Pipelines.

Run your tests and record the test results for each test step using Microsoft Test Runner. You can use the web runner for web apps, or the desktop runner for desktop app data collection.

Link for the slides: https://aka.ms/leap2019

LEAP 2019 – day 3


Recap of day 3 of the LEAP event (day 1 here, day 2 here).

Day 3 will be (almost entirely) about containers and Kubernetes. The day starts with a high level overview of where we are heading with all of this.

Containers/ microservices

The real world is messy, so the regular sales pitch on Kubernetes and containers does not apply. A little bit of history:

Software deployment used to be manual: create a VM, SSH into it, and deploy the code after building and packaging it.

Netflix started talking about Immutable Infrastructure, where the idea is to build complete machine images. Images are immutable, so they are never updated: a software update means building a new image and throwing away the old ones.

Besides the packaging with machine images, Netflix started working on blue/green deployment patterns. This means two groups of services (blue and green) exist at the same time, and that would not really work with real machines or VMs.

Source: https://martinfowler.com/bliki/BlueGreenDeployment.html

Then Docker came around and people started noticing. Container images ensured code ran everywhere (not only on “my machine”) and deployment became predictable and reliable. But containers manage software on a single machine and were hard to manage across machines.

Orchestration became a requirement, and not only for distribution: application health checks and restarting failing applications became important requirements too. When machines fail, Kubernetes orchestrates reliably moving the containers elsewhere.

Dependencies are hard and make our software brittle. Kubernetes forces you into patterns where you get rid of dependencies. Orchestration offers service load balancers, and that enables scale at every layer (back-end and front-end are not required to scale together).

What is the role of Operations in all of this? A smaller operating surface to support, because everything is packaged up in an image. There is less to operate:

  • Hardware has been abstracted
  • The OS has been abstracted (no specific OS support needed)
  • The cluster has been abstracted (by Azure AKS for instance)

So only the application layer needs operational support. But how do we handle the legacy applications?

How about connectivity between on-premises VMs and the cloud? Azure Kubernetes clusters support Azure Virtual Networks, so connectivity should not be too hard. ExpressRoute or a normal VPN enables all kinds of scenarios.

Another option can be a selector-less Service, ideal for projecting local services into your Kubernetes cluster.

Connecting through an Azure Load Balancer is another option on bridging the old and the new worlds.

To create cloud native applications, we need cloud native developers. It is not all about cool new tech.

Visual Studio Code extensions make working with Kubernetes (cloud and local) easy: https://github.com/Azure/vscode-kubernetes-tools

How do you control the interaction between developers/ops and Kubernetes? Open Policy Controller: custom controllers that handle API requests into the Kubernetes service (part of Open Policy Agent).

Cluster Services force a standardized way of doing things (if you deploy into the cluster, you will get the standard monitoring). So instead of teaching people how to monitor, we enable this upon deployment into the cluster.

Containers as an Azure primitive

Azure Container Instances: the easy way to host containers. Small in size and oriented towards application workloads. It is a container primitive, micro-billed (by the second) with invisible infrastructure.

ACI and AKS:

  • ACI will not solve your orchestration challenges
  • But will work together with Kubernetes
  • And ACI remains simpler to maintain

Virtual Nodes (a.k.a. Virtual Kubelet) is an add-on for AKS. You can tag certain pods for out-of-band scaling with ACI!

ACI and Logic Apps:

  • Intended to be used together, especially for task based scenarios
  • Integrate ACI and Logic Apps, for example for ad-hoc sentiment analysis
  • BOTS (PR Review Bot)

Samples: https://github.com/Azure-Samples/aci-logicapps-integration

ACI and Virtual Machines:

  • ACI is not intended to replace VMs
  • ACI is usually cheaper compared to VMs
  • Guest customization required (network, CPU, etc.)? Provisioned VMs are a better choice.

ACI or Functions:

  • App security might be decisive
  • Level of customization (more customization, ACI better match)
  • Or combined: ACI + Functions = Simple orchestration

Service Fabric + Mesh

Focused on microservices and the balance between business benefits (cost visibility, partner integration, faster future releases) and developer benefits (agility, flexibility):

  • Any code or framework
  • DevOps integration
  • integrated diagnostics and monitoring
  • Highly available
  • Auto scaling / sizing
  • Manage state
  • Security hardened
  • Intelligent (message routing, service discovery)

Service Fabric is a complete microservices platform and, compared to Kubernetes, can control the entire ecosystem. Service Fabric is available fully managed, as dedicated Azure clusters, and on a bring-your-own-infrastructure basis.

Service Fabric Mesh is the fully managed Azure option.

Operational Best Practices in AKS

Also available from the docs.

Cluster Isolation Parameters:

  • Physical Isolation: every team has its own cluster. Pro: no special Kubernetes skills or security required.
  • Logical Isolation: isolation by namespace, lower cost. Separated between Dev/Staging and Prod.
  • Logically isolated environments are a single point of failure. You should apply a good mix of logical and physical isolation.
  • Kubernetes Network Policies: coming soon to AKS
  • Kubernetes Resource Quotas: limit compute. Every spec should have this.
  • Developers are better left coding and should not be bothered too much with Kubernetes. A cross-functional or dedicated Kubernetes team is the answer here.

Networking:

  • Basic networking: uses kubenet; the drawback is that it is hard to connect to on-premises networks due to IP addressing (0.0.0.0/8).
  • Advanced networking: uses Azure CNI.
  • Services can be public (the default scenario), but also private (internal service, only VNet or ExpressRoute access).
  • Ingress as an abstraction to manage external access to services in the cluster. Multiple services, single public IP.
  • Use Azure Application Gateway with WAF to secure services. This requires a private load balancer. More info on this here.

Cluster Security:

  • Key risks include elevated access, access to sensitive data, and running malicious content.
  • How to secure? Identity & Access Management (I&AM) through AAD.
  • Just released: API to rotate credentials. Coming soon: API to rotate AAD credentials.
  • ClusterRoleBinding on groups (GUIDs from AAD) is of course recommended. This saves on administration effort.
  • There is currently no way to invalidate the admin certificate. If an admin leaves but still has access to the kubeconfig (which holds the entire certificate), you are in trouble.
  • Coming soon: PodSecurityPolicy and NodeRestriction.
  • Kured Daemonset can be used to schedule node reboots.

Container Security:

Pod Security:

Tenants and Azure Kubernetes Service

Slicing and dicing: hard and soft multi-tenancy in Azure Kubernetes Service.

All security concepts still apply; there is no real magic going on in Kubernetes. And you don’t need multi-tenancy most of the time:

  • Kubernetes does not give you multi-tenancy, and AKS doesn’t either. Yet.
  • There are tools you can use, patterns to practise.
  • But you need the normal tools: Azure Networking
  • And in addition AKS RBAC.

If you are not going to physically isolate your clusters, you have to handle multi-tenancy yourself in your SaaS application.

https://azure.microsoft.com/en-us/resources/designing-distributed-systems

Soft multi-tenancy: AKS + best cluster isolation practices

ACI Virtual Nodes get hypervisor protection. Normal Kubernetes nodes run on VMs and share the same “Docker” security: no true kernel isolation.

AKS Engine: units of Kubernetes on Azure. It is what the AKS product is built on, but you can also use it to run Kubernetes yourself and configure the way the clusters run (for example with a micro-hypervisor like Kata Containers).

Kubernetes Operating Tooling

Terraform: deploy Kubernetes clusters, pods and services across environments and clouds. https://www.terraform.io/

HELM: package manager for Kubernetes. It is a tool for managing Kubernetes charts. Charts are packages of pre-configured Kubernetes resources. Charts can describe complex apps; provide repeatable app installs. https://helm.sh/

Draft: generate Docker files and HELM charts for your app. https://github.com/Azure/draft

Brigade: Brigade is a tool for running scriptable, automated tasks in the cloud as part of your Kubernetes cluster. https://brigade.sh/

Kashti: a simple UI to visualize Brigade. https://github.com/Azure/kashti

Cloud Native Application Bundle: A spec for packaging distributed apps. CNABs facilitate the bundling, installing and managing of container-native apps — and their coupled services. https://cnab.io/

Porter: a cloud installer. https://porter.sh/

LEAP 2019 – day 4


Notes (not a complete recap today) of day 4 of the LEAP event (day 1, day 2, day 3).

Machine Learning Fundamentals

The real measure of how well an ML model performs is how well it works with data it has never seen. The training phase could use 70% of the data and the test phase the remaining 30%.

https://docs.microsoft.com/en-us/azure/machine-learning/studio/algorithm-cheat-sheet

More information here: Hackaton Blogpost

Neural Networks are one type of algorithm. Deep Learning is based on Neural Networks.

Activation functions squash values between 0 and 1 and are an important feature of neural networks. They basically decide whether a neuron should be activated or not: is the information the neuron is receiving relevant for the given input, or should it be ignored?
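
A classic example is the sigmoid function, which squashes any input into the 0–1 range:

σ(x) = 1 / (1 + e^(−x))

Large negative inputs map to values near 0 (“ignore”), large positive inputs to values near 1 (“activate”).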

More for you to study on:

How do we test a trained RNN model (like a word RNN or character RNN)? We measure perplexity per word!

“Perplexity is a measurement of how well a probability distribution or probability model predicts a sample. It may be used to compare probability models. A low perplexity indicates the probability distribution is good at predicting the sample.” Source: https://en.wikipedia.org/wiki/Perplexity
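
For a test set W = w1 w2 … wN this boils down to the inverse probability the model assigns to the test set, normalized by the number of words:

PP(W) = P(w1 w2 … wN)^(−1/N)

A model that assigns higher probability to the actual test words therefore scores a lower (better) perplexity.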

Deep learning requires large datasets. Transfer learning tries to counter this by taking a different approach: use a big source of data that has already been trained on (for example from ImageNet), chop off the last layer, add your own inputs and train from there.

More information: check these out

Introduction to New Azure Machine Learning Service

The Azure Machine Learning Service is brand new; it was released in December ’18. It is considered a code-first service. You can use one of the deep learning frameworks and convert to ONNX (Open Neural Network Exchange):

  • TensorFlow
  • PyTorch
  • Scikit-Learn
  • MXNet
  • Chainer
  • Keras

Demo: https://github.com/maxluk/dogbreeds-webinar

AI & Cognitive Services

AI is about amplifying and augmenting human ingenuity. Azure Cognitive Services is here for you if you want to start today. Microsoft offers prebuilt AI for your business: infuse apps, websites and bots with human-like intelligence.

Start AI with ethics (“with great power comes great responsibility”) https://aka.ms/ai-ethics

Demo on AI: https://aidemos.microsoft.com/

Create your own Computer Vision projects: https://customvision.ai

Create a bot in 3 minutes: https://www.qnamaker.ai/

Analyze video content: https://www.videoindexer.ai

Support for 6 key AI capabilities through containers:

  • Key Phrase Extraction
  • Language Detection
  • Sentiment Analysis
  • Face and emotion detection
  • OCR / Text Recognition
  • Language Understanding

https://docs.microsoft.com/en-us/azure/cognitive-services/cognitive-services-container-support

LEAP 2019 – day 5


Some notes on the final day @ LEAP! The day was filled with various talks; for me the most interesting one was related to the upcoming .NET 3.0 🙂

.NET 3.0

  • Microsoft is focusing on two new workloads for .NET: AI and IoT
  • Open sourced WinForms and WPF in December 2018
  • Improved performance: https://www.techempower.com/benchmarks/
  • Repl: browse your API as a filesystem (list the methods, etc.)
  • Swagger IntelliSense helps when decorating APIs
  • Windows desktop support with WPF and WinForms
  • Designers and templates will come later

Demo: proposal of some new features for Visual Studio and .NET 3.0. Under NDA, very fresh. Mentioned the NSwag package for producing OpenAPI Spec documentation.

Migrate from the Full Framework to Core by going through .NET Standard. The NuGet package Microsoft.Windows.Compatibility will fix build errors. There is also a .NET analyzer (Microsoft.DotNet.Analyzers.Compatibility) that provides full-text warnings and hints.

Microservices: https://github.com/dotnet-architecture/eShopOnContainers

  • ML.NET 0.9 enables a simple entry point to ML for developers. A few lines of code and you’re good.
  • EF for Cosmos DB will come soon,
  • followed by EF for Redis.

Blazor is now Razor Components and part of .NET Core. Because of WebAssembly it runs in all desktop and mobile browsers and is truly client-side. It’s a full single-page application (SPA) framework inspired by the latest JavaScript SPA frameworks, featuring support for offline/PWA applications, app size trimming, and browser-based debugging.

https://blazor.net/docs/index.html

Blazor uses only the latest web standards. No plugins or transpilation needed. It runs in the browser on a real .NET runtime (Mono) implemented in WebAssembly that executes normal .NET assemblies.

It is currently being moved to the ASP.NET Core 3.0 repo, so that is where most of the information on this topic lives: https://github.com/aspnet/AspNetCore/tree/master/src/Components

Let Azure DevOps deploy a Spring Boot application to Azure Kubernetes Service


What are we trying to accomplish? Azure runs a very decent Kubernetes service these days. I have used it as the main infrastructure in a couple of projects and am quite impressed. Impressed enough to even suggest it as the target platform for our largest projects 😊

So let’s take a typical Java project, consisting of one or several Spring Boot apps, and deploy it to Kubernetes. And since Microsoft also runs a popular DevOps toolchain (with services like Git repos, CI/CD pipelines, etc.), let’s try to use that as much as possible.

The process we would like to support is as follows:

Worlds colliding.

Planets colliding (c) 2007-2019 ThornErose

Prepare our environment

Get ready for some preparations, because we need quite a few moving parts:

  • An Azure DevOps project, that will contain our build and release pipelines;
  • Azure Kubernetes Service, our target infrastructure;
  • Azure Container Registry, our Docker image(s) will be pushed there;
  • A Git repo to hold our code (can be Azure DevOps Repos, GitHub Repositories, ….);
  • A Spring Boot demo application to show it all off 😊

Azure DevOps

Microsoft offers a complete set of DevOps services like Git Repos, CI/CD pipelines, work tracking tools, etc. If you do not have an account already, you can start for free easily. We will use these services for the build and release pipelines mentioned in this article, but you can also choose to use Azure Repos for storing the source code files. More info: https://azure.microsoft.com/en-us/services/devops

Once you have an account, you will need to create a project to contain our pipelines; the default settings are fine.

Azure Kubernetes Service

For our infrastructure we need an Azure Kubernetes Service cluster to deploy our apps in. You can use any method you like; I will just list the Azure CLI commands. More info here: https://docs.microsoft.com/en-us/azure/aks/

  • az group create --name spring-demo-rg --location westeurope
  • az aks create --resource-group spring-demo-rg --name spring-demo-aks --node-count 1 --generate-ssh-keys
  • az aks install-cli
  • az aks get-credentials --resource-group spring-demo-rg --name spring-demo-aks

Azure Container Registry

Kubernetes needs access to a docker registry to pull the images. You can use any supported registry, but Microsoft also provides this service: the Azure Container Registry. More info here: https://docs.microsoft.com/en-us/azure/container-registry/

az acr create --resource-group spring-demo-rg --name springdemo --sku Basic

A Git Repo

We need a Git repository to store our source files. Azure DevOps Build Pipelines must be able to access them, so use a supported Git repo: Azure DevOps Repos, Bitbucket, GitHub, etc.

Spring Boot Application

For demonstration purposes we create a simple Spring Boot application. In this example I added a web dependency, but anything will do. If you do not want to do this yourself, you can also grab the code from GitHub: https://github.com/yuriburger/spring-demo

You need to extract the content and add it to an online Git repository. I prefer GitHub, but any supported online repo will do (Azure Repos, Bitbucket, etc.). Now it is time to add some sample code to test our application after deployment.

A simple test controller, just to provide a response and prove our application is running:

package com.example.springdemo;

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping(path = "/test")
public class TestController {
    @RequestMapping(method = RequestMethod.GET, produces = "application/json")
    public ResponseEntity<?> getTest()
    {
        return new ResponseEntity<>("{\"message\":\"App running!\"}", HttpStatus.OK);
    }
}

An executable Jar file

Because we want to run the application from a Docker container, we need an easy way to execute the Jar file. There are a couple of ways to do this; you can check https://www.baeldung.com/executable-jar-with-maven for some of the options. In this case I chose the Spring Boot Maven Plugin.

<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <executions>
                <execution>
                    <goals>
                        <goal>repackage</goal>
                    </goals>
                    <configuration>
                        <!-- Point this at your application's main class; without a
                             classifier the repackaged jar keeps the original name
                             that the Dockerfile below expects -->
                        <mainClass>
                            com.example.springdemo.SpringDemoApplication
                        </mainClass>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

A Dockerfile

As part of the build, we need to “convert” our application into a container image. There are several useful tools (like Maven build plugins) that can assist here. Personally, I like to work directly with Dockerfiles and not generate them from code or pom.xml settings.

Spotify has a nice plugin that allows you to work with a standard Dockerfile and still integrate with the Maven build process. Check this out for more information: https://github.com/spotify/dockerfile-maven

For the demo, no plugins were used. This keeps the process easy to understand, but it also involves “hardcoding” filenames.

FROM openjdk:8-jre-alpine
VOLUME /tmp
COPY /target/spring-demo-0.0.1-SNAPSHOT.jar app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]

Prepping is done, now let’s get started!

Creating the build

Service Connection

Azure DevOps needs to connect to our Azure Container Registry to be able to upload the final Docker images. This works by creating a “Service Connection” and authorizing Azure DevOps. Go to “Project Settings”, “Pipelines”, “Service connections”.

The Pipeline

Creating a pipeline is pretty straightforward: from the menu on the left, choose “Pipelines” and “Build”. In my case I need the build to get the source files from GitHub:

This will open a YAML editor where we can update the build by specifying the required steps/tasks:

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: Docker@2
  displayName: Login to ACR
  inputs:
    command: login
    containerRegistry: acrRegistryServiceConnection1
- task: Maven@3
  inputs:
    mavenPomFile: 'pom.xml'
    mavenOptions: '-Xmx3072m'
    javaHomeOption: 'JDKVersion'
    jdkVersionOption: '1.8'
    jdkArchitectureOption: 'x64'
    publishJUnitResults: false
    testResultsFiles: '**/surefire-reports/TEST-*.xml'
    goals: 'package -Dmaven.test.skip=true'
- task: Docker@2
  displayName: Build and Push
  inputs:
    command: buildAndPush
    containerRegistry: acrRegistryServiceConnection1 # same service connection as the login task
    repository: spring-demo
    tags: latest

  • Docker task: Login to ACR. This is a built-in Azure DevOps task; it uses the service connection we created earlier to connect to the Azure Container Registry.
  • Maven task: “package”. Also a built-in task. Builds and packages the Jar file we need for our container image.
  • Docker task: Build and Push. Creates the final image and uploads it to the Azure Container Registry.

Creating the release

Service Connection

Azure DevOps needs to connect to our Azure Kubernetes Service to be able to update the deployment. This works by creating a “Service Connection” and authorizing Azure DevOps. Go to “Project Settings”, “Pipelines”, “Service connections”.

Next we add a Kubernetes deployment and service:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-demo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-server
  template:
    metadata:
      labels:
        app: test-server
    spec:
      containers:
        - name: test-server
          image: springdemo.azurecr.io/spring-demo:latest
          imagePullPolicy: "Always"
          ports:
            - name: http-port
              containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: spring-demo-service
spec:
  ports:
    - name: http-port
      port: 80
      targetPort: http-port
      protocol: TCP
  selector:
    app: test-server

After this initial deployment in Kubernetes, we configure the release pipeline to update our components. There are a lot of ways to accomplish this, but in many cases easier is better 😊

Comparable to creating a build pipeline, we select “Pipelines” and “Release” from the menu on the left.

  • Service Connection: the connection to Kubernetes created earlier
  • Command: patch
  • Arguments:

    deployment spring-demo-deployment -p "{\"spec\": {\"template\": {\"metadata\": {\"labels\": {\"redeploy\": \"$(Release.ReleaseId)\"}}}}}"

This will “patch” the deployment by adding a piece of metadata (in this case the release ID), which triggers Kubernetes to do a rolling update. If all goes as planned, we should see a deployment update on Kubernetes once the Azure Release Pipeline succeeds:

/Y.

Speaking at Techorama Netherlands 2019


October 1st, Pamela and I will be speaking at the Techorama Event in the Netherlands. We hope to see you there! 

Link to the event agenda: https://sched.co/TH5p

Microsoft and Java, a love story.

We are not talking about a summer love or a head-over-heels thing here: this is unconditional true love. Pamela and Yuri will share their take on this modern Romeo and Juliet love story between two former rivals.

Shakespeare’s Romeo and Juliet, though a tragedy, is considered the epitome of a love story. Our story today does not qualify as a tragedy but can be considered as an era defining love story.

As the stage sets and the plot unfolds, we will see a demonstration of Microsoft’s continued commitment to support Java developers with powerful tools and services.

  • Java is re-inventing itself, getting all new and exciting features, making it easy to implement alongside .NET Core.
  • Microsoft Azure is an open cloud, supporting all kinds of frameworks, tools and workloads. It provides some serious tools for building and releasing your Java or JVM-based software.
  • Microsoft runs the perfect cloud platform for Java workloads.
  • Azure Monitoring will provide you with the needed metrics and insights to run even the most critical Java based enterprise stacks.

And as a bonus, we will build a fully functional pipeline during the session. Because nothing says “I love you” more than a CI/CD pipeline!


Canary deployments with just Azure DevOps


Well, with Azure DevOps and Azure Kubernetes Service. But that’s it, no additional software required. Although there are plenty of 3rd-party solutions out there that address canary deployments, you might just get it done using “just” Azure DevOps. And in some cases, less is actually more! To be fair though, these other solutions provide a more holistic approach when it comes to deploying cloud-native apps. If you are interested in those, these are the ones I know/worked with:

If you just want to get some control on releasing new functionality in a modern cloud-native fashion, you can read on 🙂

Canary releases allow you to expose new code to a small group within the app’s “population”. This requires you to examine the canary carefully and look for unexpected behavior. If nothing hints at the app malfunctioning, you can decide to cut over and roll out the new code to the entire group.

The backstory on the bird

For those interested: canary releases get their name from a technique that was used in the mining industry. Miners would carry a canary in a small cage to spot mine gas (mostly methane). If the bird dropped, they would evacuate for fear of explosions and “air” the mine shafts before returning to their job.

Resuscitation device for the canary

Deployment scenario

The concept is simple: expose new code to a small percentage of the users; if the new code doesn’t fail, expose it to all users. With deployments (and in our case Docker images, pods and Kubernetes) in mind, it could look something like this:

First, all users are on the “left”, then a canary is deployed:

After canary analysis, the group is switched over:

Our app will be deployed as a docker image on a managed Kubernetes cluster. So we need an Azure Container Registry and an Azure Kubernetes Service. These components don’t require any special setup, but for the deployment of our app we need to make sure we have them in place.

Within Kubernetes we usually end up with an Ingress type of setup. In the following example, we have both an app and a backend api running behind an Ingress Controller:

This setup ensures that incoming traffic for the App and Api gets routed through Kubernetes services to the pods running the containers. The way this works is that the Ingress rules match an incoming request URI (a.k.a. path) to a Service name, and the Service selects the pods based on their labels:

If we now create two deployments, one for the stable release and one for the canary release, we can have Azure DevOps control both from a Release Pipeline. We start with one replica on the stable track (in real life there would probably be multiple replicas) and zero replicas on the canary track. Because both deployments have the “app=api” label, both are considered valid endpoints for the api-service.
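
Assuming the service load-balances evenly over all matching pods, the replica counts determine roughly how much traffic hits the canary:

canary share ≈ canary replicas / (stable replicas + canary replicas)

With 1 stable replica and 1 canary, the new code would see about half of the requests; with 9 stable replicas and 1 canary, roughly 10%. Scale the stable track accordingly if you want a smaller blast radius.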

Azure DevOps Release Pipeline

The multi-stage release pipeline can now control our Kubernetes deployments. We create 2 stages: Canary and Stable.

Both stages have 2 tasks: one for setting the correct image and one for scaling the deployment. These are standard Azure DevOps Kubernetes Tasks:

Canary stage:

- task: Kubernetes@0
  displayName: 'Set Canary Image'
  inputs:
    kubernetesServiceConnection: 'wherefore-aks'
    namespace: default
    command: set
    arguments: 'image deployments/api-canary api=wherefore.azurecr.io/wherefore-art-thou-api:$(Release.Artifacts._api.BuildId) --record'


- task: Kubernetes@0
  displayName: 'Scale Canary 1'
  inputs:
    kubernetesServiceConnection: 'wherefore-aks'
    namespace: default
    command: scale
    arguments: 'deployments/api-canary --replicas=1'

As you can see, we set the correct image for our api-canary deployment (in this example the artifact build ID is used for the image tag) and scale the canary deployment (api-canary) to 1 replica.

The Stable stage is just the opposite:

- task: Kubernetes@0
  displayName: 'Set Deployment Image'
  inputs:
    kubernetesServiceConnection: 'wherefore-aks'
    namespace: default
    command: set
    arguments: 'image deployments/api-deployment api=wherefore.azurecr.io/wherefore-art-thou-api:$(Release.Artifacts._api.BuildId) --record'

- task: Kubernetes@0
  displayName: 'Scale Canary 0'
  inputs:
    kubernetesServiceConnection: 'wherefore-aks'
    namespace: default
    command: scale
    arguments: 'deployments/api-canary --replicas=0'

We set the correct image for our api-deployment and scale the canary deployment (api-canary) down to 0 replicas. One important missing piece is the pre-deployment approval on the Stable stage: if we do not enable it, the canary deployment would be practically useless, as the release would immediately go through to the Stable stage.

If we now create a new release, things should work as expected. The release waits on our Stable stage pre-deployment approval, and in Kubernetes we have two deployments for our Api: one api-deployment and one api-canary.

If we approve the deployment, the api-deployment would be upgraded with the new image and the api-canary would be scaled down again: Canary deployment with just Azure DevOps 😉

/Y.

A Dependency Track Dashboard Widget for Azure DevOps


This post is about adding a widget to your Azure DevOps dashboard that shows Dependency Track information on one or more of your projects. If you are unaware of what a great product Dependency Track is, I would suggest a detour to read up on it here: https://dependencytrack.org/

TL;DR

OWASP Dependency Track is described as a “Software Supply Chain Component Analysis platform”. It allows organizations to identify and reduce risk from the use of third-party and open source components. It usually starts with software creating a Bill of Materials of all components used; this Bill of Materials is then inspected and analyzed for known vulnerabilities.

What the widget does

The Dependency Track software itself has excellent dashboards and reports, but we need a simple way to notify the team of the status of the most recent analysis. We can of course set up email alerts for this (and we should), but we can also hint at the status by showing it on one of our team dashboards:

Adding the widget

Adding the widget is easy as it is available through the Marketplace: https://marketplace.visualstudio.com/items?itemName=yuriburgernet.dependencytrackwidget

If you install it in your tenant, you can add it to one of your dashboards:

After this, you need to configure the widget with three parameters:

  • A project tag: this can be any tag that is used by Dependency Track projects;
  • A Dependency Track URL: this is the URL to your Dependency Track instance. Usually something like: https://servername/api/v1/project/tag/
  • A Dependency Track API key: an API key with at least the ‘VIEW_PORTFOLIO’ and ‘VULNERABILITY_ANALYSIS’ permissions. See the official Dependency Track docs for more information.

Note: you need to configure your Dependency Track projects with a tag to be able to query them using this widget. This is done through the Project Details page. See the official Dependency Track docs if you need more information.
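
Under the hood the widget essentially queries the Dependency Track REST API with these settings, so you can verify your URL and API key before configuring the widget. A sketch using the JDK 11 HTTP client (the server name, tag and key are placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DependencyTrackCheck {
    public static void main(String[] args) throws Exception {
        // Same parameters as the widget: instance URL, project tag and API key.
        String url = "https://servername/api/v1/project/tag/<your-tag>";
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("X-Api-Key", "<your-api-key>") // needs the VIEW_PORTFOLIO permission
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode()); // 200 means the URL and key check out
        System.out.println(response.body());       // JSON array of projects with this tag
    }
}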

Any feedback is welcome, and if you want to peek at the code you will find the GitHub repo here: https://github.com/yuriburger/dependency-track-widget

/Y.


