
Versioning SharePoint Add-Ins in your VSO Scripted Build


In a previous post I wrote a bit about building your SharePoint Apps (or Add-Ins as they are now called) using the new scriptable build system in Visual Studio Online.

http://yuriburger.net/2015/06/22/building-your-sharepoint-apps-using-the-new-scriptable-build-in-vso/

But there was a slight problem with this approach: it always produced add-in packages with the version number from the AppManifest.xml file. And if you are working in a team where everybody keeps their version number at 1.0.0.0, all built add-ins share that single version. That may be fine for production builds, where you want to avoid triggering updates, but during development and testing it makes traceability harder. So I decided that add-in versioning should be part of the build process and not depend on people updating manifest files.

AppManifestVersion

The new build system has excellent support for PowerShell, so I created a simple script for automatically versioning my Add-Ins during the build. The script takes the generated build number, which by default is in the format YearMonthDay.Index (for example 20151018.1), and appends it after the major version number. So today's first build should produce an add-in with the version number 1.2015.1018.1.

The Script
Write-Host "Versioning started"

# Read the generated build number from the environment
$buildNumber = $Env:BUILD_BUILDNUMBER
$buildNumberYear = $buildNumber.Substring(0,4)
$buildNumberDay = $buildNumber.Substring(4,4)
$buildNumberIndex = $buildNumber.Split(".")[1]

# Apply the buildnumber to the manifest files
$files = Get-ChildItem -Path $PSParentPath -Filter AppManifest.xml -Recurse

if ($files)
{
	Write-Host "Will apply $buildNumber to $($files.count) files."
 
	foreach ($file in $files) {
		[System.Xml.XmlDocument] $doc = new-object System.Xml.XmlDocument
		$doc.Load($($file.FullName))
		$doc.App.Version = "1." + $buildNumberYear + "." + $buildNumberDay + "." + $buildNumberIndex
		$doc.Save($file.FullName)
 
		Write-Host "$($file.FullName) - version applied"
	}
}
else
{
	Write-Warning "No Manifest files found."
}

Using the script is easy; you just need to make sure the Visual Studio build servers have access to it. I usually include it in the solution, but outside the main application project structure.

SolutionStructure

Setting it up

VSO has PowerShell steps that allow you to run scripts at any phase during the build. For versioning, this step should take place before we do a build using the Visual Studio build step.

  1. Add a PowerShell step to your build
  2. Select the path to the script, no parameters needed.

All set and ready to go:

ScriptedBuild

VersionApplied

You can download the script here.

Happy building (with VSO),

Y.



Errors resizing a SharePoint 2013 App Part (Client Web Part)


Since SharePoint Client Web Parts are simply iframes, resizing them from within your app logic (i.e. App.js) can be a little challenging. Fortunately, Microsoft provides a way to do this dynamically using postMessage. More info on this can be found on MSDN.

Basically, how this works is: you send a special 'resize' message to your iframe's parent (the SharePoint web containing the App Part). Based on this message, the iframe is resized according to the supplied dimensions:

window.parent.postMessage("<message senderId={your ID}>resize(120, 300)</message>", this.location.hostname);
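For illustration, here is a slightly fuller sketch of how this could be wired up from App.js. It assumes the sender ID arrives on the app part page's query string as "SenderId" (how it is passed depends on the Src URL configured in your client web part element, so treat the parameter name as an assumption):

// Hypothetical helper: ask the host page to resize this app part.
// Assumes the page URL contains a "SenderId" query string parameter.
function resizeAppPart(width: number, height: number): void {
    const senderId = new URLSearchParams(window.location.search).get("SenderId") || "";
    const message = "<message senderId=" + senderId + ">" +
        "resize(" + width + ", " + height + ")</message>";

    // "*" keeps the sketch simple; restrict the target origin in real code.
    window.parent.postMessage(message, "*");
}

// Example: grow the app part once our content has rendered.
resizeAppPart(300, 600);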

This only works if your Client Web Part has the Web Part Title enabled (via the Client Web Part Chrome settings). If it is disabled, the resize function breaks because it cannot find the Web Part's title DIV:


SCRIPT5007: Unable to get property ‘style’ of undefined or null reference

The code throwing the error:

if (resizeWidth)
{
    document.getElementById(webPartDivId + '_ChromeTitle').style.cssText = widthCssText;
    cssText = 'width:100% !important;'
}

Unfortunately, there is not much we can do about this. If you really need the App Part title gone and still want dynamic resizing, my choice would be to hide the title using CSS. Meanwhile, we wait for the fix.


Application Lifecycle Management – Improving Quality in SharePoint Solutions


Introduction

“Application Lifecycle Management (ALM) is a continuous process of managing the life of an application through governance, development and maintenance. ALM is the marriage of business management to software engineering made possible by tools that facilitate and integrate requirements management, architecture, coding, testing, tracking, and release management.”

In this and future blog posts we will look at how ALM and the tools that Microsoft provides support us in ensuring high-quality solutions. Specifically, we explore a few different types of testing and how they relate to our SharePoint solutions.

  • Manual Tests (this post)
  • Load Tests
  • Code Review/ Code Analysis
  • Unit Tests
  • Coded UI Tests

To get things straight: I like testing. I think it is by far the best (academic) method to prove you did things right. And the best part: it works even before the UATs start!

This post is not meant to be exhaustive, nor used as the perfect recipe for integrating testing in your product's lifecycle. It is aimed at getting you started with some of the testing basics and how to set them up for your SharePoint projects. Whether you start a project with the design of your tests (yes, those need designing too) or use them to sign off your project is of course completely up to you.

Note: it is perfectly feasible to have different tests target the same area. For instance, a load test might show memory degradation over time because of memory leaks in your custom solution. A sanity test could warn you about the same issue, but does so by analyzing your custom code and looking for incorrect disposal of objects.

The Visual Studio 2012 Start Screen contains a lot of how-to videos related to testing, so make sure you check those out too!


Core Concepts

Before we start, let’s look at some of the core concepts.

Tests are part of a test plan.

Sounds pretty simple, right? You cannot start implementing any kind of realistic testing unless you at least determine the following:

  • Goal: you need to set the bar at a certain level. Even if you are just going to test a small part of the product's functionality, state so in your test plan goals.
  • Exceptions: highlight areas or components that are not part of your tests. Examples are external line-of-business systems, interfaces and third-party assemblies. Or make it an opt-in, so something along the lines of "anything not addressed here is not part of the test…".
  • Tests covered: what types of tests do you cover (and not what do your tests cover)?
  • Software used: list the tools you need to implement and execute your test plan.
  • Test data: describe your test dataset if you need one.
  • Test strategy: how and in what order are we going to execute these tests? Also describe how we are going to report on the findings and results. Answer questions like: what depth do we need and who are the actors in our play?
  • Test environment: describe the required conditions needed for correct execution of the tests.

Tests need a design.

Every test, even the modest ones need some form of design:

  • Name: a descriptive name or title plus a brief description.
  • Link to a requirement: if possible, link a test to a requirement (or set of requirements), a user story, product backlog item or functional description.
  • Test data used: what is needed from your test data set to perform this test? Link or attach the items to your test case.
  • Measurement: what indicators do we use to determine a pass or a fail?
  • Expected result(s): for example, in case of a load test you would expect the environment to stay in the specified "Green Zone". In case of a manual UI test every step could have its expected result:
  • Step 1: Login as user x with password y.
  • Expected result: Intranet landing page should render without errors.
  • Step 2: Navigate to news by clicking the Global Navigation Menu Item labeled “News”.
  • Expected result: News overview page should render, with a list of articles ordered by date descending.
  • Pass/fail criteria

Manual Tests

Simple but powerful. These types of tests are easy to set up (as in non-complex), but usually require a decent amount of effort to work out. My advice: keep them simple by design, with small, descriptive steps. You could administer the different tests using Microsoft Excel, but of course Microsoft offers tooling for this.

Microsoft Test Manager allows you to plan, manage, design and execute manual tests. It can also be used to run exploratory test sessions and automated tests, directly from a test plan. Connected to TFS (or Team Foundation Service) it enables the logging of bugs and can provide trace information and snapshot capabilities. It enables the tester to provide as much valuable information using recorded actions and a commenting system.

Microsoft Test Manager requires one of the following Visual Studio editions: Visual Studio Ultimate, Visual Studio Premium or Visual Studio Test Professional.

More information: http://msdn.microsoft.com/en-us/library/jj635157.aspx

To get you started using Microsoft Test Manager, check the hyperlink mentioned above or follow these steps.

  1. Fire up Test Manager, either from a Visual Studio test case or directly from the program group.
  2. Connect it to Team Foundation Server or Team Foundation Service. For more information about TFS and Team Foundation Service, see previous blog posts:
  3. Create a test plan (see “Core Concepts”)

Your test plan should contain at least one Test Suite that will hold the actual tests to be run. You can add Test Suites directly by hand or automatically by adding requirements to your plan. As a bonus, a default static Test Suite is created automatically for you but you can also nest Test Suites if you like.

  4. Add Test Suites by adding requirements to your plan. The query by default shows all items from the "Requirements" category and thus includes bugs, backlog items, etc.
  5. Finally, we can start adding Test Cases to our Test Suite. If you already have Test Cases set up through Visual Studio or TFS web, you can use the query-based "Add" method. There is also an option to create them directly from Microsoft Test Manager through the "New" button:


These Test Cases are stored in TFS and automatically linked to the Product Backlog Item. You can also attach or link additional items or documents for the Tester to use.

The actual test run is also performed from Microsoft Test Manager. It shows the tester the steps and expected outcome using a “split-screen”:


The real fun starts when testers provide feedback whenever a step fails. From the results of this test, a bug can be created and stored in TFS. This bug then contains the steps performed and any extra information (comment, video, screenshot, etc.) the tester provided.


More posts to come :)

Update: Part two is ready, so now it is officially a series (albeit a small one).

  • Load Tests
  • Code Review/ Code Analysis
  • Unit Tests
  • Coded UI Tests

/Y.


SharePoint 2013 Solution Type Diagram


WhatToBuild.png

Update May 16, 2014:
A recent announcement by the Office 365 team just made our architectural decisions for SharePoint Apps a little bit easier. Although the program was in Preview and you couldn’t submit Autohosted Apps in the Office Store, it was still an option in some (or rare) cases. Not anymore: the Office 365 Autohosted Apps Preview program will end on June 30, 2014. While current Apps will not be affected for now, this path is no longer an option for new projects. For more information, see the: Office Blog.

I for one am glad to finally see some clear info on this and am curious to see how this evolves. Like the Office 365 team states in their post, I liked the goal of Autohosted apps: a hassle-free development experience for this type of solution.
End update May 16, 2014.

A couple of years ago I wrote this post about the Solution Type in SharePoint 2010. Since the introduction of SharePoint 2013 we now have the new App Model to consider so I thought it was time to create an updated version.

I use the Solution Type Diagram in my day-to-day decisions to get an idea of what kind of solution I would like to architect, or at least start out with. Remember, as with every SharePoint project, circumstances may differ and it always depends :). So this is my version and you probably need to adapt it a little to suit your needs and your customer's needs (your mileage may vary).

Decision.png

Let me walk through the considerations on this fairly simple chart:

Decision: “Just Artifacts?”

We all probably know by now that Microsoft has deprecated the Sandbox Solution model. But as MSDN states, this is true only for the use of custom managed code within the sandbox solution. If you just need to deploy SharePoint artifacts like lists, content types, images or JavaScript code, Sandbox solutions are still a viable option and in fact not deprecated. More information on this subject can be found on MSDN:

http://msdn.microsoft.com/en-us/library/jj163114.aspx

So if we can stick with just declarative markup for deploying our SharePoint artifacts, the Sandbox solution is usually the fastest way to go. In all other cases it is not and we need to move further down the Solution Type diagram.

Decision: “CAM compatible?”

Compatible with the Cloud Application Model (or App Model), as Microsoft likes to call it. First, look at the technical side of the question: can we deliver and meet the requirements with App Model components? That means no legacy components like Timer Jobs, Application Pages, etc. If we do need those, we could opt to separate the legacy bits and try to come up with a hybrid solution. But we also look at it at a more conceptual level. The App Model is pretty much intended for on-demand installation, like picking an app from the store and having an end user install it him or herself. So if you have an intranet branding solution complete with My Site branding, an app might just not be the right delivery model. But if you have a collection of Web Parts, an app is probably the right way to go.

My bottom line: Apps are intended for End Users, Full Trust and Sandbox solutions are for Administrators.

Since the App Model is new and still sort of version 1.0, there are a lot of caveats. Most of these are technical, like the issues with apps published through Microsoft UAG, or apps reading or writing lists on an anonymous Office 365 public-facing internet site.

Decision: “Office 365?”

Do we target Office 365 or SharePoint Online? And I mean SharePoint Online or Office 365 multi-tenant (not the Office 365 dedicated version). I tend to look a little bit further down the road with this one: you might have customers asking for an on-premises solution, but if you know the customer is considering or even evaluating Office 365, you might want to answer yes to this question. You don't want to design a solution that could potentially block a customer from migrating to the cloud or implementing a hybrid environment.

The next couple of decisions are when we actually are compatible with the App model.

Decision: “Workflow, scheduled tasks?”

If we are just using HTML, CSS, JavaScript, local SharePoint lists and external data through REST a SharePoint Hosted App is probably what we are looking for. Remember, no Server Side Custom Code is available! If we do need things like custom workflow (or workflow behavior), scheduled tasks or remote/ app event receivers SharePoint Hosted is not an option and we need to move further down the diagram.

Decision: “Office 365?”

Again we look if we need to target SharePoint Online or Office 365. If we don’t, Auto Hosted is not an option and we need to go with a Provider Hosted app.

Decision: “Sell through the store?”

Do we need to sell through the Microsoft Store? This option is currently only available for SharePoint Hosted and Provider Hosted Apps. Auto Hosted Apps are not permitted in the store (yet). In that case you can of course opt to license, sell and distribute the app yourself.

So these decisions should eventually get you to a solution type:

  • Sandbox Solution (no-code sandbox solution): still a viable option and O365 compatible.
  • Sandbox Solution (with custom managed code): deprecated, use with caution.
  • Full Trust Solution: avoid if possible, since it blocks customers from moving to the cloud.
  • Apps: the preferred solution model, but watch out for technical and architectural caveats.

And while you are at it, remember to check the availability of your choice if you are targeting Office 365 or SharePoint Online. Luckily there is not much diversity there, although it never hurts to check whether this has changed. See TechNet for more information: http://technet.microsoft.com/en-us/library/5e1ee081-cab8-4c1b-9783-21c38ddcb8b0

Developer features                       | O365 Small Business | O365 Midsize Business | O365 Enterprise E*, Education A*, Government G*
App Deployment: Autohosted Apps          | Yes                 | Yes                   | Yes
App Deployment: Cloud-Hosted Apps        | Yes                 | Yes                   | Yes
App Deployment: SharePoint-Hosted Apps   | Yes                 | Yes                   | Yes
Full-Trust Solutions                     | No                  | No                    | No
REST API                                 | Yes                 | Yes                   | Yes
Sandboxed Solutions                      | Yes                 | Yes                   | Yes

/Y.


Trigger a Microsoft Flow from your own app


If this then Microsoft Flow

Most of us know about IFTTT. And for those who do not: it stands for If This, Then That and is basically a cloud based workflow engine that lets you wire up different backend systems and apps and create channels and rules between them. Well, Flow is Microsoft’s take on IFTTT and is part of the new PowerApps and Flow offering. It aims at providing a Web environment for creating and managing Flows that can be used standalone or combined with the brand new PowerApps.

Microsoft Flow promises to be able to support almost any "time-consuming task or process" and currently supports 42 services. Yes, 42: the answer to life, the universe and everything!

Note: Microsoft Flow is not to be confused with Microsoft Azure Logic Apps. Although they do share technology, platform and some of the offered functionality, Logic Apps are more targeted at the developer covering complex scenarios and tight integration with Visual Studio, Azure Resource Manager and the Azure Portal.  For more information on Logic Apps, see the service description: https://azure.microsoft.com/en-us/services/logic-apps/

Be warned: this stuff is awesome, but also new; the paint is not even dry. No guarantees :)

Because it is still in preview, the offering is far from complete. But if we look at the connections already available today, it already is a compelling tool. Although you can create flows that act on other events (a Facebook post, a Twitter mention) or be triggered from a PowerApp, there is currently no way to fire a flow on demand, say from an external website or your own Single Page App. This is unlike IFTTT, which does have an option to create actions fired by HTTP requests.

I expect Flow to land this functionality in the future and render this guidance useless :), but for now I will share my findings.

Scenario

So I decided to try to make the following simplified scenario work:

  1. Phone user presses a button in my app
  2. This triggers a Microsoft Flow
  3. The Flow does “stuff”, maybe something like adding a Dynamics CRM Online opportunity

We simplify this a bit more and create a modest Angular 1.x app that triggers our Flow from an HTML input button. The Flow itself just sends out an email, but any of the available services should work. We can also send all kinds of payload data to the Flow, but I will save that for a future post.

While I was doing the research for a small proof of concept, I found this great topic on the Flow powerusers forum providing valuable insight on the inner workings of Flow. Make sure to check it out, although all the required info will be included in this post. https://powerusers.microsoft.com/t5/Flow-Forum/Example-Http-action/td-p/1105

Requirements, pre-requisites and setup

Flows run on Azure (they share technology with Azure Logic Apps and as far as I can tell they actually are Logic Apps), so we need a few things from Microsoft Azure.

  • An Azure Active Directory Tenant
  • A user account within the Azure Active Directory Tenant with a configured Microsoft Flow
  • A client application in this Azure Active Directory Tenant
  • Permission for this application to access the Azure Service Management APIs
  • An Authorization header token. According to the aforementioned forum topic, we should be able to use the same token that would be used to talk to the Azure Resource Manager endpoint.

Assuming we have a working Azure Active Directory in place, let’s start with creating the user account. In this case, a simple user account will work and the account only needs some basic permissions to be able to get the correct Authorization token. It is also this account that will be used to login to Microsoft Flow and create the flow we will trigger from the app.


We can now use the new Role Based Access Control system to allow this account to get the Azure Resource Manager token. In this case I added the account to the Reader role on my Azure subscription, although we could probably have set up a more fine-grained permission.


Next stop: creating our client App in the Azure Active Directory.

Azure Active Directory Client Application

From the Active Directory page, add an application and make sure it is of type "Native Client Application". You also need to supply a redirect URI for the OAuth2 response. Make sure you apply an applicable value; in my test case I use my local IIS Express web server.


The most important piece of information we need for our client application is the Client ID (in the form of a GUID), so make sure you take note of it.

Permissions

To be able to access the Service Management APIs, we need to set this permission on the "Configure" tab of our Client Application:


Token and Implicit Grant

In order to trigger the flow we call an endpoint (see the forum topic mentioned earlier). This endpoint expects the same token that would be used to talk to the Azure Resource Manager endpoint, so we can acquire the correct token from https://management.core.windows.net/

This is all undocumented, so if things go wrong you will have to dive in yourself to see what is wrong. To check how Flow works under the hood, I find it easiest to log in to PowerApps or Flow and use the Google Chrome Dev Tools to check out things like the required Authorization tokens and HTTP payload.

The Request Headers for instance will show you the Authorization info, which you can decode with tools like https://jwt.io


In this simple scenario (calling a Flow from an Angular Single Page App) I would like to leverage the Azure AD Sign On process to acquire this token. I do not care about refresh tokens and only need an access token. This type of authentication is supported by Azure AD although not enabled by default and is known by the name “oauth2AllowImplicitFlow”. You can change this setting by downloading and editing your application’s manifest.

  1. Go to your Application registered in Azure Active Directory and from the Dashboard click Manage Manifest/ Download Manifest;
  2. Open the file, look for the oauth2AllowImplicitFlow property and make sure to change it to "true" (see the excerpt below);
  3. You can then upload the file back to your application (Manage Manifest/ Upload Manifest) and save the changes.
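The relevant part of the edited manifest should look something like this (excerpt only, the file contains many more properties):

{
  "oauth2AllowImplicitFlow": true
}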

A lot of pieces to the puzzle, but this concludes the Azure part of our exercise. Next we need to create a simple Flow that we will call from our application.

Flow

We create a flow by logging in to https://flow.microsoft.com/. We need to use the same account we set up earlier.

  1. From the start screen we need to navigate to “My Flows” and select “Create from blank” to start creating the flow;
  2. For the trigger we select “PowerApps” as if we were going to trigger this flow from a PowerApp;
  3. The action can be anything, but for this exercise I will pick a generic “Mail – Send Email” action;
  4. Name the Flow and click “Create”.


The final piece of information we need now is the GUID identifying our Flow. We can get this GUID from the URL once we manage or edit our Flow.


Our Client App

Time to create our app! I provided the complete source code of the demo app at the end of this article.

Adal JS

The app uses the Active Directory Authentication Library for JavaScript (ADAL JS) for handling our authentication. Please see the GitHub project for more information: https://github.com/AzureAD/azure-activedirectory-library-for-js

The first thing we need to do is configure ADAL Authentication Service Provider with the correct settings. This is typically done during the app config phase:

app.config(['$httpProvider', 'adalAuthenticationServiceProvider', function ($httpProvider, adalAuthenticationServiceProvider) {
    adalAuthenticationServiceProvider.init(
        {
            cacheLocation: 'localStorage', // optional
            anonymousEndpoints: ['Scripts/Login'],
            clientId: 'your client ID from the Azure App',
            endpoints: {
                'https://management.core.windows.net/': 'https://management.core.windows.net/'
            }
        },
        $httpProvider // Http provider to inject request interceptor and attach tokens
    );
}]);

Couple of things to note here:

  • Make sure you use the client ID from the Azure App we registered earlier;
  • A couple of the settings are optional; they are only needed for this particular app, so you can ignore them;
  • We get our token from https://management.core.windows.net/, but we will send our trigger to a different endpoint. In this case the interceptor will not attach the tokens, because it does not match any of the endpoints. We will have to handle this in our code as we will see in a later snippet.

The trigger

The trigger itself is an HTTP POST to a predefined URL with the correct access token attached.

var resource = adalAuthenticationService.getResourceForEndpoint('https://management.core.windows.net/');
var tokenForFlow = adalAuthenticationService.acquireToken(resource);

$http({
    method: 'POST',
    headers: {
        'Authorization': 'Bearer ' + tokenForFlow.$$state.value
    },
    url: 'https://msmanaged-na.azure-apim.net/apim/logicflows/5cc77c51-9a87-4716-99d7-3aac38bb4839/triggers/manual/run?api-version=2015-02-01-preview'
}).then(function successCallback(response) {

}, function errorCallback(response) {

});

Again, couple of things to note:

  • We need to get the token by hand (with the help of adalAuthenticationService.acquireToken ) because the interceptor does not help us here;
  • The POST url contains the GUID of our Flow, so make sure you update this accordingly.

But before we can do any of this stuff, we need to sign in to Azure AD. Fortunately the ADAL JS library makes this extremely simple, we just need to call the login method!

vm.login = function () {
    adalAuthenticationService.login();
}

Run it

If we now run the app we need to login to Azure AD using the account we created earlier. If we then trigger our Flow, all should work as designed.


You can download the demo application here: Blog.Examples.DemoCallFlow.zip

/Y.


Speaking at the TechDays 2016


October 5th Jan Steenbeek and I will be speaking at the TechDays Netherlands event. Hope to see you there!

More information: Architectural Choices and Developer Comfort

Architectural Choices and Developer Comfort

The SharePoint and Office 365 ecosystem is changing rapidly. It seems that frameworks are on sale every other week and our tools get deprecated as we speak. A challenging time for all developers, not only for those targeting Office 365 and SharePoint solutions. Yuri and Jan take stock of the current state of affairs. We discuss key reasons to choose between SPAs, Office Add-ins, the major APIs, display templates, Microsoft Azure backends and other rewarding technologies. We demonstrate all the good stuff by building a full demo application intended to highlight interesting options for rapid application development on the SharePoint platform, utilizing Angular, Breeze.js, TypeScript, Office UI Fabric and even the brand new SharePoint Framework.


TechDays 2016 slidedeck and code available here


Jan Steenbeek and I just wrapped up our early session at the TechDays NL. You can find the slidedeck and code here: DoMiBo


Speaking at SharePoint Saturday Oslo


On October 22 Jan Steenbeek and I will be speaking at the SharePoint Saturday Oslo event. We hope to see you there!

Architectural Choices and Developer Comfort

The SharePoint and Office 365 ecosystem is changing rapidly. It seems that frameworks are on sale every other week and our tools get deprecated as we speak. A challenging time for all developers, not only for those targeting Office 365 and SharePoint solutions. Yuri and Jan take stock of the current state of affairs.

We discuss key reasons to choose between SPAs, Office Add-ins, the major APIs, display templates, Microsoft Azure backends and other rewarding technologies. We demonstrate all the good stuff by building a full demo application intended to highlight interesting options for rapid application development on the SharePoint platform, utilizing Angular, Breeze.js, TypeScript, Office UI Fabric and even the brand new SharePoint Framework.



SPS Oslo 2016 Slidedeck and code available here

Login to Umbraco BackOffice using IdentityServer4


This post will work through the details in setting up IdentityServer4 and Umbraco to enable the OWIN Identity features of the Umbraco BackOffice.


Disclaimer: I have been working with content management systems for a very long time (Microsoft Content Management Server anyone 😊), but Umbraco was pretty new to me. These blog posts are my personal notes and reminders, but also shared for everyone to read. If you spot any errors or have feedback that will help me or others, please feel free to comment!

Goal: Login to Umbraco BackOffice using IdentityServer4 (or any other OpenID Connect or OAuth 2.0 Authentication Service).

My environment:

  • IdentityServer4 (1.3.1);
  • Umbraco 7 (7.5.11);
  • MSSQL databases for both Umbraco and IdentityServer4

Also in the mix:

IdentityServer4 is designed for flexibility, and part of that is allowing you to use any database you want for your users and their profile data and passwords. Since I want to show you how we can extend the Umbraco BackOffice by working with roles and claims, I chose to start with ASP.NET Core Identity as the user store. The people from IdentityServer4 have provided excellent documentation on how to set this up, so I am not going to repeat the obvious parts:

https://identityserver4.readthedocs.io/en/release/quickstarts/6_aspnet_identity.html

High-level steps:

  1. Setup (install) IdentityServer4 through Nuget in Visual Studio;
  2. Follow the Quick Start mentioned above;
  3. Configure IdentityServer4 by adding Clients and Identity Resources;
  4. Configure Kestrel (ports);
  5. Seed Users (and roles, claims) for test/ development purposes;
  6. Setup Umbraco;
  7. Configure Umbraco BackOffice to support an external Identity Provider;
  8. Transform Claims to support Auto Link feature.

TL;DR version:

  1. Download the source code;
  2. Open in Visual Studio;
  3. Restore packages;
  4. Hit F5.

Setup IdentityServer

The first part is pretty easy and is covered by the IdentityServer4 documentation. Just a couple of things to keep in mind for our setup:

  • Follow the steps described here: https://identityserver4.readthedocs.io/en/release/quickstarts/6_aspnet_identity.html
  • We use the Visual Studio template for ASP.NET Identity and several Nuget packages for IdentityServer4. They provide us with a lot of code we need for the ASP.NET Identity implementation. They also contain MVC Views and Controllers for application logic (Account Login, Registration, etc.). Make sure you review your actual requirements before taking this solution to production;
  • The following Nuget packages are needed: IdentityServer4, IdentityServer4.AspNetIdentity;
  • You can follow all the steps from the mentioned documentation/ quickstart. We will configure our specific client needs in the next steps;
  • The template creates a default connection string. Update the appsettings.json to point to the database of your choice (see the excerpt below). You need to do this before creating the user database with "dotnet ef database update".
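A minimal excerpt of what that could look like (the connection string value itself is just a local example, adjust it to your own environment):

{
  "ConnectionStrings": {
    "DefaultConnection": "Server=(localdb)\\MSSQLLocalDB;Database=IdentityServer4Users;Trusted_Connection=True;MultipleActiveResultSets=true"
  }
}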

Configure IdentityServer

After finishing the initial setup we need to configure the IdentityServer.

  • Configure Clients;
  • Configure Identity Resources;
  • Configure Service Startup.

Configure Clients

For the purpose of this demo we create our clients through code. There is also documentation on the IdentityServer4 project site that enables configuration through Entity Framework databases. Check the links at the end of this article for more information.

We start with a separate class file to store our configuration. This file should contain the following client config:

public static IEnumerable<Client> GetClients()
        {
            return new List<Client>
            {
                new Client
                {
                    ClientId = "u-client-bo",
                    ClientSecrets = new List<Secret>
                    {
                        new Secret("secret".Sha256()),
                    },
                    ClientName = "Umbraco Client",
                    AllowedGrantTypes = GrantTypes.Hybrid,
                    RequireConsent = false,
                    RedirectUris           = { "http://localhost:22673/Umbraco" },
                    PostLogoutRedirectUris = { "http://localhost:22673/Umbraco" },
                    AllowedScopes =
                    {
                        IdentityServerConstants.StandardScopes.OpenId,
                        IdentityServerConstants.StandardScopes.Profile,
                        IdentityServerConstants.StandardScopes.Email,
                        "application.profile",
                    },
                    AllowAccessTokensViaBrowser = true,
                    AlwaysIncludeUserClaimsInIdToken = false
                }
            };
        }

The important parts:

  • ClientId: this needs to be in sync with the Umbraco BackOffice client used later in this article.
  • AllowedGrantTypes: since this is an MVC server application, we can trust the client. If you want to keep the token away from the browser, you can use the "authorization code flow" or the "hybrid" flow.
  • Redirect- and PostLogoutRedirectUris: needed for the interactive part of the login flow.
  • AllowAccessTokensViaBrowser: if you need browser based clients accessing tokens.
  • AlwaysIncludeUserClaimsInIdToken: add additional claims to the ID token.
  • AllowedScopes: determines what Identity (and Resource) information we are allowed to access. The “application.profile” is a custom scope that includes all the roles/ claims we need for our Umbraco integration.

Configure IdentityResources

For the purpose of this demo we also create our identity resources through code. There is also documentation on the IdentityServer4 project site that enables configuration through Entity Framework databases. Check the links at the end of this article for more information. In this case we add the following code to the previously used Config class:

public static IEnumerable<IdentityResource> GetIdentityResources()
        {
            var customProfile = new IdentityResource(
                name: "application.profile",
                displayName: "Application profile",
                claimTypes: new[] { ClaimTypes.GivenName, ClaimTypes.Surname }
            );

            return new List<IdentityResource>
            {
                new IdentityResources.OpenId(),
                new IdentityResources.Profile(),
                new IdentityResources.Email(),
                customProfile
            };
        }

The important parts:

  • customProfile: this is our custom scope including custom claims (“role” and “permission”) and the claim(types) we need for the Umbraco Account Linking feature.
  • List of resources: contain the default resources + our custom profile.

Note: ASP.NET Identity does not support GivenName and Surname out of the box. There are several options to extend this and in this case I chose to store these values as Custom ASP.NET Identity User Claims. After seeding the users (next chapter), you will find these claims in the database table “AspNetUserClaims”. The Umbraco Auto Link External Account feature requires the claims GivenName, Surname and Email to contain values.

Next, we are going to use these parts and fire up IdentityServer.

Startup

ASP.NET Identity uses Entity Framework by default to get hold of the users and profile data. The connection to this store is the first thing we need to setup. The connection string is stored in the appsettings.json and you can update this to point to another database. If you already did this during the IdentityServer setup, you are good to go. If not, you can update the setting and perform another Entity Framework migration (dotnet ef database update) to create the tables.

Update the connection string:

 // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            // Add framework services.
            services.AddDbContext<ApplicationDbContext>(options =>
                options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));

Wire up ASP.NET Identity (should already be done by the Visual Studio template):


            services.AddIdentity<ApplicationUser, IdentityRole>()
                .AddEntityFrameworkStores<ApplicationDbContext>()
                .AddDefaultTokenProviders();

Configure IdentityServer:

services.AddIdentityServer()
                .AddTemporarySigningCredential()
                .AddInMemoryIdentityResources(Config.GetIdentityResources())
                .AddInMemoryClients(Config.GetClients())
                .AddAspNetIdentity<ApplicationUser>();

The important parts:

  • Adds the IdentityResources from the Config class created earlier.
  • Adds the Clients from the Config class created earlier.
  • Wire up ASP.NET Identity and IdentityServer

And update the Kestrel configuration in Program.cs:

public static void Main(string[] args)
        {
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseUrls("http://localhost:5000")
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseIISIntegration()
                .UseStartup<Startup>()
                .UseApplicationInsights()
                .Build();

            host.Run();
        }

Note: using SSL might be considered or recommended, although not implemented in the demo source code.

Note 2: In the source code you will find a section that adds ("seeds") several user accounts during service startup, so we can start using our configuration.

Note 3: If you want to smoke test your server setup, you can navigate to http://localhost:5000 and try to login with one of the seeded user accounts.

Setup Umbraco

Since we are going to work with Umbraco and Visual Studio, we need to set it up manually. Umbraco has the steps worked out on the official documentation website: https://our.umbraco.org/documentation/Getting-Started/Setup/Install/install-umbraco-with-nuget

Just follow the steps mentioned there, but note the following:

  • Use a blank web application.
  • Visual Studio 2017 also works, although not mentioned on the website.
  • After Nuget installation, F5 runs Umbraco and the setup starts.
  • Create an empty database for Umbraco CMS.
  • You can create a local user, but we will configure IdentityServer support next.

By default, Umbraco uses a SQL Server Compact Edition database, but I like to use a "real" one even for development. During the installation, you can customize the database info and provide a connection string.

Configure Umbraco BackOffice

Umbraco is built with ASP.NET and can easily be extended to support ASP.NET Identity. The first step is to install the required package: UmbracoCms.IdentityExtensions.

PM> Install-Package UmbracoCms.IdentityExtensions

This will install the basic files and classes we need. The next step is to install an identity provider we can use as an example to explore the options. This step is optional, but it will also install documentation we can use to build our own identity extension. You can choose from a couple of different providers; I chose Microsoft OAuth.

PM> Install-Package UmbracoCms.IdentityExtensions.Microsoft

Because we will be working with IdentityServer and OpenID Connect, we need to install the required package:

PM> Install-Package Microsoft.Owin.Security.OpenIdConnect

Configure OWIN startup class

The IdentityExtensions package deployed several files to the App_Start folder. The main file to look for is UmbracoCustomOwinStartup.cs. This file allows us to configure OWIN for Umbraco and open up the BackOffice to third-party identity providers (in our case IdentityServer).

The file contains plenty of comments to make sure you understand all the options. For our purposes we need the following:

 var identityOptions = new OpenIdConnectAuthenticationOptions
            {
                ClientId = "u-client-bo",
                SignInAsAuthenticationType = Constants.Security.BackOfficeExternalAuthenticationType,
                Authority = "http://localhost:5000",
                RedirectUri = "http://localhost:22673/Umbraco",
                ResponseType = "code id_token token",
                Scope = "openid profile application.profile",
                PostLogoutRedirectUri = "http://localhost:22673/Umbraco"
            };

            // Configure BackOffice Account Link button and style
            identityOptions.ForUmbracoBackOffice("btn-microsoft", "fa-windows");
            identityOptions.Caption = "OpenId Connect";

            // Fix Authentication Type
            identityOptions.AuthenticationType = "http://localhost:5000";

            // Configure AutoLinking
            identityOptions.SetExternalSignInAutoLinkOptions(
                new ExternalSignInAutoLinkOptions(autoLinkExternalAccount: true));

            identityOptions.Notifications = new OpenIdConnectAuthenticationNotifications
            {
                SecurityTokenValidated = ClaimsTransformer.GenerateUserIdentityAsync
            };

            app.UseOpenIdConnectAuthentication(identityOptions);

There is a lot going on here:

  • identityOptions: this needs to match with our IdentityServer Clients Config. If you mess this up, IdentityServer will provide error messages if configured correctly (see “Running in Kestrel” at the end of this article);
  • ForUmbracoBackOffice: this sets classes and style for the link/connect buttons in the Umbraco BackOffice UI;
  • SetExternalSignInAutoLinkOptions: auto link Umbraco and External account;
  • Notifications: we need to transform some of the claims to enable AutoLink. This is done using a ClaimsTransformer, see next topic.

Transform Claims

ASP.NET Identity by default does things a little bit differently: the "name" claim contains the email address and the email address claim is empty. But we need a regular email claim (and a couple of others) to enable Umbraco Auto Link, so we construct them ourselves.

Create a separate class file for the transform: ClaimsTransformer.

public class ClaimsTransformer
    {
        public static async Task GenerateUserIdentityAsync(
            SecurityTokenValidatedNotification<OpenIdConnectMessage, OpenIdConnectAuthenticationOptions> notification)
        {
            var identityUser = new ClaimsIdentity(
                notification.AuthenticationTicket.Identity.Claims,
                notification.AuthenticationTicket.Identity.AuthenticationType,
                ClaimTypes.Name,
                ClaimTypes.Role);

            var newIdentityUser = new ClaimsIdentity(identityUser.AuthenticationType,
                ClaimTypes.GivenName, ClaimTypes.Role);

            newIdentityUser.AddClaim(identityUser.FindFirst(ClaimTypes.NameIdentifier));
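
            // A minimal sketch of the email claim construction described in the
            // surrounding text (assuming, as noted above, that ASP.NET Identity
            // delivers the email address in the "name" claim):
            var emailClaim = identityUser.FindFirst(ClaimTypes.Name);
            if (emailClaim != null)
            {
                newIdentityUser.AddClaim(new Claim(ClaimTypes.Email, emailClaim.Value));
            }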

This method takes care of constructing the new ClaimsIdentity containing the first two required claims:

  • NameIdentifier
  • EmailAddress

But we need a couple more claims:

  • GivenName
  • Surname

These claims are available through the UserInfo Endpoint (see http://docs.identityserver.io/en/release/endpoints/userinfo.html for more information). In order to use the UserInfoClient, we need to install the required Nuget Package:

PM> Install-Package IdentityModel

//Optionally add other claims
            var userInfoClient = new UserInfoClient(
                new Uri(notification.Options.Authority + "/connect/userinfo").ToString());

            var userInfo = await userInfoClient.GetAsync(notification.ProtocolMessage.AccessToken);
            newIdentityUser.AddClaims(userInfo.Claims.Select(t => new Claim(t.Type, t.Value)));

            notification.AuthenticationTicket = new AuthenticationTicket(newIdentityUser,
                notification.AuthenticationTicket.Properties);

Pffff finally, we should now have everything in place to correctly integrate both platforms! Only one more step needed 😉

OWIN Startup

To have the Umbraco BackOffice pick up our custom OWIN startup class, we edit the web.config:

UmbracoBO4

Change the appSetting value in the web.config called “owin:appStartup” to be “UmbracoCustomOwinStartup”.
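The resulting appSetting should look like this:

<add key="owin:appStartup" value="UmbracoCustomOwinStartup" />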

That’s it, time to test!

Test Umbraco BackOffice Login

This part is the fun part. Just build and run both projects and you should  be presented with Umbraco BackOffice at “/Umbraco”:

UmbracoBO1

If all worked out, you should see the third party login button (perfectly styled of course). Go for the “Sign in with OpenId Connect” and you will be redirected to the IdentityServer.

UmbracoBO2

Just sign in and you should be directed back to the Umbraco BackOffice! And if you click your profile button, you will see the linked account options (basically just one: Unlink):

UmbracoBO3

That's it! Source code after the jump.

Share and Enjoy!

/Y.


Getting started with SonarQube and TypeScript



Update

The source code with this post was updated to reflect the new SonarTS version 1.2 and SonarQube version 6.7. For more information on how to extend the basic scenario with code coverage, see this post: Better together: SonarQube, TypeScript and Code Coverage

SonarSource recently released an official first version of a static code analyzer for TypeScript. So if you want to get up and running with SonarQube and TypeScript, you now have an easy way to do this.

The supported scenarios are:

  • Run SonarTS as a TSLint extension
  • Run SonarTS as a SonarQube plugin

The first is the easiest and probably the best way to get started. The second scenario enables analysis of TypeScript files during builds.

First the links:

TSLint

TypeScript developers usually check their code for errors and maintainability using a TypeScript linter called TSLint (https://github.com/palantir/tslint). This linter can be extended with SonarTS, so let's get this to work. In this case we install TSLint and TypeScript locally in our project and generate a default tslint.json configuration file.

npm install tslint typescript --save-dev
# or globally with: npm install tslint typescript -g
tslint --init

Next step, we install SonarTS

npm install tslint-sonarts --save-dev

And add the SonarTS rules to the tslint.json configuration file:

{
    "defaultSeverity": "error",
    "extends": [
        "tslint:recommended",
        "tslint-sonarts"
    ],
    "jsRules": {},
    "rules": {

    },
    "rulesDirectory": []
}

And finally we need to add two additional files to complete our setup. The first is "sonar-project.properties":

sonar.projectKey=Demo:sonar-ts-demo
sonar.projectName=Sonar TS Demo
sonar.projectVersion=1.0
sonar.sourceEncoding=UTF-8
sonar.sources=src
sonar.exclusions=**/node_modules/**,**/*.spec.ts
sonar.tests=src
sonar.test.inclusions=**/*.spec.ts
sonar.ts.tslintconfigpath=tslint.json
sonar.ts.lcov.reportpath=test-results/coverage/coverage.lcov

And the second file, tsconfig.json, to configure our TypeScript root and compiler instructions:

{
    "include": [
        "src/**/*"
    ],
    "exclude": [
        "node_modules",
        "**/*.spec.ts"
    ]
}

When we run TSLint, we now also receive SonarTS analysis feedback:

tslint --type-check --project tsconfig.json -c tslint.json 'src/**/*.ts'
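For convenience you could also wire this command up as an npm script, so it can be run with "npm run lint" (a minimal package.json excerpt based on the command above):

{
  "scripts": {
    "lint": "tslint --type-check --project tsconfig.json -c tslint.json 'src/**/*.ts'"
  }
}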

Rules

The initial release contains two profiles and 70+ rules. Rules can easily be disabled/ enabled using the tslint.json or the tslint-sonarts.json:

{
    "rulesDirectory": [],
    "rules": {
      "no-collection-size-mischeck": true,
      "no-all-duplicated-branches": true,
      "no-duplicated-branches": true,
      "no-empty-destructuring": true,
      "no-use-of-empty-return-value": true,
      "no-identical-conditions": true,
      "no-identical-expressions": true,
      "no-useless-increment": true,
      "no-ignored-return": true,
      "no-self-assignment": true,
      "no-variable-usage-before-declaration": true,
      "no-misspelled-operator": false,
      "no-inconsistent-return": false,
      "no-unconditional-jump": true,
      "no-misleading-array-reverse": true,
      "no-empty-nested-blocks": false,
      "no-multiline-string-literals": true,
      "no-array-delete": true,
      "no-dead-store": true
    }
  }

SonarQube

If you are working with SonarQube Server, you probably want to work with SonarTS as a plugin. This enables static analysis and additional rules, metrics and code coverage import.
Since it is available as an official plugin, we can install it using the Update Center:


Next we need to configure our TypeScript application to run the sonar-scanner. SonarSource provides scanners for several project types:

  • MSBuild (local, VSTS, TFS)
  • Maven
  • Gradle
  • Ant
  • Jenkins
  • Command Line

Since we are running TypeScript only, the Command Line scanner is the obvious choice. If we add some third-party packages, we can integrate the analysis with our grunt, gulp or npm tasks. There are several npm packages that can help us out here, my favourite being sonarqube-scanner:
https://www.npmjs.com/package/sonarqube-scanner
npm install sonarqube-scanner --save-dev

This package depends on gulp and uses gulp tasks to run the scanner, so a simple gulp task is all we need:

var gulp = require('gulp');
var sonarqubeScanner = require('sonarqube-scanner');

gulp.task('sonar', function(callback) {
  sonarqubeScanner({
    serverUrl : "http://localhost:9000",
    options : {
    }
  }, callback);
});

Running gulp sonar starts the analysis and uploads the results to the SonarQube server. If all goes well you should see a lot of output messages. The important ones to look for:

  • INFO: Quality profile for ts: Sonar way
  • INFO: Sensor TypeScript Sensor
  • INFO: Running rule analysis for ../tsconfig.json with x files
  • INFO: Sensor TypeScript Sensor (done)
  • INFO: ANALYSIS SUCCESSFUL

And if you browse to the SonarQube dashboard, you should see the project and the results.


You can find the source code here: https://github.com/yuriburger/sonar-ts-demo

/Y.

Better together: SonarQube, TypeScript and Code Coverage



In a previous post we met SonarTS, the first official static code analyzer for TypeScript by SonarSource. That post focused on getting SonarQube and TypeScript up and running. Now we are ready to extend that scenario by adding code coverage metrics to our reports.

Couple of things we need to get this to work:

  • A unit test, of course, covering some (or all) of our custom code;
  • A generated coverage report;
  • Configure our scripts to upload the report to our SonarQube installation.

So let's start with a unit test. There are a few ways to achieve this (and a couple of frameworks that can help us), but I will not cover any options in detail here. In this case I will work with Karma as the test runner, Jasmine as the test framework, Google Chrome as the browser, and write the actual unit test in TypeScript too.

Jasmine: https://jasmine.github.io

Jasmine is our test framework. It allows us to describe our tests and run on any JavaScript enabled platform. Simply add it to our project using npm:

npm install jasmine-core --save-dev

And since we are going to write our tests using TypeScript, we need to install the correct typings:

npm install @types/jasmine --save-dev

Karma: https://karma-runner.github.io

Karma is our test runner and responsible for executing our code and unit tests using real browsers.

npm install karma --save-dev
npm install karma-cli --save-dev
npm install karma-jasmine --save-dev
npm install karma-chrome-launcher --save-dev

Karma-TypeScript: https://www.npmjs.com/package/karma-typescript

This enables us to write unit tests in Typescript with full type checking, seamlessly without extra build steps or scripts. As a bonus we get remapped test coverage with karma-coverage and Istanbul.

npm install karma-typescript --save-dev

Creating the test

We have a single component based on a “Hello World” service.

import { IHelloService } from "./hello.service.interface";

export class HelloComponent {
  constructor(private helloService: IHelloService) {}

  public sayHello(): string {
    return this.helloService.sayHello();
  }
}

And the unit test could look something like this:

import { HelloComponent } from "./hello.component";
import { IHelloService } from "./hello.service.interface";

class MockHelloService implements IHelloService {
  public sayHello(): string {

    return "Hello world!";
  }
}

describe("HelloComponent", () => {

  it("should say 'Hello world!'", () => {
    const mockHelloService = new MockHelloService();
    const helloComponent = new HelloComponent(mockHelloService);

    expect(helloComponent.sayHello()).toEqual("Hello world!");
  });
});

Running the unit test

To run the tests we need to configure Karma with some options:

module.exports = function(config) {

  config.set({
    frameworks: ["jasmine", "karma-typescript"],
    files: [
      { pattern: "src/**/*.ts" }
    ],
  preprocessors: {
    "**/*.ts": ["karma-typescript"]
  },
  karmaTypescriptConfig: {
    reports:
    {
      "lcovonly": {
        "directory": "coverage",
        "filename": "lcov.info",
        "subdirectory": "lcov"
      }
    }
  },
  reporters: ["dots","karma-typescript"],
  browsers: ["Chrome"],
  singleRun: true });
};

The important parts are:

  • Frameworks: both jasmine and karma-typescript. This supports our unit tests written in TypeScript;
  • karmaTypeScriptConfig: here we choose lcov as the default report format. You can enable other reporters too, but this is the format SonarQube understands by default. Please see this link for more reporters: https://www.npmjs.com/package/karma-typescript
  • reporters: "dots" renders "." as a visual cue while tests are running. "karma-typescript" is the reporter that actually produces the required output format.
  • browsers: Chrome, but you can pick other browsers if you like. It needs to be installed, of course.

Please note: To be able to remap the code to the coverage reports, source maps must be enabled (i.e. by setting the property through tsconfig.json).
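For example, in the tsconfig.json used earlier this just means adding a compilerOptions section with source maps switched on (excerpt):

{
  "compilerOptions": {
    "sourceMap": true
  },
  "include": [
    "src/**/*"
  ]
}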

When you now run the test, it should compile and automate a Chrome browser to execute the unit test. On completion it should generate a lcov.info report in the coverage folder.

Upload the results

If you have already set up SonarQube, you just need to make sure you have the LCOV location configured. If not, see this post or take a look at the sample project.

To specify the report location, add this line to the sonar-project.properties file:

sonar.typescript.lcov.reportPaths=coverage/lcov/lcov.info

When you now run the SonarQube scanner, it should analyze the source code and upload the results + the coverage file to SonarQube.


You can find the source code here: https://github.com/yuriburger/sonar-ts-demo

/Y.

Hooking up Umbraco and Azure Functions with webhooks



Umbraco allows you to setup post deployment webhooks and you can use these to perform all kinds of post deployment tasks (notifying community members, announcement on Slack channels, signaling a monitoring application, etc.). For this post we will trigger custom code in Azure that sends out a simple email notification, but that can easily be extended into more complex scenarios.
For more information on Umbraco Cloud Webhooks, see the official documentation:

https://our.umbraco.org/documentation/Umbraco-Cloud/Deployment/Deployment-Webhook

Good to know: you can configure a webhook per Umbraco Cloud Project environment….very convenient!

2

So there are only two things we need:
1. An Azure Function waiting for our webhook call;
2. A webhook url for an Umbraco Cloud Project environment.

Setting up Azure

Azure Functions is a serverless compute service that enables you to run code on-demand without having to explicitly provision or manage any infrastructure. You can use Azure Functions to run a script or piece of code in response to a variety of events. In our case we need to respond to a webhook sent by Umbraco Cloud upon deployment.

A detailed description of Azure Functions is available on Microsoft Docs: https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-first-azure-function

In short:
Create the Azure Function App by logging into the Azure portal and clicking the New button in the upper left-hand corner. Then select Compute > Function App.
Name the app, configure the required settings and hit the Create button to provision the Function App.

3

Next step: create a function within our new Function App.

Our function will work with the incoming request data from the Umbraco webhook call and send out an email using the SendGrid service. To make SendGrid work with our Async method, we need to switch to Azure Functions V2. The following blogpost provides more information on why and how we need to do this: https://blogs.technet.microsoft.com/livedevopsinjapan/2017/12/04/azure-functions-v2-async-sample-with-sendgrid

Switching is easy and can be done through the function app settings:

4

Now we are ready to create our function. You can add it through the UI and paste the required code, or start with the “Webhook + API” template and C#. Either way you should end up with the following code for run.csx:

#r "Newtonsoft.Json"
#r "SendGrid"

using System.Net;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Primitives;
using Newtonsoft.Json;
using SendGrid.Helpers.Mail;
using System.Text;

public static async Task<IActionResult> Run(HttpRequest req, IAsyncCollector<SendGridMessage> messages, TraceWriter log)
{
    log.Info("SendGrid message");

    using (StreamReader reader = new StreamReader(req.Body, Encoding.UTF8))
    {
        var body = await reader.ReadToEndAsync();
        var subject = String.Empty;

        // The Umbraco webhook payload contains the project name
        dynamic data = JsonConvert.DeserializeObject(body);
        subject = data?.ProjectName;

        var message = new SendGridMessage();

        message.AddTo("xyz@outlook.com");
        message.AddContent("text/html", body);
        message.SetFrom("xyz@domain.com");
        message.SetSubject(subject);

        // Queue the message on the SendGrid output binding
        await messages.AddAsync(message);
        return (ActionResult)new OkObjectResult("The E-mail has been sent.");
    }
}

And for function.json:

{
  "bindings": [
    {
      "authLevel": "function",
      "name": "req",
      "type": "httpTrigger",
      "direction": "in"
    },
    {
      "name": "$return",
      "type": "http",
      "direction": "out"
    },
    {
      "type": "sendGrid",
      "name": "messages",
      "apiKey": "SGKEY",
      "to": "xyz@outlook.com",
      "from": "xyz@domain.com",
      "subject": "New deployment!",
      "direction": "out"
    }
  ]
}

Tip: for continuous deployment scenarios you can set up a deployment source like GitHub, a local Git repo or even sources like Dropbox or OneDrive! See this link for more information: https://docs.microsoft.com/en-us/azure/azure-functions/functions-continuous-deployment

Before we head over to our Umbraco Cloud project, we need to take note of our function URL:

5

This URL includes our API key (the code parameter), which we need to be able to call our function.
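
With the URL in hand you could already smoke-test the function by posting a handcrafted payload. A minimal sketch (host, function name, key and payload values are placeholders; ProjectName is the field our run.csx reads):

curl -X POST \
 -H "Content-Type: application/json" \
 -d '{"ProjectName": "My Umbraco Project"}' \
 "https://<yourfunctionapp>.azurewebsites.net/api/<yourfunction>?code=<yourkey>"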

Configure webhook

Configuring the webhook is straightforward: just paste the function URL we noted earlier and add the webhook.

1a

And now we trigger the webhook by deploying a change to the Development environment. If all goes well we should see some logging in Azure and eventually receive the email containing our project info.

6

That is all you need to make this work. You can find the source code used in this post on GitHub: https://github.com/yuriburger/umbraco-webhook-demo

/Y.

JavaScript Unit Tests with Visual Studio Team Services


feature2

TL;DR: JavaScript Unit Testing with VSTS using real browsers.

  1. We would like to run JavaScript unit tests;
  2. And we prefer a real browser for this, so no PhantomJS or equivalent;
  3. And run our tests from a Visual Studio Team Services build;
  4. VSTS hosted build agents don’t have Chrome or Firefox installed;
  5. So things break;
  6. We fix this by providing a private build agent and our own Selenium standalone server;
  7. Using Docker of course. 😊

The Setup

So let’s start with a unit test that we will use to demonstrate our problem. There are a few ways to write unit tests (and a couple of frameworks that can help us), but I will not cover any options in detail here. In this case I will work with Karma as a test runner, Jasmine as the test framework, Google Chrome as the browser and write the actual unit test in TypeScript too.

Jasmine: https://jasmine.github.io
Jasmine is our test framework. It allows us to describe our tests and run them on any JavaScript-enabled platform. Simply add it to our project using npm:

npm install jasmine-core --save-dev

And since we are going to write our tests using TypeScript, we need to install the correct typings:

npm install @types/jasmine --save-dev

Karma: https://karma-runner.github.io
Karma is our test runner and responsible for executing our code and unit tests using real browsers.

npm install karma --save-dev
npm install karma-cli --save-dev
npm install karma-jasmine --save-dev
npm install karma-chrome-launcher --save-dev

Karma-TypeScript: https://www.npmjs.com/package/karma-typescript
This enables us to write unit tests in Typescript with full type checking, seamlessly without extra build steps or scripts. As a bonus we get remapped test coverage with karma-coverage and Istanbul.

npm install karma-typescript --save-dev

The Test

We have a single component based on a “Hello World” service.

import { IHelloService } from "./hello.service.interface";

export class HelloComponent {
  constructor(private helloService: IHelloService) {}

  public sayHello(): string {
    return this.helloService.sayHello();
  }
}

And the unit test looks something like this:

import { HelloComponent } from "./hello.component";
import { IHelloService } from "./hello.service.interface";

class MockHelloService implements IHelloService {
  public sayHello(): string {

    return "Hello world!";
  }
}

describe("HelloComponent", () => {

  it("should say 'Hello world!'", () => {
    const mockHelloService = new MockHelloService();
    const helloComponent = new HelloComponent(mockHelloService);

    expect(helloComponent.sayHello()).toEqual("Hello world!");
  });
});

The Error

Locally this works fine if you have Google Chrome installed and add the following karma config:

module.exports = function(config) {
    config.set({
        frameworks: ["jasmine", "karma-typescript"],
        files: [
            { pattern: "src/**/*.ts" }
        ],
        preprocessors: {
            "**/*.ts": ["karma-typescript"]
        },
        reporters: ["dots","karma-typescript"],
        browsers: ["Chrome"],
        singleRun: true
    });
};

If we run our Karma test locally, all is well and Chrome launches to run our unit test.

1

For this code sample, please see this link: https://github.com/yuriburger/karma-chrome-demo

The next step is to try this on VSTS, so let’s create a build. Our build is very simple and runs 3 “npm” tasks:

1a

  • Npm Install: the usual node modules installer;
  • Npm run build: just runs the TypeScript Compiler;
  • Npm run test: this will run our Karma test.
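
The last two steps assume npm scripts along these lines (a minimal sketch; the exact compiler and Karma invocations are assumptions, see the sample repo for the real configuration):

{
  "scripts": {
    "build": "tsc",
    "test": "karma start karma.conf.js"
  }
}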

If we save and queue this build, we will end up with an error basically telling us: “No Chrome for you”.

2a

The Fix

No Chrome on the Build Agent, so we need to jump through a couple of hoops. The idea is that we launch a Chrome browser on a remote server (i.e. not on the build agent) and let the remote server connect back to our Test Runner to perform the unit tests. There are some challenges with this approach:

  • Network connection. We cannot easily allow a remote server to connect back to a hosted build agent for a couple of reasons (being ports/firewalls/etc.);
  • We need a server implementing the WebDriver API to drive our browser automation;
  • And a way to launch a remote browser from our Karma test.

For the first two challenges we will bring our own infrastructure (a private build agent and a server hosting the WebDriver API). For the remote browser part, we can easily extend Karma to support this, so let’s start with that:

Karma-Webdriver-Launcher: https://www.npmjs.com/package/karma-webdriver-launcher
Remotely launch webdriver instance.

npm install karma-webdriver-launcher --save-dev

This wires up Karma and a remote Webdriver instance, basically an API to drive browser automation. The way this works, is that during the build (1) Karma starts a Test Runner Server (2) and instructs the launcher (3) to connect to a remote Webdriver instance (4) to fire up a browser (5) and connect back to the Karma Test Runner Server (6) instance to perform the tests.

2b

So the two remaining missing pieces are the build agent and the WebDriver API host. For both we will use Docker containers to avoid having to install any software manually. Microsoft provides an official image for the VSTS Agent on Docker Hub, and as for the WebDriver API host: Selenium provides a nice implementation, also available on Docker Hub.

If you are new to Docker make sure you have met all the requirements for running containers on your favorite platform. See https://docs.docker.com/engine/docker-overview/ for more information.

Since we need the two containers to communicate with each other, I like to create an isolated network first, to which we add our named containers:

docker network create vsts-net

We can then run our Selenium container on this network:

docker run \
 -d -p 4444:4444 \
 --shm-size=2g --name webdriver \
 --net vsts-net selenium/standalone-chrome

And eventually our private build agent:

docker run \
 -e VSTS_ACCOUNT=<accountname> \
 -e VSTS_TOKEN=<token> \
 -it --name agent \
 --net vsts-net microsoft/vsts-agent

Important parts:

  • Selenium runs on the default port 4444, but you can modify the mapping
  • The shm-size=2g switch is there to avoid Chrome crashes and uses the host's shared memory (see https://bugs.chromium.org/p/chromium/issues/detail?id=519952 for more information)
  • The VSTS container needs your VSTS accountname and an Access Token which you can create on visualstudio.com
  • The Selenium container is accessible using the name “webdriver” and the VSTS container using the name “agent”

The VSTS agent container automatically connects to your VSTS account and downloads the correct agent version. After this it registers itself so we can target our builds to this particular agent. Please note: when used with TFS, make sure you use an image that matches the installed TFS version.

3

4

Lastly we need to update the Karma configuration to enable the WebDriver launcher and add the required hostnames and ports. Remember that we declared our hostnames (webdriver and agent) when we started our docker containers.
karma.conf.js:

module.exports = function(config) {
    var webdriverConfig = {
        hostname: 'webdriver',
        port: 4444
    }

    config.set({
        hostname: 'agent',
        port: 9876,
        config: webdriverConfig,
        frameworks: ["jasmine", "karma-typescript"],
        files: [
            { pattern: "src/**/*.ts" }
        ],
        preprocessors: {
            "**/*.ts": ["karma-typescript"]
        },
        reporters: ["dots","karma-typescript"],
        browsers: ["ChromeSelenium"],
        customLaunchers: {
            ChromeSelenium: {
                base: 'WebDriver',
                config: webdriverConfig,
                browserName: 'ChromeSelenium',
                flags: []
            }
        },
        singleRun: true
    });
};

And if we now run our build from VSTS all tasks should complete nicely.

5

So eventually we had to start a private build agent and a server running Selenium. But by using Docker containers almost no effort was involved, and this setup can easily be moved to the cloud by leveraging Azure Container Services, Docker Cloud, etc.

For the source code used in this blogpost, see the following:

https://github.com/yuriburger/vsts-selenium-demo
https://github.com/yuriburger/karma-chrome-demo

/Y.

Managing External Identities in Umbraco BackOffice with PolicyServer


Feature

The authors of IdentityServer did a great job providing us with a framework for incorporating identity and access control logic in our apps and APIs. But they also warned us about misusing the IdentityServer software as an authorization/permission management system. So now they have created a new product called PolicyServer, and it is available as both an open source version and a commercial product. I decided to take PolicyServer for a spin, and what better way to do this than in conjunction with IdentityServer 😊

The described setup is basically an extension to this original post: Login to Umbraco BackOffice using IdentityServer

For real business scenarios: take a look at their commercial product (a big brother to the Open Source version of Policy Server) https://solliance.net/products/policyserver

Goal: Login to Umbraco BackOffice using IdentityServer and have PolicyServer define our roles in Umbraco CMS.

Setting the stage

Umbraco BackOffice allows users to log in using external Identity Providers. Upon successful authentication it will create a local user and (by default) add the user to the built-in “Editor” group. In our scenario we would like to maintain the user's roles separately and have these roles reflect the group membership in Umbraco BackOffice.

Umbraco supports this scenario by allowing us to extend the process of user creation (this process is known as AutoLink) with our own logic. So we would like to end up with the following process:

PolicyServer

  • The user accesses the Umbraco BackOffice and gets redirected to the Login page;
  • This login page shows a button for the external identity provider (in our case IdentityServer);
  • When an access token is received, Umbraco uses AutoLink to create this BackOffice User;
  • Directly after the AutoLink we fetch the roles from our PolicyServer API;
  • The API returns the information and our logic (“Enroll User”) takes care of the right group membership(s).

So, we need a couple of things: Umbraco 7, IdentityServer 4 and PolicyServer.Local

High-level steps:

  1. Setup (install) IdentityServer through Nuget in Visual Studio;
  2. Follow the Quick Start mentioned above and add the QuickStart UI;
  3. Run IdentityServer4 by adding our configuration;
  4. Setup (install) PolicyServer.Local through Nuget in Visual Studio;
  5. Add our application policy;
  6. Setup Umbraco;
  7. Configure Umbraco BackOffice to support an external Identity Provider;
  8. Extend the AutoLink process to enroll the logged in user.

Setup IdentityServer

The first part is pretty easy and well documented in the IdentityServer4 documentation. Just a couple of things to keep in mind for our setup:

  • Follow the steps described here: http://docs.identityserver.io/en/release/quickstarts/0_overview.html
  • We use the Visual Studio template for an empty ASP.NET Core Web Application and the Nuget package for IdentityServer4.
  • Additionally we add the Quickstart UI https://github.com/IdentityServer/IdentityServer4.Quickstart.UI that contains MVC Views and Controllers for application logic (Account Login, Logout, etc.). Make sure you review your actual requirements before taking this solution to production;
  • The following Nuget package is needed: IdentityServer4;
  • You can follow all the steps from the mentioned documentation/ quickstart. We will configure our specific client needs in the next steps;
  • We run the blogpost demo code on “InMemory” stores, needless to say this is not suitable for production.

Configure IdentityServer

After finishing the initial setup we need to configure the IdentityServer.

  • Configure clients;
  • Configure identity Resources;
  • Configure API Resources (our PolicyServer API);
  • Configure test users;
  • Configure service startup.

For the purpose of this post we create everything through code. There is also documentation on the IdentityServer4 project site that enables configuration through Entity Framework databases.

We start with separate class files to store all of our configuration. The files should contain the parts mentioned above, please see the GitHub repo for full source code:

public static IEnumerable<Client> GetClients()
{
    // Please see the code in the repo
}

public static IEnumerable<IdentityResource> GetIdentityResources()
{
    // Please see the code in the repo
}

public static IEnumerable<ApiResource> GetApiResources()
{
    // Please see the code in the repo
}

public static List<TestUser> GetUsers()
{
    // Please see the code in the repo
}

And finally the startup configuration:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddIdentityServer()
        .AddDeveloperSigningCredential()
        .AddInMemoryIdentityResources(Config.MyIdentityResources.GetIdentityResources())
        .AddInMemoryApiResources(Config.MyApiResources.GetApiResources())
        .AddInMemoryClients(Config.MyClients.GetClients())
        .AddTestUsers(Config.MyUsers.GetUsers());
}

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseStaticFiles();
    app.UseIdentityServer();
    app.UseMvcWithDefaultRoute();
}

If you now run this project, you should be able to access a couple of URLs to test the settings. Remember that you can override the port through the Kestrel Builder Options.
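
For reference, a minimal sketch of such an override in Program.cs (this assumes the standard ASP.NET Core 2.x WebHost builder from the quickstart template; port 5000 matches the authority URL our clients and APIs point to):

using System.Net;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            // Have Kestrel listen on port 5000, the authority used in the client configuration
            .UseKestrel(options => options.Listen(IPAddress.Loopback, 5000))
            .UseStartup<Startup>()
            .Build()
            .Run();
}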

Setting up our PolicyServer API

We use the Visual Studio template for an ASP.NET Core Web API Web Application and the Nuget packages PolicyServer.Local and IdentityServer4.AccessTokenValidation. The idea is, that we run the PolicyServer Client from our own “Policy API”. This might not be the ideal production scenario, but I think keeping it separate from IdentityServer is the right way to go.

See the http://policyserver.io site for what the authors of IdentityServer and PolicyServer have to say about the separation of authentication and authorization for a single application.

The first step is to add the required Nuget packages to our freshly created API application:

  • Install-Package PolicyServer.Local
  • Install-Package IdentityServer4.AccessTokenValidation

Configure Startup:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvcCore()
        .AddAuthorization()
        .AddJsonFormatters();

    // Load the PolicyServer policies
    services.AddPolicyServerClient(Configuration.GetSection("Policy"));

    // IdentityServer Access Token Validation
    services.AddAuthentication("Bearer")
        .AddIdentityServerAuthentication(options =>
        {
            options.Authority = "http://localhost:5000";
            options.RequireHttpsMetadata = false;
            options.ApiName = "application.policy";
        });
}

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseAuthentication();

    // This claims augmentation middleware maps the user's authorization data into claims
    app.UsePolicyServerClaimsTransformation();
    app.UseMvc();
}

There are many ways to integrate the PolicyServer client in your application. In this case I use the Claims Transformation that maps the user’s authorization data into claims. Other use cases are well documented in the PolicyServer documentation.

Now add a simple controller that has the Authorize attribute so we end up with a User Context and the augmented claims from the PolicyServer. Our default Get route will return a JsonResult containing our claims as roles:

[Route("api/[controller]")]
[Authorize]
public class PoliciesController : Controller
{
    // GET api/policies
    [HttpGet()]
    public IActionResult Get()
    {
        var roles = User.FindAll("role");
        if (roles == null)
            return BadRequest();

        var result = new JsonResult(from r in roles select new { r.Type, r.Value });
        if (result != null)
            return Ok(result);
        else
            return NotFound();
    }
}

Our appsettings.json should now contain our authorization policy:

{
  "Policy": {
    "roles": [
      {
        "name": "editor",
        "subjects": [ "1" ]
      },
      {
        "name": "administrator",
        "subjects": [ "2", "3" ]
      }
    ]
  }
}

Setup Umbraco

Setting up Umbraco is the final piece of the puzzle. There are detailed instructions on the Umbraco Docs website: https://our.umbraco.org/documentation/getting-started/setup/install/install-umbraco-with-nuget but we start with a new project based on an Empty ASP.NET Web Application (.NET Framework 4.6.1).
In addition we add the following Nuget packages:

  • UmbracoCms,
  • IdentityModel,
  • UmbracoCms.IdentityExtensions,
  • Microsoft.Owin.Security.OpenIdConnect

This should give us all the plumbing we need, and the first thing we need to do is hook up OWIN to enable the external Identity Provider for our BackOffice users.
UmbracoCustomOwinStartup.cs (located in App_Start):
var identityOptions = new OpenIdConnectAuthenticationOptions
{
    ClientId = "u-client-bo",
    SignInAsAuthenticationType = Constants.Security.BackOfficeExternalAuthenticationType,
    Authority = "http://localhost:5000",
    RedirectUri = "http://localhost:5003/umbraco",
    PostLogoutRedirectUri = "http://localhost:5003/umbraco",
    ResponseType = "code id_token token",
    Scope = "openid profile email application.profile application.policy"
};

// Configure BackOffice Account Link button and style
identityOptions.ForUmbracoBackOffice("btn-microsoft", "fa-windows");
identityOptions.Caption = "OpenId Connect";

// Fix Authentication Type
identityOptions.AuthenticationType = "http://localhost:5000";

// Configure AutoLinking
identityOptions.SetExternalSignInAutoLinkOptions(new ExternalSignInAutoLinkOptions(
    autoLinkExternalAccount: true,
    defaultUserGroups: null,
    defaultCulture: null
    ));

identityOptions.Notifications = new OpenIdConnectAuthenticationNotifications
{
    SecurityTokenValidated = EnrollUser.GenerateIdentityAsync
};

app.UseOpenIdConnectAuthentication(identityOptions);

The EnrollUser.GenerateIdentityAsync method contains all the code to transform the needed claims, get the roles from the PolicyServer API and eventually AutoLink the user. The GitHub repo contains all the code, but these are the important parts:

// Call PolicyServer API
var policyClient = new HttpClient();
policyClient.SetBearerToken(notification.ProtocolMessage.AccessToken);

// Get the Roles
var response = await policyClient.GetAsync(new Uri("http://localhost:5001/api/policies"));
if (!response.IsSuccessStatusCode)
{
    Console.WriteLine(response.StatusCode);
}
else
{
    var content = await response.Content.ReadAsStringAsync();
    var roles = JObject.Parse(content)["value"];

    // Pass roles result from PolicyServer
    if (roles != null)
        RegisterUserWithUmbracoRole(userId.Value, roles, notification.Options.Authority);
}

// If we find an administrator we need to update the Umbraco Role
var roleObject = roles.FirstOrDefault(r => r["value"] != null && r["value"].ToString() == "administrator");

if (roleObject == null)
    return;

// Add User to Admin Group
var userGroup = ToReadOnlyGroup(userService.GetUserGroupByAlias("admin"));

if (userGroup == null)
    return;

umbracoUser.AddGroup(userGroup);
userService.Save(umbracoUser);

OWIN Startup
To have Umbraco BackOffice pick up our custom OWIN startup class, we edit the web.config:

owin

Change the appSetting value in the web.config called “owin:appStartup” to be “UmbracoCustomOwinStartup”.
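
A minimal sketch of the relevant appSettings entry (the rest of the web.config stays as generated by the Umbraco package):

<appSettings>
  <add key="owin:appStartup" value="UmbracoCustomOwinStartup" />
</appSettings>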
That’s it, time to test!

So we head over to Umbraco BackOffice and go for the External Login:

Login

LoginIdS

After logging in, we can see Bob is indeed an Administrator with the Umbraco CMS…. way to go Bob!!

BackOffice

That’s it, please see the GitHub repo for more information.

/Y.


SonarQube Quality Gates and VSTS builds


Feature

SonarQube includes the concept of quality gates and these gates allow you to answer questions about the quality of the code being analyzed. They come as sets of Boolean conditions and can be based on the usual Sonar metrics including blocker issues, code coverage on new code, reliability rating, etc.

Here are the default settings:

DefaultMetrics

More info here: https://blog.sonarsource.com/quality-gates-shall-your-projects-pass

Based on the outcome you can, by default, notify users, but in most cases I would argue that you should display alerts on developer team dashboards or integrate the result with the release process. For C# (and most other builds going through MSBuild) Sonar provides a VSTS / TFS extension that performs several tasks, including publishing the Quality Gate status as part of the SonarQube Analysis report:

sq-analysis-report-passed

Unfortunately all this good stuff depends on MSBuild and we cannot use this for our JavaScript/ npm/ gulp or grunt builds.

This post combines the concepts found in two earlier posts and extends that scenario by adding quality gates in SonarQube and having Visual Studio Team Services reflect the outcome on the dashboard.

JavaScript Unit Tests with Visual Studio Team Services

Better together: SonarQube, TypeScript and Code Coverage

Setup

Our setup is a combination of the ones described in the earlier posts. We use three docker images to run our setup for this post:

  • VSTS Agent (microsoft/vsts-agent): this runs our build locally, using Selenium for the tests and uploading the results to SonarQube;
  • Selenium (selenium/standalone-chrome): runs our unit test using a real Chrome browser;
  • SonarQube (sonarqube): this container will analyze our code and coverage results.

We could create a Docker Compose file, but for the purpose of this demo we can work with the individual images. If you are new to Docker make sure you have met all the requirements for running containers on your favorite platform. See https://docs.docker.com/engine/docker-overview for more information.

Since we need all of the containers to be able to communicate with each other, I like to create an isolated network first, to which we add our named containers:

docker network create vsts-net

We can then run our Selenium container on this network:

docker run \
 -d -p 4444:4444 \
 --shm-size=2g --name webdriver \
 --net vsts-net selenium/standalone-chrome

And our private build agent:

docker run \
 -e VSTS_ACCOUNT=<accountname> \
 -e VSTS_TOKEN=<token> \
 -it --name agent \
 --net vsts-net microsoft/vsts-agent

And eventually our SonarQube instance:

docker run \
 -d -p 9000:9000 -p 9092:9092 \
 --name sonarqube \
 --net vsts-net sonarqube

Important parts:

  • Everything runs on the default ports, but you can change them as you like;
  • The VSTS container needs your VSTS accountname and an Access Token which you can create on visualstudio.com;

Running

Demo

For this blogpost we will use this demo project and have VSTS build it: https://github.com/yuriburger/quality-gate-demo

Important: in the sonar-project.properties file there is a sonar.login that you need to update with your own SonarQube User Token. See https://docs.sonarqube.org/display/SONAR/User+Token for more information.

Our build is simple as it just runs the available npm build steps in the correct order.

Build

  • Npm Install: the usual node modules installer;
  • Npm run build: just runs the TypeScript Compiler;
  • Npm run test: this will run our Karma test;
  • Npm run sonar: this will run the SonarQube analyzer and upload the results.

If we queue this build, hopefully everything works out and we have a successful build 😉

If we now navigate to SonarQube (on http://localhost:9000 if you did not change the port) we should see our project and since we have 100% code coverage in our demo project, the gate is green!
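
The quality gate status is also available through the SonarQube Web API, which is handy if you want to check it outside of the UI. A quick sketch (the project key is an assumption, use the key from your sonar-project.properties and add credentials if your instance requires them):

curl "http://localhost:9000/api/qualitygates/project_status?projectKey=quality-gate-demo"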

GatePass

Good job!

/Y.

A Quality Gate Dashboard Widget for VSTS


In a previous post we figured out how we could work with SonarQube Quality Gates in a JavaScript build on VSTS. For regular VSTS builds (based on MSBuild, that is) Sonar provides an excellent extension that enables several goodies, including a “Publish Quality Gate Result” build task. Unfortunately this task requires MSBuild to function correctly, and our usual client-side/ JavaScript/ gulp/ grunt or npm builds do not use MSBuild.

So I decided to create and share my own (and first) VSTS extension! Ok, it is a very simple widget and I consider it a working prototype 🙂

What it does:

It is a regular VSTS Dashboard Widget, available through the Marketplace: https://marketplace.visualstudio.com/items?itemName=yuriburgernet.qualitygatewidget

Marketplace

If you install it in your VSTS tenant, you can then add the widget to your dashboard:

Screen2

After this, you need to configure the widget with two parameters:

Screen3

If all goes well you should see your project status reflected in the widget:

Screen1

Known issues:

  • You need to provide the full URL for your SonarQube API. Example: https://localhost/api/qualitygates/project_status?projectKey=
  • SonarQube does not support HTTPS natively, so you need to set up a simple proxy for this. If you do not, your browser will block any mixed-protocol content you might want to serve.
  • SonarQube does not provide CORS support natively, so you need to set up the response header for this. For VSTS the header could be: Access-Control-Allow-Origin = https://<your account>.visualstudio.com

Any feedback is welcome, and if you want to peek at the code you will find the GitHub repo here: https://github.com/yuriburger/quality-gate-widget

/Y.

Angular 6 Chatbot Client


Robots

Azure Bot Service speeds up development by providing an integrated environment that’s purpose-built for bot development with the Microsoft Bot Framework connectors and BotBuilder SDKs.

The SDKs provide several ways for your bot to interact with people. Azure Bot Service can be integrated across multiple channels to increase interactions and reach more customers: your website or app, email, GroupMe, Facebook Messenger, Kik, Skype, Slack, Microsoft Teams, Telegram, text/SMS, Twilio, Cortana, and Skype for Business.

Another way for users to interact with your bot is through your own custom built client. In this post, we will see how to setup an Angular client and connect it to your Azure deployed Bot.

Create our bot

Let’s set up our Azure bot first. There are several ways to do this, but in this case I will create and deploy the bot from the Microsoft Bot Framework Developer Portal: https://dev.botframework.com.

Portal

If you click on “My bots” you will need to log in. It is best to use an account that also has some Azure credits or MSDN benefits associated with it.

After logging in, you can start creating your first bot 🙂 Click the “Create a bot” button and complete the configuration by accepting most of the defaults.

bot2

As I mentioned, bots have different ways to interact and we need to enable the desired channels. To do this, navigate to your bot either through the Bot Framework developer portal or directly with the Azure portal.

Click channels:

bot3

Your custom client can communicate with the bot through a REST API, but the preferred way is with the DirectLine client. This channel is not enabled by default, so we start with that:

bot4

From this page we need to take note of the Secret keys, as this is a required setting in our custom client.

When this is done, the final piece of information we require is the messaging endpoint. The Overview page shows this URL, which usually looks something like https://yourbotname.azurewebsites.net/api/messages

Now we can head over to our custom client.

There are no design requirements for working with the DirectLine API. Any JavaScript client should be able to work with this, but the nature of bot messaging suits Angular's “new” asynchronous programming concept very nicely. This Reactive Programming concept is based on data streams and is available to Angular through the RxJS library. More information on this topic is found in the docs: https://angular.io/guide/rx-library

The sample app in this post is built using RxJS. This concept can be a little hard to grasp at first, but the main parts will be explained here. I will not cover RxJS itself, so you might want to read up on that before running your own client.

The app is built upon three services: users, threads and messages. All these services rely on a specific model (a user, thread and message). During app setup, we hook up the message service to our bot with the help of the Direct Line client.

User Model and Service

The User Service is pretty basic. It exposes a currentUser stream which any part of our application can subscribe to and know who the current user is.

export class UsersService {
  currentUser: Subject<User> = new BehaviorSubject<User>(null);

  public setCurrentUser(newUser: User): void {
    this.currentUser.next(newUser);
  }
}

The user model that goes with this, is also pretty basic:

export class User {
  id: string;

  constructor(public name: string, public avatarSrc: string) {
    this.id = uuid();
  }
}

Thread Model and Service

Threads play an important role in our app as they are responsible for “threading” the message streams. In this case we only have one bot and thus one thread, but if we extend the scenario we could support multiple bots/ threads.

export class ThreadsService {
  threads: Observable<{ [key: string]: Thread }>;
  currentThread: Subject<Thread> = new BehaviorSubject<Thread>(new Thread());
  currentThreadMessages: Observable<Message[]>;

  constructor(public messagesService: MessagesService) {
    this.threads = messagesService.messages.pipe(
      map((messages: Message[]) => {
        const threads: { [key: string]: Thread } = {};
        // Store the message's thread in our accumulator threads
        messages.map((message: Message) => {
            // code omitted
            // see sample app on GitHub
        return threads;
      })
    );

    this.currentThreadMessages = combineLatest(
      this.currentThread,
      messagesService.messages,
      (currentThread: Thread, messages: Message[]) => {
        // code omitted
        // see sample app on GitHub
      }
    );
  }

  setCurrentThread(newThread: Thread): void {
    this.currentThread.next(newThread);
  }
}

This is RxJS in full action and you can see we expose some important streams:

currentThread: a Subject stream (read/write) that holds the currently selected thread.
currentThreadMessages: an Observable stream that contains the messages for the current thread.

The model is straightforward:

export class Thread {
  id: string;
  lastMessage: Message;
  name: string;
  avatarSrc: string;

  constructor(id?: string, name?: string, avatarSrc?: string) {
    this.id = id || uuid();
    this.name = name;
    this.avatarSrc = avatarSrc;
  }
}

Message Model and Service

This is the heart of the app as it handles all of our message concerns.

export class MessagesService {
  newMessages: Subject<Message> = new Subject<Message>();
  messages: Observable<Message[]>;
  updates: Subject<any> = new Subject<any>();
  create: Subject<Message> = new Subject<Message>();

  constructor() {
    this.messages = this.updates
      // Watch the updates and accumulate operations on the messages
      .pipe(
        // code omitted
        // see sample app on GitHub
      );

    // Takes a Message and then puts an operation (the inner function)
    // on the updates stream to add the Message to the list of messages.
    this.create
      .pipe(
        map(function(message: Message): IMessagesOperation {
          return (messages: Message[]) => {
            return messages.concat(message);
          };
        })
      )
      .subscribe(this.updates);

    this.newMessages.subscribe(this.create);
  }

  // Add message to stream
  addMessage(message: Message): void {
    this.newMessages.next(message);
  }

  messagesForThreadUser(thread: Thread, user: User): Observable<Message> {
    return this.newMessages.pipe(
      filter((message: Message) => {
        // code omitted
        // see sample app on GitHub
      })
    );
  }
}

There is a lot going on here, most of it related to RxJS. We now have code that adds new messages to the stream and code that exposes a stream with messages for a certain thread (messagesForThreadUser). This last stream is what actually feeds the Angular Chat Window component.

The model for our messages:

export class Message {
  id: string;
  sentAt: Date;
  isRead: boolean;
  author: User;
  text: string;
  thread: Thread;

  constructor(obj?: any) {
    this.id = (obj && obj.id) || uuid();
    this.isRead = (obj && obj.isRead) || false;
    this.sentAt = (obj && obj.sentAt) || new Date();
    this.author = (obj && obj.author) || null;
    this.text = (obj && obj.text) || null;
    this.thread = (obj && obj.thread) || null;
  }
}

To add the DirectLine client to our app, we can make use of the Direct Line client npm package. To install it:

npm install botframework-directlinejs --save

Please note: Although the demo app itself is written for RxJS 6, the botframework-directlinejs still requires rxjs-compat to be available.
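
Before the setup class can use it, the Direct Line client needs to be created with the secret we noted on the channel configuration page. A minimal sketch (the secret is a placeholder, and where exactly you construct the instance is up to you):

import { DirectLine } from 'botframework-directlinejs';

// The secret comes from the Direct Line channel configuration in the Azure portal
const directLine = new DirectLine({
  secret: '<your Direct Line secret>'
});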

With a setup class we configure the app:

export class Setup {
  static init(
    messagesService: MessagesService,
    threadsService: ThreadsService,
    usersService: UsersService
  ): void {
    messagesService.messages.subscribe(() => ({}));
    usersService.setCurrentUser(me);
    threadsService.setCurrentThread(tMoneyPenny);

    this.setupBots(messagesService);
  }

  static setupBots(messagesService: MessagesService): void {
    // Send our messages to Miss Moneypenny
    messagesService
      .messagesForThreadUser(tMoneyPenny, moneypenny)
      .forEach((message: Message): void => {
        directLine
            // code omitted
            // see sample app on GitHub
      }, null);

    // Watch incoming messages from our bot
    directLine.activity$
        // code omitted
        // see sample app on GitHub
  }
}

In short, the init method:
  1. Subscribes to the messages;
  2. Sets the current user and current thread;
  3. Connects the message service stream to the bot through the Direct Line client for outgoing and incoming messages.

Pfffew, finally we can work on our Angular components. Fortunately this is now pretty straightforward: in our app.component's ngOnInit we grab the currentThreadMessages stream and subscribe to the currentThread and currentUser streams.

ngOnInit(): void {
    this.messages = this.threadsService.currentThreadMessages;
    this.draftMessage = new Message();
    this.threadsService.currentThread.subscribe((thread: Thread) => {
      this.currentThread = thread;
    });

    this.usersService.currentUser.subscribe((user: User) => {
      this.currentUser = user;
    });
  }
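
To actually send something, the component also needs a method that hands the draft message to the MessagesService. A minimal sketch (the method name and template wiring are assumptions, the demo app on GitHub has the full version):

sendMessage(): void {
    const message: Message = this.draftMessage;
    message.author = this.currentUser;
    message.thread = this.currentThread;

    // Push the draft onto the message stream (the setup code forwards messages for the bot thread to Direct Line)
    this.messagesService.addMessage(message);

    // Start with a fresh draft for the next message
    this.draftMessage = new Message();
}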

And to complete the app, we have a chat-message component to render the HTML:

<div>
    <span class="chat-time">
        {{message.author.name}} | {{message.sentAt | fromNow}}
    </span>
    {{message.text}}
</div>
<div class="msg-sent">
    <span class="chat-time">
        {{message.sentAt | fromNow}} | {{message.author.name}}
    </span>
    <p class="msg-sent">{{message.text}}</p>
</div>

Please check out the complete sample app on GitHub: https://github.com/yuriburger/ng-chatbot-demo

If all is set up correctly, you should have a working chatbot client:

bot5

Happy chatting!
/Y.

Getting coverage reports with .NET Core


Code coverage calculates the percentage of code that is covered by automated (unit) tests. And unit tests are important to ensure ongoing code quality and predictability in our software. Code coverage reports help investigate how well our software development is doing by showing us that percentage. But we know all this right? Tests are important, so test coverage is also important 🙂

covermec

You can get to this percentage in a few ways, but the easiest is of course with the Visual Studio IDE itself. Although this is a good first step, it is usually not adequate enough for larger/ multiple teams. In that case, generating coverage reports during central builds is your best option.

This post discusses the steps needed to enable code coverage and as a bonus includes how to integrate them with SonarQube.

A fair warning:
The code coverage data collectors (ugh) were only added to the .NET Core SDK recently, so you will need at least .NET Core SDK 2.1.400. And it is currently only available for Windows-based build servers, but my guess is this requirement is likely to drop “soon”.

Oh and integrating them with SonarQube is also not so straightforward. It requires a couple of extra black magic voodoo build steps to manipulate the coverage data and convert it to a format that SonarQube/ SonarCloud understands. But these things are never simple, right?

Local reports with Visual Studio 2017

For local code coverage reports, you need a recent Visual Studio version (15.4 or newer). Setting this up is not that difficult and only requires you to enable the Full Debug Information.

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
    <!-- Code Coverage Analysis requires Full Debug-->
    <DebugType>Full</DebugType>
    <!-- SonarQube requires a valid Project GUID but Core projects dont actually use this -->
    <ProjectGuid>{950585D2-E827-4D75-ADD6-C1C629CBC37C}</ProjectGuid>
  </PropertyGroup>
</Project>

If you then fire up the code coverage analysis from the “Test/Analyze Code Coverage” menu, it shows you the metrics in the results window:

LocalCoverage

Build reports with VSTS or TFS

Local coverage reports are useful, but you get the real benefits from this feature if you enable it as part of the builds.

So what do we currently need to get this to work?

  • .NET Core Sdk >= 2.1.400 on our build machine
  • Full Debug option (same as with the local coverage with Visual Studio)
  • A Windows based build server (for now, see the warning at the top of the post)

There are many alternatives to the built in code coverage reporter (AltCover, Coverlet to name two), but my goal here is to work with the default tooling.

Enabling the reports is simple and can be configured as part of the build task by supplying the argument --collect:"Code Coverage". If you want to show the test/coverage results on the VSTS/TFS dashboards, you should also enable the publish option:

test-coverage1b
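
On a plain command line the equivalent looks something like the sketch below (the --logger trx switch is an assumption here, added so a .trx results file is produced as well, which we will need further on):

dotnet test --collect:"Code Coverage" --logger trx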

After a successful run, the results should show you the test coverage:

test-coverage2

Integration with SonarQube

Big fan of SonarQube, so for me integrating code coverage results there is a big win. It provides me with a single dashboard that tells me how well our projects are doing.

With the default coverage option, we face a couple of challenges:

  • The output is saved with a timestamp and in the Build Agent Temp directory
  • The output format is not compatible with SonarQube as it expects CoverXML
  • The SonarQube scanner expects the test results and the coverage report to reside in a “Test Results” directory within the Build Agent’s source folder

To fix these issues, we will provide additional build tasks for copying the files to the correct location and for converting to CoverXML.

test-coverage3

Visual Studio Test Platform Installer

This out of the box task will download the executable we need to analyze and convert the coverage results.

Prepare analysis on SonarCloud
This is either the SonarCloud task or the SonarQube task. You get them from the Marketplace. Apart from the usual Sonar properties (project, key, version) we need to provide 1 extra property under “Advanced”:

sonar.cs.vscoveragexml.reportsPaths=$(Agent.BuildDirectory)\TestResults\TestCoverage.xml

Copy test results and Analyze coverage file

This first PowerShell task will locate the .trx and .coverage file and copy them to the TestResults directory under the Build directory.

Get-ChildItem -Path $(Agent.TempDirectory) -Filter *.trx -Recurse -ErrorAction SilentlyContinue -Force`
| %{Join-Path -Path $_.Directory -ChildPath $_.Name }`
| Copy-Item -Destination $(Agent.BuildDirectory)\TestResults\TestResults.trx -Force

Get-ChildItem -Path $(Agent.TempDirectory) -Filter *.coverage -Recurse -ErrorAction SilentlyContinue -Force `
|%{Join-Path -Path $_.Directory -ChildPath $_.Name }`
| Copy-Item -Destination $(Agent.BuildDirectory)\TestResults\TestCoverage.coverage -Force

This second PowerShell task will convert the coverage results to CoverageXML. It depends on CodeCoverage.exe, but that part is downloaded with the "Visual Studio Test Platform Installer" task.

$tool= Get-ChildItem -Path '$(Agent.TempDirectory)\VsTest\*\tools\net451\Team Tools\Dynamic Code Coverage Tools\amd64\'`
 -Filter CodeCoverage.exe -Recurse -ErrorAction SilentlyContinue -Force

$parameter1 =  'analyze'
$parameter2 = '/output:$(Agent.BuildDirectory)\TestResults\TestCoverage.xml'
$parameter3 = '$(Agent.BuildDirectory)\TestResults\TestCoverage.coverage'
$parameters= "$parameter1 $parameter2 $parameter3"

Write-Host "Executing $tool $parameters"
$parms = $parameters.Split(" ")
& "$tool" $parms 

Run Code Analysis
Again, the original SonarQube or SonarCloud task. This will actually take the coverage xml file and upload it to Sonar for the results!

test-coverage4

You can grab the scripts mentioned here (and the test project) from the GitHub repo: https://github.com/yuriburger/net-cover-demo

/Y.

Getting coverage reports with Angular


This article is basically an extension of the previous post: Improving Angular Style and Code Quality, mashed up with: Better together: SonarQube, TypeScript and Code Coverage

In short: we add SonarQube static analysis to our Angular CLI project and enable code coverage reports.

3

Enable code coverage

If you have used the CLI to generate your Angular project, there is a karma.conf.js file in your source directory which is already properly configured. SonarQube expects us to upload a file in LCOV format, so we depend on the correct report format:

// karma.conf.js
// ....

coverageIstanbulReporter: {
  dir: require('path').join(__dirname, '../coverage'),
  reports: ['html', 'lcovonly'],
  fixWebpackSourcePaths: true
},

// ....

Now, there are a couple of ways to enable the generation of code coverage reports. The simplest way is to provide an extra CLI parameter:

ng test --watch=false --code-coverage

But usually, the best way is to ensure code coverage reports by default. This is easily achieved by editing the angular.json file in your project root directory:

// angular.json
// ....

"options": {
  "codeCoverage": true,
  "main": "src/test.ts",
}

// ....

If we now run our tests, we should see coverage files being created in the project\coverage directory. This directory now contains html files for local inspection and a single ‘lcov.info’ file we can upload to SonarQube.

1

Add SonarQube

We add SonarQube (or SonarCloud, the hosted service) simply by adding a local scanner. But to enable TypeScript code coverage support, we also need to add the SonarTS plugin:

npm install tslint-sonarts --save-dev
npm install sonar-scanner --save-dev
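
To be able to start the scanner with “npm run sonar” later on, a script entry along these lines is assumed (the script name is our own choice):

{
  "scripts": {
    "sonar": "sonar-scanner"
  }
}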

To configure the Sonar scanner, we make use of the sonar-project.properties file in the root of our project. In this file we need a couple of properties, depending on your scenario (on premises, SonarCloud):

sonar.host.url=https://sonarcloud.io
sonar.login=xxxxxxxxx
sonar.organization=xxxxxxxxx
sonar.projectKey=ng-cover-demo
sonar.projectName=ng-cover-demo
sonar.projectVersion=1.0
sonar.sourceEncoding=UTF-8
sonar.sources=src
sonar.exclusions=**/node_modules/**,**/*.spec.ts
sonar.tests=src
sonar.test.inclusions=**/*.spec.ts
sonar.typescript.lcov.reportPaths=coverage/lcov.info

Note: if we run the scanner, it will create a local .scannerwork directory in your project folder. You probably want to exclude this directory from source control by adding it to your .gitignore file.

Running our Sonar scanner should produce our code coverage data in SonarQube:

npm run sonar

2

You can grab the sources mentioned here (and the test project) from the GitHub repo: https://github.com/yuriburger/ng-cover-demo
