
Terraform and Azure Better Together! (Deploying Azure Function via Terraform)


You simply gotta love Terraform for plenty of reasons, but now especially for the awesome Azure integration available to all of us!

As announced yesterday here, we now have access to a brand new offering: the Terraform solution in the Azure Marketplace. This solution enables teams to use shared identity, via Managed Service Identity (MSI), and shared state, via Azure Storage. These features allow you to use a consistent hosted instance of Terraform for DevOps automation and production scenarios.

So I decided to give this a try and deploy a Hello World Azure Function; I am a fanboy of serverless, so why not!

As explained in the link above, the first step was to deploy the already configured box with all the goodies, including the Azure CLI and the Terraform bits, and get it going!

Next was to SSH to the box (Don't forget to enable Azure Security Center JIT VM Access!) and run the script already provided to set up the shared identity:

Now that the shared identity is set up, we needed to run terraform init:

For the configuration file, I used the sample from the Terraform site, modified it for this effort, and saved it as azfunc.tf:

Running terraform plan did the validation, and the last step was to run terraform apply to get this deployment going:
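For reference, the whole command sequence on the Terraform host looks roughly like this (a minimal sketch; it assumes azfunc.tf sits in the current working directory):

# Initialize the working directory and download the AzureRM provider
terraform init

# Validate the configuration and preview the changes
terraform plan

# Apply the configuration and create the Azure Function resources
terraform apply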

And before you know it my Azure Function was ready to rock and roll!

Make sure to give this a try if you have not, and join the rest of us in adopting Terraform!


Troubleshoot Site-to-Site VPN Connectivity Issues with On-Premises Source DB for Azure Database Migration Service


The Azure Database Migration Service is a fully managed service designed to enable seamless migrations from multiple database sources to Azure data platforms with minimal downtime. For Azure Database Migration Service to be able to connect to an on-premises source, it requires one of the following Azure hybrid networking prerequisites:

  • ExpressRoute
  • Site-to-Site VPN

What is Site-to-Site VPN?

Site-to-Site VPN is designed for establishing secured connections between site offices and the cloud or bridging on-premises networks with virtual networks on Azure. To establish a Site-to-Site VPN connection, you need a public-facing IPv4 address and a compatible VPN device, or Routing and Remote Access Service (RRAS) running on Windows Server 2012. (For a list of known compatible devices, go to https://msdn.microsoft.com/en-us/library/azure/jj156075.aspx#bkmk_KnownCompatibleVPN.) You can use either static or dynamic gateways for Site-to-Site VPN.

Site-to-Site VPN extends your on-premises network to the cloud. This allows your on-premises servers to reach your VMs in the cloud and, vice versa, your cloud VMs to communicate with your on-premises infrastructure. Although Site-to-Site connections provide reasonable reliability and throughput, some larger enterprises require much more bandwidth between their datacenters and the cloud. Moreover, because VPNs go through the public Internet, there is no SLA to guarantee the connectivity. For these enterprises, ExpressRoute is the way to go.

In this blog entry I will concentrate on some common network connectivity troubleshooting techniques when using the Site-to-Site VPN option with Azure Database Migration Service to migrate an on-premises source to an Azure-based target. VPN connectivity issues will cause various error messages when trying to connect to source on-premises databases from the Azure Database Migration Service.

Basic situation analysis

When attempting to connect to an on-premises source while creating a Database Migration Service project or migration activity, customers may get a number of exceptions that prevent them from connecting. These exceptions can be divided into several common categories:

  • Login user security issues. These cover issues where the login provided lacks permissions to the database, the account is locked, the password is invalid or expired, etc. These issues surface as SQL Server error codes 300, 15113, 18452, 18456, 18487, and 18488 returned to the user with the Azure DMS error message.
  • Server certificate cannot be trusted, or is missing or invalid. The user typically gets the following message back with SQL error code -2146893019: A connection was successfully established, but a trusted certificate is not installed on the computer running SQL Server. Please set up a trusted certificate on the server. Refer to this link for further help: https://support.microsoft.com/en-us/help/2007728/error-message-when-you-use-ssl-for-connections-to-sql-server-the-certi
  • A firewall device placed between the Azure cloud and the customer's on-premises source is preventing Azure DMS from connecting to the source. In this case, most commonly SQL Server error code 40615 is returned by Azure DMS with the error message "Cannot connect to <servername>" after attempting to connect to the source database.
  • Finally, the error may be due to lack of connectivity over the Site-to-Site VPN between the on-premises source server and the Azure VNET, preventing Azure DMS from being able to connect to the source database. These surface as SQL error codes -1, 2, 5, 53, 233, 258, 1225, etc., with the error message "SQL connection failed". The rest of this write-up provides troubleshooting and diagnostics steps for this error category.

Basic troubleshooting for Azure Site-to-Site VPN connection issues

When seeing connectivity errors in Azure Database Migration Service due to Site-to-Site VPN connectivity issues between the on-premises source infrastructure and an Azure-based target, such as Azure SQL DB or Azure SQL DB Managed Instance, it is important to start by scoping the problem correctly and making sure that all the basic tests are done before moving on to deeper troubleshooting. One of the first items we ask customers to do is to create a test Windows-based virtual machine in the affected Azure VNET and attempt to test connectivity from that VM to other VMs in the same subnet, as well as to the on-premises server that hosts the SQL Server source instance. To create an Azure virtual machine, you can follow this tutorial.

Here are two key questions that you should ask even before you start collecting and analyzing data:

  • Is this VM able to ping other VMs that are located on the same subnet? When doing ping testing in Azure, be aware that because the ICMP protocol is not permitted through the Azure load balancer, you will not be able to ping an Azure VM from the internet, and from within an Azure VM you will not be able to ping internet locations. If you want to perform such testing, we recommend you use the SysInternals PsPing utility as described here (a quick PowerShell alternative is sketched after this list).
  • Do I have another VM on the same virtual network that is able to communicate with on-premises resources?
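As a minimal sketch of such a connectivity test run from the test VM (the server name, IP address, and ports below are hypothetical; adjust them to your environment):

# TCP connectivity test from the Azure test VM to the on-premises SQL Server instance (default port 1433)
Test-NetConnection -ComputerName onpremsql01.contoso.local -Port 1433

# TCP connectivity test to another VM in the same subnet
Test-NetConnection -ComputerName 10.0.0.5 -Port 3389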

If this VM cannot communicate with other VMs on the same subnet, the issue is most likely with the VM resources you created. Probably these VMs are not on the same virtual network, or the new VM was created using the Quick Create option. When you use this option, you may not be able to choose the virtual network to which the VM belongs, so it won't be able to communicate with other VMs that belong to a custom virtual network. However, if the VM can communicate with other VMs and resources within the subnet but cannot communicate with on-premises resources, you may have an issue with Site-to-Site VPN connectivity, in which case the troubleshooting steps in this document may be useful to you.

Advanced VNET gateway log capture using Azure PowerShell

In addition to the document mentioned above, in case of Azure Site-to-Site VPN connectivity issues with ARM-based Azure VNET gateway resources, you can use Azure PowerShell to capture diagnostic logs that can be extremely useful for troubleshooting connectivity issues. The following Azure PowerShell cmdlets will help you with this task:

  1. Start-AzureVNetGatewayDiagnostics
  2. Stop-AzureVnetGatewayDiagnostics
  3. Get-AzureVNetGatewayDiagnostics

Before you can use these Azure PowerShell cmdlets for log capture on your Azure VNET gateway, you will need the following:

  1. Microsoft Azure PowerShell module. Be sure to download and install the latest version of the Azure PowerShell module.
  2. Azure Storage account. Azure Virtual Network Gateway diagnostics logs are stored in an Azure Storage account. You can create a new storage account by following this article. After you create this storage account, you should be able to browse to it using Azure Storage Explorer as in the image below:

You can follow the steps below to set up diagnostics capture:

  • First, sign in with your Azure subscription via PowerShell. The example below directs PowerShell to log you into Azure and to switch context to the subscription named "My Production" as the default.
#login to azure
 Login-AzureRmAccount
 Select-AzureSubscription -Default -SubscriptionName "My Production"
  • Next, select the Azure Storage Account that you've created for storing the diagnostic logs that your Virtual Network Gateway will be generating.
$storageAccountName = (Get-AzureStorageAccount).StorageAccountName | Out-GridView -Title "Select Azure Storage Account" -PassThru
 $storageAccountKey = (Get-AzureStorageKey -StorageAccountName $storageAccountName).Primary
 $storageContext = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageAccountKey
  • Next, you need to select the Azure VNET on which the Gateway that you'll be diagnosing is provisioned.
$azureVNet = (Get-AzureVNetSite).Name | Out-GridView -Title "Select Azure VNet" -PassThru
  • Now, you are ready to start capturing diagnostic logs. You will use the Start-AzureVNetGatewayDiagnostics cmdlet to begin your log capture. When using this cmdlet, you must specify a capture duration in seconds, which can be up to 300 seconds (5 minutes). The example below will capture logs for 90 seconds.
$captureDuration = 90
 Start-AzureVNetGatewayDiagnostics -VNetName $azureVNet -StorageContext $storageContext -CaptureDurationInSeconds $captureDuration
  • Now let's wait for the duration of the capture.
Sleep -Seconds $captureDuration
  • After the capture duration has concluded, diagnostics logging will automatically stop and the log stored within Azure Storage will be closed. Now you can save the capture locally.
$logUrl = (Get-AzureVNetGatewayDiagnostics -VNetName $azureVNet).DiagnosticsUrl
 $logContent = (Invoke-WebRequest -Uri $logUrl).RawContent

$logContent | Out-File -FilePath vpnlog.txt
  • Finally, you can use your favorite text editor to explore the log contents and get more information on your issue.

Microsoft Support has contributed a script that automates the above log gathering to the PowerShell Gallery, and you can download it from here.

Additional resources:

 

Announcing our 10 Imagine Cup 2018 UK Finalist teams!


It's that time of the year again when we see the UK's top student developers coming together to create innovative solutions, looking to change the world. With the Imagine Cup UK National Finals fast approaching, we are excited to announce our 10 finalist teams, competing for their spot at the Worldwide Finals this July.

Congratulations to the 10 UK Finalist teams…

1. Black Light - Firepoint: University of Abertay


A firefighting training simulator using both the HoloLens and Mixed Reality headsets to educate and train the public and trainee firefighters on how to deal with different scenarios.

2. Higher Education App: Imperial College London


This project is an application and website that allows users to make and share their own highly interactive educational content.

3. Interview Bot: The University of Manchester


A web application developed to prepare students for job and internship interviews. The application gives real-time feedback on the user’s response to interview-style questions, including sentiment analysis and a written transcript of the interview.

4. My Uni Bot: University College London & University of York


Assists universities in communicating with international applicants in an easier, cheaper and more effective way by answering their questions and reminding them of deadlines and missing documents.

5. Pantry Logs: University College London


A solution to food waste. It scans and recognizes all the food items and logs this information into our mobile app. Our app notifies users when their food is close to expiring and sends recipe recommendations based on the types of food users already have.

6. War Space: Abertay University


A Real-Time Strategy game that builds a game level out of the Player's environment and adapts to any changes the Player creates to it.

7. Bad Flamingo: The University of Cambridge


Pictionary, with an AI twist

8. Soothey - An RNN-Powered Mental Healthcare Bot: University of Oxford


Mental illness is more common than diabetes, heart disease or even cancer. We combat this by early prevention with an RNN powered mental healthcare bot, so people with light symptoms can fine-tune their thought patterns using CBT-based techniques.

9. Visual Cognition: University College London & University of Cambridge


We improve the internet accessibility for those with visual impairments by using machine learning, powered by Microsoft’s Cognitive Services, to generate the missing image descriptions.

10. Wellbeing Bot - Common Room: University of Oxford


Our project seeks to bring the myriad counselling services universities and colleges offer to a more accessible, online platform.

The UK Final will be on Friday 6th April at Microsoft HQ, London, UK

Each of these teams will pitch their solution to a panel of expert judges, consisting of

Rob Fraser (Commercial Software Engineering, Microsoft UK and Ireland Lead)

Haiyan Zhang (Innovation Director at Microsoft Research)

Michael Wignall (Microsoft UK CTO)

Louise O’Connor (Executive Producer at Rare Ltd).

They will have the difficult task of choosing who will go through to the Worldwide Finals this July, which will take place at Microsoft HQ in Redmond, Washington, US.

The UK Winners Prize Pot

1st prize (£5000 and an Xbox One X per person)

2nd prize (£3000)

3rd prize (£2000).

Good luck to all our finalists and we look forward to announcing our UK winners!

If you are interested in entering the Imagine Cup 2019 next year, sign up here.

Binding a Certificate in IIS using C# and PowerShell


The other day I was assisting a customer who had a unique need: binding a certificate from within C# code using PowerShell. A direct API call wouldn't work due to some constraints, so PowerShell was the other viable option. The customer also didn't want any PowerShell window to pop up, so we needed to code around it.

Here is the code sample:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Management.Automation;
using System.Collections.ObjectModel;

namespace ExecutePowershell
{
    class Program
    {
        static void Main(string[] args)
        {
            ExecutePowershellClass pwrshell = new ExecutePowershellClass();
            pwrshell.ExecuteCommand();
            Console.ReadLine();
        }
    }

    class ExecutePowershellClass
    {
        public void ExecuteCommand()
        {
            using (PowerShell myPowerShellInstance = PowerShell.Create())
            {
                // PowerShell script to get the version number and the list of processes currently executing on the machine.
                // REPLACE THIS sScript WITH THE POWERSHELL COMMAND BELOW. BASICALLY BUILD YOUR OWN STRING BASED ON YOUR NEED.
                string sScript = "$PSVersionTable.PSVersion;get-process";

                // Use "AddScript" to add the contents of a script file to the end of the execution pipeline.
                myPowerShellInstance.AddScript(sScript);

                // Invoke execution on the pipeline (collecting output).
                Collection<PSObject> PSOutput = myPowerShellInstance.Invoke();

                // Loop through each output object item.
                foreach (PSObject outputItem in PSOutput)
                {
                    if (outputItem != null)
                    {
                        Console.WriteLine(outputItem.ToString());
                    }
                }
            }
        }
    }
}
PowerShell command to bind a certificate

# Import IIS web administration Module
Import-Module WebAdministration

New-SelfSignedCertificate -DnsName website.test.com -CertStoreLocation Cert:\LocalMachine\My

$certificate = Get-ChildItem Cert:\LocalMachine\My | Where-Object {$_.subject -like "*website.test.com*"} | Select-Object -ExpandProperty Thumbprint

Write-Host $certificate

Get-WebBinding -Port 443 -Name website.test.com | Remove-WebBinding

Remove-Item -Path "IIS:\SslBindings\*!443!website.test.com"

New-WebBinding -Name "Default Web Site" -IPAddress "*" -HostHeader "website.test.com" -Port 443 -Protocol https -SslFlags 0

get-item -Path "Cert:\LocalMachine\My\$certificate" | new-item -path IIS:\SslBindings\0.0.0.0!443!website.test.com -Value $certificate -Force

Note: You need to modify the hostname and binding accordingly.
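To sanity-check the result afterwards, a quick verification sketch (assuming the same site and host name as above) could be:

# List the HTTPS bindings on the site
Get-WebBinding -Name "Default Web Site" -Protocol https

# List the SSL certificate bindings registered with IIS
Get-ChildItem IIS:\SslBindings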

 

Investigating issues with email notifications for new account sign up and other account level operations – all regions – 03/24 – Investigating


Initial Update: Saturday, March 24th 2018 00:23 UTC

We're investigating issues with emails not being sent for account creation, account invitations, changes to credentials, and order confirmations for the Marketplace.

  • Next Update: Before Saturday, March 24th 2018 03:00 UTC

Sincerely,
Sudheer Kumar

SCOM: Export Rules, Monitors, Overrides, Settings, Effective Monitoring Configuration with PowerShell

2018.03.23: (This is a reboot of a previous post. I revised the title, script, and photos a bit.)

I was working on a support case recently when it was brought to my attention that this cmdlet, Export-SCOMEffectiveMonitoringConfiguration, does not output ALL contained instance configurations even when the "-RecurseContainedObjects" parameter is used. Yes, really! It outputs configuration data for only one instance of each particular type; be it a logical disk or an Ethernet adapter, you only get monitor and rule configuration output for one of a kind in the resulting .csv file. If you have three logical disks (i.e. C:, D:, E:), you will only see configuration data for one of those, whichever one gets enumerated first by the cmdlet.

To overcome this limitation I've written a script that targets a group, "recurses" all contained objects, outputs each configuration set to a separate file, and then merges all of the files together to create one complete .CSV. (Thanks to Matt Hofacker for the idea.) It can optionally open the complete dataset with PowerShell GridView (great idea StefanRoth!) for easy viewing and filtering. The merged CSV file can be opened in Excel if needed.
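The downloadable script is the real deal; the snippet below is only a minimal sketch of the same idea, assuming a hypothetical group name and output folder:

# Minimal sketch: export the effective configuration of every instance in a group,
# one CSV per instance, then merge the CSVs into a single file.
Import-Module OperationsManager
$outDir = 'C:\Temp\EffectiveConfig'                        # hypothetical output folder
New-Item -ItemType Directory -Path $outDir -Force | Out-Null

$group = Get-SCOMGroup -DisplayName 'My Servers Group'     # hypothetical group name
foreach ($instance in (Get-SCOMClassInstance -Group $group)) {
    $file = Join-Path $outDir ("{0}.csv" -f $instance.Id)
    Export-SCOMEffectiveMonitoringConfiguration -Instance $instance -Path $file -RecurseContainedObjects
}

# Merge the per-instance CSV files into one complete CSV
$files = Get-ChildItem $outDir -Filter '*.csv'
$files | ForEach-Object { Import-Csv $_.FullName } |
    Export-Csv (Join-Path $outDir 'EffectiveConfig-Merged.csv') -NoTypeInformation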

Operations Manager 2012, R2 - UR2

Enjoy.

Download

 

 

IIS Manager shows Authorization warning when testing a Physical Path


This is an issue we see from time to time that worries our customers. In most cases, this is a by-design, benign warning that is simply telling us a verification has been deferred to runtime.

If you select your Application pool in IIS Manager and choose the basic settings, you will see the following window:

Now if you click on "Test Settings…", you may see a yellow icon next to Authorization section and a corresponding warning message as follows:

Here are the warning details: The server is configured to use pass-through authentication with a built-in account to access the specified physical path. However, IIS Manager cannot verify whether the built-in account has access. Make sure that the application pool identity has Read access to the physical path. If this server is joined to a domain, and the application pool identity is NetworkService or LocalSystem, verify that <domain>\<computer_name>$ has Read access to the physical path. Then test these settings again.

This is just a warning, not an error. Path authorization couldn't be tested at configuration time, but it will be tested at runtime.

DevConf’s log #1 – Flights, floods, and lack of sleep


I am in South Africa to share our DevOps journey at DevConf, a community-driven, developer-focused, one-day conference hosted annually.


After a 23.9167-hour trip from Vancouver to Johannesburg I arrived at the hotel at midnight … followed by 24 hours of no sleep, resulting in a few frazzled days. What I really enjoyed for the first two days was the downpour of large rain drops, lightning, and thunder … something we seldom experience in Vancouver.


Here are a few pictures to share the experience:

Flight from Vancouver to Calgary.

Connecting flight at my favourite Schiphol Airport. Noticed a strange plane in the background.

A bit of swag I packed – VSTS and DevOps stickers and a handful of quick reference posters that summarize key themes of our DevOps journey. My suitcase will never be the same again, after one of the Maple Syrup bottles imploded during the flight.

It’s a sunny Saturday morning at the Birchwood conference centre, started with a healthy breakfast.

Plans for the weekend – meet with my good old friend Peter, with Robert MacLean who’s running the conference, and locate and secure some Biltong.  


2018 Imagine Cup Registration Guide!


It's that time of year again: registration is open for the Imagine Cup, the technology competition Microsoft runs for students. Let the MSPs show you how to complete your registration perfectly!

Step 1. Sign in to Microsoft Imagine

First, click here to go to the Microsoft Imagine homepage.
Then click Sign in at the top right.

The Microsoft Imagine homepage

Next, sign in with your Microsoft account (not your student email). If you don't have a Microsoft account, click the "Create one!" button.

 

Step 2. Confirm your Microsoft Imagine account information

If you have used Microsoft Imagine with this account before, you will be taken directly to the Microsoft Imagine account management page, as shown:

Click Edit on the left to confirm your details.

Returning Microsoft Imagine user

If this is a newly created account, you will be asked to enter your details directly:

First time signing in to Microsoft Imagine

※ Make sure you select Taiwan for both the country and the Imagine competition region!

※ Once done, return to the account page to register for the Imagine Cup and create your team!

 

Step 3. Register for the 2018 Imagine Cup!

Back on the account page, click the Register now button under 2018 Imagine Cup.

Registering for the Imagine Cup

After a short wait, you will be redirected to a confirmation page telling you that your registration is complete!

The registration confirmation page

Once you see this page, you can move on to the next step!

 

Step 4. Create your team!

From the page above, click the Account button at the top left to return to the account management page.
Then scroll down to the Create a team section, enter your team name, and click Submit!

Adding a team

After you click Submit, the following screen appears. Some information is filled in automatically by the system; please complete the remaining required fields.

Filling in team information

Once your team is created, return to the Microsoft Imagine account management page and you will see the team you just created!

From there, simply click Manage to edit the team information and add members (up to 3 people).

Congratulations, you have completed your 2018 Imagine Cup registration!

 

Final Step! Upload your team's project

The last step before competing is to submit the project you will enter!

After completing all the steps above, return to the Microsoft Imagine account management page and click the "Submit >" button in the 2018 Imagine Cup section; you will be taken to the project upload page.

 

When submitting your project, you will need to prepare the following items:

  1. Project title
  2. Project description (up to 255 characters)
  3. Project proposal (it should include an introduction to your team and your project)
    • Word or PDF files must not exceed 10 pages
    • PPT files must not exceed 20 slides
    • File size limit is 100 MB
  4. Project files
    • ZIP only
    • Size limit is 1 GB; if your file is larger, follow the instructions provided
  5. Project files introduction
    • DOC, DOCX, PDF, and ZIP only
    • Size limit is 10 MB
  6. Other optional items

Once you have uploaded all the required items, just click Submit and you're done!

DevConf discussions – Feature Flags


For the next few days I'll try to share a few of the discussions before, during, and after DevConf, a community-driven, developer-focused, one-day conference hosted annually.


Feature flags are definitely a hot topic. Here are some of the questions we discussed over the past 2 days and my thoughts.

How do we use feature flags in VSTS?

Buck’s feature flags blog post and his presentation/article are a great source to get an understanding of the custom-built feature flag system we’re using with Team Foundation Server (TFS) and Visual Studio Team Services (VSTS).

Do the Rangers use the same FF as VSTS?

The Rangers use the LaunchDarkly SaaS solution. You can find our learnings in this blog series.

Are feature flags free?

No, they introduce additional testing: for as long as a feature flag is in the code, you need to test all possible ways that the code could run, using boolean (true, false) or multi-value flags. That sounds easy for a handful of flags, but when you reach hundreds of flags you'll have a complex test plan to ensure all possible code paths are functional.

Then there's the maintenance – deciding when and how to delete feature flags in your code. We expect our autonomous, self-organizing, and self-managing teams to take care of feature flag maintenance and to delete stale flags. When you're using LaunchDarkly you get a visual flag status on the dashboard, making it easier (not easy) to identify flags that are stale and candidates for deletion.

When do we remove feature flags?

We have no process (yet), relying on feature teams to decide when to delete their feature flags. As Buck states, "Many feature flags go away and the teams themselves take care of that. They decide when to go delete the feature flags. It can get unwieldy after a while, so there's some natural motivation to go clean it up. It's easy – you delete things."

We’re still in an exploratory stage, looking for ways to identify stale feature flags and ensuring that they are deleted to avoid lingering technical debt that will haunt you.

Reference Material

Lesson Learned #33: How to make "cliconfg" work with a SQL Alias (on-premises) against Azure SQL Database (PaaS)


I worked on a new situation where we needed to connect to an Azure SQL Database using a SQL alias, in the same way that we do with SQL Server on-premises. Unfortunately, this is not supported, but we are going to test it and make it work.

As the alias is not supported in Azure SQL Database, let me share with you 3 alternatives:

 

ALTERNATIVE 1: Using SQL SERVER Management Studio and cliconfg 64 bits - "C:\Windows\SysWOW64\cliconfg.exe"

  • First, let's open cliconfg, where we provided the name of the alias, our server name, and the TCP/IP protocol.

 

  • Second, we are going to connect to our Azure SQL Database using the alias name.

 

  • Third, I need to specify the user name as username@servername, because when working with Azure SQL Database all connections go through a proxy/firewall (you don't have a physical server, you have a logical entity). This firewall/proxy receives the incoming connection and reroutes it to the server that is running your database (each database that we have in Azure is running in a different virtual machine, except for elastic pools, which run multiple databases in the same SQL instance and hence in the same virtual machine).

 

ALTERNATIVE 2: Using SQL SERVER Management Studio and modifying C:\Windows\System32\drivers\etc\hosts

 

  • First, I identified the current IP of the proxy/firewall for the server (unfortunately, this IP could change without notice, and we cannot provide a fixed list of IPs for this proxy/firewall, but normally it does not change too often). A quick sketch of this lookup follows this list.

 

  • Change the file C:\Windows\System32\drivers\etc\hosts by adding a new entry.

  • Using SQL Server Management Studio, connect to the server, like this:
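As a minimal sketch of the first two bullets (the server name, alias, and IP below are placeholders):

# Resolve the current gateway IP behind the logical server name (it can change without notice)
Resolve-DnsName myserver.database.windows.net

# Then add an entry like the following to C:\Windows\System32\drivers\etc\hosts (placeholder IP and alias):
# 104.40.0.1    myserveralias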

 

ALTERNATIVE 3: Using SQL SERVER Management Studio and cliconfg 32 bits - "C:\Windows\System32\cliconfg.exe"

  • Unfortunately, I don't have any 32-bit operating system, but following the details of Alternative 1 it should work.

Enjoy!

Azure Load Balancer – A new Standard SKU for Premium Capabilities


A bit of history

When you talk about cloud computing, you cannot avoid talking about networking: it is the glue that holds everything together and permits users to access their resources and services to talk to each other. At the core of Azure networking, since the very beginning, there has been the Azure Load Balancer. You probably remember the early days of Web and Worker roles in Azure PaaS v1, and the first version of IaaS Virtual Machines, but you probably don't remember the Azure Load Balancer (LB) itself: in front of these resources, you had this piece of networking technology that allowed traffic to be balanced over a set of multiple compute instances.

At that time, the LB was invisible and not configurable, with few exceptions. There was also a distinction between the Azure Software Load Balancer (SLB) and the Azure Internal Load Balancer (ILB), reflecting the first wave of improvements to this part of the platform. Even if the ILB term continues to exist in the documentation, we generally talk about the "Azure Load Balancer" (LB); it doesn't matter whether the front-end IP address is internal (ILB) or external (SLB). Within the scope of this article, I will refer to the Azure Layer-4 Load Balancer as "LB"; the Layer-7 Azure Application Gateway is not considered. Additionally, there are no more Azure Service Management (ASM) improvements for LB; all new features, and everything discussed here, refer to the Azure Resource Manager (ARM) API model.

Introduction of a new Load Balancer SKU

Azure is now approaching its 10th birthday, and a new massive wave of improvements is coming under the name of "Azure Standard Load Balancer" (Standard LB):

Load Balancer Standard

https://azure.microsoft.com/en-us/updates/public-preview-load-balancer-standard

The Azure Standard LB incorporates massive improvements in performance, scale, and resiliency, and covers new scenarios that previously were not possible to implement. Before proceeding, let me clarify the new naming and the introduction of the SKU separation: the new, improved Azure LB type is the "Standard" SKU, while the previous LB version goes under the "Basic" SKU, which is the current one not in preview. NOTE: There is no retirement announced for the "Basic" SKU. Just to emphasize that these SKUs are different, and that Standard has been built from the ground up on a new stack, it is not possible to dynamically upgrade from one to the other, at least today. Even if SKUs are not mutable, a procedure is provided to change between them, but it will cause some downtime; you can check here. Before talking more in depth about technical features, let me highlight two important differences between these two SKUs:

  • Pricing

Standard and Basic SKUs have very different pricing. The Basic LB is free of charge, while the Standard Azure Load Balancer has a charge associated with it. That pricing is based on the number of rules configured (load balancer rules and NAT rules) and on data processed for inbound originated flows. However, there is no hourly charge for the Standard Load Balancer itself when no rules are configured. Additionally, the ILB scenario is not charged at all. You can check pricing details here.

  • Subscription Limits

The Standard SKU comes with greater capacity and scalability, so some of the maximum thresholds per region per Azure subscription are different. In this article, please note the different values for the number of rules, front-end configurations, and backend pool size:

Introduction of a new IP Public Address SKU

The Azure "Standard" SKU for Load Balancer is not coming alone; we are also going to have a "Standard" SKU for Public IP addresses (now still in preview). These are strictly related: the new IP SKU is necessary and tightly bound to the Load Balancer used. The concept is reflected in the rule that dictates the usage of a "Standard" IP with a "Standard" LB; it is not possible to mix them. All public IP addresses created before the introduction of SKUs are Basic SKU public IP addresses. With the introduction of SKUs, you have the option to specify which SKU you would like the public IP address to be. You may wonder why this has been created. I have my idea on that, not officially confirmed, but if you look at the feature list, it seems to be related to the (upcoming) introduction of Azure Availability Zones (AZ):

IP address types and allocation methods in Azure

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-ip-addresses-overview-arm

 

I will talk more in depth about "Availability Zones" (AZ) later in this article. In the meantime, let me call your attention to these important points:

  • SKU choice: the Azure LB Standard SKU does require the Azure IP Standard SKU.
  • The Standard public IP SKU is static only: this has an impact on cost and on the maximum number of IPs you will be able to deploy, as you can read at this and this link.
  • Availability Zones: if you want to use the LB in an AZ scenario, you must use the Standard SKU.

Creating it in PowerShell requires only an additional "Sku" parameter to be specified with the value "Standard":

New-AzureRmPublicIpAddress -Name $pubIpName -ResourceGroupName $rgname -AllocationMethod Static -Location $location -Sku Standard -IpAddressVersion IPv4

NOTE: The Azure Public IP address Standard SKU cannot be used with VPN Gateway or Application Gateway.

New LB Standard SKU features

Over the past years, in several of my Azure projects, I faced various issues and limitations with the now-named Azure "Basic" Load Balancer SKU. Looking at the list of improvements for the "Standard" SKU in the article below, I'm impressed and glad that almost everything has now finally been solved. Let me go through the list below along with my comments and experiences from those projects.

Azure Load Balancer Standard overview

https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-standard-overview

 

Backend Pool

The backend pool is essentially the pool of "compute" instances that the LB forwards traffic to. During one of my projects in the past, I faced two distinct issues; here are the descriptions and how the Standard SKU aims to solve them now:

  • Scalability: a max limit of 100 VM instances. Now, with the Standard SKU, the limit has been raised 10x, up to 1000.
  • Flexibility: only configurations with VMs (or VM Scale Sets – VMSS) in a single Availability Set (AS) were possible. Now, with the Standard SKU, you can include every VM instance inside the same VNET, standalone or in multiple Availability Sets.

Frontend Configuration

Multiple frontend IP configurations: in reality, this is possible also with the Basic SKU (see here), but with the Standard SKU it has been enhanced. You can now use these multiple IPs to scale the number of ephemeral ports available for SNAT. If you had problems in the past with SNAT port exhaustion, you should use the LB Standard SKU with additional front-end Standard SKU IPs. Read carefully here about the scenarios and how to solve potential issues.
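A minimal PowerShell sketch of adding an extra front-end IP to an existing Standard LB ($rgname and $location as in the earlier snippet; the load balancer and IP names are hypothetical):

# Create an additional Standard SKU public IP (hypothetical names)
$pip2 = New-AzureRmPublicIpAddress -Name "lb-frontend-ip2" -ResourceGroupName $rgname `
    -AllocationMethod Static -Location $location -Sku Standard

# Add it as a second front-end configuration on the existing Standard load balancer and save the change
$lb = Get-AzureRmLoadBalancer -Name "myStandardLB" -ResourceGroupName $rgname
Add-AzureRmLoadBalancerFrontendIpConfig -LoadBalancer $lb -Name "frontend2" -PublicIpAddress $pip2
Set-AzureRmLoadBalancer -LoadBalancer $lb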

Availability Zones

AZ is a very important feature that will become available soon; it is currently in preview in selected regions. Without this feature, it is not possible to have fault-isolated locations within an Azure region, providing redundant power, cooling, and networking. Availability Zones allow customers to run mission-critical applications with higher availability and fault tolerance to datacenter failures, using synchronous data replication to achieve zero data loss in case of a single zone failure. With AZ, Azure will be able to provide a 99.99% HA SLA for its Virtual Machines. If you want to use a Load Balancer and a Public IP that are zone-resilient, you must use the Standard SKU for both of these objects.

 

It is worth noting the following points about the "zonal" and "zone-redundant" concepts:

  1. The Azure "Load Balancer", "Virtual Network", and "Subnet" are region-wide objects, that is, regional, and never zonal.
  2. Both the public and the internal Load Balancer support zone-redundant and zonal scenarios.
  3. A "Public IP" address can be created local to a zone (zonal) or regional, that is, zone-resilient: in the former case, you need to specify a zone when creating the object; in the latter, you should not.

For additional details you can read the article below:

Standard Load Balancer and Availability Zones

https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-standard-availability-zones

If you want to know more on Azure Availability Zones, you can read my blog post below:

Why Azure Availability Zones

https://blogs.msdn.microsoft.com/igorpag/2017/10/08/why-azure-availability-zones

HA Ports

This feature is available only with the Standard LB SKU and in internal load balancing scenarios (ILB); it cannot be used with an internet-facing public IP. The mechanism is very simple: you can write a single rule that will balance *all* TCP and UDP ports and protocols. Without this feature, you need to explicitly write a load balancing rule for each port, with a maximum threshold of 250 rules using Basic SKU internal load balancing.

High Availability Ports overview

https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-ha-ports-overview

Which kinds of scenarios can benefit from this? The first usage is for Network Virtual Appliances (NVAs). With HA Ports it is easy to configure a single load balancing rule to process and distribute VNet traffic coming across any Layer 4 port to your NVAs, increasing reliability with faster failover and more scale-out options.

You can create such a rule easily using the Azure Portal or PowerShell (an ARM template and Azure CLI 2.0 are also available following these instructions):

New-AzureRmLoadBalancerRuleConfig -Name "HAPortsLBrule" -FrontendIpConfiguration $FEconfig -BackendAddressPool $BEPool -Protocol "All" -FrontendPort 0 -BackendPort 0

In the cmdlet above, please pay attention to the usage of the "FrontendPort", "BackendPort", and "Protocol" switches.

Monitoring and Diagnostics

Using the Basic SKU for the Azure Load Balancer (LB), once enabled as described here, three different types of logs are available: Audit, Health Probe, and Alerts. The last one is particularly useful to troubleshoot potential issues on the LB, such as SNAT port exhaustion. All these logs can be collected and stored in an Azure storage account, and then analyzed using tools like Power BI. What is new in the Standard SKU is the possibility to retrieve important metrics and counters that will give you much better insight into how your LB is working. You can get the list of metrics available for an Azure Standard LB resource using the Azure Portal (via Azure Monitor integration) or the PowerShell cmdlet "Get-AzureRmMetricDefinition" from the "AzureRM.Insights" module:
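For example, a minimal sketch of listing the metric definitions for an existing Standard LB (resource names are hypothetical):

# Retrieve the Standard load balancer and list the metric definitions exposed for it
$lb = Get-AzureRmLoadBalancer -Name "myStandardLB" -ResourceGroupName "myRG"
Get-AzureRmMetricDefinition -ResourceId $lb.Id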

The first two "counters" will tell you about the effective availability of the frontend IP (VIP) and the backend internal IPs (DIP); the other counters are self-explanatory. For a description of each item, please see this article. This functionality is an addition to the diagnostics already available in the Basic SKU, which you can still leverage:

High-availability SLA

I personally love SLAs, as something that permits you to build your architecture on solid foundations and "resell" to your final customers an application that they can rely upon. The Standard LB SKU comes with a brand new high-availability SLA of 99.99%; the Basic SKU is not covered here. A very useful and nice improvement.

SLA for Load Balancer

https://azure.microsoft.com/en-us/support/legal/sla/load-balancer/v1_0

The new metric monitoring feature will allow you to easily assess availability, and other counters, from the Azure Portal with nice customizable diagrams.

Breaking changes in Azure LB Standard SKU

I decided to dedicate a section to this topic because there are two breaking changes in the behavior of the Azure Load Balancer Standard SKU compared to the previous Basic SKU. If you are already using the latter, and planning to move to the former, please read my notes below and the associated documentation links carefully:

  • Network Security Group (NSG): when you assign a Standard SKU public IP address to a virtual machine's network interface, you must explicitly allow the intended traffic with a network security group. Communication with the resource fails until you create and associate a network security group and explicitly allow the desired traffic. This is for obvious security reasons (a minimal sketch follows at the end of this section).
  • VM outbound connectivity: if a VM is part of a Standard LB backend pool, outbound connectivity is not permitted without additional configuration. This is totally different from the Basic LB, where a VM has open outbound connectivity to the Internet. To permit VM outbound connectivity, a load-balancing rule (a NAT rule is not sufficient) must be programmed on the LB Standard SKU, as explained in the article below (see "Scenario 2: Load-balanced VM without an Instance Level Public IP address"):

Outbound connections in Azure

https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-outbound-connections

NOTE: There is an exception to the rule I just gave you, which I will explain in a minute.

The article above is a very interesting piece of Azure networking architecture; I strongly recommend you read it entirely if you want to have a complete understanding of the Azure LB behavior. What is worth reading is the list of scenarios for outbound VM connectivity:

Regarding the exception I mentioned in the previous note, as you can read for the first scenario, if you add an additional IP configuration to your VM, with an Instance Level Public IP, your VM will be able to reach the Internet outbound.
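Going back to the first breaking change above (the NSG requirement), here is a minimal sketch of an NSG that explicitly allows inbound HTTPS to the VM; all names, the port, and the priority are hypothetical, and the NSG still has to be associated with the VM's subnet or NIC:

# Allow inbound TCP 443 and create the NSG with that single rule
$rule = New-AzureRmNetworkSecurityRuleConfig -Name "Allow-HTTPS-Inbound" -Protocol Tcp `
    -Direction Inbound -Priority 100 -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange 443 -Access Allow
$nsg = New-AzureRmNetworkSecurityGroup -Name "myVmNsg" -ResourceGroupName $rgname `
    -Location $location -SecurityRules $rule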

What is still missing?

As I said at the beginning of this article, the Azure Load Balancer (LB) Standard SKU solved most of the problems I faced in the past; an amazing piece of work from the Azure networking team. As a cloud architect for my customers and partners, there are still a couple of "nice to have" details I would like to know:

  1. Load Balancer latency: officially, there is no mention of how much overhead traffic incurs when flowing through the load balancer. I have done my tests in the past and have my idea, and you can easily do your own, but there is no SLA on that.
  2. Load Balancer throughput: how many Gbit/s or Pps (packets per second) can an Azure load balancer sustain? There is no capacity guidance around this, I suspect because Azure load balancers run on a multi-tenant infrastructure, so the numbers may change.

Regarding the second point, in the past it was rare to hit the throughput cap; other limits kicked in before. But now that you can scale the backend pool to 1000 VMs, it will be interesting to do some performance tests at scale. Anyway, let me close my article with the sentence contained in the overview article for the Azure Standard Load Balancer:

....Low latency, high throughput, and scale are available for millions of flows for all TCP and UDP applications.....

Examples on GitHub

Since the Standard SKU Load Balancer is new, and some behaviors are different from the past, I created some very simple samples using PowerShell that you can play with and customize. You can find them at this link on GitHub. Please be aware that these are NOT production ready; I created them only for learning purposes. Additionally, the code in the samples is not intended to be launched all at once. Instead, you should carefully review each commented section, understand the effects, then run it to observe the outcome.

  1. SAMPLE[1]: Create a simple zoned VM with an instance-level Standard IP (ILPIP). Look at how to create a VM in a specific Azure Availability Zone (AZ), then create a new Standard SKU Public IP and use it to expose the VM. A Network Security Group (NSG) is necessary to permit traffic through the Standard Public IP, differently from the Basic Public IP used in the past, where all the ports were open for VM access.
  2. SAMPLE[2]: Create a new Standard SKU Load Balancer (LB), and use it in conjunction with a new Standard SKU Public IP. Then, create two "zoned" VMs with Managed Disks, each one hosted in a different Azure Availability Zone (AZ). It is worth noting that the two VMs will be in the same subnet and Virtual Network, even if in different AZs. Finally, it will be demonstrated how the Standard Load Balancer will transparently redirect to the VM in Zone[2] if the VM in Zone[1] goes down. In this sample, the necessary NSG will be created and bound at the subnet level, not to the specific VM NIC as in the previous example.
  3. SAMPLE[3]: Create a Standard SKU internal Load Balancer (ILB) with an HA Port rule configured and 1 zoned VM behind it. You will see how the publishing rule for this feature is different, and how it works with a VM created in a specific zone.
  4. SAMPLE[4]: You will see here how to retrieve metric definitions and values for the Standard SKU Load Balancer (LB), using PowerShell.

Conclusions

The new Azure Standard SKU for Load Balancer was officially released yesterday. It is a key component for many new Azure scenarios, most importantly for Azure Availability Zones (AZ). Additionally, it provides greater scalability and new diagnostic and monitoring capabilities that were missing in the Basic SKU. Hope you enjoyed my article; feel free to send me your notes and feedback. You can follow me on Twitter using @igorpag. Regards.

 

 

Lesson Learned #34: Does Azure SQL Database support OPENJSON?


The answer is yes! But your database needs to have compatibility level 130 or above. If your database compatibility level is less than 130, you could get an error about "incorrect syntax near the keyword with" or "invalid syntax".

 

Please follow these steps to read a JSON file from Azure Blob Storage using SQL Server Management Studio:

  1. Download the latest version of SQL Server Management Studio.
  2. Connect to your Azure SQL Database and run the following TSQL in order to identify the compatibility level of your database: select * from sys.databases
  3. If your database has a compatibility level lower than 130 you need to change it; if not, you will not be able to use OPENJSON. We can change it by running the following TSQL command: ALTER DATABASE DotNetExample SET COMPATIBILITY_LEVEL = 130. Remember that Azure SQL Database currently supports compatibility level 140 (SQL Server 2017).
  4. Create your JSON file and upload it to Azure Blob Storage. You could use, for example, Microsoft Azure Storage Explorer.
  5. Grant permissions to read data in the Azure Blob Storage and copy the key.
  6. Using SQL Server Management Studio:

 

Create a new credential:

CREATE DATABASE SCOPED CREDENTIAL MyAzureBlobStorageCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=2017-04-17&ss=bfqt&srt=sco&sp=rwdl&st=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx';

 

Create a new external data source:

CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
WITH ( TYPE = BLOB_STORAGE,
LOCATION = 'https://xxxxx.blob.core.windows.net',
CREDENTIAL= MyAzureBlobStorageCredential);

 

Run the following TSQL to read the data from a JSON file. (In my case, I created a JSON file with the results of a sys.dm_db_resource_stats execution.)

SELECT book.* FROM OPENROWSET (BULK 'ResourceDB.json',DATA_SOURCE = 'MyAzureBlobStorage',  SINGLE_cLOB) as j
CROSS APPLY OPENJSON(BulkColumn)
WITH( [end_time] datetime , [avg_cpu_percent] [decimal](5, 2), [avg_data_io_percent] [decimal](5, 2) , [avg_log_write_percent] [decimal](5, 2) , [avg_memory_usage_percent] [decimal](5, 2), [xtp_storage_percent] [decimal](5, 2), [max_worker_percent] [decimal](5, 2) , [max_session_percent] [decimal](5, 2) , [dtu_limit] [int] ) AS book

Enjoy!

Teamwork: Embracing new culture of work


"Talent wins games, but teamwork and intelligence wins championships" – Michael Jordan

New culture of work

Just like in sports, fostering a culture of teamwork is essential for a productive enterprise. Teamwork is about team members working together, building upon each other's expertise, sharing their experiences, and expressing their opinions, thereby building a culture that helps companies innovate. The hallmark of a highly productive team is a culture where members believe in sharing information rather than keeping it to themselves. Again, just like in any team sport, team members need to cultivate trust among each other. They need to believe that the whole is always greater than the sum of its parts. And just like Michael Jordan said, aim at winning the championship, not individual games.

So, what does all this have to do with Microsoft Teams? It is:

  • The spirit of working in the open,
  • Sharing by default,
  • and Trusting by default.

Emails and instant messaging applications have made us all productive by speeding up the time it takes to communicate between team members, and now it is time to speed up the reach of information. If we want to speed up our innovation, we need to change:

  • Our culture,
  • Our habits, and
  • Our tools of communication.

At the same time:

  • We need to start trusting,
  • We need to start sharing, and
  • We need to start building upon each other's contribution.

Please note, sharing in this context does not mean compromising on information protection or compromising security. Information is the new oil, and it needs to be protected. The context here is sharing with your team members, with all the controls and policies appropriately configured for security, privacy, and compliance.

If you are using emails and IMs as your primary communication tools, and are still on the fence about adopting Microsoft Teams, this article will walk you through the benefits that your team is missing by not making the change. It's a team effort; the team has to be all in to gain the benefits.

Why Conversations rock!

Emails restrict the consumption of information to the people chosen by the author of the email. By using a Teams conversation, the information is available to all existing as well as future members of the team.

Benefits of Teams conversation compared to emails

  • Conversations are available to new members who join the team. New members can quickly ramp up on history and context.
  • Access to conversations is removed when members leave the team. In the case of emails, the information remains available in the member's inbox even after they leave the team.
  • Information is always available in the context of a team. Emails are transactional in nature and do not live within a team's context.
  • The author of a conversation can use @ mentions to notify specific members, a channel, or a team. This generates a notification for the consumer of the information, thereby raising its importance for the consumer. Emails do not have such notification functionality.
  • Conversations in Teams can be supplemented by pulling information directly from an app (either a Microsoft, enterprise, or 3rd party app).
  • Members can like conversations, thereby increasing the significance of the information.

Why Teams Chat is better

Teams provides persistent chat between 2 or more people, whereas instant messaging (IM) is transactional in nature and the context is lost when the conversation ends.


Benefits of Teams Chat compared to IM

  • Teams provides persistent chat that preserves conversation context.
  • Members in a chat can pull in new members. Since the conversation is persisted, new members can view the conversation history.
  • Members can add a tab that pulls in data from apps.
  • Files shared in a chat are automatically stored in OneDrive. Permissions to the file are automatically updated for members in the chat.
  • Members in a chat can quickly get access to all files shared in the conversation through the Files tab.

Why meetings in Teams are cooler

Meetings in Teams allow members to be productive by providing access to conversations and recordings before and after the meeting.


Benefits of meetings in Teams compared to Skype For Business

  • Meetings scheduled and recorded within a Teams channel are available to new members.
  • Plugin-less meeting join from the Edge or Chrome browser.
  • The user is alerted if they are speaking while on mute. How cool is this!
  • Members who are not on the invite can still join meetings.
  • Persistent conversations during and after the meeting.


Get started today


Be the champion for change.

Mixed Reality for everyone through Windows 10


Written by Natalie Afshar

What exactly is mixed reality? And how does it compare with augmented reality and virtual reality?

Well, virtual reality (VR) usually involves a fully immersive virtual experience including head-mounted displays, while augmented reality (AR) involves an overlay of content on the real world, and mixed reality (MR) takes it one step further: the holographic content is anchored to the real world and interacts with it in real time, for example overlaying a virtual architectural model on a table, or surgeons overlaying a virtual x-ray on a patient while performing an operation.

Microsoft has a mixed reality (MR) offering called Windows Mixed Reality, which allows anyone with the latest Windows 10 update to experiment with holographic technology. Windows Mixed Reality offers a whole suite of creative tools, from Paint 3D to Mixed Reality Viewer and Remix 3D, which I will explain in more depth below. These tools allow students and teachers to be active creators of mixed reality content, rather than just passive consumers: they can experiment with 3D drawing, mixed reality video editing, and even 3D printing or creating 3D content with Unity.

This is useful for schools and teachers because you don't actually need an expensive VR headset to start creating and experimenting - a readily available mixed reality solution can be accessed by anyone with the latest build of Windows 10, with the Fall Creators Update, and a PC with a webcam.

For IT admins who want to read more about the Fall Creators Update and how to get started, click here.

It also updates your device with the latest security features. Windows Defender Antivirus specifically safeguards against malicious apps including ransomware attacks and threats like WannaCry, and it continuously scans your device to protect against any potential threats in real time through the cloud protection service.

If you’re not sure if you already have the Fall Creators update, you can check your device to see which version of Windows 10 you’re on here.

Here is some more information about tools in Windows Mixed Reality - you might want to share these resources with teachers.

The Mixed Reality viewer feature uses the web-cam on your laptop to embed virtual 3D objects into real world images and videos. For example, the above image shows a rover being overlaid into a photo of a boy standing next to a car. You can rotate and move around an object so it fits realistically into the photo.

The Story Remix feature in the Photos app allows you to insert 3D objects directly into videos. For example, above is a video of a dinosaur placed into a football game, as the girl kicks a ball of flames. Watch the tutorial here.

Paint 3D is the updated version of the old favourite Windows Paint. It allows simple 3D creation and editing in a modern interface. You can doodle in 2D and convert it to a 3D object using the Magic Select tool and see your creations in MR through the Mixed Reality Viewer app.

The 3D Builder app allows you to build and 3D print any design you create, without having to know how CAD works. Previously you would need some CAD knowledge to 3D print, but 3D Builder simplifies this. You can also import designs you find online and print to any 3D printer that has a Windows-compatible printer driver. The app can be used as a reference and a test tool for 3D-editing, and for validating 3MF files that you create.

Taking it one step further, for advanced learners, you can create professional 2D or 3D games with Unity (3rd Party App).

For more in-depth information of the newest features of the Fall Creators Update, read here.

 

Our mission at Microsoft is to equip and empower educators to shape and assure the success of every student. Any teacher can join our effort with free Office 365 Education, find affordable Windows devices and connect with others on the Educator Community for free training and classroom resources. Follow us on Facebook and Twitter for our latest updates.


Announcement: Publish markdown files from your git repository to VSTS Wiki


This feature will be available in VSTS after the deployment of the Sprint 132 update is completed.

Now you can publish markdown files from a git repository to the VSTS Wiki. Developers often write SDK documents, product documentation, or README files explaining a product in a git repository. Such pages are often updated alongside code in the code repository. Git provides a friction-free experience where code and docs can live on the same branch, are reviewed together in pull requests, and are released together using the same build and release process.

There are a few issues with hosting such pages in code repositories:

  1. Viewers of this documentation need to sift through tons of code to find and read the content.
  2. Documentation inherently needs ordering, i.e. a structure that can dictate a page hierarchy. Git does not offer ordering of files, therefore you end up seeing all pages in alphabetical order only.
  3. Managing documentation versions is cumbersome, i.e. it is not easy to find documentation for product release 1.0, 2.0, or other past releases.
  4. There is no ubiquitous search or discovery to find all documentation in one place. Such pages are discoverable through code search, while the rest of the documentation is discoverable through wiki search.

You can read about more use cases in this user voice.

Solution

Now, VSTS Wiki allows you to publish markdown files from a folder in a git repository as Wiki pages. So let’s see how it works.

Publishing MD pages from code as Wiki

For the purpose of this blog post, I am creating a sample repository. Let's say I have a code repository on Azure Node.js SDK that contains all my code and documentation. The conceptual folder has a bunch of markdown files that I need to publish to VSTS Wiki.

Sample git repository containing code and documentation

If you are a first-time user of VSTS Wiki and you navigate to the Wiki hub, you see two options.

Publish code as Wiki shows up for first time users of Wiki

  • Create Wiki – creates your first project wiki (read more)
  • Publish code as Wiki – publishes markdown files from a git repository as Wiki

If you are already using Wiki, you will see Publish code as Wiki in the wiki breadcrumb.

Publish code as Wiki shows up on an existing Wiki 

When you click on Publish code as Wiki, you see a dialog to select a repository, branch, and folder that you want to publish as wiki. You can also give your wiki a friendly and memorable name.

Map a wiki to a git repo

If you use the entire repository for documentation purposes, then you can select the root of the repository to publish all markdown files in the repository as wiki.

When you publish a branch of a git repo as a wiki, the head of that branch is mapped to the wiki. Therefore, any changes made in this folder on the published branch are immediately reflected in the Wiki. There are no other workflows involved.

And that’s it. All the content from the git repo shows up in the Wiki. The Wiki is now a mirror image of the folder in the git repo, and it will continue to be updated as developers write documentation in the repo.

Markdown files published from git repository to VSTS Wiki 

Once you click Publish, we look for all markdown files recursively under the folder and its sub-folders. If a sub-folder does not contain a markdown file, it will not show up in the Wiki.

Edit pages & add new pages

You can also add new pages or edit existing pages in the git repository and these pages will appear in the wiki immediately. Clicking on the Edit page button in Wiki takes you to the underlying git repository.

Publish another version

As your product grows, you may ship multiple versions of it, and you need documentation for each of those versions.

Wiki now supports publishing multiple versions of the content. You can click on Publish new version.

Publish another branch from code repository as wiki

Next, select the branch that indicates the new version, and click on Update.

Select another branch from code repository

Now you have two different versions of documentation residing in the same wiki.

 

VSTS Wiki showing documentation from multiple branches 

Reordering pages

Another issue with content hosted in version control is that you have no control over the ordering of files; everything shows up in alphabetical order. However, ordering is a basic need for content published in any wiki.

In VSTS Wiki, once you publish the content, you can add a .ORDER file at a folder level in your git repository to create your own page order.

.ORDER file to manage order of pages in Wiki

In the above example, I added a .ORDER file listing the titles of a few pages in a particular order (a sample is shown below). Once I commit this change, the ordering of pages in the wiki changes to match what I specified in the .ORDER file.
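
For context, a .ORDER file is simply a plain-text list of page names, one per line and without the .md extension, in the order you want the pages to appear. A hypothetical example (these page names are placeholders, not from the repository above):

README
Getting-Started
Authentication
Release-Notes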

Changed order in Wiki hierarchy

Promoting folders to Wiki pages

Once the markdown files are published as a Wiki, the sub-folders in the git repository appear as follows.

Folders in git mapped to VSTS Wiki 

In a wiki, there is an inherent need for a hierarchy of pages, so folders need better treatment. You can promote a git repo folder to a wiki page by following these simple steps:

  • If you want to promote the folder called 'includes' to a wiki page, create a page called 'includes.md' at the same level in the hierarchy as the folder

Promote a folder to a wiki page - includes.md is at the same hierarchy as includes folder

  • Commit the changes to the branch and view the page in Wiki. Now you see that 'includes' does not appear as a folder but rather appears as a wiki page.

VSTS Wiki shows a git folder as a Wiki page

Searching published wiki pages from code

All the markdown files published as wiki are searchable in the Wiki hub. Currently, files from the first published branch are indexed.

Un-publish a wiki

You may have published a code repository by mistake, or the repository content may turn stale, so you may need to un-publish a wiki.

You can simply click on "Unpublish wiki" to remove the mapping between the git repository and wiki.

 

Unpublish wiki

Unpublishing a wiki has no impact on the markdown pages in the code repository; all the pages in the git repository remain intact, and you can always publish them from the code repository again at a later time.

Unpublishing a wiki has no impact on the markdown pages in the code repository

When you unpublish a wiki, its pages are also removed from the search index, so they will no longer appear in wiki search once the wiki is unpublished.

What features are coming next?

  1. Fix broken links on page move or page rename (70 votes)

Report Issues or Feature request

If you have a feature request or want to report a bug:

  1. Report a Wiki bug on developer community
  2. New feature request for Wiki 

 

 

Azure Active Directory Authentication (Easy Auth) with Custom Backend Web API


A common scenario in web application development is a frontend web application accessing some backend API. The backend API may provide an interface to some shared business system or database (e.g., a customer or inventory database) and the frontend web application may be a business system interacting directly with customers or employees. We see this scenario when a frontend application uses services such as Microsoft Graph API. In a previous blog post, I have discussed how to configure web app authentication (a.k.a. Easy Auth) such that it provides user authentication for the web app but also grants a token to the Graph API. In this blog post, I will expand on this scenario by showing how one can do the same with a custom backend API.

The scenario is illustrated below:

Specifically, the backend Web API requires users to present a bearer token when accessing this API. This token authorizes the user to access the API and based on claims in the token the user may have access to all or parts of the API. However, the user does not access the API directly, rather access happens through a web app and the user will authenticate with Azure Active Directory (AAD) credentials when accessing the web app. Once authenticated, the user will obtain a token for accessing the backend API and the web app will present this token to the API when it needs to access it.

As illustrated, this scenario actually requires two AAD app registrations: one for the Web API and one for the web frontend. We will walk through how to create these and connect them appropriately so that when the user authenticates through the frontend web app's registration, they get a token for the backend API.

I have created a very simple example backend Web API, which you can find here on GitHub. This API uses bearer token authentication and allows users to create lists in a database; they can be to-do lists, shopping lists, etc. Each user only has access to their own lists. As you can see in the code, we use an Azure Active Directory app registration to set up the bearer token authentication.

To set up the app registration, go to the Azure portal and find the App Registrations pane in Active Directory:

After creating the app registration, we will modify its manifest to define some scopes for the API. The simple API above doesn't actually use any sophisticated features like that, but if you have ever had to select which delegated privileges an application requires, e.g. User.Read, User.ReadBasic.All, etc. for the Graph API, you will be familiar with the concept, and you may wish to define such scopes for your API. Hit the "Manifest" button to edit:

I will not repeat the whole manifest here, but to define a new scope called ListApi.ReadWrite, I have modified the oauth2Permissions section as follows. The id for the new scope is a GUID that you can generate yourself, e.g. with the New-Guid PowerShell cmdlet.

  "oauth2Permissions": [
    {
      "adminConsentDescription": "Allow application access to ListApi on behalf of user",
      "adminConsentDisplayName": "Access ListApi (ListApi.ReadWrite)",
      "id": "08b581aa-6278-4811-a112-e625a6eb8729",
      "isEnabled": true,
      "type": "User",
      "userConsentDescription": "Allow application access to ListApi on behalf of user",
      "userConsentDisplayName": "Access ListApi (ListApi.ReadWrite)",
      "value": "ListApi.ReadWrite"
    },
    {
      "adminConsentDescription": "Allow the application to access ListApi on behalf of the signed-in user.",
      "adminConsentDisplayName": "Access ListApi",
      "id": "de81bb7a-4378-4e61-b30b-03759f49ba53",
      "isEnabled": true,
      "type": "User",
      "userConsentDescription": "Allow the application to access ListApi on your behalf.",
      "userConsentDisplayName": "Access ListApi",
      "value": "user_impersonation"
    }
  ]

After this, we are done with the app registration for the API. Make a note of the application ID, and you should also make a note of your AAD tenant ID. You can deploy the API in an Azure API App; you will need to define the following app settings:

{
  "ADClientID": "SET-TO-APPLICATION-ID",
  "ADInstance": "https://login.microsoftonline.com/TENANT-NAME-OR-ID/"
} 

In this case, the app registration is in an AAD tenant in the commercial cloud (as indicated by the login.microsoftonline.com URL) but you could have used an Azure Government tenant instead.
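
If you prefer to set these app settings from PowerShell rather than in the portal, a minimal sketch with the AzureRM module could look like the following; the resource group and app names are placeholders, and note that -AppSettings replaces the whole app settings collection:

# Apply the two settings the API expects (names below are placeholders; -AppSettings overwrites existing settings)
$settings = @{
    "ADClientID" = "SET-TO-APPLICATION-ID";
    "ADInstance" = "https://login.microsoftonline.com/TENANT-NAME-OR-ID/"
}
Set-AzureRmWebApp -ResourceGroupName "ListApiResourceGroup" -Name "listapi" -AppSettings $settings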

With the API deployed, we will move on to the app registration for the frontend web application. In the same way as before, we will create a new app registration. Make a note of the Application ID; for this application, we will also need to create a key. To understand why this is necessary, consider that once the user is authenticated, we will need to ask for a token for the backend API on behalf of the user, and in that role this app registration acts as an identity of sorts and needs a password. So hit the Keys pane:

You can create a key by giving it a name and a duration and hitting Save. The key is displayed only once, when you hit Save; after you navigate away from that page, it will not be shown again, so make sure you make a note of it.
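
The key can also be created from PowerShell instead of the portal. A rough sketch with the AzureRM module; the object ID and secret value below are placeholders, and the one-year lifetime is an arbitrary assumption:

# Add a credential (key) to the frontend app registration
$secret = ConvertTo-SecureString -String 'YOUR-GENERATED-SECRET' -AsPlainText -Force
New-AzureRmADAppCredential -ObjectId 'OBJECT-ID-OF-FRONTEND-APP-REGISTRATION' `
    -Password $secret -EndDate (Get-Date).AddYears(1)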

The next step is to connect this app registration to the app registration for the backend API. You do this in the same way that you add "Required Permissions" for, say, the Graph API:

In this case, hit "Select API" and search for the name of the app registration that you created for the API. Once you select it, you can choose which scopes the application will need:

Notice that the scope we defined above shows up and you can select it. Before we actually deploy a client, this is a good time to check that all of this works. I will show you how to do this using Postman. The first step is to select OAuth 2.0 authentication:

Then hit "Get New Access Token", which will bring up this dialog:

Here are the values you need to enter:

  • Grant Type: Authorization Code
  • Callback URL: https://www.getpostman.com/oauth2/callback
  • Auth URL: https://login.microsoftonline.com/TENANT-ID-OR-NAME/oauth2/authorize?resource=APPLICATION-ID-FOR-WEB-API
  • Access Token URL: https://login.microsoftonline.com/TENANT-ID-OR-NAME/oauth2/token
  • Client ID: APPLICATION-ID-FOR-FRONT-END-WEB-APP
  • Client Secret: YOUR SECRET CREATED IN PORTAL
  • Scope: ListApi.ReadWrite
  • Client Authentication: Send client credentials in body

A note on the "Callback URL": you have to register this callback URL on your app registration for this to work; otherwise you will get an error saying it is not a registered callback URL and you should add it. You will also need to add the callback URL for the actual client application. Say we deploy the client in a web app called mywebapp and set up Easy Auth (see below); the callback URL would be something like https://mywebapp.azurewebsites.net/.auth/login/aad/callback. Here is how those reply URLs look in this example:
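
As an aside, if you would rather script the frontend app registration together with its reply URLs than add them in the portal, a rough sketch with the AzureRM module might look like this; the display name and identifier URI are placeholder values, not taken from the setup above:

# Create the frontend app registration with both reply URLs up front
New-AzureRmADApplication -DisplayName "ListClientFrontend" `
    -IdentifierUris "https://mytenant.onmicrosoft.com/ListClientFrontend" `
    -ReplyUrls @(
        "https://www.getpostman.com/oauth2/callback",
        "https://mywebapp.azurewebsites.net/.auth/login/aad/callback"
    )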

After hitting "Request Token" you will be taken through the login procedure for AAD and the token should be displayed. If you have the option of using a PIN, don't; Postman doesn't handle that well, so use username and password (combined with MFA) instead. Hit "Use Token" at the bottom of the dialog and it will populate the token field in Postman. After that, you should be able to query the backend API:
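
Outside of Postman, you could exercise the same call from PowerShell once you have a token. A minimal sketch, assuming $accessToken holds the bearer token you obtained and that the sample API exposes a GET on the same route used in the delete example further down:

# Call the sample List API with the bearer token
$headers = @{ Authorization = "Bearer $accessToken" }
Invoke-RestMethod -Uri "https://listapi.azurewebsites.net/api/list" -Headers $headers -Method Get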

Now that we know we have a backend API and we can get a token for it using the app registration for the frontend web app, we can go ahead and deploy the frontend web application (the client). I have written a simple List Client MVC application, which calls the backend API. The application is designed to be deployed with Easy Auth. Once the user has authenticated, and provided we have made sure that access to the backend resource is requested (see below), we can use code such as this:

        public async Task<IActionResult> Delete(int id)
        {
            string accessToken = Request.Headers["x-ms-token-aad-access-token"];

            var client = new HttpClient();
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
            var response = await client.DeleteAsync("https://listapi.azurewebsites.net/api/list/" + id.ToString());

            return RedirectToAction("Index");
        }

In this example function we are deleting a list item. Notice that we are using the x-ms-token-aad-access-token request header, which contains the bearer token after sign-in, and we pass that token on to the API.

You can deploy the code into an Azure Web App and then set up Easy Auth with PowerShell. Because we need to make sure that we set additionalLoginParams for the web app authentication, it is not straightforward to do this in the portal. If you would like to learn how to set it manually using Postman, you can look at this blog post. I have also made a PowerShell tool called Set-WebAppAADAuth that does it all for you; it is part of my HansenAzurePS PowerShell module as of version 0.5. You can install this module with:

Install-Module HansenAzurePS

Or browse the source if you just want to see the steps for setting up authentication. With the tool, you can set the authentication as seen below. Here is a script that creates a web app and sets the authentication correctly:

$ClientID = "APP-REG-ID-FRONTEND-WEB-APP"
$ClientSecret = "SECRET-FOR-FRONTEND-WEBAPP"
$IssuerUrl = "https://login.microsoftonline.com/TENANT-ID-OR-NAME/"
$AdditionalLoginParams = @( "resource=APP-REG-ID-BACKEND-WEB-API")

$ResourceGroupName = "ListClientResourceGroup"
$WebAppName = "ListClientAppName"
$Location = "eastus"

#New resource group
$rg = New-AzureRmResourceGroup -Name $ResourceGroupName -Location $Location

#App Service plan
$asp = New-AzureRmAppServicePlan -Name $WebAppName `
-ResourceGroupName $rg.ResourceGroupName -Location $rg.Location

#New Web App
$webapp = New-AzureRmWebApp -ResourceGroupName $rg.ResourceGroupName `
-AppServicePlan $asp.Id -Location $rg.Location -Name $WebAppName

#Finally the authentication settings.
Set-WebAppAADAuth -ResourceGroupName $rg.ResourceGroupName `
-WebAppName $webapp.Name -ClientId $ClientID `
-ClientSecret $ClientSecret -IssuerUrl $IssuerUrl `
-AdditionalLoginParams $AdditionalLoginParams `
-Environment AzureCloud

The Environment parameter refers to the cloud where your web app is located. In the example above, we are using AAD in commercial Azure and the web app is also in commercial Azure, but you can mix and match to suit your specific scenario.

With the application deployed, you can now try to access it. You should get prompted for login credentials and you will be asked to accept that the application will have access to information on your behalf, including access to the List API as defined above:

If you hit OK, you will get access and through the "List" link at the top of the page, you can now access your list (e.g., to do list or shopping list):

The toy application used in this example is not intended to be production code. The List API uses an in memory database, so when the app restarts all the lists are gone. It is only intended to be used for illustration purposes.

The workflow presented above, where we add the additional resource parameter to get a token for the API, only solves part of the problem if you need to access multiple backend APIs. Say, for instance, you want to access the Graph API and a custom API. In that case, you need to use an "on-behalf-of" workflow in your code to obtain tokens for additional APIs. This may be the subject of a later blog post; in the meantime, if you want to learn more about Azure Active Directory and how to integrate it with your applications, please consult the Azure Active Directory Developers Guide, where you can find lots more information about app registrations and various authentication workflows.

Let me know if you have questions/comments/suggestions.

Getting started with Azure App Services Development


In this post, Application Development Manager, Vijetha Marinagammanavar, demonstrates how to get started with Azure App Services.


To get started with Azure development we need Visual Studio 2013 or later, the Azure SDK, and an active Azure subscription. We are using Visual Studio 2017 for this demo.

If you are using VS2013, download the SDK from https://azure.microsoft.com/en-us/downloads/



Figure 1 Download Azure SDK


Follow the instructions below to deploy a web application to Azure by creating an App Service plan.

Azure App Service is a fully managed "Platform as a Service" (PaaS) that integrates Microsoft Azure Websites, Mobile Services, and BizTalk Services into a single service, adding new capabilities that enable integration with on-premises or cloud systems.


1. Open VS2017

Note: The same steps apply to previous versions of Visual Studio.


2. Create a new Web Application in Visual Studio by following the path

Click on File –> New –> Project –> Templates –> Visual C# –> Web


Figure 2 Create new Web Application in Visual Studio


3. Create an MVC application with authentication set to Individual User Accounts; this authentication type allows us to register users and maintain their profiles in a SQL Server database.


Figure 3 MVC project with Authentication set to Individual User


4. After creating the project, it looks like the screenshot below.


Figure 4 MVC Web Application


5. Now let’s build the project.

Build: Click on Build then Build Solution


Figure 5 Build the project


6. The next step is to publish the application to Azure using an active subscription. Right-click on the project in Solution Explorer and select the Publish option.


Figure 6 Publishing MVC project from Visual Studio 2017


7. Now choose Microsoft Azure App Service and hit Publish. Then add the Microsoft account that holds your active Azure subscription.


Figure 7 Create Microsoft Azure App Service Plan using VS2017


8. Now name your web application and choose your active subscription.

Create a new resource group if needed. The same applies to the App Service plan: you can create a new one if none exists. Choose a location that is near you; check http://azurespeedtest.azurewebsites.net/ for the response times of different data centers to find the best location for you.

An App Service plan is the container for your app. The App Service plan settings determine the location, features, cost, and compute resources associated with your app.
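
For reference, the resource group, App Service plan, and web app that the publish wizard creates can also be provisioned from PowerShell. A minimal sketch with the AzureRM module; all names, the location, and the Free tier are placeholder choices:

# Create a resource group, an App Service plan, and a web app
New-AzureRmResourceGroup -Name "MyWebAppRG" -Location "East US"
New-AzureRmAppServicePlan -Name "MyAppServicePlan" -ResourceGroupName "MyWebAppRG" -Location "East US" -Tier "Free"
New-AzureRmWebApp -Name "MyUniqueWebAppName" -ResourceGroupName "MyWebAppRG" -Location "East US" -AppServicePlan "MyAppServicePlan"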

After some time, it will automatically take you to the website you have created.


Figure 8 Web App is created using Visual Studio 2017


9. Now it is time to publish our Application to the server

Just change the Home page view, which is under Views –> Home –> Index.cshtml, and make some changes to this page.

Note: I have made changes to the text inside the jumbotron CSS class.


Figure 9 Change the Index.cshtml in MVC Project


10. Right click on the Project and hit Publish.


Figure 10.a Publishing Profile connected to Azure


Hit the Publish button once you reach the above screen. It may take some time to upload the files the first time, depending on your internet connection's upload speed. Once uploaded, we can see our change is reflected.


Figure 10.b Publishing Change using Visual Studio 2017 to Azure App Service

You have successfully created a new App Service plan using Visual Studio and the Azure SDK and published your very first change to Azure App Service.


Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

Export Azure SQL Database to local path


We noticed a few requests in our support queue asking whether it is feasible to export an Azure SQL Database directly to a local path. Follow the steps below to build a PowerShell script that does the job: it copies the Azure SQL Database to another database for consistency, exports that copy to blob storage, and then connects to the storage container and downloads the blob files locally:

First, you need to save your Azure login credentials so that the saved profile can be used later to automate login to your Azure subscription. To do so, please follow the steps below:

  1. Open Windows PowerShell ISE
  2. Copy/paste the commands below
# Setup - first log in manually
Add-AzureRmAccount

# Now save your context locally (-Force will overwrite an existing file)
$path = "C:\AzurePS\ProfileContext.ctx"
Save-AzureRmContext -Path $path -Force

# Once the above two steps are done, you can simply import the saved context
$path = "C:\AzurePS\ProfileContext.ctx"
Import-AzureRmContext -Path $path

 

 

A new window opens where you enter your username and password to log in to your Azure subscription.

 

 

Once you have authenticated, your Azure subscription information is listed and saved as shown below.

 

 

To verify, navigate to the local path; you will find the ProfileContext.ctx file created as shown below.

 

  1. Now that the ProfileContext data is saved locally, copy/paste the PowerShell script below into a new notepad file and save it as CopyFilesFromAzureStorageContainer.ps1
  2. Note that the placeholder parameter values need to be filled in before executing this PowerShell script manually, as in the example below

PS C:\bacpac> .\CopyFilesFromAzureStorageContainer.ps1 -ResourceGroupName $ResourceGroupName -ServerName $ServerName -DatabaseName $DatabaseName -CopyDatabaseName $CopyDatabaseName -LocalPath $LocalPath -StorageAccountName $StorageAccountName -ContainerName $ContainerName

 

 

<#
.SYNOPSIS
    Export Azure SQL Database to Blob storage and download the exported *.bacpac file from blob to local path
.DESCRIPTION
    This PowerShell script exports an Azure SQL DB to blob storage and then copies blobs from a single storage container to a local directory. 
   
    The script supports the -Whatif switch so you can quickly see how complex the copy
    operation would be.

.EXAMPLE

    .\CopyFilesFromAzureStorageContainer -LocalPath "c:\users\myUserName\documents" `
        -ServerName "myservername" -DatabaseName "myDBname" -ResourceGroupName "myresourcegroupname" -StorageAccountName "mystorageaccount" -ContainerName "myuserdocuments" -Force
#>
[CmdletBinding(SupportsShouldProcess = $true)]
param(
    # The destination path to copy files to.
    [Parameter(Mandatory = $true)]
    [string]$LocalPath,

    # The name of the SQL Server to connect to.
    [Parameter(Mandatory = $true)]
    [string]$ServerName,

    # The name of the SQL database to export.
    [Parameter(Mandatory = $true)]
    [string]$DatabaseName,

    # The name of the resource group that contains the SQL Server, SQL Database, and storage account.
    [Parameter(Mandatory = $true)]
    [string]$ResourceGroupName,

    # The name of the storage account to copy files from.  
    [Parameter(Mandatory = $true)]
    [string]$StorageAccountName,

    # The name of the database copy to create and export.
    [Parameter(Mandatory = $true)]
    [string]$CopyDatabaseName,

    # The name of the storage container to copy files from.  
    [Parameter(Mandatory = $true)]
    [string]$ContainerName
)
        # Login to Azure subscription
        $path = 'C:\AzurePS\ProfileContext.ctx'
        Import-AzureRmContext -Path $path
        
        # $DatabaseName = "DBName"
        # $ServerName = "ServerName"
        # $ResourceGroupName = "ResourceGroupName"
        # $StorageAccountName = "StorageAccountName"
        # $ContainerName = "StorageContainerName"
        # $LocalPath = "C:\LocalPath"
        
        
        # Create a credential
        $ServerAdmin = "serverlogin"
        $Password = ConvertTo-SecureString -String 'password' -AsPlainText -Force
        $Credential = New-Object -TypeName System.Management.Automation.PSCredential `
        -ArgumentList $ServerAdmin, $Password
        

        # Generate a unique filename for the BACPAC
        $bacpacFilename = "$DatabaseName" + (Get-Date).ToString("yyyy-MM-dd-HH-mm") + ".bacpac"


        # Blob storage information
        $StorageKey = "YOUR STORAGE KEY"
        $BaseStorageUri = "https://STORAGE-NAME.blob.core.windows.net/BLOB-CONTAINER-NAME/"
        $BacPacUri = $BaseStorageUri + $bacpacFilename
        New-AzureRmSqlDatabaseCopy -ResourceGroupName $ResourceGroupName -ServerName $ServerName -DatabaseName $DatabaseName -CopyResourceGroupName $ResourceGroupName -CopyServerName $ServerName -CopyDatabaseName $CopyDatabaseName
        
        Write-Output "Azure SQL DB $CopyDatabaseName Copy completed"

        # Create a request
        $Request = New-AzureRmSqlDatabaseExport -ResourceGroupName $ResourceGroupName -ServerName $ServerName `
        -DatabaseName $CopyDatabaseName -StorageKeytype StorageAccessKey -StorageKey $StorageKey `
        -StorageUri $BacPacUri -AdministratorLogin $Credential.UserName `
        -AdministratorLoginPassword $Credential.Password


        # Check status of the export
        $exportStatus = Get-AzureRmSqlDatabaseImportExportStatus -OperationStatusLink $Request.OperationStatusLink
        [Console]::Write("Exporting")
        while ($exportStatus.Status -eq "InProgress")
        {
        $exportStatus = Get-AzureRmSqlDatabaseImportExportStatus -OperationStatusLink $Request.OperationStatusLink
        Start-Sleep -s 10
        }
        $exportStatus
        $Status= $exportStatus.Status
        if($Status -eq "Succeeded")
        {
        Write-Output "Azure SQL DB Export $Status for "$DatabaseName""
        }
        else
        {
        Write-Output "Azure SQL DB Export Failed for "$DatabaseName""
        }
            

        # Download file from azure
        Write-Output "Downloading"
        $StorageContext = Get-AzureRmStorageAccount -Name $StorageAccountName -ResourceGroupName $ResourceGroupName 
        $StorageContext | Get-AzureStorageBlob -Container $ContainerName -blob $bacpacFilename | Get-AzureStorageBlobContent -Destination $LocalPath
        $Status= $exportStatus.Status
        if($Status -eq "Succeeded")
        {
        Write-Output "Blob $bacpacFilename Download $Status for "$DatabaseName" To $LocalPath"
        }
        else
        {
        Write-Output "Blob $bacpacFilename Download Failed for "$DatabaseName""
        } 

        # Drop Copy Database after successful export
        Remove-AzureRmSqlDatabase -ResourceGroupName $ResourceGroupName `
        -ServerName $ServerName `
        -DatabaseName $CopyDatabaseName `
        -Force
         
        Write-Output "Azure SQL DB $CopyDatabaseName Deleted"

The above script can be saved and triggered manually. To automate the process and set up a scheduled task, we can use Windows Task Scheduler to run the PowerShell script on a schedule of your preference. To do so, please follow the steps below:

  1. Now we need to automate this PowerShell script and run it using Windows Task Scheduler.
  2. Copy/paste the PowerShell script below into a new notepad file and save it as CopyFilesFromAzureStorageContainerV2.ps1.
  3. The hard-coded values at the top of the script (database, server, resource group, storage account, container, and local path) need to be updated. To run the script automatically, follow the steps below (a scripted alternative is sketched after this list):
    1. Create a new scheduled task
    2. In the Actions tab, set the program to powershell.exe at the local path C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
    3. In "Add arguments (optional)", copy/paste the full local path to your PowerShell script, e.g. C:\bacpac\CopyFilesFromAzureStorageContainerV2.ps1
    4. Add a trigger (schedule) according to your needs and then save it
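
If you prefer to register the scheduled task from PowerShell rather than clicking through Task Scheduler, the built-in ScheduledTasks module (Windows 8 / Windows Server 2012 and later) can do it. A minimal sketch, assuming the script path above and an arbitrary daily 2:00 AM schedule:

# Register a daily task that runs the export/download script
$action  = New-ScheduledTaskAction -Execute "powershell.exe" `
           -Argument "-ExecutionPolicy Bypass -File C:\bacpac\CopyFilesFromAzureStorageContainerV2.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At 2am
Register-ScheduledTask -TaskName "ExportAzureSqlDbToLocalPath" -Action $action -Trigger $trigger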

<#
.SYNOPSIS
    Export Azure SQL Database to Blob storage and download the exported *.bacpac file from blob to local path
.DESCRIPTION
    This PowerShell script exports an Azure SQL DB to blob storage and then copies blobs from a single storage container to a local directory. 
   
    The script supports the -Whatif switch so you can quickly see how complex the copy
    operation would be.

.EXAMPLE

    .\CopyFilesFromAzureStorageContainer -LocalPath "c:\users\myUserName\documents" `
        -ServerName "myservername" -DatabaseName "myDBname" -ResourceGroupName "myresourcegroupname" -StorageAccountName "mystorageaccount" -ContainerName "myuserdocuments" -Force
#>

        # Login to Azure subscription
        $path = 'C:\AzurePS\ProfileContext.ctx'
        Import-AzureRmContext -Path $path
        
        $DatabaseName = "hidden"
        $CopyDatabaseName = $DatabaseName + "_Copy"
        $ServerName = "hidden"
        $ResourceGroupName = "hidden"
        $StorageAccountName = "hidden"
        $ContainerName = "bacpac"
        $LocalPath = "C:\localpath"
        
        
        # Create a credential
        $ServerAdmin = "serverlogin"
        $Password = ConvertTo-SecureString -String 'password' -AsPlainText -Force
        $Credential = New-Object -TypeName System.Management.Automation.PSCredential `
        -ArgumentList $ServerAdmin, $Password
        

        # Generate a unique filename for the BACPAC
        $bacpacFilename = "$DatabaseName" + (Get-Date).ToString("yyyy-MM-dd-HH-mm") + ".bacpac"


        # Blob storage information
        $StorageKey = "YOUR STORAGE KEY"
        $BaseStorageUri = "https://StorageAccountName.blob.core.windows.net/ContainerName/"
        $BacPacUri = $BaseStorageUri + $bacpacFilename
        New-AzureRmSqlDatabaseCopy -ResourceGroupName $ResourceGroupName -ServerName $ServerName -DatabaseName $DatabaseName -CopyResourceGroupName $ResourceGroupName -CopyServerName $ServerName -CopyDatabaseName $CopyDatabaseName
        
        Write-Output "Azure SQL DB $CopyDatabaseName Copy completed"

        # Create a request
        $Request = New-AzureRmSqlDatabaseExport -ResourceGroupName $ResourceGroupName -ServerName $ServerName `
        -DatabaseName $CopyDatabaseName -StorageKeytype StorageAccessKey -StorageKey $StorageKey `
        -StorageUri $BacPacUri -AdministratorLogin $Credential.UserName `
        -AdministratorLoginPassword $Credential.Password


        # Check status of the export
        $exportStatus = Get-AzureRmSqlDatabaseImportExportStatus -OperationStatusLink $Request.OperationStatusLink
        [Console]::Write("Exporting")
        while ($exportStatus.Status -eq "InProgress")
        {
        $exportStatus = Get-AzureRmSqlDatabaseImportExportStatus -OperationStatusLink $Request.OperationStatusLink
        Start-Sleep -s 10
        }
        $exportStatus
        $Status= $exportStatus.Status
        if($Status -eq "Succeeded")
        {
        Write-Output "Azure SQL DB Export $Status for "$DatabaseName""
        }
        else
        {
        Write-Output "Azure SQL DB Export Failed for "$DatabaseName""
        }

               

        # Download file from azure
        Write-Output "Downloading"
        $StorageContext = Get-AzureRmStorageAccount -Name $StorageAccountName -ResourceGroupName $ResourceGroupName 
        $StorageContext | Get-AzureStorageBlob -Container $ContainerName -blob $bacpacFilename | Get-AzureStorageBlobContent -Destination $LocalPath
        $Status= $exportStatus.Status
        if($Status -eq "Succeeded")
        {
        Write-Output "Blob $bacpacFilename Download $Status for "$DatabaseName" To $LocalPath"
        }
        else
        {
        Write-Output "Blob $bacpacFilename Download Failed for "$DatabaseName""
        }

        # Drop Copy Database after successful export
        Remove-AzureRmSqlDatabase -ResourceGroupName $ResourceGroupName `
        -ServerName $ServerName `
        -DatabaseName $CopyDatabaseName `
        -Force
         
        Write-Output "Azure SQL DB $CopyDatabaseName Deleted"

We hope you enjoyed this article; we appreciate your comments and feedback!

devconf’s log #2 – Sun, Mandela, and no Bowtie


Continued from DevConf’s log #1 – Flights, floods, and lack of sleep. Spent the weekend relaxing, mentally preparing for the conference, and enjoying the sun, food, and discussions.

Part of the mental preparation is to practice the session and ensure the 60-75min talk can be delivered in the 45min DevConf slot. I focused on grounding my feet to create the foundation of a relaxed, automated, and passionate session that delivers value. Recognise the practice of “continuous learning and improving”?

Here are a few pictures to give visual context:

Conference centre without the rain.

It’s called the RED cross for a reason – I give up!

Enjoyed a long catch-up lunch with my good old friend at Mandela Square.

Only South-Africans will understand me enjoying a Spur Goodie Burger, Black Label, and Biltong Smile


Enjoyed a great dinner and chat with Robert, who’s one of the pivotal organizers of the DevConf event. A few photos from the DevConf presentation rooms. Notice the lack of a bowtie?

T-2 sleeps until the event begins Smile
