
Doing more with functions: Verbose logging, Risk mitigation, and Parameter Sets


Welcome back to PowerShell for Programmers! This week I'm trying GitHub Gist again for the code blocks. Let me know what you think about it vs the normal syntax highlighter I use 🙂

As we've seen in the other posts about functions, attributes are a really cool thing to extend the features we have available to ourselves and for our users. This post is going to deal with an attribute for the function itself as well as ones for the individual parameters. This will let us make our functions behave more like any other cmdlet, by giving us access to the common parameters that users are used to seeing.

Common Parameters

Common parameters are available on every single cmdlet. This means that users who are going to consume your functions are used to seeing them. If you're just coming into PowerShell and unfamiliar with them, take a look here. We can get access to these in our code by adding the attribute [cmdletbinding()] at the top of our functions or scripts (before the param keyword).

If we take a look at the syntax or just use the intellisense we can see they have been added:

Note that our custom parameters will always come before the common ones, so we know that once we see -verbose there won't be any more stuff we created.

While they are there, they might not actually do anything yet. The common parameters are all about overriding some default behaviors of PowerShell. For example, -verbose or -debug will just make those streams visible for that command. If we have Write-Verbose lines in our function then -verbose will show those lines:
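Since the original gist isn't reproduced here, here is a minimal sketch of the idea (the function name and messages are just illustrative):

function Get-Widget {
    [cmdletbinding()]
    param(
        [string]$Name = 'DefaultWidget'
    )
    # Only visible when the caller passes -Verbose
    Write-Verbose "Looking up widget '$Name'"
    # Custom error messages respect -ErrorAction as well
    if ($Name -eq 'Broken') { Write-Error "Widget '$Name' is broken" }
    "Widget: $Name"
}

Get-Widget -Name 'Gizmo' -Verbose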

Error action is going to work for us for everything in our code, including custom error messages:

It can be really nice to get this kind of functionality into your tool sets before you distribute them to others, whether they are on your team, in your company or a broader online audience.

Risk Mitigation

The [cmdletbinding()] attribute can also enable a bunch of different flags inside of it. Two really common ones are:

  • "SupportsShouldProcess" which gives us the Risk Mitigation parameters -whatif and -confirm. These are similar to the common parameters, but they aren't on everything, they are just on a lot of stuff.
  • "ConfirmImpact" which tells PowerShell how dangerous your code is and whether or not it should default a confirmation prompt, or just show it when using -confirm

To leverage these, we are also going to use a built in variable called $PSCmdlet, which contains a bunch of useful metadata and methods for doing interesting things to our functions that we want to be cmdlets.

I've gone and added risk mitigation, but as you can see it isn't doing anything effective yet:

What we can do is wrap our "action code" in an if statement. The action code is whatever you want -whatif to block from happening. For example, most of your code is probably grabbing and looking at data to make decisions, and you still probably want that to happen so you can give a meaningful "whatif:" message, but instead of deleting/changing data you might want to wrap that up for the -whatif to block.

OK, but how do I determine whether or not the if statement runs?

Great question! That's where $PSCmdlet comes in. One of its methods is called "ShouldProcess"; it returns a bool, and it has a bunch of overloads you can see on the docs page. If the user passes -whatif, ShouldProcess will return false. If they pass -confirm and answer No in the prompt, it will also return false. Additionally, ShouldProcess is what generates the "What if:" and confirmation prompt messages. The most common overload is this one:

 
public bool ShouldProcess (string target, string action);

Let's see that in action with PowerShell:
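The gist isn't reproduced here, so here is a minimal sketch of the pattern (the function name and messages are illustrative):

function Remove-Widget {
    [cmdletbinding(SupportsShouldProcess = $true)]
    param(
        [string]$Name
    )
    # Safe "read" work can stay outside the check
    Write-Verbose "Found widget '$Name'"

    # Only the dangerous action code goes inside ShouldProcess
    if ($PSCmdlet.ShouldProcess($Name, "Delete widget")) {
        Write-Output "Deleting widget '$Name'"
    }
}

Remove-Widget -Name 'Gizmo' -WhatIf     # prints the "What if:" message instead of running the action code
Remove-Widget -Name 'Gizmo' -Confirm    # prompts before running the action code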

Finally, let's flag our code as dangerous. ConfirmImpact can be "Low", "Medium", or "High".

Notice how it will pop up any time you call it, regardless of whether you typed -confirm. This is controlled by the preference variable $ConfirmPreference if you ever want to change it. You can also override it inline with -confirm:$false

Parameter Sets

Using parameter sets can really let you extend your toolset. Think of Parameter Sets as overloads for your function. Instead of declaring them all separately like you might in C#, we have to build them all together in PowerShell. A good example might be if you had a function to do the same work on different objects, so the only part that might change is fetching the objects:

  • Stop-Process is a good place to see this, as it has just 3 parameter sets: one that takes in process IDs, one that takes in process names, and one that takes in process objects. All of these scenarios use the same action code, but fetch the process data differently.
  • If you had code for AD users and computers you could offer a -computer set that fetches a computer and a -user set that fetches a user, but then the code that acts on the data could remain the same.

To do this, we go back to our friend the [parameter()] attribute. We can specify the set we want each parameter in, and we can have it be mandatory or optional for each set we specify. If you do not specify a set, then it will be in all sets.
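A minimal sketch of what that declaration might look like (the function and set names are illustrative):

function Get-Widget {
    [cmdletbinding()]
    param(
        [parameter(Mandatory = $true, ParameterSetName = 'ById')]
        [int]$Id,

        [parameter(Mandatory = $true, ParameterSetName = 'ByName')]
        [string]$Name,

        # No set specified, so -Detailed is available in every set
        [switch]$Detailed
    )
}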

Notice now we can use either set, but we can't use parameters from different sets together.

But how can I make sure I only run the code I want for each set?

We will leverage $PSCmdlet again; among the metadata it holds is which parameter set was used:

Finally, if someone tries to use our parameters positionally we might run into some issues due to the sets. We can let PowerShell know which one to use as the default with [cmdletbinding()] like this:
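The gist isn't reproduced here; below is a sketch consistent with the prompt shown afterwards (the function, parameter, and set names are illustrative), which also shows the $PSCmdlet.ParameterSetName check mentioned above:

function MyFunction {
    [cmdletbinding(DefaultParameterSetName = 'ByProcess')]
    param(
        [parameter(Mandatory = $true, ParameterSetName = 'ByProcess', Position = 0)]
        [string]$Process,

        [parameter(Mandatory = $true, ParameterSetName = 'ById')]
        [int]$Id
    )
    switch ($PSCmdlet.ParameterSetName) {
        'ByProcess' { "Fetching by process name $Process" }
        'ById'      { "Fetching by id $Id" }
    }
}

# Called with no arguments, PowerShell now knows which set to prompt for:
MyFunction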

 
cmdlet MyFunction at command pipeline position 1
Supply values for the following parameters:
Process:

Well that’s all for now, hopefully this helps you as you start to build out tools in PowerShell that you want to add that professional shine to!

For the main series post, check back here.

If you find this helpful don't forget to rate, comment and share 🙂


Creating Azure SQL Managed Instance using ARM templates


Azure API enables you to create Azure SQL Managed Instance using ARM templates. These are JSON objects that contain definition of resources that should be created. You can send these objects to the Azure REST API to automate creation of Azure SQL Managed Instance.

In order to create a new Azure SQL Managed Instance, you need to create an ARM JSON request. An example is shown in the following script (the important part is under the resources node):

{
    "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
    "contentVersion": "1.0.0.1",
    "parameters": {
        "pwd": {
            "type": "securestring"
        }
    },
    "resources": [
        {
            "name": "jovanpoptest",
            "location": "westcentralus",
            "tags": {"Owner":"JovanPop","Purpose":"Test"},
            "sku": {
                "name": "GP_Gen4"
            },
            "properties": {
                "administratorLogin": "Login that will connect to the instance",
                "administratorLoginPassword": "[parameters('pwd')]",
                "subnetId": "/subscriptions/ee5ea899-0791-9270-77cd8273794b/resourceGroups/cl_pilot/providers/Microsoft.Network/virtualNetworks/cl_pilot/subnets/CLean",
                "storageSizeInGB": "256",
                "vCores": "16",
                "licenseType": "BasePrice"
            },
            "type": "Microsoft.Sql/managedInstances",
            "identity": {
                "type": "SystemAssigned"
            },
            "apiVersion": "2015-05-01-preview"
        }
    ]
}

Values that you need to change in this request are:

  • name - name of your Azure SQL Managed Instance (don't include domain).
  • properties/administratorLogin - SQL login that will be used to connect to the instance.
  • properties/subnetId - Azure identifier of the subnet where Azure SQL Managed Instance should be placed. Make sure that you properly configure the network for Azure SQL Managed Instance.
  • location - one of the valid locations for Azure data centers, for example: "westcentralus"
  • sku/name: GP_Gen4 or GP_Gen5
  • properties/vCores: Number of cores that should be assigned to your instance. Values can be 8, 16, or 24 if you select the GP_Gen4 sku name, or 8, 16, 24, 32, or 40 if you select GP_Gen5.
  • properties/storageSizeInGB: Maximum storage space for your instance. It should be a multiple of 32 GB.
  • properties/licenseType: Choose LicenseIncluded if you don't have a SQL Server on-premises license, or BasePrice if you have one and want the Azure Hybrid Benefit discount.
  • tags (optional) - optionally put some key:value pairs that you would use to categorize the instance.

Note that you cannot enter the password as plain text - you need to declare it as a securestring parameter and pass the value in via PowerShell.

Once you create this JSON template you should save it to your local computer in a file (for example c:\temp\newmi.json) and use this file as the input for the PowerShell command that will execute it.

Invoking ARM template

In order to execute the ARM template, you need to install Azure RM PowerShell. In most cases the following three commands will install everything that you need:

Install-Module PowerShellGet -Force
Install-Module -Name AzureRM -AllowClobber
Install-Module -Name AzureRM.Sql -AllowPrerelease -Force

Then, you need to run something like the following PowerShell script:

Connect-AzureRmAccount

Select-AzureRmSubscription -Subscription "<Put-your-subscription-name-here>"

$secpasswd = ConvertTo-SecureString "<S0me-Strong-Password>" -AsPlainText -Force

New-AzureRmResourceGroupDeployment -administratorLoginPassword $secpasswd -ResourceGroupName my_existing_resource_group -TemplateFile 'c:\temp\newmi.json'

This script will first connect to your Azure account, select the subscription where you want to put the Managed Instance, create a secure password, and execute New-AzureRmResourceGroupDeployment, which sends the ARM request to the Azure API. In this command you need to specify a resource group, and provide the password and the path to the ARM JSON request file (c:\temp\newmi.json in this case).

If there are no errors in your script, a new Managed Instance will be created.

Could not load file or assembly ‘System.Private.CoreLib’


I wrote this post to help those who are facing the following error when trying to install a UWP and Desktop Bridge .NET application:

 

Exception type:   System.IO.FileNotFoundException

Message:          Could not load file or assembly 'System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find the file specified.

 

This error occurs because Microsoft Store no longer accepts packages with mixed UWP and Desktop .NET binaries that have not been created with the proper packaging project. This is to ensure the UWP binaries get the proper .NET native compile in the cloud (which is not applicable to the Desktop .NET binaries).

To package a UWP app with a Win32 extension, consider using the new Visual Studio Packaging Project template, and then create the store package out of that project in VS.

If your Win32 EXE is not built in VS and you just want to include the binary in your UWP project you should still use the Packaging project. Make sure the Win32 EXE gets dropped into a subfolder of the package. See screenshot below for this type of project structure.

Details are in the following blog posts:

Example #3 for this specific case:

https://blogs.windows.com/buildingapps/2017/12/04/extend-desktop-application-windows-10-features-using-new-visual-studio-application-packaging-project/#uvfV1r7937WrSkX2.97

Package a .NET desktop application using the Desktop Bridge and Visual Studio Preview

https://blogs.msdn.microsoft.com/appconsult/2017/08/28/package-a-net-desktop-application-using-the-desktop-bridge-and-visual-studio-preview/

To install the app in a sideload scenario (without using the Microsoft Store), it is necessary to use the corresponding AppPackages folder that has the “_Test” suffix and install the appxbundle that is in that folder, since the assemblies in the .appxupload generated by VS (in all projects) are IL, not yet compiled with .NET Native.

 

Don't forget to double-check if the App is compiled in Release mode with .NET native compilation enabled.

For more info/examples, look here:

https://stefanwick.com/2018/04/06/uwp-with-desktop-extension-part-1/

https://blogs.windows.com/buildingapps/2017/12/04/extend-desktop-application-windows-10-features-using-new-visual-studio-application-packaging-project/ (see example #3)

 

I hope it helps.

 

Using PowerShell To Maintain Windows Firewall Rules For Remote Access


Editor's note: The following post was written by Visual Studio and Development Technologies MVP Terri Donahue as part of our Technical Tuesday series. Albert Duan of the MVP Award Blog Technical Committee served as the technical reviewer for this piece.

As System Administrators, how many times have we checked our event viewer logs to see tons of access attempts against commonly open application ports like RDP, FTP, and SQL? Sometimes these attempts are successful, but even when access is not gained, the application can be compromised due to resource constraints and other issues created by this scenario.

Because of this occurrence, I decided to implement an automated Firewall management process. I have many standalone machines that I provide primary system administration support for. This requires a unique solution, which provides automated updates with the ability to add/remove/update IPs to gain access to the required resource in a timely fashion. I focused on RDP, FTP, and SQL access. This script could easily be modified to include remote PowerShell or IIS administration access.

There are 3 main pieces to this implementation: a scheduled task, a web page, and of course, the PowerShell script. I’ll provide the settings that I use for the scheduled task, but this can be modified to meet your requirements. I’ll also provide the format for the web page and a link to download the PowerShell script from the TechNet gallery.

There is some prep work required for the user running this process. For starters, create a unique username, set a very strong password and enable the ‘password never expires’ flag.

New user dialog

This user will be the identity that is leveraged to run the scheduled task. It needs to be a member of the Network Configuration Operators group as well as have the Log on as a batch job security setting applied. This policy is found within Local Policies/User Rights Assignment.

Group assignment

Another important factor is to host the web page in a single location to which all servers have access. I use a page configured in our CMS to host the data that is consumed by the PowerShell script to define the firewall rules. The file should contain text that can be used to verify that the data has been downloaded successfully. I learned this the hard way and temporarily locked myself and my colleagues out of our servers. You could also modify the script to use FTP rather than HTTP to grab the file.

For this post, I created a very simple html page that contains the comma separated values that will define the firewall rules. Here is a screenshot of what the file should contain:

iplist.html
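The screenshot isn't reproduced here; based on the format the script below expects, the page body would look something like this (the IPs and names are placeholders):

Format for entries is IP,UserName,Protocol
127.0.0.1,Terri,RDP
203.0.113.25,Jane,FTP
198.51.100.12,Reporting,SQL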

Before you can set up the scheduled task, you need to acquire the PowerShell script. You can either copy the script directly from this post or download it from here. Save this script to your script folder. I place all scripts in c:\admin\scripts.

Lastly, a scheduled task must be set up. Create a task that runs as the user created earlier. I set the task to run every 5 minutes, but this can be adjusted per your needs. The program to run is PowerShell and the argument is the Maintain-FirewallRules.ps1 script. Include -verb runas at the end of the script name; it should be set as scriptlocation/scriptname -verb runas. The ‘-verb runas’ portion runs the script with elevated privileges. For detailed steps, you can view this video here.
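If you prefer to create the task from PowerShell instead of the Task Scheduler UI, a rough sketch might look like the following (the paths, task name, and account are placeholders for your environment):

# Run the maintenance script every 5 minutes under the dedicated account
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-ExecutionPolicy Bypass -File C:\admin\scripts\Maintain-FirewallRules.ps1'
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) -RepetitionInterval (New-TimeSpan -Minutes 5)
Register-ScheduledTask -TaskName 'Maintain-FirewallRules' -Action $action -Trigger $trigger `
    -User 'YOURSERVER\fwtaskuser' -Password '<strong password>' -RunLevel Highest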

It is now time to run the script and verify that the new rules are created as expected. You can verify the rules are created and in use by checking the Firewall Monitoring MMC.

Windows Defender Firewall MMC

Once you have ensured that the new rules are in place and configured correctly, you will need to disable the existing open access rules for FTP, RDP, and SQL (or the protocols that you specified in the web page).

$rulelist = 'c:\admin\scripts\rulelist.txt'
$log = 'c:\admin\scripts\rulelist.log' #log file for script output (use whatever path suits your environment)
$URI = 'http://localhost/iplist.html'
#URI = the web location of the IP list

$html = Invoke-WebRequest -Uri $URI -UseBasicParsing
$html.Content | Out-File $rulelist

$ErrorActionPreference = 'SilentlyContinue'

Get-Content $rulelist |
ForEach-Object -Process {
    $line = $_.TrimEnd()
    if ($line.Contains('Format for entries')) { #used to ensure that file isn't blank
        Get-NetFirewallRule -DisplayName Custom* | Remove-NetFirewallRule #delete existing rules created by script
    }
    $regex = '^([^|]*),([^|]*),([^|]*)'
    #matches line as follows - IP,UserName,Protocol
    #127.0.0.1,Terri,RDP
    if ($line -match $regex) {
        $ip = $line.Split(',')[0]
        $name = $line.Split(',')[1]
        $protocol = $line.Split(',')[2]
        $ruleName = 'Custom-' + $protocol + '-' + $name
        $description = 'Allow rule for ' + $protocol + ' access'
        if ($protocol -eq 'RDP') {
            $port = '3389'
        }
        elseif ($protocol -eq 'FTP') {
            $port = @('21','4901-4910')
        }
        else {
            $port = '1433'
        }

        'creating rule for ' + $protocol | Out-File $log -Append

        New-NetFirewallRule -DisplayName $ruleName -Description $description -RemoteAddress $ip -LocalPort $port -Protocol 'TCP' -Action 'Allow' -Enabled 'True'
    }
}

In conclusion, being able to programmatically control Windows Firewall rules remotely has proven to be a big win for me and the team that I support. IPs can be quickly added to restore connectivity if you are traveling and need to access a server from a different location, or even if your dynamic home IP changes. This extra layer of security has also greatly decreased the daily brute force attacks against these normally open protocols.


Terri Donahue is a Microsoft MVP for IIS from North Carolina by way of Louisiana, Texas, and South Carolina. Terri is a she-geek who truly enjoys providing information and support related to many different technologies. She has worked in a Systems Administration role specifically dealing with IIS and Windows server since 1999. In her previous life, Terri has provided System Administration in both the corporate world and as a consultant for other technologies. You can reach Terri via Twitter @terrid_dw 

London Quantum Computing Meetup



Tuesday, June 12 at 6:30 PM – LONDON, UK Quantum Meetup

Topological Qubits are an approach to Quantum Computing that is looking to use quasi-particles called Majorana Fermions to create truly scalable qubit architectures. These quasi-particles were predicted by the Italian physicist Ettore Majorana, who worked on neutrino masses. On March 25, 1938, he disappeared under mysterious circumstances while going by ship from Palermo to Naples.

His prediction was that under certain conditions, Majorana fermions can appear as the collective movement of several individual particles, not a single one. This opens up the potential for using these Majoranas as robust qubits within a quantum computer where the quantum state information is encoded in the topology of these quasi-particles - hence Topological Quantum Computing.

Microsoft has been working for over 10 years on the development of the theory, physics and engineering of Topological Quantum Computing, with the goal of building a truly scalable quantum computer.
In this meetup, Dr Julie Love from Microsoft's Quantum Computing team in Seattle will provide an accessible overview of what quantum computing is, how it works, what the implications and applications of quantum are, and how Microsoft is progressing with Topological Quantum Computing and the development of a full stack solution including applications and algorithm development, software engineering and simulator environments.

Register now at https://www.meetup.com/London-Quantum-Computing-Meetup/events/250779465/


The Microsoft UK team will also be able to provide attendees with advice on using the Q# Quantum Development Kit.

Recent MS Build 2018 – Azure DevOps Related Announcements


Chef Gillani - Shimail Gillani - Cloud Solutions Architect, Microsoft
@ChefGillani

Azure DevOps with VSTS


by Damian Brady, Abel Wang

 

Container DevOps in Azure


by Steven Murawski, Jessica Deen

 

Building Windows – how the bits flow from check-in to the fast-ring


by Edward Thomson and Jill Campbell

 

Git patterns and anti-patterns for successful developers


by Edward Thomson

 

Analyze and report on your work using the new VSTS analytics service


by Romi Koifman

 

Migrating your code to the cloud – how to move from TFS to VSTS


by Rogan Ferguson

 

Continuous, efficient & reliable testing with integrated reporting in CI/CD


by Vinod Joshi

 

Some Other Related News

VSTS Announcement Highlights

 

Thanks for joining ...
Shimail Ahmed Gillani
Cloud Solutions Architect
Microsoft US Education
https://twitter.com/chefgillani @ChefGillani 

 

The Sprint 132 Update of Azure Visual Studio Team Services (VSTS) has rolled out

NEWS FLASH! – Learning Tools UPDATE!


Learning Tools were inclusively designed to help people improve their reading skills, including those with dyslexia, dysgraphia, or ADHD, as well as second language learners and emerging readers. With tomorrow’s upcoming announcement, we are talking about the 5 key things listed below. We have seen Learning Tools become a differentiator for the Microsoft Education story, so ensuring customers are aware of the new Office and Edge updates built into the Microsoft platform is important to help drive high-level and strategic discussions.
  1. OneNote for Mac – Learning Tools started with OneNote, and we’re excited to bring the Immersive Reader to the Mac version of OneNote. All the capabilities are here, including read aloud, line spacing, page colors, syllables, parts of speech, line focus and picture dictionary.
  2. Word for Mac – just like in Word Desktop and Word for iPad, Learning Tools is now available in the latest version of Word for Mac.
  3. Edge browser for Windows 10 April 2018 Update - With the Windows 10 April 2018 free update, we’ve enhanced the Learning Tools features in the Edge browser, including the following:
    1. Reading View in Edge now has Learning Tools features such as read aloud, page colors, text size, syllables, parts of speech highlighting
    2. ePub files now have syllables and parts of speech capabilities. This is in addition to the existing read aloud, line spacing and page colors.  And ePub files work without the internet, providing an equitable offline solution for those without high speed internet at home 
  4. Outlook Desktop – This month, Read Aloud is rolling out to Office 365 ProPlus customers. Now you can use Read Aloud for any mail you receive in Outlook, or use it on the mail you are about to send to make sure it sounds the way you intended.
  5. OneNote Desktop Learning Tools update – we have an updated OneNote Desktop Learning Tools, version 1.8.0.0.  The updated version contains Syllables and Parts of Speech support for Russian.

Immersive Reader Background:

The Immersive Reader includes techniques that help people read more effectively with capabilities such as:

  • Read Aloud reads text out loud with simultaneous highlighting, which improves decoding, fluency, and comprehension while sustaining focus and attention.
  • Spacing optimizes font spacing in a narrow column view to improve reading fluency for users who suffer from "visual crowding" issues.
  • Syllables shows the breaks between syllables to improve word recognition and decoding.
  • Parts of Speech highlighting for Nouns/Verbs/Adjectives. Currently only available for English, French and Spanish.
  • Line Focus - new feature designed to help people focus while reading.
  • Picture Dictionary - new built-in capability for clicking on a word and getting a picture representation pop up.

 

Recent Customer Stories, Studies and Resources:

  • VIDEO: An 8 year old boy learns to read for the first time with the help of Microsoft Learning Tools
  • VIDEO: Holly Springs Students and Learning Tools - dyslexia
  • VIDEO: Learning Tools in special education - increasing student reading speeds
  • New research shows Learning Tools improved reading comprehension in diverse student groups
  • Picture Dictionary, Custom Parts of Speech Colors and Roaming Settings come to Immersive Reader
  • Pinterest: Inclusive Classroom http://aka.ms/InclusiveClassroompin
  • Pinterest: Learning Tools http://aka.ms/LearningToolsPin

 

Positioning & Field Guidance

With inclusion and equity in mind, and based on direct feedback from educators and students, the team continues to expand the capabilities and availability of the tools that help students of all abilities, including those students with dyslexia, be successful.  With these new cross platform and Edge updates, we are demonstrating larger scale efficacy, as we bring learning tools across more important Office and Windows platforms of Microsoft at scale.

 

When and where will these Learning Tools updates be rolled out?

  • Word Mac and OneNote Mac start rolling out this week
  • Edge Browser updates are all live with the Windows 10 April 2018 update that came out 2 weeks ago

    • All language updates will be live by May 20th at our support page: https://support.office.com/en-us/article/Languages-supported-by-Learning-Tools-47F298D6-D92C-4C35-8586-5EB81E32A76E

 

If you have additional questions or concerns not addressed in the above, please contact Mike Tholfsen michtho@microsoft.com.


.NET Framework May 2018 Preview of Quality Rollup


Today, we are releasing the May 2018 Preview of Quality Rollup.

Quality and Reliability

This release contains the following quality and reliability improvements.

CLR

  • Resolves an issue in WindowsIdentity.Impersonate where handles were not being explicitly cleaned up. [581052]
  • Resolves an issue in deserialization when using a collection, for example, ConcurrentDictionary by ignoring casing. [524135]
  • Removes case where floating-point overflow occurs in the thread pool’s hill climbing algorithm. [568704]
  • Resolves instances of high CPU usage with background garbage collection. This can be observed with the following two functions on the stack: clr!*gc_heap::bgc_thread_function, ntoskrnl!KiPageFault. Most of the CPU time is spent in the ntoskrnl!ExpWaitForSpinLockExclusiveAndAcquire function. This change updates background garbage collection to use the CLR implementation of write watch instead of the one in Windows. [574027]

Networking

  • Fixed a problem with connection limit when using HttpClient to send requests to loopback addresses. [539851]

WPF

  • A crash can occur during shutdown of an application that hosts WPF content in a separate AppDomain. (A notable example of this is an Office application hosting a VSTO add-in that uses WPF.) [543980]
  • Addresses an issue that caused XAML Browser Applications (XBAP’s) targeting .NET 3.5 to sometimes be loaded using .NET 4.x runtime incorrectly. [555344]
  • A WPF application can crash due to a NullReferenceException if a Binding (or MultiBinding) used in a DataTrigger (or MultiDataTrigger) belonging to a Style (or Template, or ThemeStyle) reports a new value, but whose host element gets GC'd in a very narrow window of time during the reporting process. [562000]
  • A WPF application can crash due to a spurious ElementNotAvailableException. [555225]
    This can arise if:
    1. Change TreeView.IsEnabled
    2. Remove an item X from the collection
    3. Re-insert the same item X back into the collection
    4. Remove one of X's subitems Y from its collection
      (Step 4 can happen any time relative to steps 2 and 3, as long as it's after step 1. Steps 2-4 must occur before the asynchronous call to UpdatePeer, posted by step 1; this will happen if steps 1-4 all occur in the same button-click handler.)

Note: Additional information on these improvements is not available. The VSTS bug number provided with each improvement is a unique ID that you can give Microsoft Customer Support, include in StackOverflow comments or use in web searches.

Getting the Update

The Preview of Quality Rollup is available via Windows Update, Windows Server Update Services, and Microsoft Update Catalog.

Microsoft Update Catalog

You can get the update via the Microsoft Update Catalog.

Product and version: Preview of Quality Rollup KB

Windows 8.1, Windows RT 8.1, Windows Server 2012 R2 (Catalog: 4103473)
  • .NET Framework 3.5: 4095875
  • .NET Framework 4.5.2: 4098974
  • .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1: 4098972

Windows Server 2012 (Catalog: 4098968)
  • .NET Framework 3.5: 4095872
  • .NET Framework 4.5.2: 4098975
  • .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1: 4098971

Windows 7, Windows Server 2008 R2 (Catalog: 4103472)
  • .NET Framework 3.5.1: 4095874
  • .NET Framework 4.5.2: 4098976
  • .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1: 4096234

Windows Server 2008 (Catalog: 4103474)
  • .NET Framework 2.0, 3.0: 4095873
  • .NET Framework 4.5.2: 4098976
  • .NET Framework 4.6: 4096234

Previous Monthly Rollups

The last few .NET Framework Monthly updates are listed below for your convenience:

Use AU Analyzer for faster, lower cost Data Lake Analytics


Do you use Data Lake Analytics and wonder how many Analytics Units your jobs should have been assigned? Do you want to see if your job could consume a little less time or money? The recently-announced AU Analyzer tool can help you today!

See our recent announcement of the AU Analyzer, available in both Visual Studio and the Azure Portal. Using this feature with our cost-saving guide will help you get the most out of your Data Lake Analytics spend.

Optimize for cost

To see a simple recommendation for how much your job may cost in a balanced setting, click on the Balanced recommendation in the AU Analysis tab for your job. You’ll see the estimated running time and cost of your job if you assign the specified number of AUs, barring other changes to the job or any of its dependencies.

Optimize for speed

To see an estimate of how fast your job can reasonably run, click on the Fast recommendation. Just as before, you’ll see the estimated running time and cost of your job if you assign the specified number of AUs, barring other changes to the job or any of its dependencies.

Customize it!

Try out different scenarios by assigning a custom number of Analytics Units by clicking the Custom card in the AU Analysis tab for your job and moving the slider.

For the following job, it looks like Balanced is the best option for me. The next time I submit this job, I’ll choose 185 AUs, and spend 3.63 USD (13%) more to reduce my job’s running time by 16 minutes.

How does the AU Analyzer work?

The AU Analyzer looks at all the vertices (or nodes) in your job, analyzes how long they ran and their dependencies, then models how long the job might run if a certain number of vertices could run at the same time. Each vertex may have to wait for input or for its spot in line to run. The AU Analyzer isn't 100% accurate, but it provides general guidance to help you choose the right number of AUs for your job.

You’ll notice that there are diminishing returns when assigning more AUs, mainly because of input dependencies and the running times of the vertices themselves. So, a job with 10,000 total vertices likely won’t be able to use 10,000 AUs at once, since some will have to wait for input or for dependent vertices to complete.

In the graph below, here’s what the modeler might produce, when considering the different options. Notice that when the job is assigned 1427 AUs, assigning more won’t reduce the running time. 1427 is the “peak” number of AUs that can be assigned.

How do we calculate the recommendations?

We generate the Balanced recommendation by first modeling the job’s running time multiple times for various selections of AUs. We then walk through each of the options, from 1 AU upwards, and identify the option which, compared with one fewer AU, offers a performance boost equal to or greater than the increase in cost.

The approach for Fast recommendation is similar. We walk through each of the same options, from 1 AU upwards, and identify the option which, compared with one fewer AU, offers a performance boost equal to or greater than half the increase in cost.

To make sure we're giving you quality recommendations, we may update the recommendation logic for Balanced and Fast and add more recommendations in the future.

For example, if increasing from 374 AUs to 375 AUs might give a 3% speed boost for only a 3% increase in cost, we recommend 375 AUs as the Balanced option. If increasing from 825 AUs to 826 AUs might give a 2% speed boost for a 4% increase in cost, we recommend 826 AUs as the Fast option.
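To make that walk-through concrete, here is a rough sketch of the selection logic in PowerShell (this is illustrative, not the product's actual implementation; $model is assumed to be a list of modeled options with AUs, Seconds, and Cost properties, sorted from 1 AU upwards):

function Select-RecommendedAUs {
    param(
        [object[]]$model,      # modeled options, ascending by AU count
        [double]$ratio = 1.0   # 1.0 = Balanced, 0.5 = Fast
    )
    $pick = $model[0]
    for ($i = 1; $i -lt $model.Count; $i++) {
        $prev = $model[$i - 1]
        $curr = $model[$i]
        $speedGain    = ($prev.Seconds - $curr.Seconds) / $prev.Seconds
        $costIncrease = ($curr.Cost - $prev.Cost) / $prev.Cost
        # Keep climbing while the speed-up is at least $ratio times the extra cost
        if ($speedGain -ge ($ratio * $costIncrease)) { $pick = $curr }
    }
    $pick
}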

Try this feature today!

Give this feature a try and let us know your feedback in the comments.

Interested in any other samples, features, or improvements? Let us know and vote for them on our UserVoice page!

Improved Privacy using Homomorphic Encryption


In the following post, Premier Developer Consultant Razi Rais gives more insight into how to use homomorphic encryption in the field of healthcare to improve privacy and secure data.


Modern encryption schemes like homomorphic encryption (HE) provide stronger privacy guarantees than existing schemes. Basically, you don't need to reveal your data in plaintext to third parties (e.g. social media, insurance, healthcare providers, etc.). Instead, they can perform computations on your data while it stays encrypted. Even though it is still in active development, HE provides many interesting opportunities to secure our data.

I have created a simple application using open source tools that demonstrates using this type of encryption in the field of healthcare.

You can see Razi’s walkthrough and demo here.

Using Azure for Machine Learning



Guest post by  Ami Zou, Microsoft Student Partner at University College London studying Computer Science, Mathematics and Economics.

clip_image002

About me

In my spare time, I love learning new technologies and going to hackathons. Our hackathon project Pantrylogs, which uses Artificial Intelligence, was selected as one of the 10 Microsoft Imagine Cup UK finalists. I’m interested in learning more about AI, Data Science, and Machine Learning to improve the performance of our application.

In this article, I would love to share my experience of using Azure Machine Learning Studio with you. Follow the steps, and within half an hour, you will have a working Machine Learning experiment 😀

Machine Learning Studio

Azure Machine Learning Studio is a very powerful browser-based, visual drag-and-drop authoring environment.

I love using it because it is very simple. We don’t have to write any code but just need to drag and drop the modules to deploy our ideas. There are many different modules that cover all your needs for machine learning, and there are also Python, R, and other programming language modules where you can put customized code to make the algorithm work the way you want.

As a student, we get a FREE Azure membership. Yes, free! It costs us nothing to start a Machine Learning experiment: we can use up to 100 modules per experiment and get $100 of free credit for any Azure product; see http://aka.ms/azure4students.

Are you excited to build your first Azure Machine Learning experiment? Do it now!

Simply register with Azure and get started with Machine Learning :D.

Simple Azure ML experiment based on Car Data

Let’s build a simple ML experiment based on car data together to see how Azure ML Studio works.

There are two parts to the experiment: firstly, we will create a training environment to analyse the car data and train the machine learning experiment; secondly, we will publish it as a predictive experiment and use Linear Regression to predict the price of a car based on its features such as brand, doors, bhp, etc.

Here is a snapshot of our final predictive experiment:

clip_image004

You can see we predict the price of an Audi to be £20,000 based on loads of car data against the real price £23,000. We know the model is accurate because Audi is overpriced 🙂

Ready? Let’s have a closer look:

Part 1: Create a Training Environment

Before starting the lab, please Download the car data Car prices.csv from GitHub: https://github.com/martinkearn/AI-Services-Workshop/blob/master/MachineLearning/Car%20prices.csv

1.1 - Create an experiment and load data

Firstly, we need to create a new blank experiment and upload our car data:

  1. Sign into the Azure Machine Learning Studio: http://aiday.info/MLStudio
  2. Once you sign in, click Datasets > New > From Local File > Car prices.csv to load our car dataset.
  3. Then click Experiments > New > Blank experiment to create a new blank experiment.
  4. Finally click Save in the bottom command bar and Type ‘Car Price Prediction’ to save our car prediction experiment.

This should be what it looks like: a blank experiment named ‘Car Price Prediction’ with Car prices.csv in My Datasets.

clip_image006

1.2 - Add data set

As the starting point in our experiment, we need to add the data.

No code needed: ML Studio uses a drag-and-drop authoring environment. Drag modules from the left side navigation and drop them onto the canvas, then ‘stitch’ modules together by connecting their input/output ports (the small circles on the top and bottom of the modules); ML Studio will automatically draw a line between them.

Now in our experiment,

  1. Drag ‘Car prices.csv’ from Datasets > My DataSets on the left side navigation to the canvas.
  2. Then Right-Click the Output port (small circle on the bottom of the module) and select Visualise to visualise the data.

clip_image008

(Step 1 and 2)

When you finish, the visualisation should look like this:

clip_image010

1.3 - Clean Data by Removing Rows

A lot of the time raw data contains some unnecessary parts and missing values, and we need to clean it to turn it into a uniform, ‘prepared’ dataset for our machine learning experiment.

We will be using the ‘Clean Missing Data’ module to remove rows with missing values to produce a clean dataset:

  1. Drag the Data Transformation > Manipulation > ‘Clean missing data’ module (or simply Search for it)
  2. Connect the output port (small circle on the bottom) of Car prices.csv to the input port (small circle on the top) of Clean missing data

clip_image012

(Step 2)

  3. Click on Clean missing data and use the right side panel to set the Cleaning mode = "Remove entire row"

clip_image014 clip_image016

(Step 3) (Step 4)

  4. Use the bottom command bar (the green arrow) to Run the experiment and observe the green ticks, which indicate that everything is working as it should.

clip_image018

(Step 4)

  5. Right-click > Visualise the Output Port (small circle on the bottom) of Clean missing data and note that the rows with missing data have been removed.

clip_image020

(Step 5)

1.4 - Split Data

The way machine learning works is that we use some actual data to train the algorithm, and then test the algorithm by comparing its output (in our case, the predicted car price) with the actual data (in our case, the actual car price).

Therefore we have to reserve some actual data for testing. Here let’s make it 75% for training and 25% for testing but you can surely modify that:

  1. Drag the Data Transformation > Sample & Split > ‘Split Data’ module (or Search for it)
  2. Connect ‘Clean Missing Data’s output port to Split Data module’s input port

clip_image022

(Step 2)

  3. Click on 'Split Data' and use the right side panel to set ‘Fraction of rows in the first output dataset’ to 0.75

clip_image024 clip_image026

(Step 3) (Step 4)

  4. Run the experiment and observe the green ticks.

Now the left output port of the Split Data module represents a random 75% of the data and the right output port represents a random 25%.

1.5 - Add Linear Regression

There are many machine learning algorithms such as Linear Regression, Classification and Regression Trees, Naive Bayes, K-Nearest Neighbours, etc. (see ‘Top 10 Machine Learning Algorithms’ in the Resources section). For our task of predicting a single data point, the most suitable algorithm is Linear Regression. We just need to add the ‘Linear Regression’ module to the experiment:

  1. Drag the Machine Learning > Initialize Model > Regression > Linear Regression module (or just Search for it)
  2. Place next to the ‘Split data’ module

Here is what it should look like:

clip_image028

1.6 - Train the model on Price

Now comes the most important part -- using Linear Regression to train the model on the price field. The algorithm learns the factors in the data that impact and affect the price, and then uses those factors to predict the price. The output, the predicted price, is called a ‘Scored Label’.

  1. Drag the Machine Learning > Train > Train Model module (or Search for it)
  2. Connect Train Model’s Left Input (Upper) Port to Linear Regression’s Output (Bottom) port, so we are taking the output of the Linear Regression as one of the inputs of the Train Model.

clip_image030

(Step 2)

  3. Connect Train Model’s Right Input Port to Split Data’s Left Output Port.

clip_image032

(Step 3)

  4. Click on Train Model and click the Launch column selector in the right side panel.
  5. Add price as a selected column.

clip_image034

(Step 5)

  6. Run the experiment and observe the green ticks.

Now we're using the Linear Regression algorithm to train on price using 75% of the data set and reserving the remaining 25% of the data for testing the predictions:

clip_image036

1.7 - Score the Model

Finally, let’s test the performance of our model by comparing it against the remaining 25% of data to see how accurate the price prediction is.

  1. Drag the Machine Learning > Score > Score Model module (or Search for it).
  2. Connect Score Model’s Left Input Port to Train Model’s Output Port.
  3. Connect Score Model’s Right Input Port to Split data’s Right Output Port.

clip_image038

(Step 2 and 3)

  4. Run the experiment and observe the green ticks.
  5. Right-click Score Model’s Output Port > Visualise

clip_image040

(Step 5)

  6. Compare the price to the scored label. This shows that the predicted price (i.e. scored label) is in the right 'ball park' compared to the actual price.

clip_image042

Yay! Now we have a functional training experiment! Let’s jump to the second part -- converting the training experiment to a predictive experiment and using some new data to test the API 😀

Part 2: Create and Publish a Predictive Experiment

2.1 - Convert to Predictive Experiment

Let’s convert our training experiment to a ‘predictive experiment’ so we can use it to score new data:

  1. Run the experiment and observe the green ticks

clip_image044

(Step 1)

  2. Using the bottom command bar open the Setup Web Service menu and choose Predictive Web Service

clip_image046

(Step 2)

  3. Run the new predictive experiment (this may take approximately 30 seconds)

clip_image048

(Step 3 and 4)

  4. Using the bottom command bar, Deploy Web Service. The experiment will now be deployed and you'll see a screen when it is completed.

Here is what it looks like when it completes - the experiment is now deployed and there is a screen containing the endpoint, key and some test interfaces.

clip_image050

2.2 - Test the Web Service

Now it is time to use our deployed predictive experiment to test some new car data, get new predicted prices, and see how good our model is!

  1. Stay at the last shown screen OR use the left navigation panel, and go to Web Services > Car Price Prediction [Predictive Exp]
  2. Click Test (preview). This is in the Test column for the request/response endpoint - not the big blue button, but the small link next to it, which pops up a new tab when you click it.

clip_image052

(Step 2: Click the ‘Test’ hyperlink - not the blue ‘Test’ button)

  3. Complete the Input1 form with the following data

○ make = audi

○ fuel = diesel

○ doors = four

○ body = hatchback

○ drive = fwd

○ weight = 1900

○ engine-size = 150

○ bhp = 150

○ mpg = 55

○ price = 23000

clip_image054

(Step 3)

  4. Click Test Request-Response

clip_image056

(Step 4 and 5)

  5. Observe that the scored label (the predicted price: 20261.2780003912) is lower than the actual price of £23,000. We know the model is right because it is an Audi and therefore it is overpriced 🙂

Congrats! Now we have a fully functional predictive experiment! Test it with some other new data or modify the model.
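If you'd rather call the request/response endpoint from your own code instead of the Test dialog, here is a minimal sketch in PowerShell (the endpoint URL and API key are placeholders you copy from the web service page, and the column names must match your dataset):

$endpoint = 'https://<region>.services.azureml.net/workspaces/<workspace-id>/services/<service-id>/execute?api-version=2.0&details=true'
$apiKey   = '<your-api-key>'

$body = @{
    Inputs = @{
        input1 = @{
            ColumnNames = @('make','fuel','doors','body','drive','weight','engine-size','bhp','mpg','price')
            Values      = ,@('audi','diesel','four','hatchback','fwd','1900','150','150','55','23000')
        }
    }
    GlobalParameters = @{}
} | ConvertTo-Json -Depth 6

$response = Invoke-RestMethod -Uri $endpoint -Method Post -Body $body `
    -ContentType 'application/json' -Headers @{ Authorization = "Bearer $apiKey" }

# The scored label (the predicted price) comes back in output1
$response.Results.output1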


Conclusions

So, how do you feel about Azure ML Studio? Easy to use right?

I like Azure because it is so easy to use and we get a free student membership. Compared to other ML resources such as Google ML Kit, we don’t have to write any code but just need to drag and drop modules in Azure ML Studio. Our free student membership allows us to use up to 100 modules per experiment and includes 10GB of storage, while Amazon ML on AWS charges per hour. Of course, if we want to go into production we will have to pay for an Azure subscription, but the free membership is far more than enough for study purposes, and interestingly, high-level ML APIs for enterprise providers such as HPE Haven OnDemand are hosted on Azure.

Azure ML Studio is very powerful. For instance, with our car dataset, there are so many other things we can do with the training model. We can normalise the data to make it a standardised dataset (values between 0 and 1). We can pick many different algorithms such as Clustering and Classification from ‘Machine Learning > Initialize Model’ to satisfy our needs for the model. There are also specified modules for data analysis programming languages such as R and Python.

I love it also because there are loads of resources and supportive communities. You can easily find tutorials and examples, and Microsoft Developer Networks has many Machine Learning related forums.

And because it’s free! The Azure student membership includes free access to many other interesting and useful products such as Microsoft IoT Hub, SQL Database, and Cognitive Services, which I use a lot for Pantrylogs. You can really play around with it and learn something new each time. It is always exciting to experiment with new technologies, isn’t it?

Now go explore Azure Machine Learning Studio and learn more about data and machine learning 😀


Related Resources

- Microsoft Azure Machine Learning Studio: https://studio.azureml.net

- GitHub Machine Learning Lab: https://github.com/martinkearn/AI-Services-Workshop/blob/master/MachineLearning/MachineLearning-Lab.md

- Azure Machine Learning Real-World Examples: https://aischool.microsoft.com/learning-paths/2qon88L7GIWEeUuEaas6wK

- Microsoft Docs: https://docs.microsoft.com/en-us/

- Top 10 Machine Learning Algorithms: https://towardsdatascience.com/a-tour-of-the-top-10-algorithms-for-machine-learning-newbies-dde4edffae11

- Basic Machine Learning Tools and Frameworks for Data Scientists and Developers: https://www.computerworlduk.com/galleries/data/machine-learning-tools-harness-artificial-intelligence-for-your-business-3623891/

- Microsoft Developer Networks: https://social.msdn.microsoft.com/Forums/en-US/home?brandIgnore=True

- Microsoft ML Resources: https://docs.microsoft.com/en-us/azure/machine-learning/

AMQP transaction support and Send Via are now generally available


For the long-time Service Bus users in our community this is a great day: transactions are now fully supported in our new open source libraries (Java and .NET Standard) and in any AMQP library that implements the AMQP 1.0 standard.

If you know the SBMP transaction feature you don’t need to read much further, but you will likely want to see right away how this all works. Please find a code sample here:

https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/TransactionsAndSendVia

If you are new to transactions or the Send Via feature, you may want to go through the official documentation:

The feature is available across all regions, in the Standard as well as our Premium SKU. It is available for both the .NET Standard and Java clients. A Java sample will be available on GitHub shortly.

Some key things to note:

  • Transactions cannot span more than one connection context, so your code needs to reflect that.
  • Receive is not part of the transaction. Only operations which do something with the message on the broker are part of the transaction. These are: Send, Complete, Dead letter, Defer. Receive itself already utilizes the Peek lock concept on the broker.
  • Send Via lets you cross entity boundaries within a transaction scope: a single transaction cannot span multiple entities directly. To support cross-entity transactions, you can now send a message to a destination queue via another queue. The transaction is performed on the via-queue, and once it completes successfully, the message is forwarded/transferred to its intended destination.

Enjoy this great new feature!

Gotchas – Office 365 REST Reporting Web Service, empty results, time-out issues and more


Sharing my gotchas from working with the Office 365 Reporting Web Service (including MessageTrace). A few of the known issues were:

- We make a GET request, but face time-out issues. Say, https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$format=Json&$filter=StartDate%20eq%20datetime%272018-05-11T23:49:34%27%20and%20EndDate%20eq%20datetime%272018-05-12T01:50:22%27
- We get empty report result values
- Data is not appearing instantly
- Time-out issues

Adding a few gotchas that I learnt/observed while working with the Reporting Web Service, designing apps that use it, or accessing the data:

- As you may be aware, the Office 365 Reporting web service is an integrated service, receiving data from a wide variety of sources and datacenters. If there is planned/unplanned downtime, then the Reporting Web Service is unavailable.
- If your application makes a lot of requests, or your dashboard website uses a single service account to gather all the reporting data, you might encounter throttling of your requests – so you need to be aware of it.
- To be able to see the reports, you need the right permissions in Office 365. If you aren't already able to see them, ask your org's administrator to add you to one of the administrator roles.
- Most types of information about mail processing, message tracing, and so on are available to the reports within a couple of hours. However, none of the data will appear "instantly", so expect a delay or retry later after some time.
- Reports can take more than a couple of seconds depending on the amount of detail data. Until then, you may end up getting empty report result values even though the request succeeds.
- Your application should time how long the reports take to retrieve, provide status and make sure that you set user expectations accordingly.
- Exchange Server/Office 365 has network-bandwidth protection in the form of response "throttling" that can sometimes affect the Reporting web service. But you’re unlikely to be affected by that unless you’re requesting a lot of detailed reports very quickly.
- Most errors that we see during development come from malformed requests, bad column names, and so on.
- Ensure that the service is available before you make the report request.
- When you receive errors (in JSON format), read them carefully, as they often tell you exactly where the problem is.
- You might also receive retry-able data mart timeout errors; a simple retry sketch is shown below.
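A rough sketch (not official guidance) of requesting the MessageTrace report with a simple retry loop for time-outs and empty results; the credentials, date range, response shape, and wait times are assumptions you should adjust for your tenant and workload:

$cred = Get-Credential   # an account with the required reporting permissions
$uri  = 'https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$format=Json&$filter=StartDate%20eq%20datetime%272018-05-11T23:49:34%27%20and%20EndDate%20eq%20datetime%272018-05-12T01:50:22%27'

$results = $null
for ($attempt = 1; $attempt -le 3 -and -not $results; $attempt++) {
    try {
        $response = Invoke-RestMethod -Uri $uri -Credential $cred -TimeoutSec 120
        # With $format=Json the payload is typically wrapped in a "d" object (shape may vary)
        if ($response.d.results) { $results = $response.d.results }
        else { Start-Sleep -Seconds 60 }   # empty result - data may not be ready yet
    }
    catch {
        Write-Warning "Attempt $attempt failed: $($_.Exception.Message)"
        Start-Sleep -Seconds 60
    }
}
$results | Select-Object -First 5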

Please consider looking at the detailed documentation/recommendations @  https://msdn.microsoft.com/en-us/library/office/jj984332.aspx.

Hope this helps!!

Create a QnA Maker service and publish a KB


Now that QnA Maker is generally available, this post walks through the steps to create a QnA Maker service and publish a knowledge base (KB), following the public documentation below.

(1) Create a QnA Maker service by following the steps in "Create a QnA Maker service"

(2) Create and publish a KB by following the steps in "Create-Train-Publish your knowledge base"

Before that, if you have been using the QnA Maker preview, the change in architecture may be confusing, so let's first get a rough picture of the overall flow based on the new architecture.

clip_image001

The figure above is taken from the Learn about QnA Maker documentation. In the lower right of the figure, the following three services have been added, each with the following role.

(A)         App Service

The QnA Maker runtime is deployed as an App Service.

(B)         Azure Search

Q&A pairs, synonyms, and metadata are stored in Azure Search.

(C)         App Insights

If you enable App Insights when you create the QnA Maker service, all chat logs can be stored.

The numbered steps (1-5) in the figure above translate to the following flow, with related links:

    1. Create a QnA Maker resource in the Azure portal.

    2. Sign in to the QnA Maker portal.

    3. Create a Knowledge Base (KB).

    4. Use the QnA endpoint from your bot.

    5. Manage the KB through the QnA Maker portal or the API.

API reference: https://westus.dev.cognitive.microsoft.com/docs/services/5a93fcf85b4ccd136866eb37/operations/5ac266295b4ccd1554da75ff

Per-language API documentation (C#, Java, Node.js, Python, Go)

Now, let's go through the steps to create the QnA Maker service and publish a KB.

 



(1) Create a QnA Maker service by following the steps in "Create a QnA Maker service"

 

    1. Sign in to the Azure portal.

    2. Click [Create a resource].

clip_image002

In the [New] search box, type "qna maker" and press Enter; the QnA Maker resource appears as shown below, so select it.

clip_image003

    3. Click [Create] on the following screen.

clip_image004

    4. Fill in the required fields on the following screen.

clip_image005

  • Name is a unique name that identifies this QnA Maker service. It also identifies the QnA Maker endpoint that your KBs will be associated with.
  • Subscription: select the subscription in which to deploy the QnA Maker resource.
  • Management pricing tier is the pricing tier of the QnA Maker portal and APIs. For details of each tier, see Cognitive Services pricing - QnA Maker.
  • Resource group: create a new resource group (recommended) to deploy the QnA Maker resource into, or use an existing one.
  • Search pricing tier is the pricing tier of the Azure Search service. If the Free tier is grayed out, a Free Azure Search service is already in use in your subscription; in that case, use the Basic tier. For details, see Azure Search pricing.
  • Search location is where you want the Azure Search data to be deployed. If there are restrictions on where your data must be stored, the location you choose for Azure Search is where that data will be stored.
  • App name is the name of your App Service.
  • By default the App Service pricing tier is Standard (S1). You can change it after the resource is created. For details, see App Service pricing; for how to change it, see the document mentioned in the note below.
  • Web location is where the App Service is deployed. It can be a different location from the Search location above.
  • Application Insights: you can choose whether to enable it. If enabled, traffic telemetry, chat logs, and errors are stored.
  • App Insights location is where Application Insights is deployed.

Note: for how to change the pricing tier of each service, see the Upgrade your QnA Maker service documentation.

    5. Once every field has a valid value, click [Create] to deploy these services in your subscription. This takes a few minutes to complete.

    6. When the deployment completes, click the [Resource group] name in the resource's [Overview] blade; you can confirm that the resources described above have been created, as shown in the figures below.

 

clip_image006

 

clip_image007

 

 

(2) Create and publish your KB by following the steps in "Create-Train-Publish your knowledge base"

 

In the steps in the documentation above, a KB is created from the BitLocker Recovery FAQ site, but since this blog is for Japanese customers and that site does not appear to have a Japanese version, we will use the Azure support FAQ as an example instead.

 

1. Sign in from [Sign in] at the top right of the QnA Maker portal, using the same credentials you used to access the Azure portal in (1).

 

clip_image008

 

Incidentally, the preview portal you may have used before is still available at < https://www.qnamaker.ai/old >, reachable by clicking [QnAMaker Preview portal] on the screen above. For how to migrate a KB from the preview, click [here] on the screen above to see the Migrate a knowledge base using export-import documentation.

 

2. Click [Create a knowledge base] at the top of the page.

 

clip_image009

 

3. If you have not yet done the steps in (1), click [Create a QnA service] in STEP 1. We skip this here.

4. In STEP 2, make a selection in each of the drop-down lists.

clip_image010

5. In STEP 3, Name your KB, enter a name for your KB. As an example, we enter "My Sample QnA KB".

6. In STEP 4, Populate your KB, specify the URL of an FAQ site or a file. As an example, we use the URL of the Azure support FAQ.

clip_image011

7. In STEP 5, click [Create your KB].

 

clip_image012

 

8. The following pop-up appears; wait a moment while the KB is created.

 

clip_image014

 

9. When the KB has been created, a Knowledge base page like the one below is displayed. Here you can edit the QnA pairs.

 

clip_image015

 

10. Click [Add QnA pair] on the right side of the page and, as a test, enter "こんにちは。" (Hello.) as the Question and "Azure サポートについてご質問ください。" (Please ask your questions about Azure support.) as the Answer.

 

clip_image016

 

11. The edits above are not kept unless you save them. Click [Save and train] at the top right of the page to save the edits and train the QnA Maker model.

12. Click [Test], further to the right, to check the training result. Type "こんにちは。" into the text box.

 

clip_image018

 

13. Click Inspect to see further details. For details on the Confidence Score, see the Confidence Score documentation.

 

clip_image020

 

14. To close the Test pane, click [Test] again. Next, to publish the KB, click the [PUBLISH] tab at the top of the page.

 

clip_image021

 

15. Click [Publish].

16. When publishing succeeds, incorporate the endpoint shown on the following screen into your own app or bot (a minimal PowerShell example of calling it follows below).

 

clip_image022
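As a quick check that the published endpoint works, it can also be called from PowerShell. This sketch is not part of the original walkthrough; the host name, KB id, and EndpointKey are placeholders that you copy from the sample HTTP request on the PUBLISH page shown above.

# Call the published generateAnswer endpoint. All three values below are
# placeholders - copy the real ones from the PUBLISH page.
$hostName    = 'https://<your-app-name>.azurewebsites.net'
$kbId        = '<your-knowledge-base-id>'
$endpointKey = '<your-endpoint-key>'

$body = @{ question = 'こんにちは。' } | ConvertTo-Json

$result = Invoke-RestMethod -Method Post `
    -Uri "$hostName/qnamaker/knowledgebases/$kbId/generateAnswer" `
    -Headers @{ Authorization = "EndpointKey $endpointKey" } `
    -ContentType 'application/json; charset=utf-8' `
    -Body ([System.Text.Encoding]::UTF8.GetBytes($body))

# Answers come back ordered by confidence score
$result.answers | Select-Object score, answer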

 

 

 

We hope the above is helpful.

 

Cognitive Services Developer Support Team - Tsuda

 


A “new” learning opportunity – the Hackathon at NAVUG Focus 18


I know, a Hackathon is not something new. Hackathons have existed for at least half a decade, but how can a Hackathon be a learning opportunity?

Typically we see a Hackathon as an event where people get together to create some prototype or proof of concept of an idea, but the Wikipedia description of a Hackathon is actually as simple as:

“A hackathon, a hacker neologism, is an event when programmers meet to do collaborative computer programming.”

The idea

A few months ago, Mark (@GatorRhodie) contacted me and told me about NAVUG FOCUS 18. He told me that one of the themes of this year's NAVUG FOCUS is "A Brave New World" - about AL development, Docker, VS Code, Azure etc.

He told me that he wanted to conduct a Hackathon during the event. We had a few calls, discussed various approaches, and ended up agreeing that the best approach would be to create some ideas/challenges that people can work on in groups if they don't have ideas of their own. The challenges should cover things of common usage - things that people can go back to later as a reference on how to do things.

We brainstormed some ideas and, with great help from Jesper (@JesperSchulz), we ended up with a set of challenges which we think are appropriate for the event.

The event

The event takes place on Monday, May 21st, in the evening from 5:30 PM to 11 PM (not sure how my jet lag is going to cope with that :-)) and the idea is that people can choose one or more of "our" challenges to work on - or they can work on ideas of their own.

For every challenge there is a description, an expected result, some steps, some hints and some cheat sheets. We will have some people in the room to help out if people get stuck, but the primary idea is that people help each other. People working on "our" challenges can request a cheat sheet if they cannot figure out how to solve a specific issue.

Depending on the outcome of this event, we might use the same mechanism at other conferences. I am also considering whether our challenges can be made public somehow so that people can conduct their own Hackathon events for social learning/programming.

A sample challenge

Below, you will find one of the challenges in its full form (but without the cheat sheets). This challenge is a level 1 challenge.

Auto-fill company information on the customer card

As a new customer is entered in Dynamics 365 Business Central, the user can decide to enter a domain name instead of the name. The system then looks up the company associated with that domain name from a Web Service and fills out the remaining fields on the customer card with the information obtained.

To complete this challenge, you will need:

  • A Dynamics 365 Business Central Sandbox Environment
  • Visual Studio Code with the AL Extension installed
    • Azure VMs will have VS Code pre-installed
  • An API Key from http://www.fullcontact.com

Expected result:

Steps:

  • Create an empty app
  • Create a page extension for the customer card
  • On the OnAfterValidate trigger on the Name field, check whether the entered value is a domain name
  • Ask the user whether he wants to look up information about the company associated with this domain name
  • Call the fullcontact Web API and assign field values

Hints:

  • In VS Code, use Ctrl+Shift+P and type AL GO and remove the customerlist page extension
  • Use the tpageext snippet
  • Use EndsWith to check whether the name is a domain name
  • Use the Confirm method to ask whether the user wants to download info
  • Use HttpClient to communicate with the Web Service (a PowerShell sketch of what the call returns follows this list)
  • Use Json types (JsonObject, JsonToken, JsonArray and JsonValue) to extract values from the Web Service result
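Before wiring the call up with HttpClient in AL, it can help to explore the Web Service from PowerShell first. The sketch below is only an illustration: the v2 company-lookup URL and the X-FullContact-APIKey header are my assumptions about the FullContact API, so verify them against the current FullContact documentation.

# Explore the FullContact company lookup before porting the call to AL.
# The endpoint shape and header name are assumptions - check the current
# FullContact documentation, and replace the API key placeholder.
$apiKey = '<your fullcontact API key>'
$domain = 'microsoft.com'

$company = Invoke-RestMethod -Method Get `
    -Uri "https://api.fullcontact.com/v2/company/lookup.json?domain=$domain" `
    -Headers @{ 'X-FullContact-APIKey' = $apiKey }

# Inspect the JSON to decide which properties map to which fields on the
# customer card (name, website, address, and so on)
$company | ConvertTo-Json -Depth 5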

Cheat Sheets:

  • Create an empty app
  • Create a page extension
  • Code for communicating with Web Service
  • Update the customer

 

See you in Indianapolis.

 

Enjoy

Freddy Kristiansen
Technical Evangelist


Deployment template validation failed: Circular dependency detected on resource


I was writing this article "How to use ARM templates for deployments" and received this error when I clicked the purchase button from within the portal.

Deployment template validation failed: "Circular dependency detected on resource:
'/subscriptions/25ec5/resourceGroups/HCM/providers/Microsoft.Network/networkInterfaces/hcm00146'.
Please see https://aka.ms/arm-template/#resources for usage details.". (Code: InvalidTemplate)

Here is a list of some actions to take if you get this exception: "Solution 5 - circular dependency detected".

I did what the articles said and simply removed some of the dependsOn settings in the JSON file. I was optimistically surprised when the script ran with so little effort, but that feeling was quickly squashed, Figure 1.

image

Figure 1, ARM deployment failed

Let the effort begin…

To start with, I deleted everything and then recreated my VM, watching closely what was being created and when.  I clicked the create button, navigated to the Resource Group blade and hit the Refresh button repeatedly, watching what was created and in what order (a scripted way to get the same ordering follows the list below).  Here is what I saw.

  • Batch 1 – IP, NSG and VNET
  • Batch 2 – Network Interface
  • Batch 3 – Storage Account
  • Batch 4 – Virtual Machine and Disk
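As a side note, the same ordering can be pulled without refreshing the blade, for example with the AzureRM PowerShell module. A minimal sketch, where the resource group and deployment names are placeholders:

# List what a deployment created, and when, instead of refreshing the portal.
# 'HCM' and 'MyVmDeployment' are placeholders for your own names.
Get-AzureRmResourceGroupDeploymentOperation `
    -ResourceGroupName 'HCM' `
    -DeploymentName 'MyVmDeployment' |
    Select-Object @{Name='Resource';  Expression={$_.Properties.targetResource.resourceType}},
                  @{Name='State';     Expression={$_.Properties.provisioningState}},
                  @{Name='Timestamp'; Expression={$_.Properties.timestamp}} |
    Sort-Object Timestamp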

But that didn’t turn out to be the issue.  Ultimately, there was indeed a ‘circular reference’, which required me to read through the entire template and find it, just like the instructions said.  Here is what I found; perhaps had I started with a simpler template, I would have found it faster.

The Virtual Machine contained a dependsOn reference to my network interface:

"[resourceId('Microsoft.Network/networkInterfaces', parameters('networkInterfaces_hcm001446_name'))]";

And my network interface containded a dependsOn reference to my Virtual Machine:

"[resourceId('Microsoft.Compute/virtualMachines', parameters(';virtualMachines_HCM001_name'))]"

Working through the options for which reference to remove, I decided to remove the dependsOn reference in the network interface configuration.  Which means that the network interface is not dependent on the Virtual Machine, but the Virtual Machine is dependent on the network interface.  Once I did that, the script worked as expected.

What confirmed the fix for me was creating a similar deployment template using Visual Studio, as discussed here; the generated template had no dependsOn from the network interface to the virtual machine.
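A related tip: the same validation the portal runs when you click purchase can be run locally, which surfaces errors such as circular dependencies without a portal round trip. A minimal sketch with the AzureRM module, where the file and resource group names are placeholders:

# Validate the template before deploying; circular dependencies show up here
# rather than after clicking purchase. Names and paths are placeholders.
Test-AzureRmResourceGroupDeployment `
    -ResourceGroupName 'HCM' `
    -TemplateFile '.\azuredeploy.json' `
    -TemplateParameterFile '.\azuredeploy.parameters.json' `
    -Verbose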

To get an overview of the project I worked on, read the following articles as well.

‘osDisk.managedDisk.id’ is not allowed


I was writing this article here "How to use ARM templates for deployments" and received the result seen in Figure 1.

image

Figure 1, failed deployment using templates azure arm

When I clicked on the operation details, I found the following.

{
  "error": {
    "code": "InvalidParameter",
    "target": "osDisk.managedDisk.id",
    "message": "Parameter 'osDisk.managedDisk.id' is not allowed."
  }
}

I found on GitHub here "Azure managed disk: osDisk.managedDisk.id is not allowed" that doing the following would resolve the issue.

Replace this:

"managedDisk": {
  "storageAccountType": "Premium_LRS",
  "id": "[parameters('virtualMachines_HCM200_id')]"
},

with this:

"managedDisk": {
  "storageAccountType": "Standard_LRS"
},
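As a side note, if you are unsure whether Standard_LRS or Premium_LRS is the right value to put back, the SKUs of the existing managed disks can be listed first. A quick sketch with the AzureRM module, where the resource group name is a placeholder:

# List existing managed disks and their SKUs so the template value matches reality.
Get-AzureRmDisk -ResourceGroupName 'HCM' |
    Select-Object Name, @{Name='Sku'; Expression={$_.Sku.Name}}, DiskSizeGB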

I tested this out and it worked for me; at least it moved me one step forward, until I got the next error.

Required parameter ‘adminPassword’ is missing (null).


In case you have not read the article which led to this one, check it out here: "How to use ARM templates for deployments".  Once I resolved the previous two bumps:

The final action was to provide an admin password for my VMs. 

image

Figure 1 – failed deployment using an Azure ARM template

When I clicked on the ‘Operations details’ I saw the following output as the cause of the failure.

{
  "error": {
    "code": "InvalidParameter",
    "target": "adminPassword",
    "message": "Required parameter 'adminPassword' is missing (null)."
  }
}

I am not a security expert, but I can see why those who make the security decisions would not include the password of an already created VM in a template downloaded from the portal.  Searching through the template, I decided a good place for the adminPassword would be right after the adminUsername, and replaced the following:

"osProfile": { 
 "computerName": "[parameters('virtualMachines_HCM001_name')]", 
 "adminUsername": "HCMAdmin", 
 "windowsConfiguration": 
 { 
   "provisionVMAgent": true, 
   "enableAutomaticUpdates": true 
 },    
  ...

with this:

"osProfile": {
  "computerName": "[parameters('virtualMachines_HCM001_name')]", 
  "adminUsername": "HCMAdmin",  
  "adminPassword": "P@ssw0rd",
  "windowsConfiguration": { 
    "provisionVMAgent": true, 
    "enableAutomaticUpdates": true },
  ...

And that completed the journey.

Although a little challenging, this is possible, and it is actually quite fulfilling to solve the issues and be successful.
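One more note: rather than hard-coding P@ssw0rd, a cleaner option is to declare adminPassword as a securestring parameter in the template and supply it at deployment time. A minimal sketch under that assumption (the template must declare the parameter; names and paths are placeholders):

# Supply the admin password at deployment time instead of hard-coding it.
# Assumes the template declares an 'adminPassword' parameter of type
# 'securestring' and uses "[parameters('adminPassword')]" in osProfile.
$adminPassword = Read-Host -Prompt 'VM admin password' -AsSecureString

New-AzureRmResourceGroupDeployment `
    -ResourceGroupName 'HCM' `
    -TemplateFile '.\azuredeploy.json' `
    -adminPassword $adminPassword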

To get an overview of the project I worked on, read the following articles as well.
