
Simplify extension development with PackageReference and the VSSDK meta package


Visual Studio 2017 version 15.8 made it possible to use the PackageReference syntax to reference NuGet packages in Visual Studio Extensibility (VSIX) projects. This makes it much simpler to reason about NuGet packages and opens the door for having a complete meta package containing the entire VSSDK.

Before using PackageReference, here’s what the References node looked like in a typical VSIX project:

It contained a lot of references to Microsoft.VisualStudio.* packages. Those are the ones we call VSSDK packages because they each make up a piece of the entire public API of Visual Studio.

Migrate to PackageReference

First, we must migrate our VSIX project to use PackageReference. That is described in the Migrate from packages.config to PackageReference documentation. It’s quick and easy.

Once that is done, it's time to get rid of all the individual VSSDK packages and install the meta package.

Installing the VSSDK meta package

The meta package is a single NuGet package that does nothing but reference all the NuGet packages that make up the VSSDK. So, it references all relevant Microsoft.VisualStudio.* packages and is versioned to match the major and minor version of Visual Studio.

For instance, if your extension targets Visual Studio 2015, then you need version 14.0 of the VSSDK meta package. If your extension targets Visual Studio 2017 version 15.6, then install the 15.6 version of the VSSDK meta package.

Before installing the meta package, make sure to uninstall all the Microsoft.VisualStudio.*, VsLangProj* and EnvDTE* packages, as well as stdole and Newtonsoft.Json, from your project. After that is done, install the Madskristensen.VisualStudio.SDK package matching the minimum version of Visual Studio your extension supports. It supports versions all the way back to Visual Studio 2015 (version 14.0).
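
For reference, the project file then ends up with a single PackageReference entry roughly like the sketch below (the version value is illustrative only; use the one matching the minimum Visual Studio version your extension supports):

<ItemGroup>
  <!-- Replaces the individual Microsoft.VisualStudio.*, VsLangProj*, EnvDTE*, stdole and Newtonsoft.Json references -->
  <PackageReference Include="Madskristensen.VisualStudio.SDK" Version="14.0.*" />
</ItemGroup>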

After the meta package is installed, the References node looks a lot simpler:

You can read more about the VSSDK meta package on GitHub.

Known limitations

To use PackageReference and the VSSDK meta package, make sure that:

  1. The VSIX project targets .NET Framework 4.6 or higher
  2. You are using Visual Studio 2017 version 15.8 or higher
  3. You include pre-release packages in your search, since the meta package is still in beta

Try it today

The VSSDK meta package is right now a prototype that we hope to make default in Visual Studio 2019. I’m personally dogfooding it in about 10 production extensions, but we need more extensions to use it to ensure it contains the right dependencies for the various versions of Visual Studio. When it has been properly tested and we’re confident that it will work, it will be renamed to Microsoft.VisualStudio.SDK or similar.

So please try it out and let us know how it works for you.

Mads Kristensen, Senior Program Manager
@mkristensen

Mads Kristensen is a senior program manager on the Visual Studio Extensibility team. He is passionate about extension authoring, and over the years, he's written some of the most popular ones with millions of downloads.

NEW EXAMPLE SCENARIO: Web Application Monitoring on Azure


Written by Shawn Gibbs and Nanette Ray from AzureCAT. Reviewed by Mike Wasson, Ed Price, and Tim Benjamin. Published by Adam Boeglin from Microsoft patterns & practices.

We believe monitoring applications is a vital cloud scenario, and so we're glad to build out this content, now available on the Azure Architecture Center.

Platform as a service (PaaS) offerings on Azure manage compute resources for you and in some ways change how you monitor deployments. Azure includes multiple monitoring services, each of which performs a specific role. Together, these services deliver a comprehensive solution for collecting, analyzing, and acting on telemetry from your applications and the Azure resources they use.

This example scenario includes the following services:

  • Azure App Service
  • Application Insights
  • Azure Monitor
  • Log Analytics

This article explains the following aspects:

  1. Scenario Architecture
    1. Components (services used)
  2. Considerations
    1. Alternatives
    2. Scalability and availability
    3. Security
  3. Pricing

 

Please check out the "Web Application Monitoring" article, as well as a growing library of additional Example Scenarios from Microsoft, on the Azure Architecture Center.

 

AzureCAT Guidance

"Hands-on solutions, with our heads in the Cloud!"

Microsoft Flow and PowerApps monitoring strategy


App Dev Manager Om Chauhan demonstrates an approach for monitoring Flows and PowerApps to enable alerts or take custom actions.


What is Microsoft Flow?

Flow is an online workflow service that enables you to work smarter and more efficiently by automating workflows across the most common apps and services. It's a recommended solution for workflows in the modern Office 365 ecosystem; going forward, Microsoft does not plan to make any updates to its legacy workflow products like SharePoint Designer and SharePoint 2013 workflows. Using Microsoft Flow, you can connect to more than 100 services and manage data in either the cloud or on-premises sources such as SharePoint and SQL Server. The list of applications and services that you can use with Microsoft Flow grows constantly.

What is Microsoft PowerApps?

PowerApps allows users to build business apps that run in a browser, or on a phone or tablet via a mobile app. It is an interface design tool that creates forms and screens for users to perform CRUD operations. Microsoft has positioned PowerApps as its recommended replacement for InfoPath. Like Flow, PowerApps also provides hundreds of connectors to integrate with apps and services in the cloud and on-premises.

Why the need for a monitoring solution?

The admin centers for Microsoft Flow (https://admin.flow.microsoft.com) and PowerApps (https://admin.powerapps.com) allow administrators to manage environments, resources, security, data policies, licenses, quotas, and so on. What the admin centers do not provide is an automated way of monitoring the Flows, PowerApps, connectors, etc. created within the different environments. From a security and governance point of view, organizations might want to build their own solution that monitors the Flows and PowerApps being created across the organization and triggers appropriate alerts and remediation actions as necessary.

Below are a few of the many example scenarios that an organization may want to monitor:

  1. Daily summary of new Flows and PowerApps being created in different environments.
  2. An unusually high number of Flows or PowerApps being created on a specific day.
  3. Flow or PowerApps using connectors that are being restricted to use within the organization.
  4. Newly available connectors from Microsoft, or custom connectors built by users in the organization.

Building your own custom monitoring solutions

You can build your own custom monitoring solution using either of the two approaches below:

  1. Using Power platform PowerShell Cmdlets (Preview)
    Microsoft has exposed the Power Platform APIs through PowerShell cmdlets that allow app creators and administrators to automate many monitoring, administration, and management tasks using PowerShell.
    Here is the article with the preview launch announcement of the PowerApps cmdlets
    https://docs.microsoft.com/en-us/powerapps/administrator/powerapps-powershell
  2. Using Power platform management connectors (Preview)
    Microsoft has also exposed the Power Platform APIs through standard connectors that make it easy to interact with PowerApps and Flow resources from within PowerApps and Flow themselves.

    Here is the link to the blog article announcing these new connectors:
    https://powerapps.microsoft.com/en-us/blog/new-connectors-for-powerapps-and-flow-resources/

    The admin connectors (Power Platform for Admins, Microsoft Flow for Admins, and PowerApps for Admins) give the user access to resources tenant-wide, whereas the maker connector (PowerApps for App Makers) only gives access to resources the user has some ownership of (e.g., owner, editor, shared with).

Monitoring solution using Power Platform PowerShell cmdlets

While still in preview, these cmdlets let you programmatically perform almost the same administrative operations that are currently available through the Flow/PowerApps admin centers.

Below is a sample PowerShell script that uses the PowerApps cmdlets and generates two files in the current folder, named FlowPermissions.csv and AppPermissions.csv. The files list details for all Flows and PowerApps, such as Flow/PowerApp name, creation and modification times, owner names, connectors, and so on.

Import-Module (Join-Path (Split-Path $script:MyInvocation.MyCommand.Path) "Microsoft.PowerApps.Administration.PowerShell.psm1") -Force

$AppRoleAssignmentsFilePath = ".\AppPermissions.csv"
$FlowRoleAssignmentsFilePath = ".\FlowPermissions.csv"

# Add the header to the app roles csv file
$appRoleAssignmentsHeaders = "EnvironmentName," `
        + "AppName," `
        + "CreatedTime," `
        + "LastModifiedTime," `
        + "AppDisplayName," `
        + "AppOwnerObjectId," `
        + "AppOwnerDisplayName," `
        + "AppOwnerDisplayEmail," `
        + "AppOwnerUserPrincipalName," `
        + "AppConnections," `
        + "RoleType," `
        + "RolePrincipalType," `
        + "RolePrincipalObjectId," `
        + "RolePrincipalDisplayName," `
        + "RolePrincipalEmail," `
        + "RoleUserPrincipalName,";
Add-Content -Path $AppRoleAssignmentsFilePath -Value $appRoleAssignmentsHeaders

# Add the header to the flow roles csv file
$flowRoleAssignmentsHeaders = "EnvironmentName," `
        + "FlowName," `
        + "CreatedTime," `
        + "LastModifiedTime," `
        + "FlowDisplayName," `
        + "FlowOwnerObjectId," `
        + "FlowOwnerDisplayName," `
        + "FlowOwnerDisplayEmail," `
        + "FlowOwnerUserPrincipalName," `
        + "FlowConnectionOwner," `
        + "FlowConnections," `
        + "FlowConnectionPlusOwner," `
        + "RoleType," `
        + "RolePrincipalType," `
        + "RolePrincipalObjectId," `
        + "RolePrincipalDisplayName," `
        + "RolePrincipalEmail," `
        + "RoleUserPrincipalName,";
Add-Content -Path $FlowRoleAssignmentsFilePath -Value $flowRoleAssignmentsHeaders


Add-PowerAppsAccount

#populate the app files
$apps = Get-AdminPowerApp

foreach($app in $apps)
{
    #Get the details around who created the app
    $AppEnvironmentName = $app.EnvironmentName
    $Name = $app.AppName
    $DisplayName = $app.displayName -replace '[,]'
    $OwnerObjectId = $app.owner.id
    $OwnerDisplayName = $app.owner.displayName -replace '[,]'
    $OwnerDisplayEmail = $app.owner.email
    $CreatedTime = $app.CreatedTime
    $LastModifiedTime = $app.LastModifiedTime

    $userOrGroupObject = Get-UsersOrGroupsFromGraph -ObjectId $OwnerObjectId
    $OwnerUserPrincipalName = $userOrGroupObject.UserPrincipalName

    #Get the list of connections for the app
    $connectionList = ""
    foreach($conRef in $app.Internal.properties.connectionReferences)
    {
        foreach($connection in $conRef)
        {
            foreach ($connId in ($connection | Get-Member -MemberType NoteProperty).Name) 
            {
                $connDetails = $($connection.$connId)

                $connDisplayName = $connDetails.displayName -replace '[,]'
                $connIconUri = $connDetails.iconUri
                $isOnPremiseConnection = $connDetails.isOnPremiseConnection
                $connId = $connDetails.id


                $connectionList += $connDisplayName + "; "
            }
        }        
    }

   
    #Get all of the details for each user the app is shared with
    $principalList = ""
    foreach($appRole in ($app | Get-AdminPowerAppRoleAssignment))
    {
        $RoleEnvironmentName = $appRole.EnvironmentName
        $RoleType = $appRole.RoleType
        $RolePrincipalType = $appRole.PrincipalType
        $RolePrincipalObjectId = $appRole.PrincipalObjectId
        $RolePrincipalDisplayName = $appRole.PrincipalDisplayName -replace '[,]'
        $RolePrincipalEmail = $appRole.PrincipalEmail
        $CreatedTime = $app.CreatedTime
        $LastModifiedTime = $app.LastModifiedTime

        If($appRole.PrincipalType -eq "Tenant")
        {
            $RolePrincipalDisplayName = "Tenant"
            $RoleUserPrincipalName = ""
        }
        If($appRole.PrincipalType -eq "User")
        {
            $userOrGroupObject = Get-UsersOrGroupsFromGraph -ObjectId $appRole.PrincipalObjectId 
            $RoleUserPrincipalName = $userOrGroupObject.UserPrincipalName  
            
        }

        # Write this permission record 
        $row = $AppEnvironmentName + "," `
                + $Name + "," `
                + $CreatedTime + "," `
                + $LastModifiedTime + "," `
                + $DisplayName + "," `
                + $OwnerObjectId + "," `
                + $OwnerDisplayName + "," `
                + $OwnerDisplayEmail + "," `
                + $OwnerUserPrincipalName + "," `
                + $connectionList + "," `
                + $RoleType + "," `
                + $RolePrincipalType + "," `
                + $RolePrincipalObjectId + "," `
                + $RolePrincipalDisplayName + "," `
                + $RolePrincipalEmail + "," `
                + $RoleUserPrincipalName;
        Add-Content -Path $AppRoleAssignmentsFilePath -Value $row 
    }
}
        

#populate the flow files
$flows = Get-AdminFlow

foreach($flow in $flows)
{
    #Get the details around who created the flow
    $FlowEnvironmentName = $flow.EnvironmentName
    $Name = $flow.FlowName
    $DisplayName = $flow.displayName -replace '[,]'
    $OwnerObjectId = $flow.createdBy.objectid
    $OwnerDisplayName = $flow.createdBy.displayName -replace '[,]'
    $OwnerDisplayEmail = $flow.createdBy.email
    $CreatedTime = $flow.CreatedTime
    $LastModifiedTime = $flow.LastModifiedTime

    $userOrGroupObject = Get-UsersOrGroupsFromGraph -ObjectId $OwnerObjectId
    $OwnerUserPrincipalName = $userOrGroupObject.UserPrincipalName

    $flowDetails = $flow | Get-AdminFlow

    $connectionList = ""
    $connectorList = ""
    $connectionPlusConnectorList = ""
    foreach($conRef in $flowDetails.Internal.properties.connectionReferences)
    {
        foreach($connection in $conRef)
        {
            foreach ($connId in ($connection | Get-Member -MemberType NoteProperty).Name) 
            {
                $connDetails = $($connection.$connId)

                $connDisplayName = $connDetails.displayName -replace '[,]'
                $connIconUri = $connDetails.iconUri
                $isOnPremiseConnection = $connDetails.isOnPremiseConnection
                $connId = $connDetails.id
                $connName = $connDetails.connectionName

                $connectionObject = Get-AdminPowerAppConnection $connName
                $connectorName = $connectionObject.ConnectorName
                $environmentName = $connectionObject.EnvironmentName
                $connectionOwner = $connectionObject.CreatedBy.UserPrincipalName

                $connectionList += $connectionOwner + "; "
                $connectorList += $connDisplayName + "; "
                $connectionPlusConnectorList += "{" + $connectionOwner + ":" + $connDisplayName + "}; "
            }
        }        
    }
    
    $principalList = ""
    foreach($flowRole in ($flow | Get-AdminFlowOwnerRole))
    {        
        $RoleEnvironmentName = $flowRole.EnvironmentName
        $RoleType = $flowRole.RoleType
        $RolePrincipalType = $flowRole.PrincipalType
        $RolePrincipalObjectId = $flowRole.PrincipalObjectId
        $RolePrincipalDisplayName = $flowRole.PrincipalDisplayName -replace '[,]'
        $RolePrincipalEmail = $flowRole.PrincipalEmail

        If($flowRole.PrincipalType -eq "Tenant")
        {
            $RolePrincipalDisplayName = "Tenant"
            $RoleUserPrincipalName = ""
        }
        If($flowRole.PrincipalType -eq "User")
        {
            $userOrGroupObject = Get-UsersOrGroupsFromGraph -ObjectId $flowRole.PrincipalObjectId 
            $RoleUserPrincipalName = $userOrGroupObject.UserPrincipalName  
            
        }

        # Write this permission record 
        $row = $RoleEnvironmentName + "," `
            + $Name + "," `
            + $CreatedTime + "," `
            + $LastModifiedTime + "," `
            + $DisplayName + "," `
            + $OwnerObjectId + "," `
            + $OwnerDisplayName + "," `
            + $OwnerDisplayEmail + "," `
            + $OwnerUserPrincipalName + "," `
            + $connectionList + "," `
            + $connectorList + "," `
            + $connectionPlusConnectorList + "," `
            + $RoleType + "," `
            + $RolePrincipalType + "," `
            + $RolePrincipalObjectId + "," `
            + $RolePrincipalDisplayName + "," `
            + $RolePrincipalEmail + "," `
            + $RoleUserPrincipalName;
        Add-Content -Path $FlowRoleAssignmentsFilePath -Value $row 
    }
}
 

The script can be made to run on a regular schedule and can be easily extended to send the two CSV files as attachments in an email to the configured recipients.
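
For example, a minimal sketch of that extension could use Send-MailMessage (the recipient, sender, and SMTP server values below are placeholders; adjust them to your environment):

# Hypothetical addresses and SMTP server; reuses the file path variables from the script above
Send-MailMessage -To "admins@contoso.com" `
    -From "flow-monitor@contoso.com" `
    -Subject "Daily Flow and PowerApps permissions report" `
    -Body "Attached are the latest Flow and PowerApps permission exports." `
    -Attachments $FlowRoleAssignmentsFilePath, $AppRoleAssignmentsFilePath `
    -SmtpServer "smtp.contoso.com"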

Monitoring solution using Power platform management connectors

Microsoft recently released a Flow template named “Get List of new PowerApps, Flow and Connectors” that sends an email with a report of the newly created PowerApps, Flows, and connectors introduced into your tenant within a configurable window. Please note that this Flow requires PowerApps/Flow administrator permissions in order to use the admin connectors within the Flow.

Here is the documentation link to this Flow template https://us.flow.microsoft.com/en-us/galleries/public/templates/0b2ffb0174724ad6b4681728c0f53062/get-list-of-new-powerapps-flows-and-connectors/

The Flow template makes use of the five connectors below:

[Screenshot: the five connectors used by the Flow template]

The Flow template uses a Recurrence trigger that can be configured to run the flow on a regular schedule. It uses the Power Platform for Admins connector's Get Environments action to get all the environments within the tenant. It then loops through each of the environments and uses

  • the Flow Management connector's List Flows as Admin action
  • the Flow Management connector's List Connectors action
  • the PowerApps for Admins connector's Get Apps as Admin action

to generate a list of Flows, connectors, and PowerApps that were created during that window of time. It then formats the three lists and uses the Office 365 Outlook connector's Send an email action to send the email to the configured recipients.

[Screenshot: the Flow template's steps]

Below is a sample of the email that is generated and sent by this Flow.

[Screenshot: sample email report]

You can also use Microsoft Flow to access the Office 365 Security and Compliance Center audit logs for Flow and PowerApps, monitor specific operations, and send alert email notifications. Note that there is a delay of 30+ minutes before the audit events show up in the Office 365 Security and Compliance Center audit logs.

Here is a blog article by Kent Weare, PM at Microsoft, about this approach: https://flow.microsoft.com/en-us/blog/accessing-office-365-security-compliance-center-logs-from-microsoft-flow/.

Conclusion

Both Flow and PowerApps are great tools that can be put in the hands of business and power users to accelerate the building of automated workflows and business apps across on-premises and cloud services. It's easy to see how many such Flows and apps could be built, deployed, and running within your tenant. It's a best practice for IT administrators to put a monitoring strategy in place that proactively keeps watch on these Flows, apps, and other related resources, raises alerts, and performs remediation actions as necessary.

Rename A Published App


The Windows Store presents an app to users with the help of its logo, name, description, and screenshots. But what uniquely identifies an app on the Store and makes users remember it is the app's name!

There's a variety of reasons why you might have to rename and re-brand your application: making the app more discoverable on the store, making it stand out among competing apps, or a major business transformation. Let's see the steps to achieve this.

For this blog I will use our published app named "How much to tip?" as an example. It calculates tips to be paid at restaurants. Adding the keyword "Restaurant" would probably make the name more relevant and make the app easier for our users to find. We decide to rename the app "Restaurant Tip Calculator" to better reflect its purpose. Throughout the steps, I will use screenshots and references from this app.

In the course of renaming an app, we need to make changes to the app in two places:

1. Windows Developer Dashboard

2. In the app package.

Though these two steps are independent of each other and can be performed in parallel, it's best to first reserve an app name. You do not want to make changes to the app package, and later realize the name you used there is not available!

Step 1: Reserve a new name

  1. Visit the Windows Developer Dashboard and login with the credentials you used for publishing the app.

  2. Under the Overview page, click on the name of the app you want to rename.

  3. Go to App Management > Manage App Names.

  4. Under Manage Product Names > Go to “Reserve more names”.

  5. Add the new name you want to reserve for the existing app in the textbox

  6. Click on Check Availability.

  7. If the name is available (indicated by a green tick in the text box), click on "Reserve Product Name".




You can see in the screenshot that I am trying to reserve the name "Restaurant Tip Calculator", and it appears to be available.

[Screenshot: Dashboard > App Management > Manage app names]

Step 2: Update your App Package

a) Make relevant changes to the icons, logos

Each app has an icon that is shown in the title bar of your app window, the app list in the Start menu, the taskbar and Task Manager, your app's tiles, your app's splash screen, and in the Microsoft Store. These images, corresponding to the tiles, logos, and splash screens, are part of the branding, and you may want them to reflect the new app name.
Make the desired changes and keep them ready to be consumed in the package.

b) Updating the appxmanifest:

When the app is meant to be published on the Store, the application's manifest needs to match the information on the dashboard. Since you are going to change the name of the app, the change has to be reflected in the manifest file as well; otherwise we may run into errors like “Invalid Package Identity name/family name” during app submission to the Store.

Follow the steps below to change the appxmanifest. If you do not have a project for the app and only have the appx, use Option 2 listed below.

Option 1: You have a Visual Studio Project:

If you have a UWP/PWA project or a packaging project for a Desktop Bridge app, you can:

  1. Open the project in Visual Studio

  2. Open the Package.appxmanifest file

  3. Under the Application tab, change the Display Name to the new app name

  4. Under the Packaging tab, change the Package Display Name to the new app name.

  5. Under the Packaging tab, increment the Version Number.

When choosing a UWP package from your published submission, the Microsoft Store will always use the highest-versioned package that is applicable to the customer’s Windows 10 device. Since this package will be a new submission, it is important that this package's version number is greater than the one you submitted earlier. Refer to this article for more details.

Editing the manifest file in the designer:

Application Tab:
Packaging Tab:

Editing the manifest file as code (F7):

.................
<Properties>
  <DisplayName>NewName</DisplayName>
  <PublisherDisplayName>XXXXX</PublisherDisplayName>
  <Logo>Images\StoreLogo.png</Logo>
</Properties>
.................
<uap:VisualElements DisplayName="NewName" .................
.................
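
The package identity and version live in the Identity element of the same file. A sketch with placeholder values (the identity Name keeps the originally reserved name; only the Version needs to increase):

.................
<Identity Name="XXXXX.MyApp"
          Publisher="CN=XXXXX"
          Version="1.1.0.0" />
.................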

Associating a Visual Studio project with the Store is an easy alternative way to make the manifest reflect the app details from the Store. This is something you might have used during your app's first submission. But since this is a rename scenario, you would not find the new app name in the store association list yet, so we have to make the manual changes to the manifest.
  1. Repackaging the app: You can package the appx by:
    right-clicking on the project > Store > Create App Packages

[Screenshot: Create App Packages wizard]

Select the first option.

[Screenshot: Create App Packages - app name selection]

Select the old app name and click on Next.

Option 2:
If you do not have a Visual Studio project but only the appx file, you need to

  1. Rename the appx file to, say, "appname.zip".

  2. Unzip the file.

  3. Open the manifest file from the extracted folder and update the display name entries as shown in Option 1.

  4. Make changes to the assets if you want them to reflect your brand and new name.

  5. To create an appx from the app's folder, use the makeappx command.

    makeappx pack -d "FOLDER_NAME_WITH_LOCATION" -p "APPX_NAME_WITH_LOCATION"

Step 3: Create a new submission

You would have guessed by now that changing the app name requires changing the package, which in turn means you have to create a new submission on the dashboard.

To create a new submission for a published app:

  1. Go to App Overview

  2. Click on Update next to the most recent submission shown on the App overview page (the submission whose status shows "In the Store"). Note in the screenshot, the current submission is Submission 2 for this app.

[Screenshot: App overview page with the Update button]

You will see this takes you to a new submission. Note in the screenshot below, it now shows Submission 3 as the current submission.

[Screenshot: the new submission (Submission 3)]

Step 4: Upload the new package.

  1. Go to Packages and upload the new package you created in Step 2.

  2. Make sure you select the check-boxes for the required device families.

  3. Click on the Save button to retain the changes to this submission.

Step 5: Delete old packages

  1. Go to the Packages of the new submission.

  2. Delete all old packages from the previous submission.

  3. Save.

Step 6: Update Store listing page

  1. Go to the Store listing page, repeat these steps for each store listing.

  2. Under Default description, in the Product Name drop-down, select the newly chosen name.

You will continue to see the old app name in the left side panel. This will remain until we delete the older name from the reserved names. Do not worry about it now; we will perform this action in Step 8.

  3. (Optional) Make changes to the other details like Description and App features.

  4. Add new screenshots.

Adding new screenshots may seem like an optional step, but it is important to take new screenshots, especially if the older ones show the app's title bar. Those older screenshots would depict the old name of the app, making them inappropriate for use with the new name.

Step 7: Submit to the store:

You can modify other fields from the submission if required and submit it to the store.

Step 8: Delete the older name

You might have observed that, in spite of the new submission, the app still reflects the older name.
Deleting the older name will rectify this for us. Make sure you no longer need this name, because once deleted, it becomes publicly available for anyone to reserve.

Go to App Management > Manage App Names. You can delete the older name here.

Once the older name is deleted, you can see the new name reflected on the dashboard.

Since the new submission uses the new name "Restaurant Tip Calculator" and the older name "How much to tip?" is no longer used in any active submission, you will be able to delete it now. This option was not available while an active submission was still using the older name.

Please note: the app identity name will continue to show the first name reserved.
Also, you may have to refresh the dashboard page to see the new name.

Here’s some extra information that might be helpful:

  1. Once a name is reserved, you have three months to publish with that name before the reservation expires. I don’t believe that you will immediately lose the name once it expires, but the name will instead become available for another developer to claim, which may or may not happen immediately. So it’s best to use the name soon. Once you have published your first submission, you no longer need to worry about losing the name (with the exception, of course, of name infringement).

  2. The name that you first reserve for your app when it’s created will be used in some of your app's identity details, such as the Package Family Name (PFN). These values may be visible to some users, and cannot be changed, so keep this in mind when you first set up your app.

  3. And finally, if you try to reserve a name, sometimes it is already taken by a company or an individual. If you hold the trademark or some other legal right to the name, you can report the issue here.

FINAL REMINDER: API Version 11 to Sunset October 31st


With the release of Bing Ads API version 12 in April, we announced that API version 11 will sunset on October 31, 2018. If you are currently using API v11, which is now deprecated, this is a courtesy reminder to migrate to API v12 by the sunset date.

If you need details on migrating to v12, refer to this guide. Please bear in mind that we cannot guarantee availability of the older services after the sunset date, so to have continued access to the Bing Ads API, please migrate before October 31st.

If you have any questions in this regard, feel free to reach out to us on the Bing Ads developer forum or contact us at bingads-feedback@microsoft.com.

Elster interface without file submission in the future


In the future, the Elster functionality in Microsoft Dynamics NAV and Business Central will support creating the XML file, but not submitting it. The change is due to the fact that the previously used interface can no longer be used. We are working to release this change in the November update for all supported versions of Microsoft Dynamics NAV and Business Central, both in the cloud and on-premises.

 

You can find the original message below, together with the link to the Yammer post.

** Important update on the ELSTER/VAT declaration submission feature of NAV & Business Central German version **

Due to changes in technology imposed by ELSTER it will no longer be possible for NAV and Business Central to support automatic submission of the VAT declaration. Instead, customers will have the option to generate and save the declaration XML file locally and manually handle the submission to ELSTER using the ELSTER portal or any other known options.

We aim for this feature to be released on the November update of supported NAV versions and Business Central cloud and on premises.

https://www.yammer.com/dynamicspartnernetwork/#/threads/company?type=general

Kind regards

Andreas Günther

Escalation Engineer
Microsoft Dynamics NAV
CSS EMEA Dynamics and SMS&P

 

ServiceNow Digital Workflows coming to Azure Government customers


Microsoft and ServiceNow announced an alliance today to help deliver ServiceNow digital workflows to Azure Government customers; paving the way to secure modernization and helping to accelerate digital transformation. We announced this alliance at our Government Leaders Cloud Forum in Washington, DC.

ServiceNow is changing the way government agencies work by expediting and simplifying the delivery of modern IT services. By automating routine activities, tasks and processes at work, ServiceNow helps agencies gain efficiencies and increase the productivity of their workforce.

“Federal agencies are increasingly focused on modernizing their digital infrastructure, but a long review process and a higher regulatory standard to approve cloud offerings for U.S. federal use can greatly hinder the government’s ability to deploy innovative cloud solutions,” said Brian Marvin, Vice President of Federal Sales, ServiceNow. “With this offering, U.S. Federal customers are expected to have greater access to ServiceNow’s best-in-class digital workflows on Azure Government regions including support for DISA IL5 and FedRAMP High.”

This joint cloud offering will bring together the IT employee and customer workflow experiences from ServiceNow with the security, protection and compliance of Azure Government. ServiceNow offerings will soon be available in the Azure Government Marketplace, providing an innovative platform for every department in the organization from IT to cyber operations, recruitment to field service and more.

ServiceNow on Azure Government enables federal agencies to move faster and securely to cloud-based solutions and services, accelerating digital transformation and modernizing service delivery to citizens.

To learn more, see the ServiceNow press release here.

 

End of mainstream support for AX 2009 and AX 2012 R2


As of October 9, 2018, mainstream support for AX 2009 and AX 2012 R2 has ended.

Please review the following information for details.

 

Dynamics AX 2009 SP1 ending support in 2018

https://blogs.technet.microsoft.com/dynamicsaxse/2018/01/11/dynamics-ax-2009-sp1-ending-support-in-2018/

 

Mainstream support ending for Dynamics AX 2012 RTM, R2

https://blogs.technet.microsoft.com/dynamicsaxse/2018/10/01/mainstream-support-ending-for-dynamics-ax-2012-rtm-r2/

 

Microsoft Dynamics AX 2009

https://support.microsoft.com/ja-jp/lifecycle/search/13619

 

Microsoft Dynamics AX 2012 R2

https://support.microsoft.com/ja-jp/lifecycle/search?alpha=Microsoft%20Dynamics%20AX%202012%20R2


INTUNE – Intune and Autopilot Part 3 – Preparing your environment


To be able to start using AutoPilot, there are some prerequisites.

We are going to start with company branding.

  • For AutoPilot, the Sign-in page text and the "Square logo image", highlighted below, need to be specified. The others are optional.

  • Specify your sign-in page text, upload your square logo, and click Save.
  • Another key part is the Azure Active Directory tenant name. This was filled in when you created the tenant and can be modified by clicking Azure Active Directory >> Properties and changing the Name.
  • As a prerequisite, we need to enable automatic enrollment.

This can be done by clicking Azure Active Directory >> Mobility (MDM and MAM) >> Intune.
Change the MDM user scope to All. Leave MAM set to None and click Save.

Looks like we're all set up for AutoPilot. Next steps would involve adding some devices to Windows AutoPilot and creating a profile of settings, but we'll cover that in the next post.

 

Ingmar Oosterhoff, Matthias Herfurth and Johannes Freundorfer

 

 

Connection timeout and Command timeout in SQL Server


Hello all,

 

While working with SQL Server, one of the common issues application teams report is timeouts. In this article, we cover connection and command timeouts and ways to isolate them.

There are 2 types of timeouts.

  1. Connection timeout
  2. Command Timeout

 

CONNECTION TIMEOUT: 

This is the time in seconds the application waits while trying to create a connection with SQL Server before terminating the attempt. The default value of the connection timeout is 15 seconds.

 

When you encounter Connection timeout issues, you should review:

  1. Check whether you are able to telnet to SQL Server on the SQL port.
  2. Check whether the 3-way TCP handshake is completing.
  3. Focus the troubleshooting approach on fixing SQL connectivity between the application and SQL Server.

Additional information on troubleshooting SQL Connectivity issues is documented here.

 

Reviewing the exception and the stack trace is a starting point to isolate connection timeout issues. In the stack trace below, you can observe that the application tries to create a connection and times out after the connection timeout value (15 seconds by default) elapses.

 

Exception Details: System.Data.SqlClient.SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)

Stack Trace:

[SqlException (0x80131904): A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)]
System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, SqlCredential credential, Object providerInfo, String newPassword, SecureString newSecurePassword, Boolean redirectedUserInstance, SqlConnectionString userConnectionOptions, SessionData reconnectSessionData, DbConnectionPool pool, String accessToken, Boolean applyTransientFaultHandling, SqlAuthenticationProviderManager sqlAuthProviderManager) +1431
System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, DbConnectionPoolKey poolKey, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection, DbConnectionOptions userOptions) +1085
System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnectionPool pool, DbConnection owningObject, DbConnectionOptions options, DbConnectionPoolKey poolKey, DbConnectionOptions userOptions) +70
System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject, DbConnectionOptions userOptions, DbConnectionInternal oldConnection) +964
System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject, DbConnectionOptions userOptions, DbConnectionInternal oldConnection) +109
System.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection owningObject, UInt32 waitForMultipleObjectsTimeout, Boolean allowCreate, Boolean onlyOneCheckConnection, DbConnectionOptions userOptions, DbConnectionInternal& connection) +1529
System.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection owningObject, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal& connection) +156
System.Data.ProviderBase.DbConnectionFactory.TryGetConnection(DbConnection owningConnection, TaskCompletionSource`1 retry, DbConnectionOptions userOptions, DbConnectionInternal oldConnection, DbConnectionInternal& connection) +258
System.Data.ProviderBase.DbConnectionInternal.TryOpenConnectionInternal(DbConnection outerConnection, DbConnectionFactory connectionFactory, TaskCompletionSource`1 retry, DbConnectionOptions userOptions) +312
System.Data.SqlClient.SqlConnection.TryOpenInner(TaskCompletionSource`1 retry) +202
System.Data.SqlClient.SqlConnection.TryOpen(TaskCompletionSource`1 retry) +413
System.Data.SqlClient.SqlConnection.Open() +128
System.Data.Entity.Infrastructure.Interception.InternalDispatcher`1.Dispatch(TTarget target, Action`2 operation, TInterceptionContext interceptionContext, Action`3 executing, Action`3 executed) +104
System.Data.Entity.Infrastructure.Interception.DbConnectionDispatcher.Open(DbConnection connection, DbInterceptionContext interceptionContext) +503
System.Data.Entity.SqlServer.<>c__DisplayClass1.<Execute>b__0() +18
System.Data.Entity.SqlServer.DefaultSqlExecutionStrategy.Execute(Func`1 operation) +234

 

 

COMMAND TIMEOUT:

This is the time in seconds to wait for the command to execute. This setting allows the cancellation of an ExecuteReader method call, due to delays from network traffic or heavy server use. The default value is 30 seconds.
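
As a quick illustration of where each of these two timeouts is configured, here is a minimal sketch (the server, database, and query are placeholders):

using System;
using System.Data.SqlClient;

class TimeoutDemo
{
    static void Main()
    {
        // Connection timeout is part of the connection string ("Connect Timeout", in seconds).
        var cs = "Server=MyServer;Database=MyDb;Integrated Security=true;Connect Timeout=15";

        using (var conn = new SqlConnection(cs))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM sys.objects", conn))
        {
            // Command timeout is a property of the command object (in seconds).
            cmd.CommandTimeout = 30;

            conn.Open();                              // SqlException here on connection timeout
            Console.WriteLine(cmd.ExecuteScalar());   // SqlException here on command timeout
        }
    }
}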

 

When you encounter Command timeout issues, you should review:

  1. Check whether there are any performance issues on SQL Server, such as blocking.
  2. Check in SQL Server why the query execution is taking longer than the command timeout value.
  3. Collect Attention events in SQL Profiler/Extended Events, track all queries that are timing out, and review the waits.

 

Reviewing the stack trace closely, you can observe that the application connects to SQL Server and calls the ExecuteReader method to retrieve data from the database.

 

Exception Details: System.ComponentModel.Win32Exception: The wait operation timed out

Source Error:

An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

Stack Trace:

[Win32Exception (0x80004005): The wait operation timed out]
[SqlException (0x80131904): Execution Timeout Expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.]
System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) +3305692
System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose) +736
System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady) +4061
System.Data.SqlClient.SqlDataReader.TryConsumeMetaData() +90
System.Data.SqlClient.SqlDataReader.get_MetaData() +99
System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString, Boolean isInternal, Boolean forDescribeParameterEncryption, Boolean shouldCacheForAlwaysEncrypted) +604
System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async, Int32 timeout, Task& task, Boolean asyncWrite, Boolean inRetry, SqlDataReader ds, Boolean describeParameterEncryptionRequest) +3303
System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, TaskCompletionSource`1 completion, Int32 timeout, Task& task, Boolean& usedCache, Boolean asyncWrite, Boolean inRetry) +667
System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method) +83
System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method) +301
System.Data.Entity.Infrastructure.Interception.InternalDispatcher`1.Dispatch(TTarget target, Func`3 operation, TInterceptionContext interceptionContext, Action`3 executing, Action`3 executed) +104
System.Data.Entity.Infrastructure.Interception.DbCommandDispatcher.Reader(DbCommand command, DbCommandInterceptionContext interceptionContext) +499
System.Data.Entity.Core.EntityClient.Internal.EntityCommandDefinition.ExecuteStoreCommands(EntityCommand entityCommand, CommandBehavior behavior) +36

 

SUMMARY:

To summarize, when an application reports a timeout error, it's essential to identify whether it is a connection timeout or a command timeout. The reported exception and the stack trace are a starting point to isolate the type of timeout. Accordingly, further data collection steps and actions can be performed to isolate the exact issue.

 

Hope this blog helps in identifying and isolating timeout issues.

 

Please share your feedback, questions and/or suggestions.

Thanks,
Don Castelino | Premier Field Engineer | Microsoft

Disclaimer: All posts are provided AS IS with no warranties and confer no rights. Additionally, views expressed here are my own and not those of my employer, Microsoft.

 

 

 

Azure DevOps Impact due to authentication service outage – 10/11 – Investigating


Initial notification: Thursday, October 11th 2018 12:47 UTC

  • We're investigating user impact in the Azure DevOps service; users may be experiencing performance issues or outages

  • Preliminary investigation shows reliability issues on our main authentication service

  • At this time, according to telemetry, impact is reducing over time

  • Next Update: Before Thursday, October 11th 2018 13:20 UTC

Sincerely,
Pedro

How can I include/exclude specific memory blocks in user-mode crash dumps?

$
0
0


You can tweak the information included in crash dumps created by Windows Error Reporting.

To request that Windows Error Reporting include a dump of another process if the calling process crashes, use the WerRegisterAdditionalProcess function. This is useful if your program is split up into multiple processes.

You can add additional key/value pairs of data with the WerRegisterCustomMetadata function. This information can be used to help categorize and filter crash dumps automatically. For example, you might include metadata that says whether the app is running in trial mode.

You can add a block of memory to user-mode crash dumps (heap dumps or larger) with the WerRegisterMemoryBlock function. Note that this memory will be captured in the dump, but it's not the same as metadata because you can't filter on it.

Conversely, you can exclude a block of memory from user-mode crash dumps with the WerRegisterExcludedMemoryBlock function. This is handy if you have large memory blocks containing information that isn't all that interesting from a debugging standpoint, like video texture buffers or audio output buffers.
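
Here's a minimal sketch of how the last two calls might be used (error handling omitted; the buffer names and sizes are purely illustrative):

#include <windows.h>
#include <werapi.h>

static char g_importantState[4096];      // small block we want captured in dumps
static char g_textureScratch[1 << 20];   // large, uninteresting block we want excluded

int main()
{
    // Ask WER to capture this block in user-mode crash dumps (heap dumps or larger).
    WerRegisterMemoryBlock(g_importantState, sizeof(g_importantState));

    // Ask WER to leave this block out of user-mode crash dumps.
    WerRegisterExcludedMemoryBlock(g_textureScratch, sizeof(g_textureScratch));

    // ... rest of the program ...
    return 0;
}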

New capabilities to enable your mission end to end

As we continue to focus on helping government customers secure their mission end to end, we’d like to share a set of new Azure capabilities to help US government customers deliver breakthrough outcomes. For in-depth coverage of these announcements and more, read Julia White’s blog ‘Enabling intelligent cloud and intelligent edge solutions for government.’

Update on Azure Government Secret regions
Last October, we announced Azure Government Secret and the expansion of our mission-critical cloud with new regions enabled for Secret U.S. classified data or Defense Information Systems Agency (DISA) Impact Level 6 workloads. These new regions will be available by the end of the first quarter of 2019.

Azure Data Box, Data Box Edge, and Data Box Heavy available for government customers
For government agencies who may have mission needs in environments where connectivity and cloud access are limited or unavailable, Azure Data Box Edge enables you to pre-process data at the edge and move data to Azure efficiently. Azure Data Box Heavy offers a rugged, secure, tamper-resistant appliance with 1 petabyte of storage for transporting customer data to an Azure datacenter. By bringing the expanded family of Data Box offerings to our government customers, we expect to enable critical mission scenarios requiring analysis of larger volumes of data, wherever that data is gathered.

Azure Reservations now available for Azure Government
To help government agencies take advantage of cloud services rapidly and with minimal friction, we’re announcing Azure Reservations now available for Azure Government. Azure Reservations enable you to budget and forecast more easily with up-front payment for one and three-year terms.

Announcing Azure FastTrack for Government
Enhanced deployment support with Azure FastTrack for Government gives you access to design and architecture planning, as well as configuration of services, all based out of offices in Washington, D.C.

Expanded FedRAMP coverage for Azure, continuing broadest support in the industry
Microsoft is announcing the expansion of our FedRAMP Moderate coverage for our public cloud regions. A total of 50 services are now available, creating new opportunities across a broad array of federal agencies. For those government agencies with citizenship requirements and more controlled access to certain types of data, Azure Government will continue to offer services at FedRAMP High, with the addition of the government cloud guarantees around heightened screening of personnel, management exclusively by U.S. citizens, and other important protections.

In addition to these updates we’re announcing several new partnerships focused on enabling mission-critical workloads in the cloud. Read the full announcement for more detail on these initiatives.

Getting Started with the Kusto Query Language (KQL)


Azure Data Explorer is in public preview and their documentation is an excellent place to educate yourself on the Kusto Query Language that is used to interact with Azure Data Explorer. There's also a 4-hour Pluralsight course which will really jump start you. The Azure Data Explorer white paper also covers the basics of the query language. But you're still reading, so I'll assume you want my version of how to get started with the query language.

Queries generally begin by either referencing a table or a function. You start with that tabular data and then run it through a set of statements connected by pipes to shape your data into the final result. Here are some basics to get you querying:

  • where - The "where" operator allows you to filter your data. So if you start with TableA and you want to only keep events that have a certain key, you would use:
    TableA | where MyKey='MyValue'
  • project/extend - These two operators help you pick the fields that you want to show up in your output. Extend adds a new field and project can either choose from the existing set of fields or add a new field. These two statements produce the same result:
    T | project A, B, C=D+E
    
    T | extend C=D+E | project A, B, C
  • count - The count operator returns a scalar value of the number of rows in the tabular set.
    T | count
  • summarize - This is a big topic, but we'll keep it light for now. The summarize operator can perform aggregations on your dataset. For example, the count operator mentioned above is short for:
    T | summarize count()

    You can specify a number of aggregations over a variety of fields:

    T | summarize count(), sum(A), avg(B) by C, D

    The bin() function is often used in conjunction with summarize statements. It lets you group times (or numbers) into buckets.

    T | summarize count() by bin(TimeStamp, 1d) // count the number of rows per day
  • join - Many types of joins are supported but the common ones are inner join (keep rows that match on both sides) and leftouter (keep all rows from the left side and include matching rows from the right). You technically don't have to specify a join kind but I recommend that you always do. It makes for easier readability and the default probably isn't what you expect.
    T | join kind=inner (U) on A

    Note that joins are only on equality and generally it's expected that the keys have the same name on both sides. If they aren't the same, you can use a project statement to make them the same or use an alternate key specification syntax:

    T | join kind=inner (U) on $left.A == $right.B
  • now(), ago() and datetime math - Azure Data Explorer excels at time series data analysis. There are some handy functions to get used to like "now()" which gives the current UTC time and "ago()". The ago function is especially handy when you're looking for recent data.
    T | where A > ago(5m) // where A is greater than 5 minutes ago
    
    T | where A > ago(1d) // where A is greater than 1 day ago

    You can also do easy datetime math.

    print ago(1d) + 10m
  • arg_max(), arg_min() - These are somewhat advanced topics but I include them when I introduce people to Kusto because they are so handy. Imagine that you have a bunch of entities and each one sends a row to your table periodically. You want to run a query over the latest message from each entity.
    T | summarize arg_max(Timestamp, *) by Id // for every Id, get the row with the maximum Timestamp

    Use these functions with care though. If they are used on a huge table and the cardinality of the grouping is high, it can destroy performance.

  • Rendering charts - Both the Kusto Explorer desktop client and the web client have the ability to easily render charts. You can read the documentation to learn about the various types, but since I deal with a lot of time series data, the one I use the most is timechart. It's a line chart where the x-axis is a datetime and everything else goes on the y-axis. It automatically keeps the x-axis spaced nicely even if your data doesn't have every time specified (see the combined example after this list).
    T | render timechart
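
Putting a few of these pieces together, a typical time-series query filters to a recent window, buckets by time, and renders the result. A small sketch (the table name Requests and its Timestamp column are hypothetical):

    Requests
    | where Timestamp > ago(7d)
    | summarize count() by bin(Timestamp, 1h)
    | render timechart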

In future posts, I'll cover some beginning perf tips and style/readability tips. You can find all of my posts on this topic under the Azure Data Explorer tag. Keep calm and Kusto on!

How to Use Class Template Argument Deduction


Class Template Argument Deduction (CTAD) is a C++17 Core Language feature that reduces code verbosity. C++17's Standard Library also supports CTAD, so after upgrading your toolset, you can take advantage of this new feature when using STL types like std::pair and std::vector. Class templates in other libraries and your own code will partially benefit from CTAD automatically, but sometimes they'll need a bit of new code (deduction guides) to fully benefit. Fortunately, both using CTAD and providing deduction guides is pretty easy, despite template metaprogramming's fearsome reputation!

 

CTAD support is available in VS 2017 15.7 and later with the /std:c++17 and /std:c++latest compiler options.

 

Template Argument Deduction

C++98 through C++14 performed template argument deduction for function templates. Given a function template like "template <typename RanIt> void sort(RanIt first, RanIt last);", you can and should sort a std::vector<int> without explicitly specifying that RanIt is std::vector<int>::iterator. When the compiler sees "sort(v.begin(), v.end());", it knows what the types of "v.begin()" and "v.end()" are, so it can determine what RanIt should be. The process of determining template arguments for template parameters (by comparing the types of function arguments to function parameters, according to rules in the Standard) is known as template argument deduction, which makes function templates far more usable than they would otherwise be.
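
For example, a small sketch of that function-template deduction in action (using std::sort):

#include <algorithm>
#include <vector>

int main() {
    std::vector<int> v{3, 1, 2};
    std::sort(v.begin(), v.end()); // RanIt is deduced as std::vector<int>::iterator
}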

 

However, class templates didn't benefit from these rules. If you wanted to construct a std::pair from two ints, you had to say "std::pair<int, int> p(11, 22);", despite the fact that the compiler already knows that the types of 11 and 22 are int. The workaround for this limitation was to use function template argument deduction: std::make_pair(11, 22) returns std::pair<int, int>. Like most workarounds, this is problematic for a few reasons: defining such helper functions often involves template metaprogramming (std::make_pair() needs to perform perfect forwarding and decay, among other things), compiler throughput is reduced (as the front-end has to instantiate the helper, and the back-end has to optimize it away), debugging is more annoying (as you have to step through helper functions), and there's still a verbosity cost (the extra "make_" prefix, and if you want a local variable instead of a temporary, you need to say "auto").

 

Hello, CTAD World

C++17 extends template argument deduction to the construction of an object given only the name of a class template. Now, you can say "std::pair(11, 22)" and this is equivalent to "std::pair<int, int>(11, 22)". Here's a full example, with a C++17 terse static_assert verifying that the declared type of p is the same as std::pair<int, const char *>:

 

C:\Temp>type meow.cpp

#include <type_traits>

#include <utility>

 

int main() {

    std::pair p(1729, "taxicab");

    static_assert(std::is_same_v<decltype(p), std::pair<int, const char *>>);

}

 

C:\Temp>cl /EHsc /nologo /W4 /std:c++17 meow.cpp

meow.cpp

 

C:\Temp>

 

CTAD works with parentheses and braces, and named variables and nameless temporaries.

 

Another Example: array and greater

C:\Temp>type arr.cpp

#include <algorithm>

#include <array>

#include <functional>

#include <iostream>

#include <string_view>

#include <type_traits>

using namespace std;

 

int main() {

    array arr = { "lion"sv, "direwolf"sv, "stag"sv, "dragon"sv };

 

    static_assert(is_same_v<decltype(arr), array<string_view, 4>>);

 

    sort(arr.begin(), arr.end(), greater{});

 

    cout << arr.size() << ": ";

 

    for (const auto& e : arr) {

        cout << e << " ";

    }

 

    cout << "n";

}

 

C:Temp>cl /EHsc /nologo /W4 /std:c++17 arr.cpp && arr

arr.cpp

4: stag lion dragon direwolf

 

This demonstrates a couple of neat things. First, CTAD for std::array deduces both its element type and its size. Second, CTAD works with default template arguments; greater{} constructs an object of type greater<void> because it's declared as "template <typename T = void> struct greater;".

 

CTAD for Your Own Types

C:\Temp>type mypair.cpp
#include <type_traits>

template <typename A, typename B> struct MyPair {
    MyPair() { }
    MyPair(const A&, const B&) { }
};

int main() {
    MyPair mp{11, 22};

    static_assert(std::is_same_v<decltype(mp), MyPair<int, int>>);
}

C:\Temp>cl /EHsc /nologo /W4 /std:c++17 mypair.cpp
mypair.cpp

C:\Temp>

 

In this case, CTAD automatically works for MyPair. What happens is that the compiler sees that a MyPair is being constructed, so it runs template argument deduction for MyPair's constructors. Given the signature (const A&, const B&) and the arguments of type int, A and B are deduced to be int, and those template arguments are used for the class and the constructor.

 

However, "MyPair{}" would emit a compiler error. That's because the compiler would attempt to deduce A and B, but there are no constructor arguments and no default template arguments, so it can't guess whether you want MyPair<int, int> or MyPair<Starship, Captain>.

 

Deduction Guides

In general, CTAD automatically works when class templates have constructors whose signatures mention all of the class template parameters (like MyPair above). However, sometimes constructors themselves are templated, which breaks the connection that CTAD relies on. In those cases, the author of the class template can provide "deduction guides" that tell the compiler how to deduce class template arguments from constructor arguments.

 

C:\Temp>type guides.cpp
#include <iterator>
#include <type_traits>

template <typename T> struct MyVec {
    template <typename Iter> MyVec(Iter, Iter) { }
};

template <typename Iter> MyVec(Iter, Iter) -> MyVec<typename std::iterator_traits<Iter>::value_type>;

template <typename A, typename B> struct MyAdvancedPair {
    template <typename T, typename U> MyAdvancedPair(T&&, U&&) { }
};

template <typename X, typename Y> MyAdvancedPair(X, Y) -> MyAdvancedPair<X, Y>;

int main() {
    int * ptr = nullptr;
    MyVec v(ptr, ptr);

    static_assert(std::is_same_v<decltype(v), MyVec<int>>);

    MyAdvancedPair adv(1729, "taxicab");

    static_assert(std::is_same_v<decltype(adv), MyAdvancedPair<int, const char *>>);
}

C:\Temp>cl /EHsc /nologo /W4 /std:c++17 guides.cpp
guides.cpp

C:\Temp>

 

Here are two of the most common cases for deduction guides in the STL: iterators and perfect forwarding. MyVec resembles a std::vector in that it's templated on an element type T, but it's constructible from an iterator type Iter. Calling the range constructor provides the type information we want, but the compiler can't possibly realize the relationship between Iter and T. That's where the deduction guide helps. After the class template definition, the syntax "template <typename Iter> MyVec(Iter, Iter) -> MyVec<typename std::iterator_traits<Iter>::value_type>;" tells the compiler "when you're running CTAD for MyVec, attempt to perform template argument deduction for the signature MyVec(Iter, Iter). If that succeeds, the type you want to construct is MyVec<typename std::iterator_traits<Iter>::value_type>". That essentially dereferences the iterator type to get the element type we want.

 

The other case is perfect forwarding, where MyAdvancedPair has a perfect forwarding constructor like std::pair does. Again, the compiler sees that A and B versus T and U are different types, and it doesn't know the relationship between them. In this case, the transformation we need to apply is different: we want decay (if you're unfamiliar with decay, you can skip this). Interestingly, we don't need decay_t, although we could use that type trait if we wanted extra verbosity. Instead, the deduction guide "template <typename X, typename Y> MyAdvancedPair(X, Y) -> MyAdvancedPair<X, Y>;" is sufficient. This tells the compiler "when you're running CTAD for MyAdvancedPair, attempt to perform template argument deduction for the signature MyAdvancedPair(X, Y), as if it were taking arguments by value. Such deduction performs decay. If it succeeds, the type you want to construct is MyAdvancedPair<X, Y>."

 

This demonstrates a critical fact about CTAD and deduction guides. CTAD looks at a class template's constructors, plus its deduction guides, in order to determine the type to construct. That deduction either succeeds (determining a unique type) or fails. Once the type to construct has been chosen, overload resolution to determine which constructor to call happens normally. CTAD doesn't affect how the constructor is called. For MyAdvancedPair (and std::pair), the deduction guide's signature (taking arguments by value, notionally) affects the type chosen by CTAD. Afterwards, overload resolution chooses the perfect forwarding constructor, which takes its arguments by perfect forwarding, exactly as if the class type had been written with explicit template arguments.

 

CTAD and deduction guides are also non-intrusive. Adding deduction guides for a class template doesn't affect existing code, which previously was required to provide explicit template arguments. That's why we were able to add deduction guides for many STL types without breaking a single line of user code.

 

Enforcement

In rare cases, you might want deduction guides to reject certain code. Here's how std::array does it:

 

C:\Temp>type enforce.cpp
#include <stddef.h>
#include <type_traits>

template <typename T, size_t N> struct MyArray {
    T m_array[N];
};

template <typename First, typename... Rest> struct EnforceSame {
    static_assert(std::conjunction_v<std::is_same<First, Rest>...>);
    using type = First;
};

template <typename First, typename... Rest> MyArray(First, Rest...)
    -> MyArray<typename EnforceSame<First, Rest...>::type, 1 + sizeof...(Rest)>;

int main() {
    MyArray a = { 11, 22, 33 };
    static_assert(std::is_same_v<decltype(a), MyArray<int, 3>>);
}

C:\Temp>cl /EHsc /nologo /W4 /std:c++17 enforce.cpp
enforce.cpp

C:\Temp>

 

Like std::array, MyArray is an aggregate with no actual constructors, but CTAD still works for these class templates via deduction guides. MyArray's guide performs template argument deduction for MyArray(First, Rest...), enforcing all of the types to be the same, and determining the array's size from how many arguments there are.

 

Similar techniques could be used to make CTAD entirely ill-formed for certain constructors, or all constructors. The STL itself hasn't needed to do that explicitly, though. (There are only two classes where CTAD would be undesirable: unique_ptr and shared_ptr. C++17 supports both unique_ptrs and shared_ptrs to arrays, but both "new T" and "new T[N]" return T *. Therefore, there's insufficient information to safely deduce the type of a unique_ptr or shared_ptr being constructed from a raw pointer. As it happens, this is automatically blocked in the STL due to unique_ptr's support for fancy pointers and shared_ptr's support for type erasure, both of which change the constructor signatures in ways that prevent CTAD from working.)
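
In practice, that just means you keep writing the element type explicitly for those two; a minimal sketch:

#include <memory>

int main() {
    // std::unique_ptr p(new int(42));    // won't compile: CTAD is blocked, as described above
    std::unique_ptr<int> p(new int(42));  // the element type must be spelled out
    std::shared_ptr<int[]> a(new int[8]); // likewise for the array forms added in C++17
}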

 

Corner Cases for Experts: Non-Deduced Contexts

Here are some advanced examples that aren't meant to be imitated; instead, they're meant to illustrate how CTAD works in complicated scenarios.

 

Programmers who write function templates eventually learn about "non-deduced contexts". For example, a function template taking "typename Identity<T>::type" can't deduce T from that function argument. Now that CTAD exists, non-deduced contexts affect the constructors of class templates too.

 

C:\Temp>type corner1.cpp
template <typename X> struct Identity {
    using type = X;
};

template <typename T> struct Corner1 {
    Corner1(typename Identity<T>::type, int) { }
};

int main() {
    Corner1 corner1(3.14, 1729);
}

C:\Temp>cl /EHsc /nologo /W4 /std:c++17 corner1.cpp
corner1.cpp
corner1.cpp(10): error C2672: 'Corner1': no matching overloaded function found
corner1.cpp(10): error C2783: 'Corner1<T> Corner1(Identity<X>::type,int)': could not deduce template argument for 'T'
corner1.cpp(6): note: see declaration of 'Corner1'
corner1.cpp(10): error C2641: cannot deduce template argument for 'Corner1'
corner1.cpp(10): error C2514: 'Corner1': class has no constructors
corner1.cpp(5): note: see declaration of 'Corner1'

 

In corner1.cpp, "typename Identity<T>::type" prevents the compiler from deducing that T should be double.

 

Here's a case where some but not all constructors mention T in a non-deduced context:

 

C:\Temp>type corner2.cpp
template <typename X> struct Identity {
    using type = X;
};

template <typename T> struct Corner2 {
    Corner2(T, long) { }
    Corner2(typename Identity<T>::type, unsigned long) { }
};

int main() {
    Corner2 corner2(3.14, 1729);
}

C:\Temp>cl /EHsc /nologo /W4 /std:c++17 corner2.cpp
corner2.cpp
corner2.cpp(11): error C2668: 'Corner2<double>::Corner2': ambiguous call to overloaded function
corner2.cpp(7): note: could be 'Corner2<double>::Corner2(double,unsigned long)'
corner2.cpp(6): note: or       'Corner2<double>::Corner2(T,long)'
        with
        [
            T=double
        ]
corner2.cpp(11): note: while trying to match the argument list '(double, int)'

 

In corner2.cpp, CTAD succeeds but constructor overload resolution fails. CTAD ignores the constructor taking "(typename Identity<T>::type, unsigned long)" due to the non-deduced context, so CTAD uses only "(T, long)" for deduction. Like any function template, comparing the parameters "(T, long)" to the argument types "double, int" deduces T to be double. (int is convertible to long, which is sufficient for template argument deduction; it doesn't demand an exact match there.) After CTAD has determined that Corner2<double> should be constructed, constructor overload resolution considers both signatures "(double, long)" and "(double, unsigned long)" after substitution, and those are ambiguous for the argument types "double, int" (because int is convertible to both long and unsigned long, and the Standard doesn't prefer either conversion).
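
As a caller, the simplest way out of that ambiguity is to pass an argument whose type exactly matches one of the constructors, for example a long literal; a sketch reusing corner2.cpp's definitions:

template <typename X> struct Identity {
    using type = X;
};

template <typename T> struct Corner2 {
    Corner2(T, long) { }
    Corner2(typename Identity<T>::type, unsigned long) { }
};

int main() {
    Corner2 corner2(3.14, 1729L);  // exact match for (T, long), so overload resolution is unambiguous
}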

 

Corner Cases for Experts: Deduction Guides Are Preferred

C:\Temp>type corner3.cpp
#include <type_traits>

template <typename T> struct Corner3 {
    Corner3(T) { }
    template <typename U> Corner3(U) { }
};

#ifdef WITH_GUIDE
    template <typename X> Corner3(X) -> Corner3<X *>;
#endif

int main() {
    Corner3 corner3(1729);

#ifdef WITH_GUIDE
    static_assert(std::is_same_v<decltype(corner3), Corner3<int *>>);
#else
    static_assert(std::is_same_v<decltype(corner3), Corner3<int>>);
#endif
}

C:\Temp>cl /EHsc /nologo /W4 /std:c++17 corner3.cpp
corner3.cpp

C:\Temp>cl /EHsc /nologo /W4 /std:c++17 /DWITH_GUIDE corner3.cpp
corner3.cpp

C:\Temp>

 

CTAD works by performing template argument deduction and overload resolution for a set of deduction candidates (hypothetical function templates) that are generated from the class template's constructors and deduction guides. In particular, this follows the usual rules for overload resolution with only a couple of additions. Overload resolution still prefers things that are more specialized (N4713 16.3.3 [over.match.best]/1.7). When things are equally specialized, there's a new tiebreaker: deduction guides are preferred (/1.12).

 

In corner3.cpp, without a deduction guide, the Corner3(T) constructor is used for CTAD (whereas Corner3(U) isn't used for CTAD because it doesn't mention T), and Corner3<int> is constructed. When the deduction guide is added, the signatures Corner3(T) and Corner3(X) are equally specialized, so paragraph /1.12 steps in and prefers the deduction guide. This says to construct Corner3<int *> (which then calls Corner3(U) with U = int).

 

Reporting Bugs

Please let us know what you think about VS. You can report bugs via the IDE's Report A Problem and also via the web: go to the VS Developer Community and click on the C++ tab.


Authentication issues in all regions – 10/11 – Mitigated


Final Update: Thursday, October 11th 2018 18:59 UTC

The Azure identity team has confirmed full mitigation of the authentication issues.

Sincerely,
Samuli


Update: Thursday, October 11th 2018 18:24 UTC

We are engaged with our partners on the Azure identity team to investigate the issue. The Azure status portal can be viewed for partner team updates.

Sincerely,
Samuli


Initial notification: Thursday, October 11th 2018 17:21 UTC

We're investigating authentication issues in all regions. Users who are already logged on to the service will not experience any disruption, but users trying to sign in with a fresh session will see issues.

  • Next Update: Before Thursday, October 11th 2018 17:55 UTC

Sincerely,
Sudheer

Federation patterns using Azure AD


In this post, Premier Dev Consultant Marius Rochon considers scenarios where an application needs to be accessed by users from many sources of authentication (Office 365, owned and operated by Microsoft but whose use is managed separately by many independent organizations, is an example of such a resource). The post proposes a framework for determining an optimal solution for the application using Azure AD.


The optimal identity infrastructure architecture minimizes security risks and maximizes the utility of authentication while minimizing its cost. In particular, providing single sign-on reduces the risk of users compromising their credentials (because they would otherwise have to remember several). Also, having credentials managed outside of the application removes the cost of credential management (storage, associated call center services, etc.) from the application.

However, providing SSO to users from many identity directories involves solving at least two technical problems. These are described in more detail below as Issues. The Discovery section describes a taxonomy that seems useful in understanding authentication requirements. These requirements are then used to identify distinct identity directory architectures addressing the cost/benefit objectives described above.

Continue reading on his blog

“AaronLocker” update (v0.91) — and see “AaronLocker” in action on Channel 9!


"AaronLocker" is a robust, practical, PowerShell-based application whitelisting solution for Windows. See it in action in this new Defrag Tools episode on Channel 9!

This update (AaronLocker-v0.91) to the original 0.9 release includes these improvements:

  • Documentation updates, particularly in the area of Group Policy control;
  • Blocks execution from writable alternate data streams on user-writable directories under the Windows and Program Files directories;
  • Blocks older versions of Sysinternals BgInfo.exe that were not AppLocker-aware and allowed execution of unapproved VBScript files (N.B., release of an AppLocker-aware BgInfo.exe is imminent -- subscribe to the Sysinternals blog for notifications);
  • Improvements to the information retrieved from event logs, including an additional date/time column that Excel can filter on, a file extension column that can help track files with non-standard extensions, and a label for when an event is associated with the built-in local administrator account;
  • Additional info in the event workbook's Summary tab, and a new "Users and event counts" tab;
  • Performance improvements in Generate-EventWorkbook.ps1;
  • PowerShell v2 DLLs blocked with explicit deny rules instead of exceptions;
  • Minor bug fixes.

I still intend to put it on GitHub but haven't gotten to it yet. In the meantime, I want to get the update out, so you can download the updated AaronLocker here. (I also need to create new sample event content, but don't want to hold this up any longer.)

Brief description of "AaronLocker" repeated from original post:

AaronLocker is designed to make the creation and maintenance of robust, strict, AppLocker-based whitelisting rules as easy and practical as possible. The entire solution involves a small number of PowerShell scripts. You can easily customize rules for your specific requirements with simple text-file edits. AaronLocker includes scripts that document AppLocker policies and capture event data into Excel workbooks that facilitate analysis and policy maintenance.

AaronLocker is designed to restrict program and script execution by non-administrative users. Note that AaronLocker does not try to stop administrative users from running anything they want – and AppLocker cannot meaningfully restrict administrative actions anyway. A determined user with administrative rights can easily bypass AppLocker rules.

AaronLocker’s strategy can be summed up as: if a non-admin could have put a program or script onto the computer – i.e., it is in a user-writable directory – don’t allow it to execute unless it has already been specifically allowed by an administrator. This will stop execution if a user is tricked into downloading malware, if an exploitable vulnerability in a program the user is running tries to put malware on the computer, or if a user intentionally tries to download and run unauthorized programs.

AaronLocker works on all supported versions of Windows that can provide AppLocker.

A personal note: the name “AaronLocker” was Chris (@appcompatguy) Jackson’s idea – not mine – and I resisted it for a long time. I finally gave in because I couldn’t come up with a better name.

The zip file contains full documentation, all the scripts, and sample outputs.

By the way, I'd also like to point out that AaronLocker addresses many of the AppLocker bypasses that various sites have published.

Guide: How to do change management for Office 365 clients?


Daily, I come across a good number of customers and IT pros who want to know more about change management for Office 365 clients, so I am posting the related info here.

As you are aware, the client applications included with Office 365 are released regularly with updates that provide new features and functionality, together with security and other updates. Windows 10 has adopted this servicing model and also releases new functionality regularly. As an IT professional, you need to understand this servicing model and how you can manage the releases while your organization takes advantage of the new functionality.

We have documented detailed information about change management for Office 365 clients: the servicing models for updates, the release channels and cadences, and how to effectively manage the release of Office 365 client applications for your organization. Please go through it and make use of it.

In addition, you can refer to these related articles:

Overview of update channels for Office 365 ProPlus

Overview of Windows as a service

Microsoft cloud IT architecture resources

Release information for updates to Office 365 ProPlus

Hope this helps.

Azure IoT: IoT Toolkit extension for VS Code

  • Do you use Microsoft VS Code to build applications?
  • Are you an Azure IoT developer?
  • Did you know that you can use VS Code to develop with Azure IoT Hub?
Yes, there's a VS Code extension for developing with Azure IoT Hub. If you didn't know, don't miss this new episode of the IoT Show and see how simple it is to start your IoT development right from Visual Studio Code.

Using this extension, you can interact with Azure IoT Hub and take advantage of IoT device management, IoT Edge management, and IoT Hub code generation.

Make use of it. Happy coding!