
Insider Fast release notes: 15.37 (170718)


Here's a look at this week's Insider Fast update: 15.37 (170718):

Top fixes:

  • Fixed a reminder issue where created or modified reminders (including snoozed reminders) would not be displayed until you relaunched Outlook
  • Fixed a bug where, after moving a message with an attachment from the Inbox to another folder, the attachment was missing when the message was displayed in the new folder

Top improvements:

  • Smart folders list on the new side bar is now visible at all times
  • Improved keyboard and VoiceOver navigation for meeting invites and email headers
  • Improved the visuals for dragging and dropping emails to a folder
  • Provided a command to use domain-based AutoDiscover when adding an Exchange mailbox (applicable to specific O365 tenants; contact your IT admin for details)

Other notes:

  • For any issues, please use Help > Contact Support
    • We've disabled comments on these posts to direct feedback to Contact Support (keep the feedback coming!)
  • For feature requests, please use Help > Suggest a Feature
  • More information on Office Insider and joining Insider Fast

Announcing the JDBC 6.2 RTW!


We are pleased to announce the full release of the Microsoft JDBC Driver 6.2 for SQL Server. The RTW release rolls up all new functionality and bug fixes released in preview releases 6.1.1 to 6.1.7.

The 6.2 RTW introduces support for:

  • Azure Active Directory (AAD) on Linux
  • Federal Information Processing Standard (FIPS) enabled Java virtual machines
  • Kerberos authentication improvements - Principal/Password method, cross-realm authentication, Kerberos constrained delegation
  • New timeouts - Query Timeout, Socket Timeout

Read the full release announcement here.
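For instance, the new timeouts can be set directly as connection string properties. A minimal sketch with placeholder server and credentials, assuming the queryTimeout (in seconds) and socketTimeout (in milliseconds) property names rolled up into this release:

jdbc:sqlserver://yourserver.example.com:1433;databaseName=yourdb;user=youruser;password=yourpassword;queryTimeout=30;socketTimeout=60000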

Getting the latest release

The latest bits are available on the Microsoft Download Center, GitHub repository, and Maven Central.

Add the JDBC 6.2 RTW driver to your Maven project by adding the following dependency to your POM file.

<dependency>
    <groupId>com.microsoft.sqlserver</groupId>
    <artifactId>mssql-jdbc</artifactId>
    <version>6.2.1.jre8</version>
</dependency>

Note: An issue with the metadata caching improvement was found in the JDBC 6.2 RTW released on June 29, 2017. The improvement was rolled back, and new jars (version 6.2.1) were released on July 17, 2017 on the Microsoft Download Center, GitHub, and Maven Central. Please update your projects to use the 6.2.1 release jars, and see the release notes for more details.

Help us improve the JDBC Driver by filing issues on GitHub or contributing to the project.

Please also check out our tutorials to get started with developing apps in your programming language of choice and SQL Server.

Andrea Lam (andrela@microsoft.com)

Start & Stop all OpsMgr Services on all SCOM Management Servers in Bulk with PowerShell


Description:

This PowerShell script is intended to stop and start the OpsMgr-related services ("cshost", "healthservice", "omsdk") on all SCOM management servers in bulk. It can also be used to stop and start other Windows services; simply enter the server and service names into the corresponding arrays.

The script is also available for download from the TechNet Script Gallery:
https://gallery.technet.microsoft.com/Start-Stop-all-OpsMgr-on-3231855f

PowerShell Script:

# Start-or-Stop-all-OpsMgr-Services-on-all-ManagementServers.ps1
# Version 7.0
# Date: 7/17/2017
# Author: Cengiz KUSKAYA
# Description: A script to stop or start OpsMgr services on all SCOM management servers in bulk.

# START OpsMgr Services on Management Servers #
###############################################

$Server = @("SCOMMS01", "SCOMMS02", "SCOMMS03")       # Note: add multiple server names in this format
$ServiceList = @("cshost", "healthservice", "omsdk")  # Note: add multiple service names in this format
$ServicesStateStopped = Get-Service -ComputerName $Server -Name $ServiceList

foreach ($Service in $ServicesStateStopped)
{
    if ($Service.Status -eq "Stopped")
    {
        $Service | Start-Service
        Write-Host ($Service).Name "Service has been STARTED on Server" $Service.MachineName "....!" -Separator " -> " -ForegroundColor White -BackgroundColor Green
    }
    else
    {
        Write-Host ($Service).Name "Service is already in RUNNING state on Server" $Service.MachineName "....!" -Separator " -> " -ForegroundColor DarkRed -BackgroundColor Yellow
    }
}

# STOP OpsMgr Services on Management Servers #
##############################################

$Server = @("SCOMMS01", "SCOMMS02", "SCOMMS03")       # Note: add multiple server names in this format
$ServiceList = @("cshost", "healthservice", "omsdk")  # Note: add multiple service names in this format
$ServicesStateRunning = Get-Service -ComputerName $Server -Name $ServiceList

foreach ($Service in $ServicesStateRunning)
{
    if ($Service.Status -eq "Running")
    {
        $Service | Stop-Service
        Write-Host ($Service).Name "Service has been STOPPED on Server" $Service.MachineName "....!" -Separator " -> " -ForegroundColor White -BackgroundColor Red
    }
    else
    {
        Write-Host ($Service).Name "Service is already in STOPPED state on Server" $Service.MachineName "....!" -Separator " -> " -ForegroundColor DarkRed -BackgroundColor Yellow
    }
}

Source: Start & Stop all OpsMgr Services on all SCOM Management Servers in Bulk with PowerShell

Calling Azure Functions from XSLT map in Logic Apps


Scenario

Let's say we have to build an expense-processing application. It takes a receipt with amount and currency information and converts it into a final amount in US dollars based on the exchange rate.

Solution

We will use Logic Apps to build the solution. It takes the receipt information through a Request trigger, uses XML Transform to generate the final amount, and returns the result using a Response action. We don't want to hard-code the currency conversion rates in the XSLT map, as they keep changing, so we will use Azure Functions to look them up.

Let's start with the building blocks: the Azure Function and the XSLT map. Later we will stitch everything together in a Logic App.

Azure Function

We have a C# webhook Function here which reads the currency information from the request query parameters and returns the conversion rate to USD. We have hardcoded the mapping in the Function, but in a real-world scenario it would come from a database, or in real time from a web service. Click here to download the source code.
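The linked source is authoritative; purely as an illustration, a function of this shape (classic run.csx style, with hypothetical sample rates) might look like the following sketch.

// run.csx - illustrative sketch only; the real source is linked above.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;

// Hardcoded sample rates; a real implementation would query a database or web service.
static readonly Dictionary<string, decimal> Rates = new Dictionary<string, decimal>
{
    { "EUR", 1.15m }, { "GBP", 1.30m }, { "JPY", 0.0089m }
};

public static HttpResponseMessage Run(HttpRequestMessage req)
{
    // Read the currency code from the query string, e.g. ...&currency=EUR
    string currency = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "currency", true) == 0).Value;

    decimal rate;
    return currency != null && Rates.TryGetValue(currency.ToUpperInvariant(), out rate)
        ? req.CreateResponse(HttpStatusCode.OK, rate)
        : req.CreateResponse(HttpStatusCode.BadRequest, "Unknown currency");
}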

XSLT Map

The XSLT map takes the Azure Function App URL as a parameter (line #5). There are several benefits to doing this:

  1. The Function URL has an authorization code embedded in it. We don't want to hard-code it in the map; rather, we fetch it from Azure Key Vault during deployment.
  2. We might have different Function App URLs for the test and production environments.

To learn more about XSLT parameter support in Logic Apps, try out Azure Logic Apps - XSLT with parameters on GitHub.

The map then creates the output XML consisting of the original currency, the original amount, and the calculated amount in USD. For the calculation, it first calls the Azure Function, passing the currency as an input parameter, and then multiplies the returned rate by the original amount.

We have used inline C# scripting to call the Function App using an HTTP client. In this example we pass the input parameters through query parameters and make a GET request, but this can easily be modified to do more complex things, such as making a POST request with an XML payload in the request body.
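As a rough sketch only (the receipt schema, the user namespace, and the GetRate helper are illustrative, not the actual map from the post), the shape of such a map with an inline script might be:

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:msxsl="urn:schemas-microsoft-com:xslt"
                xmlns:user="urn:user-scripts">
  <!-- The Function URL (with its authorization code) is supplied at deployment time -->
  <xsl:param name="AzureFunctionUri" />

  <msxsl:script language="C#" implements-prefix="user">
    <msxsl:assembly name="System.Net.Http" />
    <msxsl:using namespace="System.Net.Http" />
    <![CDATA[
      // Call the Azure Function with the currency as a query parameter (GET).
      public string GetRate(string functionUri, string currency)
      {
          using (var client = new HttpClient())
          {
              return client.GetStringAsync(functionUri + "&currency=" + currency).Result;
          }
      }
    ]]>
  </msxsl:script>

  <xsl:template match="/Receipt">
    <ConvertedReceipt>
      <Currency><xsl:value-of select="Currency" /></Currency>
      <Amount><xsl:value-of select="Amount" /></Amount>
      <!-- The rate returned by the Function multiplied by the original amount -->
      <AmountInUSD>
        <xsl:value-of select="Amount * user:GetRate($AzureFunctionUri, string(Currency))" />
      </AmountInUSD>
    </ConvertedReceipt>
  </xsl:template>
</xsl:stylesheet>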

Logic App

Let's start by creating an Integration Account and uploading the XSLT map into it.

Next, create a Logic App in the same region and associate the Integration Account with it.

It's a simple Logic App containing a Request trigger, an XML Transform action, and a Response action.

The Request trigger exposes a REST endpoint and expects the receipt information (XML format) in the request body.

The Transform action shows two fields:

  1. The Content field takes the receipt information from the trigger body.
  2. The Map field shows the list of maps in the associated Integration Account. Select the map uploaded in the previous step.

The designer detects that the map contains an input parameter called AzureFunctionUri and automatically adds another field to the action. Copy the Function URL from the Function blade and paste it here.

Finally, we call the Logic App from Postman, passing the receipt payload in the request body, and get back the amount in USD.

Old-school debugging with traces


In the field, a customer has a big Windows application written in C/C++ with the Windows API and MFC, used by 16,000 users. The app presents multiple windows on the same screen (MDI child browser windows, Win32 windows and more) and reorganizes those windows into perspectives.

The problem is that sometimes the windows get rearranged, the panels contain empty windows, and the app runs in an infinite loop... There is a problem with the display. There are multiple threads and a lot of logic around the Windows messaging subsystem. It took me hours to find where a failure and an infinite loop could exist in the code. I put traces in some methods... Several zones in the code could send the app into an infinite loop, so I was fighting the code in multiple places.

There was already a trace mode, so entry into each method was traced along with its parameters. The problem is that when the codebase is huge, that does not help much... So I added my own level of traces and started learning the method calls myself. It was hard, because the internal data structures are a mix of Win32 messages, custom messages and business code. There is a little spaghetti in there!

After some failed attempts to analyze the code, I was stuck on a method called GetView() which, when the app begins to hang, returns NULL... And displaying NULL as an MDI child window is not very acceptable... It works all the time but, under one particular business scenario, it fails. I was sure there was an exception somewhere that would let me see a system exception. The code uses SEH (structured exception handling), and there were traces in the __except() blocks. I reactivated the original tracing system and saw exception messages... multiple exception messages. My GetView() function was called immediately after the exception, which would explain the NULL: something fails, some parts of the code are not executed, and the flow of window messages produces a hang.

After some days of learning the method names and the program flow, I decided to rely on the existing trace system. After trying several scenarios, I had a business case that produced an app hang: same actions, and the app hangs every time. A good candidate... The result was clear: the cache subsystem hangs and fails to recover a cache object. That explains why my GetView() returns NULL...

The devil is in the details. I decided to remove the thread of the cache subsystem, which stores windows and kills them after 5 minutes of inactivity. Now there are no more problems.

To finish the job, I just need to fix the cache subsystem, but since we can run without it, it is up to the customer to decide whether we continue the investigation.

So the old-school advice is: always trace your exception handlers.
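In C#, for example, the same idea can be expressed with exception filters, which run before the stack unwinds, much like SEH __except filters. A minimal sketch:

using System;

static class TracedHandlers
{
    // The filter logs and returns false, so it observes the exception without
    // handling it, like an SEH filter that traces and then returns
    // EXCEPTION_CONTINUE_SEARCH.
    static bool Trace(Exception ex)
    {
        Console.Error.WriteLine($"[TRACE {DateTime.UtcNow:o}] {ex}");
        return false;
    }

    static void DoWork() => throw new InvalidOperationException("demo failure");

    static void Main()
    {
        try
        {
            try { DoWork(); }
            catch (Exception ex) when (Trace(ex)) { /* never reached: the filter returns false */ }
        }
        catch (Exception)
        {
            Console.WriteLine("handled here, after the filter traced it");
        }
    }
}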

Using External data with Azure Jupyter Notebooks



One of the vital requirements for academics is to provide a single data set that all their students can use for undertaking experiments.

By hosting the data in a Blob Storage account, you can let students connect and undertake experiments using Azure Jupyter Notebooks (http://azure.notebooks.com) in a pretty straightforward manner.

Data can be uploaded to an Azure blob using the Azure Storage Explorer tool.

Creating a storage account on Azure

  1. Sign in to the Azure portal.
  2. On the Hub menu, select New -> Storage -> Storage account.
  3. Enter a name for your storage account. See Storage account endpoints for details about how the storage account name will be used to address your objects in Azure Storage.

    Note

    Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only.

    Your storage account name must be unique within Azure. The Azure portal will indicate if the storage account name you select is already in use.

  4. Specify the deployment model to be used: Resource Manager or Classic. Resource Manager is the recommended deployment model. For more information, see Understanding Resource Manager deployment and classic deployment.

    Note

    Blob storage accounts can only be created using the Resource Manager deployment model.

  5. Select the type of storage account: General purpose or Blob storage. General purpose is the default.

    If General purpose was selected, then specify the performance tier: Standard or Premium. The default is Standard. For more details on standard and premium storage accounts, see Introduction to Microsoft Azure Storage and Premium Storage: High-Performance Storage for Azure Virtual Machine Workloads.

    If Blob Storage was selected, then specify the access tier: Hot or Cool. The default is Hot. See Azure Blob Storage: Cool and Hot tiers for more details.

  6. Select the replication option for the storage account: LRS, GRS, RA-GRS, or ZRS. The default is RA-GRS. For more details on Azure Storage replication options, see Azure Storage replication.
  7. Select the subscription in which you want to create the new storage account.
  8. Specify a new resource group or select an existing resource group. For more information on resource groups, see Azure Resource Manager overview.
  9. Select the geographic location for your storage account. See Azure Regions for more information about what services are available in which region.
  10. Click Create to create the storage account.

Manage your storage account

Change your account configuration

After you create your storage account, you can modify its configuration, such as changing the replication option used for the account or changing the access tier for a Blob storage account. In the Azure portal, navigate to your storage account, then find and click Configuration under SETTINGS to view and/or change the account configuration.

Note

Depending on the performance tier you chose when creating the storage account, some replication options may not be available.

Changing the replication option will change your pricing. For more details, see Azure Storage Pricing page.

For Blob storage accounts, changing the access tier may incur charges for the change in addition to changing your pricing. Please see the Blob storage accounts - Pricing and Billing for more details.

Manage your storage access keys

When you create a storage account, Azure generates two 512-bit storage access keys, which are used for authentication when the storage account is accessed. By providing two storage access keys, Azure enables you to regenerate the keys with no interruption to your storage service or access to that service.

Note

We recommend that you avoid sharing your storage access keys with anyone else. To permit access to storage resources without giving out your access keys, you can use a shared access signature. A shared access signature provides access to a resource in your account for an interval that you define and with the permissions that you specify. See Using Shared Access Signatures (SAS) for more information; a minimal example follows.
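As an illustration, here is a sketch using the .NET storage SDK of that era (Microsoft.WindowsAzure.Storage); the connection string and container name are placeholders:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class SasDemo
{
    static void Main()
    {
        // The connection string comes from the Access Keys blade.
        var account = CloudStorageAccount.Parse("<storage-connection-string>");
        var container = account.CreateCloudBlobClient().GetContainerReference("sampledata");

        // Read-only access for one hour; hand out the resulting URL instead of the account key.
        string sas = container.GetSharedAccessSignature(new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1)
        });

        Console.WriteLine(container.Uri + sas);
    }
}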

View and copy storage access keys

In the Azure portal, navigate to your storage account, click All settings and then click Access keys to view, copy, and regenerate your account access keys. The Access Keys blade also includes pre-configured connection strings using your primary and secondary keys that you can copy to use in your applications.

Regenerate storage access keys

We recommend that you change the access keys to your storage account periodically to help keep your storage connections secure. Two access keys are assigned so that you can maintain connections to the storage account by using one access key while you regenerate the other access key.

Warning

Regenerating your access keys can affect services in Azure as well as your own applications that are dependent on the storage account. All clients that use the access key to access the storage account must be updated to use the new key.

Storage Explorers - If you are using any storage explorer applications, you will probably need to update the storage key used by those applications.

Here is the process for rotating your storage access keys:

  1. Update the connection strings in your application code to reference the secondary access key of the storage account.
  2. Regenerate the primary access key for your storage account. On the Access Keys blade, click Regenerate Key1, and then click Yes to confirm that you want to generate a new key.
  3. Update the connection strings in your code to reference the new primary access key.
  4. Regenerate the secondary access key in the same manner.

Once you have set up your storage account, you can use Azure Storage Explorer to connect to it, create a new blob container, and upload the data.

[Image: Azure Storage Explorer showing the uploaded blob containing the sample data for our experiment]

Within your Jupyter notebook you now need to define the connection parameters. In a code block, create the following, filling in the details from your Azure account.

[Image: example notebook setup]

The first code block is where we define the connection:

blob_account_name = ""  # fill in your blob account name
blob_account_key = ""   # fill in your blob account key
mycontainer = ""        # fill in the container name
myblobname = ""         # fill in the blob name
mydatafile = ""         # fill in the output file name

The Azure storage account provides a unique namespace to store and access your Azure Storage data objects. All objects in a storage account are billed together as a group. By default, the data in your account is available only to you, the account owner.

There are two types of storage accounts: general purpose and Blob storage (see the creation steps above).

In a new code block, create your connection and query strings:

import os            # OS-dependent functionality
import pandas as pd  # data analysis library
from azure.storage.blob import BlobService  # legacy SDK class; newer azure-storage versions use BlockBlobService

dirname = os.getcwd()

# Connect to the storage account and download the blob to a local file
blob_service = BlobService(account_name=blob_account_name,
                           account_key=blob_account_key)
blob_service.get_blob_to_path(mycontainer, myblobname, mydatafile)

# Load the CSV into a DataFrame, then remove the temporary local copy
mydata = pd.read_csv(mydatafile, header=0)
os.remove(os.path.join(dirname, mydatafile))

print(mydata.shape)


Before you can create a storage account, you must have an Azure subscription, which is a plan that gives you access to a variety of Azure services. You can get started with Azure with a free account. If you have Imagine premium access or are a Visual Studio Dev Essentials member, you get free monthly credits that you can use with Azure services, including Azure Storage. See Azure Storage Pricing for information on volume pricing.

To learn how to create a storage account, see Create a storage account for more details. You can create up to 200 uniquely named storage accounts with a single subscription. See Azure Storage Scalability and Performance Targets for details about storage account limits.

Power BI Desktop July 2017 Update


Microsoft Japan Data Platform Tech Sales Team, Ito

Power BI consists of the Power BI service and Power BI Desktop; this month's (July 2017) update to Power BI Desktop adds or enhances the following features.

Note: this does not apply to Power BI Desktop for Power BI Report Server (although an updated version of that has been released as well).

Reports

  • General availability of the new table and matrix visuals
  • Renaming fields in visuals
  • Custom visuals store integration
  • Relative date filtering
  • Responsive layout for visuals (preview)
  • Waterfall chart option - breakdown
  • Custom visuals updates

Analytics and modeling

  • Quick measures gallery
  • General availability of "Apply security filter in both directions" for DirectQuery

Data connectivity

  • General availability of the Snowflake connector

Query editing

  • Improvements to "Add Column from Examples"

Here is an overview of these updates. The sample file can be downloaded here.

Reports

  • General availability of the new table and matrix visuals
    In addition to the preview capabilities so far, the following three features have been added.
    • You can now drill easily on both rows and columns. If only the rows or only the columns of the matrix have a hierarchy, the drill buttons at the top of the visual drill on the side that has the hierarchy. If both rows and columns have multiple levels, a drop-down is shown so you can choose whether the drill operation applies to the rows or the columns.
    • Tables and matrices now have a "Default" style. If you want to return to the previous look, select "None". (The figure above uses the "Default" style.)
    • With word wrap on values, if the data contains line-break characters or uses the DAX function UNICHAR(10), the line breaks take precedence when wrapping. (See the documentation for the UNICHAR function.)
  • Renaming fields in visuals
    Until now, field names were displayed as-is; you can now change them per visual. Select the visual and, in the field well, double-click the field you want to rename (or right-click it and select [Rename]).
  • Custom visuals store integration
    A [From store] button has been added to the [Home] ribbon of Power BI Desktop, so you can browse custom visuals and add them to Power BI Desktop directly.
  • Relative date filtering
    As with the relative date slicer, you can now use relative dates for date data in filters.
  • Responsive layout for visuals (preview)
    When reports and dashboards are viewed on small screens such as smartphones, the "responsive" layout makes the data easier to read. For example, if you resize a bar chart visual, you can see that the same bar chart with [Responsive] turned on automatically hides the chart axes and other elements as it shrinks. It can be enabled for bar charts, line charts and area charts by turning on "(Preview) Responsive" under [General] in the visual's formatting options.
  • Waterfall chart option - breakdown
    You can now use [Breakdown] to show the breakdown of a [Category]. Here, [Category] is set to year-month, [Breakdown] to region, and [Y axis] to headcount. While building this I noticed that with a waterfall chart there is no need to create calculated columns such as "change from previous month". It is a perfect fit for analyzing budget attainment broken down by organization or product; I cannot help thinking Microsoft implemented it because they wanted to use it themselves.
    When the [Breakdown] has many items, they can be merged into "Other". By default, anything beyond 5 items is merged into "Other"; you can set the merge threshold under the visual's [Format] > [Max breakdowns].
  • Custom visuals updates
    • The following three custom visuals have been added:
      Drilldown Choropleth, Drilldown Cartogram, Drilldown Player
    • Certified custom visuals
      These are visuals that meet a set of code requirements and have passed strict security testing. Once a custom visual is certified, it can be exported to PowerPoint and is displayed in the email users receive when they subscribe to a report page. See the documentation for how certification works and for the list of certified custom visuals.

Analytics and modeling

Query editing

  • Improvements to "Add Column from Examples"
    • "Local time" is now supported in date/time/time-zone transformations
    • Number transformations now support even/odd checks, squares, square roots, natural logarithms and more

For details, see the documentation (Japanese / English). Because of the translation time lag, the English version carries the latest information.

Related links

Due to the translation time lag, please see the English documentation for the latest information.

 

Create Bot for Microsoft Graph with DevOps 15: BotBuilder features – LUIS Dialog 101


In this article, I use LUIS to process natural language, and LuisDialog to integrate LUIS into BotBuilder.

Create LUIS application

1. Go to https://luis.ai and create an account, or sign in if you already have one.

2. Click [New App].


3. Create O365Bot application.


4. You can create intents and entities by yourself, but let's use prebuilt ones. Select Prebuilt domains and add Calendar.


5. Once added, go to Intents to confirm you have intents added.


6. Click Entities to confirm you have entities added.


7. I want to parse datetimes as well. Click Add prebuilt entity and select datetimeV2, then click [Save].


8. Go to Train & Test and click [Train Application].

9. Once training has completed, enter [go to dinner with my colleague next Wednesday from 7pm to 10pm] and hit Enter. You see the sentence is understood as the [Calendar.Add] intent.

image

10. Change the Labels view drop-down to Tokens and confirm how the sentence is parsed into entities.


11. Go to Publish App and assign a key. If you don't have one yet, click the [Add a new key to your account] link and follow the instructions.

12. Click [Publish] and you see the endpoint URL is generated. Note the subscription-key value from the address.

13. Go to Settings and note the application ID.

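To sanity-check the published endpoint outside the bot, you can issue a plain GET against it. A small sketch (the application ID, key and region are placeholders; copy the real URL from the Publish App page):

using System;
using System.Net.Http;

class LuisSmokeTest
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            var url = "https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/<application-id>" +
                      "?subscription-key=<subscription-key>&verbose=true" +
                      "&q=" + Uri.EscapeDataString("go to dinner with my colleague next Wednesday");

            // The JSON response includes topScoringIntent and the parsed entities.
            Console.WriteLine(client.GetStringAsync(url).GetAwaiter().GetResult());
        }
    }
}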

LUIS Dialog

I have used Dialog and FormFlow until now. In this article, I switch to LuisDialog. Before doing so, make sure the Microsoft.Bot.Builder version is the latest.

1. Open the O365Bot project and add LuisRootDialog.cs to the Dialogs folder. Replace the code and update the LUIS app ID and key. As you can see, I only use Calendar.Find and Calendar.Add at the moment.

using AuthBot;
using AuthBot.Dialogs;
using Autofac;
using Microsoft.Bot.Builder.ConnectorEx;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Luis;
using Microsoft.Bot.Builder.Luis.Models;
using Microsoft.Bot.Connector;
using O365Bot.Services;
using System;
using System.Configuration;
using System.Threading;
using System.Threading.Tasks;

namespace O365Bot.Dialogs
{
    [LuisModel("LUIS Application ID", "LUIS Key")]
    [Serializable]
    public class LuisRootDialog : LuisDialog<object>
    {
        LuisResult luisResult;

        [LuisIntent("Calendar.Find")]
        public async Task GetEvents(IDialogContext context, IAwaitable<IMessageActivity> activity, LuisResult result)
        {
            this.luisResult = result;
            var message = await activity;

            // Check authentication
            if (string.IsNullOrEmpty(await context.GetAccessToken(ConfigurationManager.AppSettings["ActiveDirectory.ResourceId"])))
            {
                await Authenticate(context, message);
            }
            else
            {
                await SubscribeEventChange(context, message);
                await context.Forward(new GetEventsDialog(), ResumeAfterDialog, message, CancellationToken.None);
            }
        }

        [LuisIntent("Calendar.Add")]
        public async Task CreateEvent(IDialogContext context, IAwaitable<IMessageActivity> activity, LuisResult result)
        {
            var message = await activity;
            // Check authentication
            if (string.IsNullOrEmpty(await context.GetAccessToken(ConfigurationManager.AppSettings["ActiveDirectory.ResourceId"])))
            {
                await Authenticate(context, message);
            }
            else
            {
                await SubscribeEventChange(context, message);
                context.Call(new CreateEventDialog(), ResumeAfterDialog);
            }
        }

        [LuisIntent("None")]
        public async Task NoneHandler(IDialogContext context, IAwaitable<IMessageActivity> activity, LuisResult result)
        {
            await context.PostAsync("Cannot understand");
        }

        private async Task Authenticate(IDialogContext context, IMessageActivity message)
        {
            // Store the original message.
            context.PrivateConversationData.SetValue<Activity>("OriginalMessage", message as Activity);
            // Run authentication dialog.
            await context.Forward(new AzureAuthDialog(ConfigurationManager.AppSettings["ActiveDirectory.ResourceId"]), this.ResumeAfterAuth, message, CancellationToken.None);
        }


        private async Task SubscribeEventChange(IDialogContext context, IMessageActivity message)
        {
            if (message.ChannelId != "emulator")
            {
                using (var scope = WebApiApplication.Container.BeginLifetimeScope())
                {
                    var service = scope.Resolve<INotificationService>(new TypedParameter(typeof(IDialogContext), context));

                    // Subscribe to Office 365 event change
                    var subscriptionId = context.UserData.GetValueOrDefault<string>("SubscriptionId", "");
                    if (string.IsNullOrEmpty(subscriptionId))
                    {
                        subscriptionId = await service.SubscribeEventChange();
                        context.UserData.SetValue("SubscriptionId", subscriptionId);
                    }
                    else
                        await service.RenewSubscribeEventChange(subscriptionId);

                    // Convert current message as ConversationReference.
                    var conversationReference = message.ToConversationReference();

                    // Map the ConversationReference to SubscriptionId of Microsoft Graph Notification.
                    if (CacheService.caches.ContainsKey(subscriptionId))
                        CacheService.caches[subscriptionId] = conversationReference;
                    else
                        CacheService.caches.Add(subscriptionId, conversationReference);

                    // Store locale info as conversation info doesn't store it.
                    if (!CacheService.caches.ContainsKey(message.From.Id))
                        CacheService.caches.Add(message.From.Id, Thread.CurrentThread.CurrentCulture.Name);
                }
            }
        }

        private async Task ResumeAfterDialog(IDialogContext context, IAwaitable<bool> result)
        {
            // Get the dialog result
            var dialogResult = await result;
            context.Done(true);
        }

        private async Task ResumeAfterAuth(IDialogContext context, IAwaitable<string> result)
        {
            // Restore the original message.
            var message = context.PrivateConversationData.GetValue<Activity>("OriginalMessage");
            await SubscribeEventChange(context, message);
            switch (luisResult.TopScoringIntent.Intent)
            {
                case "Calendar.Find":
                    await context.Forward(new GetEventsDialog(), ResumeAfterDialog, message, CancellationToken.None);
                    break;
                case "Calendar.Add":
                    context.Call(new CreateEventDialog(), ResumeAfterDialog);
                    break;
                case "None":
                    await context.PostAsync("Cannot understand");
                    break;
            }
        }
    }
}

2. Call LuisRootDialog instead of RootDialog in MessagesController.cs.

if (activity.Type == ActivityTypes.Message)
{
    await Conversation.SendAsync(activity, () => new Dialogs.LuisRootDialog());
}

Run with emulator

1. Press F5 to run the application.

2. Try it with the emulator. You can see that it goes to Calendar.Add as intended, but no entities are utilized yet.


Unit Testing

Now that I am using LuisDialog, how do I unit test it?

There is a way to mock the LUIS part.

1. In the UnitTest project, add LuisTestBase.cs to the Helper folder and replace the code. You can find the original code on GitHub.

using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Luis;
using Microsoft.Bot.Builder.Luis.Models;
using Microsoft.Bot.Builder.Tests;
using Moq;
using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;
using System.Linq.Expressions;
using System.Reflection;
using System.Threading;
using System.Threading.Tasks;

namespace Microsoft.Bot.Builder.Tests
{
    public abstract class LuisTestBase : DialogTestBase
    {
        public static IntentRecommendation[] IntentsFor<D>(Expression<Func<D, Task>> expression, double? score)
        {
            var body = (MethodCallExpression)expression.Body;
            var attributes = body.Method.GetCustomAttributes<LuisIntentAttribute>();
            var intents = attributes
                .Select(attribute => new IntentRecommendation(attribute.IntentName, score))
                .ToArray();
            return intents;
        }

        public static EntityRecommendation EntityFor(string type, string entity, IDictionary<string, object> resolution = null)
        {
            return new EntityRecommendation(type: type) { Entity = entity, Resolution = resolution };
        }

        public static EntityRecommendation EntityForDate(string type, DateTime date)
        {
            return EntityFor(type,
                date.ToString("d", DateTimeFormatInfo.InvariantInfo),
                new Dictionary<string, object>()
                {
                    { "resolution_type", "builtin.datetime.date" },
                    { "date", date.ToString("yyyy-MM-dd", DateTimeFormatInfo.InvariantInfo) }
                });
        }

        public static EntityRecommendation EntityForTime(string type, DateTime time)
        {
            return EntityFor(type,
                time.ToString("t", DateTimeFormatInfo.InvariantInfo),
                new Dictionary<string, object>()
                {
                    { "resolution_type", "builtin.datetime.time" },
                    { "time", time.ToString("THH:mm:ss", DateTimeFormatInfo.InvariantInfo) }
                });
        }

        public static void SetupLuis<D>(
            Mock<ILuisService> luis,
            Expression<Func<D, Task>> expression,
            double? score,
            params EntityRecommendation[] entities
            )
        {
            luis
                .Setup(l => l.QueryAsync(It.IsAny<Uri>(), It.IsAny<CancellationToken>()))
                .ReturnsAsync(new LuisResult()
                {
                    Intents = IntentsFor(expression, score),
                    Entities = entities
                });
        }

        public static void SetupLuis<D>(
            Mock<ILuisService> luis,
            string utterance,
            Expression<Func<D, Task>> expression,
            double? score,
            params EntityRecommendation[] entities
            )
        {
            var uri = new UriBuilder() { Query = utterance }.Uri;
            luis
                .Setup(l => l.BuildUri(It.Is<LuisRequest>(r => r.Query == utterance)))
                .Returns(uri);

            luis.Setup(l => l.ModifyRequest(It.IsAny<LuisRequest>()))
                .Returns<LuisRequest>(r => r);

            luis
                .Setup(l => l.QueryAsync(uri, It.IsAny<CancellationToken>()))
                .Returns<Uri, CancellationToken>(async (_, token) =>
                {
                    return new LuisResult()
                    {
                        Intents = IntentsFor(expression, score),
                        Entities = entities
                    };
                });
        }
    }
}

2. Add LuisUnitTest1.cs to the project root and replace the code. For now I simply test ShouldReturnEvents; I will add the rest in the next article.

using Autofac;
using Microsoft.Bot.Builder.Base;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Dialogs.Internals;
using Microsoft.Bot.Builder.Internals.Fibers;
using Microsoft.Bot.Builder.Luis;
using Microsoft.Bot.Builder.Luis.Models;
using Microsoft.Bot.Builder.Tests;
using Microsoft.Bot.Connector;
using Microsoft.Graph;
using Microsoft.QualityTools.Testing.Fakes;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;
using O365Bot.Dialogs;
using O365Bot.Services;
using System;
using System.Collections.Generic;
using System.Globalization;
using System.Threading;
using System.Threading.Tasks;

namespace O365Bot.UnitTests
{
    [TestClass]
    public class SampleLuisTest : LuisTestBase
    {
        [TestMethod]
        public async Task ShouldReturnEvents()
        {
            // Instantiate ShimsContext to use Fakes
            using (ShimsContext.Create())
            {
                // Return "dummyToken" when calling GetAccessToken method
                AuthBot.Fakes.ShimContextExtensions.GetAccessTokenIBotContextString =
                    async (a, e) => { return "dummyToken"; };


                // Mock the LUIS service
                var luis1 = new Mock<ILuisService>();
                // Mock other services
                var mockEventService = new Mock<IEventService>();
                mockEventService.Setup(x => x.GetEvents()).ReturnsAsync(new List<Event>()
                {
                    new Event
                    {
                        Subject = "dummy event",
                        Start = new DateTimeTimeZone()
                        {
                            DateTime = "2017-05-31 12:00",
                            TimeZone = "Standard Tokyo Time"
                        },
                        End = new DateTimeTimeZone()
                        {
                            DateTime = "2017-05-31 13:00",
                            TimeZone = "Standard Tokyo Time"
                        }
                    }
                });
                var subscriptionId = Guid.NewGuid().ToString();
                var mockNotificationService = new Mock<INotificationService>();
                mockNotificationService.Setup(x => x.SubscribeEventChange()).ReturnsAsync(subscriptionId);
                mockNotificationService.Setup(x => x.RenewSubscribeEventChange(It.IsAny<string>())).Returns(Task.FromResult(true));

                var builder = new ContainerBuilder();
                builder.RegisterInstance(mockEventService.Object).As<IEventService>();
                builder.RegisterInstance(mockNotificationService.Object).As<INotificationService>();
                WebApiApplication.Container = builder.Build();

                /// Instantiate dialog to test
                LuisRootDialog rootDialog = new LuisRootDialog();

                // Create in-memory bot environment
                Func<IDialog<object>> MakeRoot = () => rootDialog;
                using (new FiberTestBase.ResolveMoqAssembly(luis1.Object))
                using (var container = Build(Options.ResolveDialogFromContainer, luis1.Object))
                {
                    var dialogBuilder = new ContainerBuilder();
                    dialogBuilder
                        .RegisterInstance(rootDialog)
                        .As<IDialog<object>>();
                    dialogBuilder.Update(container);

                    // Register global message handler
                    RegisterBotModules(container);

                    // Specify "Calendar.Find" intent as LUIS result
                    SetupLuis<LuisRootDialog>(luis1, d => d.GetEvents(null, null, null), 1.0, new EntityRecommendation(type: "Calendar.Find"));

                    // Create a message to send to bot
                    var toBot = DialogTestBase.MakeTestMessage();
                    toBot.From.Id = Guid.NewGuid().ToString();
                    toBot.Text = "get events";

                    // Send message and check the answer.
                    IMessageActivity toUser = await GetResponse(container, MakeRoot, toBot);

                    // Verify the result
                    Assert.IsTrue(toUser.Text.Equals("2017-05-31 12:00-2017-05-31 13:00: dummy event"));
                }
            }
        }

        /// <summary>
        /// Send a message to the bot and get the response.
        /// </summary>
        public async Task<IMessageActivity> GetResponse(IContainer container, Func<IDialog<object>> makeRoot, IMessageActivity toBot)
        {
            using (var scope = DialogModule.BeginLifetimeScope(container, toBot))
            {
                DialogModule_MakeRoot.Register(scope, makeRoot);

                // act: sending the message
                using (new LocalizedScope(toBot.Locale))
                {
                    var task = scope.Resolve<IPostToBot>();
                    await task.PostAsync(toBot, CancellationToken.None);
                }
                //await Conversation.SendAsync(toBot, makeRoot, CancellationToken.None);
                return scope.Resolve<Queue<IMessageActivity>>().Dequeue();
            }
        }

        /// <summary>
        /// Send a message to the bot and get all responses.
        /// </summary>
        public async Task<List<IMessageActivity>> GetResponses(IContainer container, Func<IDialog<object>> makeRoot, IMessageActivity toBot)
        {
            using (var scope = DialogModule.BeginLifetimeScope(container, toBot))
            {
                var results = new List<IMessageActivity>();
                DialogModule_MakeRoot.Register(scope, makeRoot);

                // act: sending the message
                using (new LocalizedScope(toBot.Locale))
                {
                    var task = scope.Resolve<IPostToBot>();
                    await task.PostAsync(toBot, CancellationToken.None);
                }
                //await Conversation.SendAsync(toBot, makeRoot, CancellationToken.None);
                var queue = scope.Resolve<Queue<IMessageActivity>>();
                while (queue.Count != 0)
                {
                    results.Add(queue.Dequeue());
                }

                return results;
            }
        }

        /// <summary>
        /// Register Global Message
        /// </summary>
        private void RegisterBotModules(IContainer container)
        {
            var builder = new ContainerBuilder();
            builder.RegisterModule(new ReflectionSurrogateModule());
            builder.RegisterModule<GlobalMessageHandlers>();
            builder.RegisterType<ActivityLogger>().AsImplementedInterfaces().InstancePerDependency();
            builder.Update(container);
        }

        /// <summary>
        /// Resume the conversation
        /// </summary>
        public async Task<List<IMessageActivity>> Resume(IContainer container, IDialog<object> dialog, IMessageActivity toBot)
        {
            using (var scope = DialogModule.BeginLifetimeScope(container, toBot))
            {
                var results = new List<IMessageActivity>();

                var botData = scope.Resolve<IBotData>();
                await botData.LoadAsync(CancellationToken.None);
                var task = scope.Resolve<IDialogTask>();

                // Insert dialog to current event
                task.Call(dialog.Void<object, IMessageActivity>(), null);
                await task.PollAsync(CancellationToken.None);
                await botData.FlushAsync(CancellationToken.None);

                // Get the result
                var queue = scope.Resolve<Queue<IMessageActivity>>();
                while (queue.Count != 0)
                {
                    results.Add(queue.Dequeue());
                }

                return results;
            }
        }
    }
}

3. Compile the solution and run the test.

Summary

LUIS is a great service for parsing natural-language input. In the next article, I will revise the code to utilize the parsed entities and also update the unit tests, so there is no GitHub code this time.

Ken


[Teams] Configuring device access in SharePoint makes the Teams Wiki and file sharing inaccessible


Hello, this is Yoshino from the Teams support team.

Teams and SharePoint are closely related, and Teams activity is reflected in SharePoint.
Now, SharePoint has a setting called "device access".
If you enter your company network's IP addresses there, access from any other IP can be rejected.

However, once this setting is applied, some Teams features (the Wiki and file sharing) become unavailable.
This is because SharePoint rejects the requests coming from the Teams servers.

To work around this, add the Teams server addresses to the device access allow list.

Office 365 URLs and IP address ranges

That said, there are a lot of them, and entering them all is hard work.
We have prepared a somewhat aggregated address list, which you are welcome to use:

13.64.0.0/10,23.96.0.0/11,40.64.0.0/10,51.140.0.0/14,52.112.0.0/14,52.128.0.0/9,
104.40.0.0/13,111.221.104.75/32,137.116.0.0/15,168.60.0.0/14,191.232.0.0/13,207.46.155.141/32

The result looks like this. (Set your own address range in the yellow-highlighted part.)

Enjoy a comfortable Teams/SharePoint life!

Error while executing web part: System.MissingMethodException: Method not found: 'Boolean Microsoft.Office.Server.Utilities.ParsedCamlQuery.HasMultipleLookupIdInClauses()'


Yesterday we saw very strange behavior in one of our customers' SharePoint environments (SharePoint Server 2010): after installing Service Pack 2 for SharePoint Server 2010, all users started getting the error below when accessing any library, even on the Central Administration site!

Unable to display this Web Part. To troubleshoot the problem, open this Web page in a Microsoft SharePoint Foundation-compatible HTML editor such as Microsoft SharePoint Designer. If the problem persists, contact your Web server administrator.

 

Digging deeper into the SharePoint ULS logs, we found the following interesting entry...

Error while executing web part: System.MissingMethodException: Method not found: 'Boolean Microsoft.Office.Server.Utilities.ParsedCamlQuery.HasMultipleLookupIdInClauses()'.
   at Microsoft.Office.DocumentManagement.MetadataNavigation.MetadataNavigationContext.ApplyOverrideClauses()
   at Microsoft.Office.DocumentManagement.MetadataNavigation.MetadataNavigationContext.CacheContextFilterOpsAndCaml()
   at Microsoft.SharePoint.WebControls.SPDataSource.OnSelecting(EventArgs arguments)
   at Microsoft.SharePoint.WebControls.SPDataSourceView.ExecuteSelect(DataSourceSelectArguments selectArguments, String aggregateString, Boolean wantReturn, BaseXsltListWebPart webpart, SPListItem& listItem, SPListItemCollection& listItems, String[]& fieldList)
   at Microsoft.SharePoint.WebControls.SingleDataSource.GetXPathNavigatorInternal()
   at Microsoft.SharePoint.WebControls.SingleDataSource.GetXPathNavigator()
   at Microsoft.SharePoint.WebPartPages.DataFormWebPart.PrepareAndPerformTransform(Boolean bDeferExecuteTransform)

 

As seen from the above call stack, this is entirely out-of-the-box functionality (not custom!). Therefore, we had a look at the dll "Microsoft.Office.DocumentManagement.dll", located at "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\ISAPI", and found that its version was "14.0.7010.1000", even though the customer had already installed SP2. Hence, we installed the November 2016 CU (https://www.microsoft.com/en-us/download/details.aspx?id=54212), which contains an updated version of this dll (14.0.7158.5000).
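If you prefer to script the version check instead of eyeballing the file properties dialog, a quick console snippet (hypothetical, but using only standard .NET APIs) does the trick:

using System;
using System.Diagnostics;

class DllVersionCheck
{
    static void Main()
    {
        var path = @"C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\ISAPI\Microsoft.Office.DocumentManagement.dll";
        // After the November 2016 CU this should report 14.0.7158.5000.
        Console.WriteLine(FileVersionInfo.GetVersionInfo(path).FileVersion);
    }
}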

 

After installing the updates, and running the configuration wizard, the issue got resolved.

P.S. I have seen other customers resolve this issue by installing Service Pack 2, but we already had that installed. Hence, if you don't already have Service Pack 2 installed, go ahead and give it a shot.

Getting Started with Data Science using Cortana Intelligence Solutions Templates


Cortana Intelligence Gallery

Reference architectures for common data science scenarios are now available via the Cortana Intelligence solution templates. These let you quickly build data science solutions from templates that include reference architectures and design patterns, and then enhance and adapt the models to make them your own, following the included instructions or working with a featured partner. https://gallery.cortanaintelligence.com/solutions

[Images: solution template highlights]

  • Reference architectures for common scenarios
  • Built on best-practice design and patterns
  • Automated deployment on your Azure subscription
  • Customisable for your needs

The Solution uses five types of resources hosted and managed in Azure:

  • Azure SQL Server instance (Azure SQL) for persistent storage,
  • Azure Machine Learning (AML) webservice to host the R forecasting code,
  • Azure Blob Storage for intermediate storage of generated forecasts,
  • Azure Data Factory (ADF) that orchestrates regular runs of the AML model,
  • Power BI dashboard to display and drill down on the forecasts.

The Solution automates the running of periodic forecasts at a pace configured in ADF (e.g. monthly): it learns a model from the current historical data and predicts quantities for future periods for all products in the product hierarchy. Each forecast cycle consists of a round trip from the database, through the model, and back to the database, and each cycle measures forecast accuracy by conventional data hold-out techniques. You can configure the number of periods, the product categories, and the hierarchy among products. You need to load your current data into the Azure SQL database and extract forecasts from the same database after each run. The Solution exposes the R model code to allow further customization, and lets you simulate historical data in order to test the Solution.





Microsoft Flow the Concept - Industrial Project Summary


 

Introduction

Microsoft Flow is a newly developed software service from Microsoft. It acts as a platform that enables automated workflows to be set up across different services and applications. Flow helps users automate essential but repetitive tasks so that they can use their time more efficiently.

Earlier this year I set a project for a number of Imperial engineering students; the primary requirement was to explore the Flow platform and its capabilities. A team of EEE students met this requirement by focusing on enhancing users' online shopping experience through the creation of new workflows on Flow. The team worked hard to give users a streamlined, more convenient online shopping experience through the integration of services such as cloud storage, image recognition, text analysis and social media. This report contains a detailed account of the initial research and brainstorming, design development, testing and considerations, as well as suggested future work for this project.

Project Organisation

After the initial research and brainstorming process, tasks were delegated to each member, with some focusing on technical aspects such as API (Application Programming Interface) research, coding, and the implementation of workflows on Flow, while others focused on non-technical aspects such as team management, problem definition, market research, competitor analysis, deliverables, and documentation. Members had a mix of technical and non-technical responsibilities; a detailed assignment of responsibilities for each team member can be seen below.

Member / Main Responsibilities

Jie Wu
  ● Communication with Client and Supervisors
  ● Coding + Flow Implementation
  ● Product Demonstration
  ● Writer - Main Report & Summary Report

Minghe Wen
  ● Coding + Flow Implementation
  ● Brainstorming
  ● Presentation Slides
  ● Writer - Blog for Microsoft
  ● Writer - Summary Report

Virgram Mohan
  ● Image Recognition Service Research and Selection
  ● CloudSight API Technical Research
  ● Presenter - Presentation and Demonstration
  ● Writer - Main Report

Bicheng Huang
  ● Brainstorming
  ● Outline technical problem, design specifications and client requirements
  ● Facebook API Technical Research
  ● Writer - Main Report

Mengyang Le
  ● Brainstorming
  ● Developing Brainstormed Ideas
  ● Criteria and Idea Selection
  ● Facebook API Technical Research
  ● Writer - Main Report

Yumeng Sun
  ● Brainstorming
  ● Team Management and Organisation + Taking Minutes (See Appendix [A])
  ● Product Operation and Examples
  ● Structure Outline - Report and Presentation
  ● Writer - Main Report & Summary Report

Mubarak Alimi
  ● Brainstorming
  ● Market Research and Competitor Analysis
  ● Technical Documentation
  ● Presenter - Presentation and Demonstration
  ● Writer - Main Report & Summary Report

Ivan Savelev
  ● Brainstorming
  ● Developing Brainstormed Ideas
  ● Presenter - Presentation and Demonstration
  ● Writer - Main Report

The team worked to a detailed timeline, shown in the following Gantt chart. Tasks were frequently modified and added as the team progressed. The team had a flexible, organised schedule, with each member completing their tasks on time or ahead of schedule.

[Image: project Gantt chart]

Here is a quick summary of the project and its outcomes by the team leader Jie Wu and by Minghe Wen.

The Internet has never been so packed with applications. Emails, messages, pictures, videos... nowadays there is so much information online that we can create, view, utilise or share. But at the same time, as most of us have experienced, we have to keep jumping from one app to another just to complete a single task. Things get even worse if the task is repetitive, meaning we have to do it over and over again. These unconnected web apps have been making us flustered and anxious. What if there were a way to link up all the applications so that the dull, repetitive tasks could be completed automatically on their own?


Figure 1 – connect web apps

That is where Microsoft Flow comes in: a platform to connect web apps and create automated workflows (or Flow templates) to help you "work less and do more". Our mission in this project was to use the Flow platform to create automated workflows that enhance users' online shopping experience.

The two Designs


Figure 2 – workflow design 1


Figure 3 – workflow design 2

After brainstorming with the team members, we came up with two workflow designs.

The first design allows the user to upload a photo. The photo is then analysed and keywords for the items in it are extracted. The keywords are searched on eBay, and Flow sends the search results back to the user by email.

The second design monitors the user's Facebook events. When the user accepts an invitation to an event, the description of the event is analysed and keywords are extracted. The keywords are then searched on Etsy (another online shopping store), and Flow returns the search results to the user by email.

The Techniques

As some of you may already know, creating workflows on Flow is incredibly easy. The real technical part is implementing third-party services on Flow as custom connectors so that you can utilise their functionality and build the workflows you desire.

The third-party services we used for the designs are Dropbox, CloudSight, eBay, Facebook and Etsy. These services are either not yet integrated with Flow, or the functionality of the existing connectors is limited and does not meet our requirements, so we needed to implement them ourselves. Implementing a third-party service on Flow is simple if nothing strange happens: look up the API documentation of the web service, write an OpenAPI file, upload it to Flow, and your job is done.


Figure 4 – Logos of OpenAPI and Azure Functions

But weird things do happen, since there are always compatibility problems between apps. We decided to use Azure Functions to solve them. Azure Functions is another powerful tool hosted on Azure. It provides a serverless computing service, which means you can write functions directly on Azure and have them run whenever you call them (via the functions' addresses) from anywhere else.

We wrote some Python code on Azure Functions and used the functions as intermediaries between the Flow platform and the apps. The functions process the data sent from Flow and the apps and, when finished, send HTTP requests to deliver the processed data to its destination. In this way, the compatibility problems are handled easily; a sketch of the pattern follows the figure below.


Figure 5 – implementation of the third-party web services
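The team's functions were written in Python; purely to illustrate the pattern, a minimal intermediary in the classic C# Functions template style (the query parameter and downstream URL are hypothetical) could look like this:

// run.csx - hypothetical intermediary between Flow and a shopping API.
using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req)
{
    // Keyword produced by the image-recognition / text-analysis step in Flow.
    string keyword = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "keyword", true) == 0).Value ?? "";

    using (var client = new HttpClient())
    {
        // Re-shape the request into the format the third-party API expects,
        // then hand the result back to Flow as the HTTP response.
        var results = await client.GetStringAsync(
            "https://api.example.com/search?q=" + Uri.EscapeDataString(keyword));
        return req.CreateResponse(HttpStatusCode.OK, results);
    }
}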

Final Thoughts

We found the project both challenging and fun, and we really appreciate the chance to work with Microsoft. It offered us invaluable hands-on experience of web development, and we were pushed to learn new skills along the way. It was a great project!

Checkpoints for improving MRP performance [Part 1]


MRP performance depends on many factors, such as data volume, hardware specifications, BOM structure, and the use of batch threads, but there are several checkpoints you can review to improve it. (The list below is for AX 2012, so some items may not apply to versions earlier than AX 2012.)

Rather than applying everything at once, change one setting at a time in a test environment; by verifying each setting individually, you can determine which ones to revisit and find the optimal values.

1. Use helpers

When running master planning, parallel processing with "helpers" (multiple batch threads) (Master planning > Periodic > Master scheduling > Scheduling helpers) can shorten the runtime. Each helper runs MRP as an auxiliary process alongside the main MRP process.

[More information about helpers]

TechNet: https://technet.microsoft.com/ja-jp/library/gg242496.aspx

2. The right number of helpers

The number of helpers must be no more than the maximum number of threads available on the batch server.

Specifying a number larger than the batch threads available in the batch group that runs MRP is pointless. To check the number of batch threads available per server/batch group, see System administration > System > Server configuration.

3. Use multiple AOS instances

Using multiple AOS instances when running MRP can improve performance. Also, schedule MRP so that other heavy processes do not run at the same time, and avoid processes that lock the tables MRP accesses (InventTrans, InventSumLogTTS, ReqItemTable, etc.).

4. Disable unused configuration keys

As one example, if the Process manufacturing module is enabled but unused, extra planning logic such as batch number expiry-date checks runs and can delay MRP execution. If you have modules you do not use, make sure their configuration keys are disabled.

5. Use of action messages

Enabling action messages affects MRP processing time. If you do not use action messages for business analysis, set the action message time fence to "0" (Master planning > Setup > Plans > Master plans) to disable them and improve MRP performance. At the same time, confirm that the action message settings on the coverage groups (Master planning > Setup > Coverage > Coverage groups) are also disabled.

6. Use of futures messages

Enabling futures messages also affects MRP processing time. This applies if you do not use features such as futures dates for BOM planning (a futures message is issued when an order date earlier than today is calculated for a lower-level item) or notification of delays that affect issues and receipts. Set the futures message time fence to "0" (Master planning > Setup > Plans > Master plans) to disable them and improve MRP performance. Confirm that all futures message settings on the coverage groups (Master planning > Setup > Coverage > Coverage groups) are disabled as well.

7. Number of tasks in a helper task bundle

Changing the "Number of tasks per helper task bundle" (Master planning > Setup > Master planning parameters > General > Performance) can help improve MRP performance. The number specified here is the number of items that a single helper can process at a time.

If the data volume (the number of items processed) is large, increase the task count; otherwise, set a task count that matches the number of items (where MRP performance is optimal). In most cases you will tune this to the optimal value by trial and error. Keep in mind that the more items you have, the more important this task count becomes.

8. Cache usage during MRP: "Maximum"

Because MRP planning logic depends heavily on caching, confirm that cache usage is set to "Maximum" (Master planning > Setup > Master planning parameters > General > Performance).

 

Checkpoints for improving MRP performance [Part 2]


Part 1ではヘルパーの使用やパラメータ設定について、MRPパフォーマンスの最適化に貢献するコツを何点かお伝えしました。MRPの設定については、製品のタイプ(ディスクリート製造、プロセス製造、ハイブリッド製造)や、受注形態(個別注文、バルク注文)に依存しますが、さらにパフォーマンスの改善に有効なチェックポイントをご紹介します。

1. Assign items with similar lead times to the same coverage group

Coverage group parameter settings such as the coverage time fence, negative days, and positive days are closely related to item lead times. If MRP works with a lead time that is not appropriate for an item, unnecessary planned orders are created, and performance and run time cannot be optimized. Therefore, consider building each coverage group from items with similar lead times.

[More information about lead times]
TechNet https://technet.microsoft.com/ja-jp/library/aa497131.aspx

2. MRP parameter: points to note when using dynamic negative days

The negative days setting is closely related to item lead time. Planned orders are generated even when the negative days are shorter than the item's lead time. For example, suppose today is May 1, the sales order date is May 5, the item lead time is 6 days, and the purchase order would therefore arrive on May 8. If negative days is set to 2, it is shorter than the item's lead time; note that MRP will still create a planned purchase order even though the delivery date cannot be met.

3. Set the coverage group's negative days longer than the lead time

Following from point 2 above, setting negative days longer than the item lead time avoids generating unnecessary planned orders.

4. Always specify a number of days for positive days

If positive days on the coverage group is zero or blank, MRP does not consider orders that are already firmed, and new planned orders are created. When specifying positive days on a coverage group, make sure the value is not zero or blank. Note that on-hand inventory is not subject to the positive days setting.

5. Items purchased or produced regularly

For items that are purchased or produced on a regular cycle, setting positive days equal to the lead time improves performance.

6. Startup option when running MRP

Using 'Net change' instead of 'Regeneration' means MRP reschedules only what has changed, which can shorten the run time.

7. Calendars used by MRP

Check that the calendars used by MRP do not have unnecessary periods assigned. Using a calendar that is valid for 50 years from today is not realistic; use calendars whose periods fit your business.

8. Delete related table entries

Before running MRP, delete the previously generated plan. Alternatively, delete the entries in the following tables and then run master scheduling. Be sure to test in a validation environment and confirm that there are no problems before running this in production.
ReqTrans
ReqTransCov
ReqPO
WrkCtrCapRes

Issues with Visual Studio Team Services - 07/20 - Investigating


Initial Update: Thursday, July 20th 2017 10:48 UTC

We are investigating issues with the Hosted Build service in Visual Studio Team Services across multiple US regions and West Europe. Customers may experience build failures. Triage is in progress and we will provide an update with more information.

  • Next Update: Before Thursday, July 20th 2017 12:00 UTC

Sincerely,
Vamsi


Setting up your Node.js development environment on Windows 10 with VS Code and Bash


When developing a Node.js application, you generally prefer to work in a Linux development environment, to benefit from the full richness of the Node.js ecosystem on Linux, and native modules in particular. However, in an enterprise environment it is often not easy for a developer to get a Linux machine, because the developer workstation is often (wrongly) equated with an office workstation. Others, myself included, simply don't want to give up the comfort of using Windows day to day.

Now, by combining Bash on Windows and Visual Studio Code, you can put together a Node.js development environment on Windows with no compromises compared to its Linux equivalent, since node runs inside the Windows Subsystem for Linux!

All that's left is to get started and set up the environment. Below is a step-by-step walkthrough to save you time.

Install Bash on Windows

  • Open the Control Panel
  • Click Programs
  • Click Turn Windows features on or off


  • Launch a PowerShell console as Administrator
  • Run the command lxrun /install
  • Enter a username and password for your Linux session

[Screenshot: PowerShell]

  • You can now launch a Bash terminal window by typing Bash in the Windows 10 search bar!

Install Visual Studio Code

[Screenshot: VSCodeSetUp1]

  • If you want to be able to launch Visual Studio Code easily, I suggest you check that the following options are ticked

[Screenshot: VSCodeSetUp2]

  • Once the installation is complete, you can launch Visual Studio Code

Configure Bash as the default terminal in Visual Studio Code

  • Open an integrated terminal window in Visual Studio Code with Ctrl+ù (the French-layout equivalent of Ctrl+`); by default on Windows, this is a PowerShell window

[Screenshot: OpenTerminalWindow]

  • Open the command palette with Ctrl+Shift+P and type terminal default

[Screenshot: SetBashAsDefault]

  • Select WSL Bash

[Screenshot: SetBashAsDefault2]

  • In the terminal pane, open a Bash window and close the PowerShell window

[Screenshot: ChangeToBash]

  • Your default terminal window in Visual Studio Code will now be Bash

Install Node.js

  • In the Bash terminal window, type curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -

[Screenshot: SetNode1]

  • Type sudo apt-get install -y nodejs

[Screenshot: SetUpNode2]

  • Node.js is installed; you can check its version by typing node -v

[Screenshot: SetUpNode3]

  • You can install the latest version of npm by typing sudo npm install npm@latest -g

[Screenshot: UpgradeNpmToLatest]

There you go: your Visual Studio Code + Bash + Node.js environment on Windows 10 is ready. Now get to it! :)

The definitive guide to the coding interview


Guest blog by Andrei Margeloiu, Microsoft Student Partner at University College London


About Me

My name is Andrei Margeloiu and I’ve been a competitive programmer for the last five years.


I've participated in the World Finals of Google HashCode, the largest algorithmic competition organised by Google, and before that I won three Gold Medals in the Computing Olympiad in Romania. Building on this experience, I've published the online course "Introduction to Algorithms and Data Structures in C++", which has helped over 8,000 students from 135 countries take their first steps in the field.

Summer Internships, placements and jobs

Now I'm studying Computer Science at University College London, and meanwhile I'm creating a course to help students and young professionals prepare for coding interviews at big tech companies. Hence, in this article I want to share my experience with the coding interview and tell you the straightest way to pass it. You can reach me here.

Top Tips for passing a coding interview

The article has three parts and it will take approximately 30 minutes to read.

1. What is the coding interview?

2. How to prepare for the coding interview?

3. How to give your best during the coding interview?

What is the coding interview?

Let me ask you a question: have you ever dreamed of working at a big tech company, like Microsoft? If so, rest assured that most of us have, too. And because so many candidates compete for a few dozen jobs, companies need a way to see which ones are better prepared.

Before you get invited to the coding interview, you need to apply for the position and have a CV that shines among the others. In this article, I'll talk about the actual coding interview and how you can prepare for it, not about the application process for the job.

For most tech companies, the coding interview consists of Algorithms & Data Structures problems. Think of these as problem-solving questions in which the interviewer evaluates your ability to solve a problem you haven't seen before. One interview takes roughly 45 minutes, and you are given one or two coding problems. The interviewer expects you to find the optimal solution, code it, and explain what you have just coded.

Lastly, you will have the chance to ask him some questions about the company or anything that interests you. We will comprehensively discuss all these steps in the last part of this article.

How to prepare for the coding interview?

I want to be clear with you from the beginning: there is no shortcut or trick to passing the interview. The only way to find optimal solutions to algorithmic problems is to practice, solving as many problems as possible. In short, it's hard work.

You may not like this and start looking elsewhere for a trick to crack interview questions, but it doesn't exist. Think for a second: if there were such a trick, why wouldn't everyone use it and pass the interview?

So, you need to understand that the single variable that determines whether you pass is how much you practice beforehand. Your experience, intelligence and everything else is already fixed. Hence, the time spent on practice will make the difference between the candidates who pass and those who fail.

The coding interview is a fight with yourself, and the single way to win it is to practice a lot.

The first step is to feel comfortable with a mainstream coding language, such as C/C++, Java or Python. Some companies also accept other languages, but the clear majority stick with these three. Pick the one you like most and stick with it. Don't decide halfway through your preparation that you want to change languages! For the rest of the article, I will assume that you've chosen a language and know its syntax well.

Now, the coding interview is typically all about algorithms and data structures. These involve some fundamental topics that you need to understand thoroughly; don't even consider going to the interview without feeling comfortable with them.


Below is a list of the algorithms and data structures that you need to know. I've written them in order of importance, which is also the best order to learn them in.

1) Big O complexity: It's a must and forms the underlying foundation of your algorithmic thinking. Understand what it means and get to the point where you can state the complexity of a basic algorithm just by looking at it.

2) Arrays: You should be familiar with the concept of arrays at this point. This topic refers to problems where the array is used just for storage and the solution includes basic techniques, such as iterating with two pointers. A classic problem is to check if a given array is a permutation.

3) Strings: Know how to manipulate a string in your language, and be familiar with problems that ask you to concatenate or rotate them.

4) Linked Lists: It's common to encounter a linked list problem in the interview. Here you need to pay special attention to the corner cases. What happens if the linked list is empty? Or has just one element? Or you want to iterate until the last element? When you solve a problem with linked lists, think twice about the corner cases.

5) Hash Tables: A fundamental data structure that is present in most interviews. If you go deeper into one topic, choose hash tables. Be fluent with the hash table library of your language and practice at least five problems. In the end, take this challenge: how would you find the longest subarray with distinct entries? (A sketch of one solution appears after this list.)

6) Stacks: Be familiar with the idea that you can manipulate just one end of the stack. Solve this challenge: implement a stack with a MAX API, meaning that at every moment you can ask for the maximum element in the stack (see the sketch after this list).

7) Queues: Don't confuse them with stacks; a queue has two ends. Implement the classic problem of simulating a queue using two stacks.

8) Greedy: This technique is quite simple, and you probably use it every day. It basically means taking the best decision possible at a specific moment, without considering future consequences. Practice a few problems, and don't assume that every problem can be solved using Greedy.

9) Primitive types: Mostly concerned with bit manipulation and basic operations on numbers. However, some problems can become quite tricky. How can you count the number of 1's in a binary representation? (One classic answer is sketched after this list.)

10) Binary trees: Focus on traversals, common ancestors, and recursing through trees.

11) Heaps: They are widely used in real-world applications, so get to know heaps! Practice until you are confident about when to use a min-heap or a max-heap. How would you print the five biggest elements from a number sequence? Do you use a min-heap or a max-heap?

12) Searching: Searching is a core subject that everyone should know. So, practice at least three problems using binary search.

13) Sorting: Be sure that you can implement mergesort and quicksort. Know their best, average and worst-case complexities very well. If you have time, also learn heapsort.

14) Binary Search Trees: They come up often in coding interviews, and you need to be able to implement all their basic operations, including the deletion of an element!

15) Backtracking: It basically means generating all possible solutions and keeping the ones that meet your requirements. Implement a generator of the power set of a set, and solve the n-queens problem.

16) Graphs: They are probably the most used data structure in computer science. Did you know that every social network is just a huge graph? Practice making a copy of a graph in memory and detecting cycles in graphs.

17) Dynamic programming: Dynamic programming is seen by most as the scariest topic. But it’s the most beautiful if you understand the thinking behind it. So, practice the top five most common questions and stick with them until you understand where the recurrences come from.
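
To give you a concrete taste of these challenges, here are minimal Python sketches for three of them; treat them as illustrative solutions, not the only possible ones.

For the stack with a MAX API (topic 6), keep a second stack whose top always holds the current maximum:

class MaxStack:
    """Stack that also answers max() in O(1)."""

    def __init__(self):
        self._items = []
        self._maxes = []  # _maxes[-1] is the maximum of _items

    def push(self, value):
        self._items.append(value)
        # The new maximum is the larger of the value and the previous maximum.
        self._maxes.append(value if not self._maxes else max(value, self._maxes[-1]))

    def pop(self):
        self._maxes.pop()
        return self._items.pop()

    def max(self):
        return self._maxes[-1]

For the longest subarray with distinct entries (topic 5), slide a window over the array and remember each value's last position:

def longest_distinct_subarray(a):
    """Length of the longest contiguous subarray with no repeated values, in O(n)."""
    last_seen = {}  # value -> most recent index
    start = 0       # left edge of the current all-distinct window
    best = 0
    for i, x in enumerate(a):
        if x in last_seen and last_seen[x] >= start:
            start = last_seen[x] + 1  # jump past the previous occurrence
        last_seen[x] = i
        best = max(best, i - start + 1)
    return best

And for counting the 1's in a binary representation (topic 9), clear the lowest set bit until nothing is left:

def count_ones(n):
    """Number of 1 bits in n."""
    count = 0
    while n:
        n &= n - 1  # drops the lowest set bit
        count += 1
    return count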

In the course I'm creating now, I explain each topic in this order. I'm still working on the course, but you can get early access to some of the lessons and also help me with your feedback by joining the course's Facebook group.

If many topics are unfamiliar to you, it means you need to start practicing right after this article. Stop searching online for other guides; just go and practice. Remember that the interview is a fight with yourself: the more you practice, the higher your chance of winning. It's up to you to expand your problem-solving comfort zone.

How to give your best during the coding interview?

The interview has five parts which we’ll discuss in depth. Watch the explanatory videos for real examples.


1) Introduction

This part commences the interview and takes about three to five minutes. The interviewer wants to find out more about you and gives you the chance to present yourself. Typically, he is interested in which projects you have worked on before and which was the most impressive one. Now, this is your chance to shine, so don't waste it!

Don't try to be flattering. It's easy to spot a fake personality that is trying to trick you. Everyone, me included, hates that, so be sure to show your true personality.

You need to know beforehand what you are going to say about your favourite project, so prepare it before the interview. Most people say just two short sentences, which is bad. Which answer do you like more?

1) "I worked on a web application to better manage patients in hospitals. I wrote the backend in Node.js."

The way you present yourself really matters

2) "A project that I really enjoyed was creating a web application for the National Health Service in the UK to help them better manage patients in hospital. It was part of the university curriculum, it took two months, and I was the leader of a team of three.

We took the project from a basic idea and turned it into a real application. I was responsible for coding the backend in Node.js and the database in MongoDB. It was the first application I had ever created for a client, and I wasn't familiar with web technologies. The development process was iterative, and we made some mistakes along the way.

In the end, we over-delivered with many features, and my favourite one is that users can update the rooms of the hospital, instead of them being hard-coded.

I learned many things from this project, especially how to work in a team. There were moments when technical or team problems arose, but in the end everything went well and we got the highest mark of the year!"

Turn this presentation to your advantage! The interviewer is genuinely interested in finding out more about you, so start with a concise introduction to the project, explaining what it's about. Then say what your contribution was (if you helped the team with a smart idea, say it!) and what you learnt from the development process. You can also mention what difference you made for the users.

You should talk about your project for one to two minutes.

2) Understand the problem

The interviewer will briefly tell you the problem statement that you need to solve. I say 'briefly' because he won't give you many details about the constraints, the corner cases, or how you receive the data. It's part of your job to ask for everything you need to solve the problem. Repeat the statement aloud to be sure that you have understood it correctly.

3) Search for a solution

Now you need to find an optimal solution to the given problem. I want to be straight with you and tell you that you won't magically find an optimal solution if you haven't practiced a lot beforehand.

I know that finding the solution might not come naturally to you at first. So, the first thing to do when you receive the problem is to form an idea of the data structure you need to use.

After you spot the data structure, think of typical algorithms and problems that you have seen before which use that data structure. For example, if you receive a problem with trees, it's clear that you need to use tree-specific algorithms, such as recursion, right? If you receive a problem asking you to find a minimum or maximum, think of heaps, sorting or stacks. And the list of such examples continues.

The key here is that your practice will now speak for itself, and I have good news for you! Many interview questions are similar, so there is a high chance that you have practiced a similar problem before and can build a solution starting from it.

The moral here is simple:

The more you practice, the faster you will find optimal solutions

Now, the other thing that is crucial at this step is to talk aloud. The interviewer is interested in seeing your thinking process and hearing your explanations. Most people stay quiet for three minutes while thinking of a solution, and that is the worst possible strategy.

Don't be one of them! Make sure the interviewer really understands what you want to say, and don't assume he gets bored listening to you. He is there to listen to your thinking process. Don't be scared that you might start with a bad solution; there is time to improve it!

The two things I want you to remember are to practice a lot beforehand and to talk aloud during the interview, period.

Remember to talk aloud and explain what you are thinking.

Now, in the following video we'll move on to finding an optimal solution for the problem. I want you to pay special attention to how I speak and explain: imagine that you are the interviewer and I am the candidate, doing my best to explain. If you understand what I'm explaining, it means I'm doing a great job as a candidate. If something is unclear to you, it would also be unclear to the interviewer! So feel free to judge me as a candidate and leave a comment if something doesn't make sense. I'll read all of them.

4) Code the solution

Now we arrive at the interesting part: coding the solution. To do well in this part, you must feel comfortable with the language you are coding in, even without an IDE to help you. Some companies ask you to code on a blackboard or in Google Docs, where there is no syntax highlighting, so make sure you feel comfortable with the language.

Now, moving to the coding itself, I want to tell you something.

This part is not all about coding; it's also about talking aloud and explaining what you are coding.

The interviewer doesn't want you to code in silence for ten minutes and then say that you have finished. He wants you to talk aloud and explain everything that you write.

The reason is simple: he doesn't know what you are coding if you don't explain it. So make his job easier and talk while you are coding.

Also, make your code look clean and neat. No one likes messy code that one can barely read. Make sure your code is self-explanatory.

After you have finished coding, there is a high chance that you have missed a bug. That's not a problem at all! Your job is to find and repair it by yourself. So tell the interviewer: "I have just finished coding. Can I have a look at the code to be sure there are no mistakes?". He will happily say, "Yes, let's go through your code".

Then you walk him through each line of code and explain again what is happening there. If you see a mistake, say so! Don't pretend that the code is perfect; you will just hurt yourself. Simply say, "I see that there is a bug. Here is how to fix it ...", and make the changes.

In the end, you should arrive at an optimal solution that works and is bug-free

5) Ask questions

And here we are, at the moment after you have finished coding the problems. Now the interviewer will give you the chance to ask him anything that you might be interested in.

Many people say that they don't have any questions, and that is a huge mistake. They miss a big chance to find out more about the company and the job itself!

I really encourage you to ask at least two questions. It's your chance to find out what it's like to work at that company!

You can ask: "How do you find the culture of the company?", "What do you like most about your job?", "How was your first week here?", "Do you get to choose which project to work on?" or "What projects do you normally give to interns?". You can ask pretty much anything about the company and the interviewer's job.

Ask questions in the end! Don’t skip them

Finally

The coding interview is all about finding a solution and explaining it. There are two things you need to remember from this article: practice a lot before the interview, and articulate your thinking process during the interview. They are equally important, and if you master both, then you have passed the interview!

Microsoft BizTalk Server Webinar Series - Configuring BizTalk 2016 with SQL Always On Cluster on Azure


We are planning the next session, on SQL Always On with BizTalk; it will be in two parts, on 26th July and 2nd August.

Here is the meeting invite (.ics) file:

https://biztalkteam.blob.core.windows.net/webinar/Configuring_BizTalk_2016_with_SQL_Always_On_Cluster_on_Azure.ics

 

Agenda

Part A  26th July - 4 pm to 5 pm IST

 

  1. Overview of SQL Always On functionality
  2. Basic concepts of SQL Always On
  3. Demo: configure SQL Always On using an Azure template

 

Part B  2nd August - 4 pm to 5 pm IST

 

  1. Configure BizTalk databases on SQL Always On
  2. SSO clustering on a SQL Always On cluster
  3. Common issues

Microsoft Azure Stack Development Kit


Last month I shared some details around Azure Stack at https://blogs.msdn.microsoft.com/uk_faculty_connection/2017/06/15/using-azure-stack-to-teach-devops-and-it-skills/ 

Azure Stack in teaching and learning

Follow the journey of academics and students at ROC van Amsterdam in the Netherlands learning Azure with Azure Stack at https://azurestackblog.wordpress.com/, where the team recently had a Skype call with the Azure Stack team as part of the general availability of Azure Stack.

Azure Stack now GA to all users

Azure Stack has been rolled out across the world, and it allows developers to create and run applications on their own servers while using all the Azure tools that cloud-only professionals enjoy. As you can see from the blog above, Azure Stack is a great tool for the so-called hybrid model – using servers on your own premises as well as Microsoft's cloud.

Teaching and learning with Azure Stack shows students how to cut latency and connectivity issues, as data is processed on-site rather than online. It also allows certain industries – such as banking or healthcare – to meet regulations or policy requirements regarding uploading data to the cloud.


The Azure Stack experience also demonstrates the consistency needed to build and deploy applications using the exact same approach – same APIs, same DevOps tools, same portal – leading to increased developer productivity.

Azure Stack Development Kit

Microsoft Azure Stack Development Kit is a single-node version of Azure Stack, which you can use to evaluate and learn about Azure Stack. You can also use Azure Stack Development Kit as a developer environment, where you can develop using consistent APIs and tooling.

You should be aware of the following points about the Azure Stack Development Kit:

  • Azure Stack Development Kit must not be used as a production environment and should only be used for testing, evaluation, and demonstration.
  • Your deployment of Azure Stack is associated with a single Azure Active Directory or Active Directory Federation Services identity provider. You can create multiple users in this directory and assign subscriptions to each user.
  • With all components deployed on the single machine, there are limited physical resources available for tenant resources. This configuration is not intended for scale or performance evaluation.
  • Networking scenarios are limited due to the single host/NIC requirement.


Recommended Hardware for on premise servers

Disk drives: Operating System
1 OS disk with minimum of 200 GB available for system partition (SSD or HDD)

Disk drives: General development kit data*
4 disks. Each disk provides a minimum of 250 GB of capacity (SSD or HDD). All available disks will be used.

Compute: CPU
Dual-Socket: 16 Physical Cores (total)

Compute: Memory
128 GB RAM (This is the minimum to support PaaS resource providers.)

Compute: BIOS
Hyper-V Enabled (with SLAT support)

Network: NIC
Windows Server 2012 R2 Certification required for NIC; no specialized features required

HW logo certification
Certified for Windows Server 2012 R2

Full hardware spec at https://docs.microsoft.com/en-gb/azure/azure-stack/azure-stack-deploy

To download the Azure Stack Development Kit, register at https://azure.microsoft.com/en-gb/overview/azure-stack/development-kit/

Resources and further reading

Azure Stack https://azure.microsoft.com/en-us/overview/azure-stack/

Azure Stack – GA https://azure.microsoft.com/en-us/blog/microsoft-azure-stack-is-ready-to-order-now 

Deploying Azure Stack Development Kit https://docs.microsoft.com/en-gb/azure/azure-stack/azure-stack-deploy

Add Routes to an Azure Web App Integrated with a VNET


The default routes assigned to the Point-to-Site connection are inherited from the VNET's routes. Additional routes may be needed to correctly route requests bound for the VNET or for on-premises networks down the Point-to-Site tunnel. Below are three options for adding routes to the web app's Point-to-Site configuration. Using the Azure portal is the recommended way to configure routes; the other two options are just as effective but more prone to user error. This walkthrough assumes the following:

  • You have configured a Point-to-Site connection on your web app

  • The routes you are adding were not inherited from the VNET

Using the Azure Portal

Note: This will temporarily bring down the Point-to-Site connection, disrupting any other apps that are currently using it

1. Navigate to the Azure Portal

2. Select the web app

3. In the overview blade, click on the App Service Plan



 

4. Select the Network option from this menu



 

5. Select Click here to manage


 

6. Select the VNET that the Point-to-Site connection is connected to


 

7. At the very bottom of this list you can add your additional routes.

8. Finally, click Save and the routes should be added. If the routes do not take effect, try syncing the network using the Sync Network option.

Note: This will temporarily bring down the Point-to-Site connection, disrupting any other apps that are currently using it

Using resources.azure.com

Note: This will temporarily bring down the Point-to-Site connection, disrupting any other apps that are currently using it

1. Navigate to https://resources.azure.com

2. Navigate down to the routes attribute:
Subscriptions -> resourceGroup -> providers -> Microsoft.Web -> serverfarms -> virtualNetworkConnections -> routes



 

  3. From the top menu, change the mode to Read/Write and then press the Create button:


  4. Give your resource a name similar to this: "10.0.0.0-8"
     

  5. Remove the JSON code and replace it with the following (make sure to replace the placeholder text with your own values):

{
  "id": "/subscriptions/your subscription id/resourceGroups/your resource group/providers/Microsoft.Web/serverFarms/your App Service Plan/virtualNetworkConnections/vnet name/routes/the name you used in step 4",
  "name": "name you specified in step 4",
  "type": "Microsoft.Web/serverfarms/virtualNetworkConnections/routes",
  "location": "West Europe (or the location of your choice)",
  "properties": {
    "name": "name specified in step 4",
    "startAddress": "start IP address",
    "endAddress": "end IP address",
    "routeType": "DEFAULT",
    "denyRoute": false
  }
}

Here's an example of the JSON object:

{
  "id": "/subscriptions/xxxx/resourceGroups/ResourceGroup/providers/Microsoft.Web/serverFarms/ASP/virtualNetworkConnections/webappVNET/routes/10.0.0.0-8",
  "name": "10.0.0.0-8",
  "type": "Microsoft.Web/serverfarms/virtualNetworkConnections/routes",
  "location": "South Central US",
  "properties": {
    "name": "10.0.0.0-8",
    "startAddress": "10.0.0.0",
    "endAddress": "10.255.255.255",
    "routeType": "INHERITED",
    "denyRoute": false
  }
}

  6. Press the green PUT button
  7. The items can be edited and deleted by selecting the Edit button on the newly created resource, or by selecting Actions (POST, DELETE). The page will still need to be in Read/Write mode to modify the resources.

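
For completeness, the same resource can also be created with any HTTP client against the Azure Resource Manager REST API, which is what resources.azure.com does under the hood. Below is a minimal Python sketch of that idea; the resource path mirrors the "id" field above, while the api-version value, token handling and placeholder names are my assumptions rather than part of this walkthrough.

import requests

# All placeholders below are hypothetical; substitute your own values.
ROUTE_ID = ("/subscriptions/<subscription id>/resourceGroups/<resource group>"
            "/providers/Microsoft.Web/serverFarms/<App Service Plan>"
            "/virtualNetworkConnections/<vnet name>/routes/10.0.0.0-8")
API_VERSION = "2015-08-01"  # assumed; check the current api-version for Microsoft.Web
TOKEN = "<bearer token obtained from Azure AD>"

body = {
    "properties": {
        "name": "10.0.0.0-8",
        "startAddress": "10.0.0.0",
        "endAddress": "10.255.255.255",
        "routeType": "STATIC",
        "denyRoute": False,
    }
}

resp = requests.put(
    "https://management.azure.com" + ROUTE_ID,
    params={"api-version": API_VERSION},
    headers={"Authorization": "Bearer " + TOKEN},
    json=body,
)
print(resp.status_code, resp.text)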

 

 

Using PowerShell

Note: This will temporarily bring down the Point-to-Site connection, disrupting any other apps that are currently using it

Prerequisite: the Azure PowerShell module must be installed

1. Follow steps 1 and 2 from the "Using resources.azure.com" section above

2. Select the PowerShell option and copy the #CREATE routes section of the PowerShell commands


3. Open the PowerShell ISE and paste the commands. Remember that before running them you must first log in with your Azure account using Login-AzureRmAccount

4. The properties section should be similar to the example below. Make sure to use '=' instead of ':' to set the values:

$PropertiesObject = @{
    # Name of the route (matches the resource name)
    name = "192.168.0.0";
    # Address range to send down the Point-to-Site tunnel
    startAddress = "192.168.0.0";
    endAddress = "192.168.255.255";
    # STATIC marks a route you define yourself (the examples above show DEFAULT and INHERITED)
    routeType = "STATIC";
    denyRoute = "false"
}


 

5. You can use the command from resources.azure.com to list the existing routes.


Let me know if this was helpful or if you have any questions in the comments below.

References:

 https://docs.microsoft.com/en-us/azure/app-service-web/web-sites-integrate-with-vnet

https://resources.azure.com
