
Visual Studio Toolbox: Design Patterns: Template Method


This is the third of an eight-part series where I am joined by Phil Japikse to discuss design patterns. A design pattern is a best practice you can use in your code to solve a common problem. In this episode, Phil demonstrates the Template Method pattern. This pattern defines the skeleton of an algorithm in an operation, deferring some steps to subclasses.
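
Although the episode's demos are in C#, the shape of the pattern is easy to sketch in a few lines of JavaScript. The class and method names below (DataExporter, CsvExporter, exportData) are illustrative only, not taken from the episode:

    // Template Method: the base class fixes the skeleton of the algorithm;
    // subclasses override only the steps they need to customize.
    class DataExporter {
      // The template method: the invariant sequence of steps.
      exportData(records) {
        var rows = records.map((r) => this.formatRow(r)); // step deferred to subclasses
        return this.addHeader() + rows.join('\n');        // optional hook, default below
      }
      formatRow(record) { throw new Error('formatRow must be implemented by a subclass'); }
      addHeader() { return ''; } // hook with a default implementation
    }

    class CsvExporter extends DataExporter {
      addHeader() { return 'id,name\n'; }
      formatRow(record) { return record.id + ',' + record.name; }
    }

    console.log(new CsvExporter().exportData([{ id: 1, name: 'Ada' }]));
    // id,name
    // 1,Ada

The subclass never re-implements exportData itself; it only supplies the steps the base class deliberately left open.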

Episodes in this series:

  • Command/Memento patterns
  • Strategy pattern
  • Template Method pattern (this episode)
  • Observer/Publish-Subscribe patterns (to be published 7/25)
  • Singleton pattern (to be published 8/8)
  • Factory patterns (to be published 8/10)
  • Adapter/Facade patterns (to be published 8/15)
  • Decorator pattern (to be published 8/17)

Resources


Advancing cybersecurity in the Asia-Pacific region


May 15, 2017 - Microsoft Secure Blog Staff - Microsoft

This post is a translation of "How the Asia-Pacific region is advancing cybersecurity."

Author: Angela McKay, Director of Cybersecurity Policy

Earlier this year, my team and I had the privilege of spending several days in Japan, where we participated in the Information-technology Promotion Agency (IPA) Symposium. We also met with industry colleagues to discuss global cybersecurity trends and opportunities in public policy work, and with partners in the Japanese government to explore cloud security issues.

Although we spent only a few days in Tokyo, it was clear that in both government and industry, in Japan and across the Asia-Pacific region, voices stressing the importance of cybersecurity are growing louder.

There is also a spreading recognition that concrete action is needed now.

Japan is well positioned to take regional leadership in this space. That was evident from the scale of the IPA Symposium, the experienced line-up of attendees and speakers, and the maturity of the conversations. Cybersecurity in Japan has clearly shifted from a topic of interest only to technologists to a major concern for government, business, and consumers. The policy debate has moved from conceptual discussions to more practical considerations, such as developing security measures and requirements, particularly for critical infrastructure and government.

One especially commendable aspect of Japan's approach is that the government is tackling the challenges in this space iteratively: it dynamically reprioritizes and shifts its focus based on changes in technology and risk and on the effectiveness of its various activities. For example, although the Basic Act on Cybersecurity and the national cybersecurity strategy were introduced more than two years ago, the government has since repeatedly discussed and revisited the areas that have proved harder to make progress in, such as cross-government coordination on cybersecurity.

Japan is not the only country working out how to manage cybersecurity, but it is one of the few governments that understands cybersecurity is not an area that can be considered once and then ignored for the next ten years. The government is also using the momentum of the 2020 Olympic and Paralympic Games to strengthen cyber resilience and to analyze how new technologies such as cloud computing can improve the security of government, critical infrastructure, and the Internet of Things (IoT). With 2020 as a milestone, it is proactively trying to assess progress, asking, for example, whether and how sharing cybersecurity information can improve the security of the Games and of key economic sectors. To that end, beyond establishing ISACs, it is working with the private sector to 1) focus sharing on risk-management outcomes and 2) understand and address the cultural and structural barriers that may be unique to Japan.

A similar approach is being taken in encouraging critical infrastructure sectors to adopt risk-management practices. While the Japanese government believes cybersecurity efforts are, and should remain, fundamentally voluntary, it recognizes that many private-sector companies want guidance on how to move forward in this space, and it has been working on developing such guides. In response, Microsoft proposed creating a model similar to the one NIST is advancing with its Cybersecurity Framework, in which government and the private sector work together to develop guidance based on proven standards and best practices, within an overarching framework that is meaningful to executives.

Beyond this pragmatic approach, Japan continues to show thought leadership in important new areas. It has announced a new partnership with Germany to develop proposals on Internet of Things (IoT) standards for commercial and industrial organizations and on how to effectively secure this new area of innovation. With that, Japan has also taken on the responsibilities of a true global leader in this space. It will need to start by clearly articulating security concerns and calling on IoT service companies to address them (for more detail, see the link to our response to the NTIA). Japan's solutions, including the use of incentives to drive action, will be watched by governments not only in the region but around the world.

In this era of digitization, every government and organization should look at effective initiatives and programs such as Japan's and codify them into their own policies and operations. Microsoft looks forward to working with Japan and other Asia-Pacific countries to build a global culture grounded in strong cybersecurity principles and to create a trusted, high-tech world. That will require leadership from Japan and other nations, as well as the efforts of Microsoft and other industry leaders, to help ensure the safety and security of the digital domain.

 

Setting up Raspbian and .NET Core 2.0 on a Raspberry Pi


Here's a little something I wrote for a customer who wants to use .NET Core on a Raspberry Pi. You might find it useful as well.

Raspbian Linux is the Raspberry Pi Foundation's officially supported operating system for running the Raspberry Pi.
.NET Core is Microsoft's new modular open source implementation of .NET for creating web applications and services that run on Windows, Linux and Mac.

Don't worry if you've never used either of these technologies before; this post will take you through the steps to get Raspbian and .NET Core installed and working on a Raspberry Pi device.

Both technologies are well maintained and supported by their creators, and each has excellent documentation that is kept up to date. To keep this post as short as possible it acts as a guide, describing each task briefly and providing references to external documentation when more detail is required.

Step 1 - Prepare a Raspbian Linux SD card.

In order to start up a Raspberry Pi, it must have an operating system installed onto an SD card which has been inserted into the device. Raspbian Linux is one of the officially supported operating systems, along with 3rd party options such as Windows 10 IoT Core.

The process of installing an operating system onto an SD card is known as flashing the card. Flashing is carried out using a utility that reads an image file downloaded from the OS vendor (typically a binary file) and writes it to the SD card.

Task: Install Raspbian Linux onto an SD card.

Reference: https://www.raspberrypi.org/documentation/installation/installing-images/README.md

  1. Download the latest Raspbian Jessie with desktop Linux image to your local machine. Other download options are available at Download Raspbian.
  2. Download the Etcher image writing utility and install it.
  3. Insert the SD card into your computer.
  4. Open Etcher and select from your hard drive the .zip file you downloaded in step 1.
  5. Select the SD card you wish to write your image to.
  6. Review your selections and click 'Flash!' to begin writing data to the SD card.
  7. Once flashing is complete, create a new empty file named ssh (with no extension) in the root of the SD card drive. This ensures the SSH daemon is enabled once the Raspberry Pi has started, so you can log on over the network.

Step 2 - Boot the Raspberry Pi and connect over the network.

Ensure your Raspberry Pi has the following cables connected:

  • HDMI - so you can watch the boot process from a screen.
  • Ethernet cable - avoid connecting to corporate networks as firewalls and proxies/proxy authentication problems can waste setup time.
  • Keyboard and mouse.

Task: Power up the Raspberry Pi and obtain its IP address.

  • Insert the SD card into the base of the Raspberry Pi. Ensure the electric contacts are facing upwards.
  • Power-up the RPi by plugging in the micro-USB power supply. Wait until it reaches the desktop.

  • Click on the Terminal icon on the top menu to open a prompt and type ifconfig to obtain the IP address which has been assigned by your DHCP server to the Raspberry Pi.

  • Download the PuTTY SSH and Telnet client and launch it.
  • Enter the IP address of the Raspberry Pi and click Open. Accept the message about keys.

  • Enter pi as the logon name, and raspberry as the password.
  • Once you have reached the command line, change the default password for the pi user.

Step 3 - Install the .NET Core Runtime.

Three components are typically required to create and run a .NET Core application:

  1. .NET Core Runtime. Needs to be installed on any machine where a .NET Core application will run.
  2. .NET Core SDK. Needs to be installed on any machine used to develop .NET Core applications.
  3. IDE Tools. Add-ons for your chosen IDE to automate the development process.

The .NET Core Runtime is available for Windows, macOS and various flavours of Linux; however, only the most recent version, .NET Core Runtime 2.0, is supported on Raspbian Linux on the ARM32 processor architecture. This means it is possible to run .NET Core applications on the Raspberry Pi.

However, there is currently no .NET Core SDK available for Raspbian Linux running on the Raspberry Pi (i.e. ARM32).

The end result? It is not yet possible to develop a .NET Core application *directly* on the RPi itself. Instead, an application must be developed on a supported platform (i.e. another machine running Windows, macOS or one of the various flavours of Linux on x86 or x64) and then deployed to the Raspberry Pi.

See the dotnet/Core repository for an official statement.

Task: Install the .NET Core Runtime on the Raspberry Pi.

The following commands need to be run on the Raspberry Pi whilst connected over an SSH session or via a terminal in the PIXEL desktop environment.

  • Run sudo apt-get install curl libunwind8 gettext. This will use the apt-get package manager to install three prerequisite packages.
  • Run curl -sSL -o dotnet.tar.gz https://dotnetcli.blob.core.windows.net/dotnet/Runtime/release/2.0.0/dotnet-runtime-latest-linux-arm.tar.gz to download the latest .NET Core Runtime for ARM32. This is referred to as armhf on the Daily Builds page.
  • Run sudo mkdir -p /opt/dotnet && sudo tar zxf dotnet.tar.gz -C /opt/dotnet to create a destination folder and extract the downloaded package into it.
  • Run sudo ln -s /opt/dotnet/dotnet /usr/local/bin to set up a symbolic link (a shortcut, for you Windows folks 😉) to the dotnet executable.
  • Test the installation by typing dotnet --help.

  • Try to create a new .NET Core project by typing dotnet new console. Note that this will prompt you to install the .NET Core SDK; however, that link won't work for Raspbian on ARM32. This is expected behaviour.

  • Optional: Follow the steps outlined in the .NET Core on Raspberry Pi guide to create a test console application on your local development machine and then run it on the RPi. The key action here is that you will create an application and then publish it using the dotnet publish -r linux-arm command.

Building Azure Stream Analytics Resources via ARM Templates - Part 2 - The Template.


Check out Part 1 of this post.

In this 2nd part of my post on building Azure Stream Analytics Resources using ARM Templates we are going to get down and actually build something 🙂

You've probably ended up here because you can't find a lot of really good help anywhere else on how to create Stream Analytics resources in ARM templates. There is a good reason for this... it's not officially supported. Yes, that's right: the reason you can't find any proper documentation on the MSDN or Azure documentation website is that it doesn't exist. Even the formal ARM template schemas which we discussed in part 1 don't exist.

For example, take a look at this snippet from a Stream Analytics template which you might have seen elsewhere on the intertubes:

{
      "apiVersion": "2016-03-01",
      "name": "[variables('streamAnalyticsJobName')]",
      "location": "[resourceGroup().location]",
      "type": "Microsoft.StreamAnalytics/StreamingJobs",
      "dependsOn": [
        "[concat('Microsoft.Devices/IotHubs/', parameters('iotHubName'))]"
      ],
      .........

See that apiVersion bit and the type bit? Go and take a look at the schema reference file Microsoft.StreamAnalytics.json in the location I told you about last time, i.e. in the https://github.com/Azure/azure-resource-manager-schemas/tree/master/schemas/2016-03-01 folder.

Did you enjoy reading the file? Of course you didn't... it doesn't exist 🙁 and herein lies the problem.

Quite why this was never published or made official is something I'm not able to answer. My assumption is that either 1) there is some (possibly technical) blocker which is preventing it from being published, or 2) it's not seen as a priority. In either case you can provide feedback at https://feedback.azure.com/forums/270577-stream-analytics, which is the official location where the Stream Analytics team looks for new feature requests.

But it's not all sad news 🙂

After a few days playing with the specification, some trial and error, and a good dose of reverse engineering using https://resources.azure.com, I was able to get a fully working template which created a Stream Analytics job along with both input and output locations. Creating the job was the easy bit; there are lots of places on the web which show this.

Dynamics AX 2012 – DIXF Performance Benchmark Results


The Dynamics AX InMarket team has provided the following benchmarking results for DIXF imports based on numerous requests for this information. If there are any questions or requests for additional entity coverage please let me know!

Problem Statement:

A customer is importing around 500K records and needs some form of benchmark for the performance to expect when importing/exporting using DIXF.

This article answers the customer's questions about what kind of performance they can expect out of DIXF.

 

Machine Configuration

Below was the machine configuration on which the benchmark testing was performed.

Machine (3-box setup) | Processor (GHz) | RAM (GB) | # Cores | OS Version
AOS | 2.67 | 16 | 24 | Windows Server 2008 R2 Enterprise
SQL | 2.13 | 48 | 8 | Windows Server 2008 R2 Enterprise
Client | 2.41 | 8 | 8 | Windows Server 2008 R2 Enterprise

 

 

Entities

Microsoft performed the benchmark testing for the entities below, with a different amount of data in each iteration. Similar iterations were performed with different numbers of batch tasks.

           

Entity | Iteration 1 (Records) | Iteration 2 (Records) | Iteration 3 (Records) | Iteration 4 (Records) | Iteration 5 (Records)
Opening Balance | 1500 | 3000 | 6000 | 12000 | 24000
Product | 5000 | 10000 | 20000 | 40000 | 80000
Sales Order Header | 1000 | 2000 | 4000 | 8000 | 16000
Sales Order Line | 5000 | 10000 | 20000 | 40000 | 80000
Vendor Invoice Header | 500 | 1000 | 2000 | 4000 | 8000
Vendor Invoice Line | 1500 | 3000 | 6000 | 12000 | 24000
Sales Order Composite Entity | 6000 | 12000 | 24000 | 48000 | 96000
Vendor Invoice Composite Entity | 2000 | 4000 | 8000 | 16000 | 32000

 

Results

Below are the results for each entity when run under the batch execution with different tasks/record counts.

 

Opening Balance Entity (Batch Execution)
         

Entity | No. of Tasks | Record Count | Staging Execution Time | Target Execution Time
Opening Balance | 32 | 1500 (3 lines per journal) | 3 sec | 24 sec
Opening Balance | 32 | 3000 (3 lines per journal) | 4 sec | 32 sec
Opening Balance | 32 | 6000 (3 lines per journal) | 9 sec | 59 sec
Opening Balance | 32 | 12000 (3 lines per journal) | 9 sec | 2 min 2 sec
Opening Balance | 32 | 24000 (3 lines per journal) | 15 sec | 5 min 28 sec

 

Execution details with different numbers of tasks.

         

Entity | No. of Tasks | Record Count | Staging Execution Time | Target Execution Time
Opening Balance | 32 | 24000 (3 lines per journal) | 15 sec | 5 min 28 sec
Opening Balance | 16 | 24000 (3 lines per journal) | 14 sec | 8 min 27 sec
Opening Balance | 8 | 24000 (3 lines per journal) | 14 sec | 10 min 52 sec

 

 

Product Entity (Batch Execution)
         

Entity | No. of Tasks | Record Count | Staging Execution Time | Target Execution Time
Product | 32 | 5000 | 6 sec | 2 min 45 sec
Product | 32 | 10000 | 9 sec | 5 min 26 sec
Product | 32 | 20000 | 17 sec | 11 min 13 sec
Product | 32 | 40000 | 37 sec | 26 min 41 sec
Product | 32 | 80000 | 1 min 3 sec | 58 min 37 sec

 

Execution details with different numbers of tasks.

         

Entity | No. of Tasks | Record Count | Staging Execution Time | Target Execution Time
Product | 32 | 80000 | 1 min 3 sec | 58 min 37 sec
Product | 16 | 80000 | 1 min 1 sec | 1 hr 1 min 56 sec
Product | 8 | 80000 | 1 min 2 sec | 1 hr 41 min 32 sec

 

 

Sales Order Header (Batch Execution)
         

Entity | No. of Tasks | Record Count | Staging Execution Time | Target Execution Time
Sales Order Header | 32 | 1000 | 3 sec | 7 sec
Sales Order Header | 32 | 2000 | 3 sec | 11 sec
Sales Order Header | 32 | 4000 | 4 sec | 21 sec
Sales Order Header | 32 | 8000 | 5 sec | 40 sec
Sales Order Header | 32 | 16000 | 7 sec | 1 min 40 sec

 

 

Sales Order Line (Batch Execution)
         

Entity | No. of Tasks | Record Count | Staging Execution Time | Target Execution Time
Sales Order Line | 32 | 5000 | 5 sec | 48 sec
Sales Order Line | 32 | 10000 | 7 sec | 1 min 49 sec
Sales Order Line | 32 | 20000 | 10 sec | 3 min 47 sec
Sales Order Line | 32 | 40000 | 23 sec | 7 min 28 sec
Sales Order Line | 32 | 80000 | 42 sec | 17 min 1 sec

 

 

Sales Order Composite Entity (Batch Execution)
           

Entity | Sub Entity | Record Count | No. of Tasks | Staging Execution Time | Target Execution Time
Sales Order | SOH / SOL | 1000 / 5000 | 10 / 22 | 16 sec | 2 min 13 sec
Sales Order | SOH / SOL | 2000 / 10000 | 10 / 22 | 34 sec | 4 min
Sales Order | SOH / SOL | 4000 / 20000 | 10 / 22 | 1 min | 8 min 34 sec
Sales Order | SOH / SOL | 8000 / 40000 | 10 / 22 | 2 min 15 sec | 18 min 16 sec
Sales Order | SOH / SOL | 16000 / 80000 | 10 / 22 | 3 min 38 sec | 38 min 19 sec

 

Execution details with different numbers of tasks.

           

Entity | Sub Entity | Record Count | No. of Tasks | Staging Execution Time | Target Execution Time
Sales Order | SOH / SOL | 16000 / 80000 | 32 | 3 min 40 sec | 37 min 18 sec
Sales Order | SOH / SOL | 16000 / 80000 | 16 | 3 min 38 sec | 40 min 42 sec
Sales Order | SOH / SOL | 16000 / 80000 | 8 | 3 min 36 sec | 52 min 37 sec

 

 

Vendor Invoice Header (Batch Execution)
         

Entity | No. of Tasks | Record Count | Staging Execution Time | Target Execution Time
Vendor Invoice Header | 32 | 500 | 2 sec | 4 sec
Vendor Invoice Header | 32 | 1000 | 2 sec | 6 sec
Vendor Invoice Header | 32 | 2000 | 2 sec | 10 sec
Vendor Invoice Header | 32 | 4000 | 3 sec | 17 sec
Vendor Invoice Header | 32 | 8000 | 4 sec | 41 sec

 

 

Vendor Invoice Line (Batch Execution)
         

Entity | No. of Tasks | Record Count | Staging Execution Time | Target Execution Time
Vendor Invoice Line | 32 | 1500 | 3 sec | 24 sec
Vendor Invoice Line | 32 | 3000 | 4 sec | 41 sec
Vendor Invoice Line | 32 | 6000 | 3 sec | 1 min 23 sec
Vendor Invoice Line | 32 | 12000 | 6 sec | 2 min 44 sec
Vendor Invoice Line | 32 | 24000 | 8 sec | 5 min 51 sec

 

 

Vendor Invoice Composite Entity (Batch Execution)
           

Entity | Sub Entity | Record Count | No. of Tasks | Staging Execution Time | Target Execution Time
Vendor Invoice | VIH / VIL | 500 / 1500 | 10 / 22 | 3 sec | 33 sec
Vendor Invoice | VIH / VIL | 1000 / 3000 | 10 / 22 | 11 sec | 1 min
Vendor Invoice | VIH / VIL | 2000 / 6000 | 10 / 22 | 20 sec | 1 min 56 sec
Vendor Invoice | VIH / VIL | 4000 / 12000 | 10 / 22 | 35 sec | 3 min 54 sec
Vendor Invoice | VIH / VIL | 8000 / 24000 | 10 / 22 | 1 min 13 sec | 7 min 58 sec

 

Execution details with different numbers of tasks.

           

Entity | Sub Entity | Record Count | No. of Tasks | Staging Execution Time | Target Execution Time
Vendor Invoice | VIH / VIL | 8000 / 24000 | 32 | 1 min 6 sec | 7 min 43 sec
Vendor Invoice | VIH / VIL | 8000 / 24000 | 16 | 1 min 7 sec | 7 min 34 sec
Vendor Invoice | VIH / VIL | 8000 / 24000 | 8 | 1 min 6 sec | 9 min 15 sec

 

High Availability best practices for Dynamics AX 2012


In this blog post, I would like to summarize the best practices to ensure High Availability (HA) for Microsoft Dynamics AX 2012 R3. This has been a critical subject for many customers because Dynamics AX is a mission-critical business application, meaning that any downtime impacting end users will result in financial loss.

Business continuity doesn't come for free: it requires additional components, configuration and maintenance. So, before engaging in a complete redesign of your architecture, first define the right level of uptime needed for your application and which services are critical: for example, you can define the Dynamics AX client as business critical but not financial reporting. A Service Level Agreement (SLA) can be defined for every scenario and can help you decide the best option to choose.

Below we are focusing on the database and application layers. However, for the hardware and platform layers, we do recommend virtualization to provide additional capabilities. Also, we are not covering all workloads of the product such as Retail and Enterprise Portal.

1. Operational database High Availability

There are two important Dynamics AX databases: the one hosting the business data and the one for the code and metadata. Both are critical for the application and must always be available. If the Dynamics AX AOS cannot connect to these databases, it will simply stop running.

Solution: SQL Server AlwaysOn Availability Groups (AOAG). By putting the critical databases in one Availability Group with the supported Synchronous mode, you allow an automatic failover from the Primary Replica to the Secondary Replicas. To avoid the Primary Replica node becoming a new single point of failure (SPOF), you should configure one AG Listener as the entry point for all services: the Dynamics AX Server Configuration Utility for the Application Object Server (AOS), SQL Server Reporting Services and the Management Reporter service configuration.

Tips: Please also ensure the service account running the AOS services has the right permission to all TempDB databases (Primary and all Secondary replicas).

Limitation: Secondary Replicas are read-only and therefore cannot be used for standard Dynamics AX 2012 Reporting Services. Change Tracking is not allowed on Secondary Replicas, so the Management Reporter Data Mart, Retail and Entity Store must still point to the Primary Replica databases. However, ad hoc SSRS reports that do not use the data provider class and standard Dynamics AX 2012 Business Intelligence objects do support the Secondary Replicas: you need to create a Read Only Intent on the SQL Listener to route "read only" queries to the Secondary Replicas.

Note: there is a possible performance impact on Dynamics AX when Synchronous mode is used, for example when heavy batch jobs are running (processes like MRP). The scale of this impact depends on how fast the transactions can be committed to the secondary replica so the quality of the connection between Primary and Secondary replica is crucial for best performance.

2. Dynamics AX AOS High Availability

Dynamics AX clients connect to the AOS servers using the Remote Procedure Call (RPC) protocol, over the TCP port (default 2712) of the AOS servers.

Solution: Set up an application cluster containing all AOS instances in System admin > Setup > System > Cluster configuration. The first AOS in the client configuration utility will load balance all incoming sessions so that the same number of sessions is hosted on each AOS of the cluster.

Scenario: when one AOS service becomes unavailable, the sessions running on that AOS are stopped automatically, but each user can restart the client by clicking the same shortcut. The shortcut calls the Dynamics AX client with the configuration file that contains all AOS instances included in the cluster.

Tips: You can create a dedicated load balancer AOS to be the single point of contact for new connections. This can be beneficial when there are many Business Connector calls.

3. Dynamics AX Services High Availability

Another type of communication between the Dynamics AX client and the AOS is Windows Communication Foundation (WCF) calls, for example for the Reporting Services workload. WCF services are hosted on a WSDL port (default 8101) and a service endpoint port (default 8201) on every AOS server. The application load balancer described in section 2 of this post does not cover WCF calls, so all such traffic is routed only to the first AOS instance configured in the cluster, which then becomes the SPOF.

Solution: create a Windows Network Load Balancing (NLB) cluster with a dedicated IP address and the list of AOS host names and IP addresses. You must then set up the Dynamics AX client configuration to point all clients to this NLB, which will route all service clients to the AOS instances defined in the NLB cluster.

Process: first you need to create the NLB, then change the endpoint in the configuration file and refresh the client configuration:

  • On each Remote Desktop Host server, add two values, "wcflbservername" (with the NLB virtual name) and "wcflbwsdlport" (with the WSDL port), in the registry subkey Software\Microsoft\Dynamics\6.0\Configuration\new_configuration_name for both HKEY_CURRENT_USER and HKEY_LOCAL_MACHINE
  • Open the Dynamics AX Client Configuration Utility and open the "Connection" tab
  • Click "Refresh configuration" and restart the AOS. This should modify the WCF.config file and rename the endpoint to reflect the virtual name of the NLB instead of the AOS. Note: you also need to configure this new "Services NLB" for Management Reporter. Open the Configuration Console, click File, Configure and select "Add Microsoft Dynamics AX 2012 Data Mart". Click Next and replace the AOS Service Port and AOS port (TCP)

Tips: here is a great example of a script that automatically removes a failed AOS host instance from the AOS NLB cluster.

4. Operational Reporting High Availability

For the Report Server database: you can place it in the SQL AOAG, along with the report data source (Dynamics AX) if you wish. The Secondary Replica must be kept synchronized at deployment time and always be updated in case of configuration changes (permissions, roles, and so on).

  • Limitation: keep in mind the report data source cannot be set to the read-only Secondary Replicas because standard Dynamics AX reports require writeback access. Only ad hoc SSRS reports can leverage the Secondary Replicas with the Read Only Intent mode.
  • Tips: Reporting Services will not automatically use a different replica for the Reporting Services database when failover occurs. A custom script is needed to automatically update the SSRS configuration after the failover.

For Reporting Services: create a new NLB for the SSRS instances deployed in scale-out mode. Deploy the Dynamics AX Reporting Extensions on each Report Server. This implies that in the Dynamics AX application, all AOS instances must be listed and configured against the new RS NLB under System Administration - Settings - Business Intelligence - Report Servers.

  • Tips: Configuring SQL Server Reporting Services scale-out deployment in a Network Load Balancing cluster (White Paper).
  • Note: another option is to use one-way transactional replication for near real-time reporting, but we don't recommend it because it requires much more maintenance, such as choosing the tables to be replicated and recreating the publication in case of a database schema change.

5. Analysis Services High Availability

For the services: Analysis Services can rely on an NLB solution in scale-out mode.

For the database: Analysis Services natively supports Windows Server Cluster Service (WSCS), so you can also leverage the AlwaysOn Availability Group.

Tips: you can have near real time data without impacting the operational database. Because processing and querying are read-only workloads you can configure the data source connection to use a readable secondary replica.

6. Financial Reporting High Availability

Management Reporter (MR) database: stores metadata such as the report library, building blocks, configuration and generated financial reports. The solution is to change the recovery mode to FULL and add the MR database to the AlwaysOn Availability Group. If encryption is used, you need to apply the same configuration on the other node to ensure business continuity after failover.

MR Data Mart: when the database becomes unavailable, integration tasks simply fail and new reports cannot be generated. The DataMart can easily be regenerated from the source. If the instance hosting the DataMart is brought back online within three days of failure, the integration tasks will resume and automatically process all the changes and posted data from AX since the failure happened.

MR Application Service: create multiple instances of the Service across servers. When deploying services, specify the existing database created during first installation. Then create an MR NLB cluster for load balancing. To make sure all menu items in the client are launching the Report Designer with the right URL, you need to manually update the value of the MR Configuration Console with the NLB address and then publish the server connection to AX (update hidden field in AX ledger parameters table).

Tips: you can create an SQL script to ensure the right MR configuration is set up in Dynamics AX when SQL failover occurs. You can also use this script to convert the master key for encryption when MR DB fails over.

The MR Process Service generates reports and handles integration. You can deploy it to multiple servers; however, the process itself is managed in the MR database and you cannot have duplicate or overlapping tasks within the same instance. There is no load balancing available, but if one instance goes down, the others can pick up the work and provide business continuity.

 

Regards,
@BertrandCaillet
Principal Premier Field Engineer

07/17 - Errata added for [MS-ADTS]: Active Directory Technical Specification

July release notes


The Microsoft Dynamics Lifecycle Services team is happy to announce the immediate availability of the July release of Lifecycle Services (LCS).  

  • In June, we had several milestone product releases such as Dynamics 365 for Finance and Operations, Enterprise edition (on-premises), Dynamics 365 for Finance and Operations, Enterprise edition, and Dynamics 365 for Retail. Each of these products can be deployed and operated through LCS. The July release of LCS is a quality release where we have worked on addressing issues based on feedback for the above product releases. 
     
  • The Dynamics 365 – Translation Service now provides a new Align feature, which enables you to create an XLIFF Translation Memory (TM) file by uploading a set of existing source and target language label files available from your previous translation work. You can submit this XLIFF TM file to your new translation request to recycle the translations from the TM where the label text is an exact match.   


 

 


Azure Content Spotlight – Azure Mobile App


Welcome to another Azure Content Spotlight! These articles are used to highlight items in Azure that could be more visible to the Azure community.

The Azure Mobile App offers a powerful mobile application on iOS and Android (Windows coming soon) for monitoring, diagnosing and responding to issues with Azure subscriptions.  In addition to being able to view Azure subscriptions and component status, alerts can be used to send notifications. 

For a great introduction highlighting many of the key features, take a look at the Azure Friday post.

Cheers!

 

Imagine Cup 2017 - Meet the Team and Judges for next week's final



 

Meet the teams!

Meet the World Finals Teams!

Read about the students and innovations that earned trips to Seattle to compete in the Imagine Cup World Finals. Tune in July 27 to find out who will win over $250K in prizes and the Imagine Cup!


Meet the judges!

Meet the Championship Judges!

Imagine Cup celebrates its 15th anniversary with an incredible group of judges; get to know the three Championship judges who will meet with the final teams to determine who has what it takes to win it all!

 

Sign up now for Imagine Cup 2018!

You could win too!

Your team. Your innovation. Your chance to win. Sign up now for the 2018 Imagine Cup and get a head start with competition updates and access to the tools you need to make your dreams a reality!



Issues accessing profile page in Visual Studio Team Services - 07/21 - Mitigated


Final Update: Friday, July 21st 2017 02:15 UTC

We've confirmed that the profile service is back to normal as of Friday, July 21st 2017 01:36 UTC. A deployment done earlier today caused the 'My Profile' link to be broken. A hotfix has been deployed to resolve this issue, and customers should not notice any errors accessing their profile page from their VSTS accounts. Sorry for any inconvenience this may have caused.

  • Root Cause: Recent deployment done earlier today.
  • Chance of Re-occurrence: Low - Root cause is understood.
  • Incident Timeline: July 20 2017 00:11 UTC through July 20 2017 01:36 UTC

Sincerely,
Manohar


Update: Friday, July 21st 2017 01:02 UTC

Our DevOps team has ruled out any issues with logging into VSTS accounts. The issue has been narrowed down to the profile page failing to load from the UI when customers click 'My Profile' after logging into their VSTS account. Customers might notice the error "Unable to redirect to your profile, please try again later." We currently have no estimated time for resolution.

  • Work Around: None
  • Next Update: Before Friday, July 21st 2017 03:15 UTC

Sincerely,
Manohar


Initial Update: Friday, July 21st 2017 00:40 UTC

A potentially customer impacting alert is being investigated. Triage is in progress and we will provide an update with more information.
Customers might receive a login error with 503: Service Unavailable while trying to access their VSTS accounts across multiple regions. Initial investigation suggests this is related to a recent deployment, and we are in the process of reverting it.

  • Next Update: Before Friday, July 21st 2017 01:45 UTC

Sincerely,
Manohar

Online Analysis Services Course: Developing a Multidimensional Model


Check out the excellent new online course by Peter Myers and Chris Randall for Microsoft Learning Experiences (LeX). Learn how to develop multidimensional data models with SQL Server 2016 Analysis Services. The complete course is available on edX at no cost to audit, or you can highlight your new knowledge and skills with a Verified Certificate for a small charge. Enrollment is available at edX.

Windows Template Studio 1.1


In case you haven't heard, we are pleased to announce the release of Windows Template Studio 1.1. Working with an engaged community, we have moved to a steady, iterative release rhythm for new features and overall functionality. We are always looking for contributors, so if you are interested, please head over to GitHub: https://aka.ms/wts.

Windows Template Studio

How to get the update:

There are two ways to get the latest build:

  • If it is already installed: Visual Studio should update the extension automatically. To trigger the update yourself, open Tools -> Extensions and Updates, go to the Updates expander on the left, find Windows Template Studio and click Update.
  • If it is not yet installed: go to https://aka.ms/wtsinstall, click 'download' and double-click the VSIX installer.

Wizard improvements:

  • page reordering;
  • the first page is no longer blank;
  • pages and background tasks can be renamed;
  • improved offline support;
  • localization work has started;
  • code analysis has been added.

Page updates:

  • added a Grid page;
  • added a Chart page;
  • added a Media/Video page;
  • improved the Web View page.

Feature updates:

  • added Store SDK Notifications;
  • SettingStorage can now save binary data (not just strings).

Template improvements:

  • the navigation panel has been moved to the UWP Community Toolkit;
  • styling has been adjusted;
  • improved ResourceLoader performance.

For the full list of issues resolved in release 1.1, head over to GitHub.

What's coming in future versions

We really appreciate the community's support and involvement. We are collaborating with the Caliburn.Micro framework and have created a development branch with Nigel Sampson. We are in discussions with the Prism and Template 10 teams to work out how to add those frameworks as well. Below is a list of what we plan to add:

  • Fluent design in the templates;
  • Project Rome features as optional project capabilities;
  • right-click support for existing projects;
  • wizard localization;
  • accessibility support in the wizard and the templates.

If you would like to help us, please visit https://aka.ms/wts.

Source: https://blogs.windows.com/russia/2017/06/25/windows-template-studio-1-1/#lOdy6xHBHhpplaum.97

Using React and building a web site on Azure


Guest post from Walid Behlock Microsoft Student Partner at University College London

About me


I have always loved solving problems of all kinds and have been passionate about technology from a very young age. I have previously participated in international robotics competitions and attended many talks and events related to hardware and software.

I am currently studying Electronics and Electrical Engineering at UCL and I am the Projects Director of UCL TechSoc. In my free-time, I like to practice endurance sports, listen to music and meet new people.

You are welcome to check my GitHub profile as well as my Linkedin and ask me any kind of questions.

Overview

In this tutorial, we are going to follow the plan below:

● Introduction to React

● What are we going to build?

● Setting up the project

● Developing the project

● Introduction to Microsoft Azure

● Deploying our project to Azure

● Additional resources

Let us now start with a detailed introduction to React.


What is React?

Before actually starting to introduce React, I will try to illustrate the problems that traditional JavaScript programming causes when building large web applications and how React approaches (and solves) these issues.

Imagine that you are paid to paint a wall in blue every day. From day to day, a minor change is made to the design (for example, you should add a yellow circle in the middle of the wall today) but you only have the option to repaint the whole wall repeatedly. So, every day, you repeat the same work which is time and effort consuming just to apply this little modification.

This is how traditional JavaScript works: the wall can be considered to be the DOM (Document Object Model) of any webpage, and in this case reloading the page every time just to apply a little change wastes a lot of computing power, while doing DOM manipulation by hand is error-prone.

This is where React comes into place: React is an efficient and flexible JavaScript library created by Facebook in 2013 that offers the developer a virtual DOM for rendering changes without having to reload the whole page every time. To do this, React keeps a representation of the DOM structure; when the UI is updated, it compares the new structure with the previous one and applies only the changes that were made.

In the case of our previous example, React would have allowed us to paint this tiny circle directly in the middle of the wall.
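
As a tiny, hypothetical illustration of that diffing idea (the Clock component below is mine, not part of the app we are about to build), a state change re-renders the component, but React only touches the DOM node whose content actually changed:

    class Clock extends React.Component {
      constructor(props) {
        super(props);
        this.state = { time: new Date().toLocaleTimeString() };
      }
      componentDidMount() {
        // Update the state every second; React re-renders the component...
        setInterval(() => this.setState({ time: new Date().toLocaleTimeString() }), 1000);
      }
      render() {
        // ...but after diffing the virtual DOM, only the text node holding
        // {this.state.time} is updated in the real DOM, not the whole <div>.
        return (
          <div>
            <h1>Current time</h1>
            <p>{this.state.time}</p>
          </div>
        );
      }
    }

    ReactDOM.render(<Clock />, document.getElementById('app'));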

Let us now look at what we are going to be doing with React.

What are we going to build?

In this tutorial, I am going to walk you through building a GitHub Popular application, which ranks the most popular repositories on GitHub, overall or by language, using the GitHub API to fetch info about the repos.


The navigation bar allows us to toggle between different programming languages.

I have also created a GitHub repo for this tutorial. You can find it here: https://github.com/WalidBehlock/github-popular

Dependencies and requirements

Before starting with the development of this project, we will first need to set up our workspace by installing:

NPM (Node Package Manager): NPM allows us to manage different modules and know which module versions are installed on our system. It makes it easier to install different packages without having to go to the specific websites or repos every time.

The easiest way to install it is to install Node since NPM comes already bundled in it. We can download Node from this website or we can use Homebrew to do so by running:

1. brew install node

from the command line. Once we have NPM installed, we are ready to get started.

Setting up the Project
Environment

The first thing we need to do to start building our React app is to set up our development environment.

The easiest way to create a React app is to head to the directory where we want to store our app and execute these commands through command line:

   1: npm install -g create-react-app 
   2: create-react-app github-popular 
   3: cd github-popular 
   4: npm start 

By using ‘Create React App’, we are installing all the tools that are needed to get started with React.

If you want to get more info about what is happening under the hood when executing this command, you can read about Webpack and Babel which are two of the most important packages that are being installed.

You can also have a look at the ‘Create React App’ README.

By launching ‘npm start’, our browser should open a new window on a localhost server with this screen:

image

Project Files

After having successfully installed the React modules, we will configure our project folder.

Let’s first head to our ‘github-popular’ folder that we just created. We can see that some files are already present in this folder:

image

● The ‘node_modules’ folder contains all the module files and dependencies

● The ‘package.json’ file is what defines the specific dependencies that need to be used in our project

● ‘src’ contains the code written by the developer

● ‘public’ contains the bundled code for production

● ‘.gitignore’ tells GitHub what files to ignore

Let us start from scratch and create a folder named ‘app’ in our ‘github-popular’ folder (github-popular -> app). This folder will be the base of our app and will replace ‘src’.

In this folder, we are going to create three important files:

● An ‘index.html’ file which sets up our webpage.

● An ‘index.css’ file which will store the CSS code of our platform which gives styling to our website.

● An ‘index.js’ file which is where our React components are going to live.

At this point, our setup looks like this:

image

Now that our project is set up, we can start writing our first lines of React.

Editing ‘index.js’
Requiring modules and files

We head to our project folder and open the ‘index.js’ file in our preferred code editor (I will be using VS Code in this tutorial). We then write:

   1: // index.js // 
   2: var React = require('react'); 
   3: var ReactDOM = require('react-dom'); 
   4: require('./index.css'); 

● In line 2, we are requiring React into our project.

● In line 3, we are requiring ReactDOM, since we will render our React project into the DOM.

● In line 4, we are requiring our index.css file, which will be included in our application when everything bundles (thanks to webpack).

! Requiring the modules is mandatory in every React project.

Building the App Component

● We first start by initializing our class name (‘App’ in our example) and we tell it to extend a React component.

   1: class App extends React.Component { 
   2:  
   3: } 

● The render method is then used to associate a specific UI to our component: everything that goes into this method will represent our component in the DOM.

   1: class App extends React.Component { 
   2: render() { 
   3:  
   4:  } 
   5: } 

● Now this is where people often get confused:

   1: class App extends React.Component {
   2: render() {
   3: return (
   4: <div>Hello World!</div>
   5:     )
   6:   }
   7: }

In line 4, it seems that we are combining HTML with JavaScript, which violates everything we have been taught about separation of concerns.

This line of code is not actually HTML but JSX, and this is the reason why we need Webpack and Babel to transpile this special syntax to traditional JavaScript.

JSX is really useful for building the UI since HTML is a language that is widely used and practical for building user interfaces. You can read more about JSX here.
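
As a rough illustration of what that transpile step produces (the exact output varies between Babel versions), the JSX <div> above ends up as a plain React.createElement call, which is ordinary JavaScript:

    // What we write (JSX):
    //   <div>Hello World!</div>
    // Roughly what Babel emits for it:
    var element = React.createElement('div', null, 'Hello World!');

    // A component used as JSX, such as <App />, becomes:
    var app = React.createElement(App, null);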

Rendering the component

What we are now going to do is take our main component and render it to the DOM:

We go ahead and write:

   1: ReactDOM.render(); 

at the end of our JavaScript file. We are going to pass it two different arguments:

● The first one is our actual component: <App /> (this is also JSX).

● The second argument is where we want it to render:

document.getElementById('app')

(Note that we are going to include a div element with an id of ‘app’ in our html file later).

Our final result should look like this:

   1: // index.js // 
   2:  
   3: var React = require('react'); 
   4: var ReactDOM = require('react-dom'); 
   5: require('./index.css'); 
   6:  
   7: class App extends React.Component { 
   8: render() { 
   9: return ( 
  10: <div>Hello World!</div> 
  11:     ) 
  12:   } 
  13: } 
  14:  
  15: ReactDOM.render( 
  16: <App />, 
  17: document.getElementById('app') 
  18: ); 
Editing ‘index.html’

In this html file, we are just going to copy a traditional html template:

   1: <!-- index.html --> 
   2: <!DOCTYPE html>
   3: <html lang="en">
   4: <head>
   5: <meta charset="UTF-8">
   6: <title>Github Popular</title>
   7: </head>
   8: <body>
   9: <div id="app"></div>
  10: </body>
  11: </html>

! Notice that we created a div with an id of ‘app’ on line 9, as stated earlier.

Creating the Platform

Our next step is to start building our GitHub Popular Platform.

Organizing the project

To get more organized, we are going to create a ‘components’ folder inside our ‘app’ folder (github-popular ->app ->components). This folder will store our website pages.

In this new folder, create an ‘App.js’ file and move the ‘App’ component created earlier to this new file. We get this:

   1: // App.js //
   2:
   3: var React = require('react');
   4: var Popular = require('./Popular');
   5:
   6: class App extends React.Component {
   7:
   8: render() {
   9: return (
  10: <div className='container'>
  11: <Popular/>
  12: </div>
  13:
  14:    )
  15:   }
  16: }
  17:
  18: module.exports = App;

Our index.js file should now look like this:

   1: // index.js // 
   2:
   3: var React = require('react');
   4: var ReactDOM = require('react-dom');
   5: require('./index.css');
   6: var App = require('./components/App');
   7:
   8: ReactDOM.render(
   9: <App />,
  10: document.getElementById('app')
  11: );

The purpose of our ‘index.js’ file is now to require all the necessary files and render our ‘App’ component that is in the ‘components’ folder. It is the main file that will be calling all the other components.

Languages bar

● We will start by creating a file called ‘Popular.js’ inside of our ‘components’ folder (github-popular -> app -> components -> Popular.js) and write the same basic code as before:

   1: // Popular.js // 
   2:
   3: var React = require('react');
   4:
   5: class Popular extends React.Component {
   6:
   7: render() {
   8: return (
   9:
  10:   )
  11:  }
  12: }
  13:
  14: module.exports = Popular;

The ‘module.exports’ command allows us to import our file (in this case ‘Popular.js’) from a different file.

● Now we will create an array to store the different programming languages that are going to be used in the navigation bar.

image

To do this, we write:

   1: var languages = ['All', 'JavaScript', 'Ruby', 'Java', 'CSS', 'Python'];

inside of our ‘render’ method (outside the return element).

After having initialized our languages, we need to show them on our webpage, that is, in the DOM. To do this, we are first going to map over all the languages in our array, creating a ‘list item’ for each one. We are then going to display them on the screen:

1.    render() {
2.
3.        var languages = ['All', 'JavaScript', 'Ruby', 'Java', 'CSS', 'Python'];
4.
5.        return (
6.            <ul>
7.              {languages.map(function (lang) {
8.                return (
9.                  <li>
10.                      {lang}
11.                  </li>
12.                )
13.              })}
14.            </ul>
15.        )
16.    }

Now if we refresh our app we get this:

image

We will now add some code that sets up the initial state of our component (‘All’ should always be the default “language” of search when our page is first loaded) and a way to change that state. We add a ‘constructor’ method as well as an ‘updateLanguage’ method inside our React Component.

1.    // Popular.js //  
2.
3.    var React = require('react');
4.
5.    class Popular extends React.Component {
6.      constructor(props) {
7.        super(props);
8.        this.state = {
9.          selectedLanguage: 'All',
10.        };
11.
12.        this.updateLanguage = this.updateLanguage.bind(this);
13.      }
14.      updateLanguage(lang) {
15.        this.setState(function () {
16.          return {
17.            selectedLanguage: lang,
18.          }
19.        });
20.      }
21.      render() {
22.        var languages = ['All', 'JavaScript', 'Ruby', 'Java', 'CSS', 'Python'];
23.
24.        return (
25.          <div>
26.            <ul className='languages'>
27.              {languages.map(function (lang) {
28.                return (
29.                  <li
30.                    style={lang === this.state.selectedLanguage ? {color: '#d0021b'} : null}
31.                    onClick={this.updateLanguage.bind(null, lang)}
32.                    key={lang}>
33.                      {lang}
34.                  </li>
35.                )
36.              }, this)}
37.            </ul>
38.          </div>
39.        )
40.      }
41.    }
42.
43.    module.exports = Popular;

What will now happen is that whenever one of our list items is clicked, it will trigger the ‘updateLanguage’ method which will set the correct state.

This looks good but it will be better with some styling. Let’s copy and paste this code in our ‘index.css’ file:


1.    body {
2.      font-family: -apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Oxygen-Sans,Ubuntu,Cantarell,Helvetica Neue,sans-serif;
3.    }
4.
5.    .container {
6.      max-width: 1200px;
7.      margin: 0 auto;
8.    }
9.
10.    ul {
11.      padding: 0;
12.    }
13.
14.    li {
15.      list-style-type: none;
16.    }
17.
18.    .languages {
19.      display: flex;
20.      justify-content: center;
21.    }
22.
23.    .languages li {
24.      margin: 10px;
25.      font-weight: bold;
26.      cursor: pointer;
27.    }

 

 

Our language bar is now done and our final result looks like this:

image

Profiles grid

Before going on with the tutorial, let us look back at what we are trying to achieve:

image

We can see that we still have the profiles grid to program in our ‘Popular.js’ file. This profiles grid will send AJAX requests to the GitHub API so we can rank the most popular repositories for the selected language.

The first thing we’re going to do is modify our ‘Popular.js’ file so it can accommodate a new component in our app.

Let’s start by creating a new class called ‘SelectLanguage’ and, as usual, let it extend a React Component and have a render method. In this method, we’re going to paste the UI of our languages bar.

1.    // Popular.js //  
2.
3.    class SelectLanguage extends React.Component {
4.        render() {
5.            return(
6.        <ul className='languages'>
7.          {languages.map(function (lang) {
8.            return (
9.              <li
10.                style={lang === this.state.selectedLanguage ? {color: '#d0021b'} : null}
11.                onClick={this.updateLanguage.bind(null, lang)}
12.                key={lang}>
13.                  {lang}
14.              </li>
15.            )
16.          }, this)}
17.        </ul>
18.          )
19.      }
20.    }

● We will now add prop types to this class. Prop types are used to document the properties passed to components. This module gets installed directly with ‘Create React App’. To use it, we should first require it at the beginning of our file like any other module.

1. var PropTypes = require('prop-types');
2.

We are then going to pass the ‘state’ in the style statement as a ‘prop’ as well as the ‘updateLanguage’ method in our click handler:

1. style={lang === props.selectedLanguage ? {color: '#d0021b'} : null}
2.
3. onClick={props.onSelect.bind(null, lang)}

Now we can see that we have created two prop types, (selectedLanguage) and (onSelect). These two prop types should be declared after the class:

1. SelectLanguage.propTypes = {
2. selectedLanguage: PropTypes.string.isRequired,
3. onSelect: PropTypes.func.isRequired,
4. };

● We can now go to our ‘render’ method and return our ‘SelectLanguage’ component, passing it ‘selectedLanguage’ and ‘onSelect’.

A little trick: if all a component contains is a render method, we can write it as a function instead. This is what we’re going to do here. However, we must pay attention to the ‘this’ keyword, which we can’t use anymore; to fix this, we pass ‘props’ as the first argument of the function. This is known as a ‘stateless functional component’, since it no longer has state and is simply a function that returns UI.

Our code should now look like this:

1. // Popular.js // 
2.
3. var React = require('react');
4. var PropTypes = require('prop-types');
5.
6. function SelectLanguage (props) {
7.   var languages = ['All', 'JavaScript', 'Ruby', 'Java', 'CSS', 'Python'];
8.   return (
9.     <ul className='languages'>
10.      {languages.map(function (lang) {
11.        return (
12.          <li
13.            style={lang === props.selectedLanguage ? {color: '#d0021b'} : null}
14.            onClick={props.onSelect.bind(null, lang)}
15.            key={lang}>
16.              {lang}
17.          </li>
18.        )
19.      })}
20.    </ul>
21.  )
22. }
23.
24. SelectLanguage.propTypes = {
25.   selectedLanguage: PropTypes.string.isRequired,
26.   onSelect: PropTypes.func.isRequired,
27. };
28.
29. class Popular extends React.Component {
30.   constructor(props) {
31.     super();
32.     this.state = {
33.       selectedLanguage: 'All',
34.     };
35.
36.     this.updateLanguage = this.updateLanguage.bind(this);
37.   }
38.   updateLanguage(lang) {
39.     this.setState(function () {
40.       return {
41.         selectedLanguage: lang,
42.       }
43.     });
44.   }
45.   render() {
46.     return (
47.       <div>
48.         <SelectLanguage
49.           selectedLanguage={this.state.selectedLanguage}
50.           onSelect={this.updateLanguage} />
51.       </div>
52.     )
53.   }
54. }
55.
56. module.exports = Popular;
AJAX requests
Axios

To fetch info from GitHub (in our case the popular repos), we need to be able to make AJAX requests.

To do this, we must install a powerful library called ‘Axios’ to our project. We open our command line and write:

1. npm install --save axios
2. npm run start

API ping

Let’s go ahead and create a folder called ‘utils’ in our ‘app’ folder.

Inside this folder let’s create a file called ‘api.js’ (github-popular -> app -> utils -> api.js). This file is going to contain all the API requests that we are going to make in our app.

● Let’s open this file and start by requiring the ‘axios’ library.

1. var axios = require('axios');

● The next step is to be able to export an object from this file that will contain all the methods that are needed to interact with an external API.

1. module.exports = {
2. }

● The first method that we’re going to have is the ‘fetchPopularRepos’ method, which is going to take in a language and ping the GitHub API to return the most popular repos for that language.

1. // api.js // 
2.
3. var axios = require('axios');
4.
5. module.exports = {
6. fetchPopularRepos: function (language) {
7.
8. var encodedURI = window.encodeURI('https://api.github.com/search/repositories?q=stars:>1+language:'+ language + '&sort=stars&order=desc&type=Repositories');
9.
10. return axios.get(encodedURI)
11. .then(function (response) {
12. return response.data.items;
13.     });
14.   }
15. };

Let’s break this code down:

● In line 8, ‘window.encodeURI’ takes the human-readable URL we want to hit on the GitHub API and encodes it into a URL the browser can safely request. Take a look at the query (located after the URL):

“q=stars:>1+language:'+ language + '&sort=stars&order=desc&type=Repositories')”

This allows us to fetch all the repos that have more than one star for a specific language then sort them by descending order.

● We then return the invocation of ‘axios.get’ and pass it our URI. This is going to return us a promise (you can read more about promises here).
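
If you want to sanity-check the method on its own before wiring it into a component, a quick test could look like the snippet below. The error handling is my addition, not part of the tutorial's api.js, and since api.js uses window.encodeURI it needs to run in the browser, for example from a component or the dev server's console:

    var api = require('./utils/api'); // adjust the path to wherever you call it from

    // fetchPopularRepos returns a promise, so we consume the result with .then/.catch
    api.fetchPopularRepos('JavaScript')
      .then(function (repos) {
        console.log('Most starred repo:', repos[0].name, '-', repos[0].stargazers_count, 'stars');
      })
      .catch(function (error) {
        console.warn('Error fetching repos: ', error);
      });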

Creating the RepoGrid component

Let’s now head to our ‘Popular.js’ file and require our API file that we just created.

1. // Popular.js // 
2.
3. var api = require('../utils/api');

In the ‘updateLanguage’ method, we call ‘api.fetchPopularRepos’ and pass it the selected language ( api.fetchPopularRepos(lang) ), updating the state once the promise resolves:

1.    // Popular.js //  
2.
3.      updateLanguage(lang) {
4.        this.setState(function () {
5.          return {
6.            selectedLanguage: lang,
7.            repos: null
8.          }
9.        });
10.
11.        api.fetchPopularRepos(lang)
12.          .then(function (repos) {
13.            this.setState(function () {
14.              return {
15.                repos: repos
16.              }
17.            });
18.          }.bind(this));
19.      }

Let’s now create a lifecycle event called ‘componentDidMount’ that is going to be called by React whenever the component mounts to the screen, and it is in this method that we are going to make our AJAX requests. In it, we will invoke:

1. componentDidMount() {
2. this.updateLanguage(this.state.selectedLanguage)
3. }

Now let’s create a brand-new component called ‘RepoGrid’ in our render method and pass it our repos:

1. <RepoGrid repos={this.state.repos} />

and create a RepoGrid function that is going to receive some props and return an unordered list to map over all our repos:

1.    function RepoGrid (props) {
2.      return (
3.        <ul className='popular-list'>
4.        </ul>
5.      )
6.    }

Now to map over all our repositories we add ‘props.repos.map’ and give it a class to be able to style it in our CSS file.

1.    function RepoGrid (props) {
2.      return (
3.        <ul className='popular-list'>
4.          {props.repos.map(function (repo, index) {
5.            return (
6.              <li key={repo.name} className='popular-item'>
7.              </li>
8.            )
9.          })}
10.       </ul>
11.     )
12.   }

We finally add all the items that need to be displayed in our App such as the rank, avatar and number of stars. The final code for ‘RepoGrid’ looks like this:

1.    function RepoGrid (props) {
2.      return (
3.        <ul className='popular-list'>
4.          {props.repos.map(function (repo, index) {
5.            return (
6.              <li key={repo.name} className='popular-item'>
7.                <div className='popular-rank'>#{index + 1}</div>
8.                <ul className='space-list-items'>
9.                  <li>
10.                    <img
11.                      className='avatar'
12.                      src={repo.owner.avatar_url}
13.                      alt={'Avatar for ' + repo.owner.login}
14.                    />
15.                  </li>
16.                  <li><a href={repo.html_url}>{repo.name}</a></li>
17.                  <li>@{repo.owner.login}</li>
18.                  <li>{repo.stargazers_count} stars</li>
19.                </ul>
20.              </li>
21.            )
22.          })}
23.        </ul>
24.      )
25.    }

Note: since we are using PropTypes, we must not forget to add the prop types for RepoGrid to our file by writing:

1.    RepoGrid.propTypes = {
2.      repos: PropTypes.array.isRequired,
3.    }

If we refresh our page we get an error. This is because we are trying to render ‘RepoGrid’ before getting an actual response from the GitHub API. To fix this, we add a condition:

1.    {!this.state.repos
2.              ? <p>LOADING!</p>
3.              : <RepoGrid repos={this.state.repos} />}

to our ‘render’ method. This checks whether the repos have been fetched yet. If they have, it displays the RepoGrid; if not, it displays “LOADING!” on the screen.

Finally, let us add a bit of styling to the grid in our ‘index.css’ file:

1.    body {
2.      font-family: -apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Oxygen-Sans,Ubuntu,Cantarell,Helvetica Neue,sans-serif;
3.    }
4.
5.    .container {
6.      max-width: 1200px;
7.      margin: 0 auto;
8.    }
9.
10.    ul {
11.      padding: 0;
12.    }
13.
14.    li {
15.      list-style-type: none;
16.    }
17.
18.    .avatar {
19.      width: 150px;
20.      border-radius: 50%;
21.    }
22.
23.    .space-list-items {
24.      margin-bottom: 7px;
25.    }
26.
27.    .languages {
28.      display: flex;
29.      justify-content: center;
30.    }
31.
32.    .languages li {
33.      margin: 10px;
34.      font-weight: bold;
35.      cursor: pointer;
36.    }
37.
38.    .popular-list {
39.      display: flex;
40.      flex-wrap: wrap;
41.      justify-content: space-around;
42.    }
43.
44.    .popular-item {
45.      margin: 20px;
46.      text-align: center;
47.    }
48.
49.    .popular-rank {
50.      font-size: 20px;
51.      margin: 10px;
52.    }




Final Product

image

We have now finished building our Website using React. When switching between the different programming languages, our page does not reload entirely; it only fetches the new repos for our grid.

The next step is to upload our project to the cloud and more specifically to Microsoft Azure so that it can be accessed through a URL.

What is Microsoft Azure?

Microsoft Azure is a cloud provider that offers all kinds of products such as storage, backup, virtual machines, SQL databases and other cloud services. Available in forty different regions, Azure is ahead of all other cloud providers and serves 90% of Fortune 500 companies.

It is absolutely FREE to get started with Azure for students via Microsoft Imagine Access http://imagine.microsoft.com

Getting started with Azure

Before starting with Azure, make sure that you create and set up your free account by following this link, or sign in at http://imagine.microsoft.com if you are a student.

Once that is done, head to the Azure Portal by following this link: portal.azure.com

You will be greeted by this screen: This is your dashboard.

image

Deploying the platform (Portal)

In this tutorial, we will deploy our Website to the cloud using the Azure Portal.

The first step is to click on the ‘+’ icon labeled ‘New’ in the sidebar, then on ‘Web+Mobile’ and ‘Web App’ (New -> Web+Mobile -> Web App).

image

We have many options to fill now:

● In ‘App name’ enter the name you want to give to your website.

● ‘Application Insights’ helps you diagnose quality issues in your Web Apps.

● Now click on ‘Create’.

Our WebApp will now be deployed in no time.

Uploading our React Project through GitHub

When our platform has been successfully deployed, we will find this screen for the management of our platform.

image

If we click on the URL to the right of the screen, our browser will open a webpage saying that our App Service has been successfully created.

Let’s now go to the sidebar and to the ‘Deployment Credentials’ tab.

We create a new Username and Password and click on ’Save’.

Now we head to (QuickStart -> ASP.NET -> Cloud Based Source control -> Deployment options -> Setup -> Choose source) and then choose our preferred source.

image

In this case, we will be using GitHub.

● The first step is to give Azure permission to our GitHub account.

● We then choose our Project’s repo as well as its branch and then click on OK.

That’s it! Our platform is now up and running on the cloud. You can see that we were able to deploy our Website to Azure in a few minutes.

Conclusion

In this tutorial, we first looked at how to get started with React and build an interesting platform that allows us to fetch info from the GitHub API. We walked through the advantages of using React over traditional JavaScript programming, how to get started by installing all the dependencies on our computer and then actually creating our first components.

We then got introduced to Microsoft Azure and the variety of services that it offers, and uploaded our Website to the cloud. Microsoft Azure doesn’t only allow us to upload already-made projects; it also gives us powerful tools to build, deploy and manage our apps. From Websites to .NET apps to Internet of Things projects, the possibilities of creation with Azure are endless.

References

React Official Website: https://facebook.github.io/react/

TylerMcGinnis React Fundamentals course: https://learn.tylermcginnis.com/p/reactjsfundamentals

Microsoft Azure: https://azure.microsoft.com/en-us/

Microsoft Imagine Access http://imagine.microsoft.com

Microsoft Imagine Azure https://azure.microsoft.com/en-us/offers/ms-azr-0144p/

Further Reading

React Tic-Tac-Toe tutorial: https://facebook.github.io/react/tutorial/tutorial.html

Framework for building native UWP and WPF apps with React:

https://github.com/Microsoft/react-native-windows

Microsoft Developer Network: https://msdn.microsoft.com/

Node.js on Azure: https://azure.microsoft.com/en-us/develop/nodejs/

JSX documentation: https://jsx.github.io/

ES6: http://es6-features.org/

Calling Azure Cosmos DB Graph API from Azure Functions


Azure Functions at this time provides input and output bindings for DocumentDB, one of the database models in Azure Cosmos DB. There aren't any for the Cosmos DB Graph API yet. However, by referencing external assemblies and importing them through NuGet packages, it is possible to call these APIs from the C# scripts in Azure Functions.

1) Create a Graph Container in CosmosDB

I have used the starter Solution available in the Documentation for Azure Cosmos DB, that showcases how to use .NET Code to create a Cosmos DB Database, a Graph Container in it, and inject data using the Graph API into this container. It also shows how to query the Graph using the APIs. Once this Starter Solution is deployed, you could use the built in Graph Explorer to traverse the Graph and view the Nodes and Vertices.

2) Create an Azure Function App

Create an Azure Function App and add a Function to it; use the HTTPTrigger template if this Function ought to be invoked from a client application. For this exercise, I have used a TimerTrigger template in C#.

3) Reference external assemblies in project.json and upload this file to the Function App Deployment folder

The project.json that I used and the assemblies that had to be referenced were:

{
  "frameworks": {
    "net46": {
      "dependencies": {
        "Microsoft.Azure.DocumentDB": "1.14.0",
        "Microsoft.Azure.Graphs": "0.2.2-preview",
        "Newtonsoft.Json": "6.0.8",
        "System.Collections": "4.0.0",
        "System.Collections.Immutable": "1.1.37",
        "System.Globalization": "4.0.0",
        "System.Linq": "4.0.0",
        "System.Threading": "4.0.0"
      }
    }
  }
}

The project.json file needs to be uploaded to the location https://<function_app_name>.scm.azurewebsites.net. The Azure documentation here and here outlines the steps required. See the screenshot below:

 

4) Add environment variables to the App Settings of the Function App

This stores the graph connection information in the environment variables of the Function App.

5) Adding the C# Scripts in the Azure Function that calls the Graph API

I have repurposed the code from the starter sample referred to earlier in this article, with minimal changes. The Gremlin query used here returns all the Person vertices in the graph, sorted descending on their first names.
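The original post shows the script as a screenshot, so here is a minimal sketch of what such a run.csx could look like, assuming the packages listed in the project.json above. The app setting names (GraphEndpoint, GraphAuthKey), the database name (graphdb) and the collection name (Persons) are illustrative assumptions, not values taken from the starter solution.

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.Documents.Linq;
using Microsoft.Azure.Graphs;

public static async Task Run(TimerInfo myTimer, TraceWriter log)
{
    // Graph connection information stored in the Function App settings (step 4).
    var endpoint = new Uri(Environment.GetEnvironmentVariable("GraphEndpoint"));
    var authKey = Environment.GetEnvironmentVariable("GraphAuthKey");

    using (var client = new DocumentClient(endpoint, authKey))
    {
        DocumentCollection graph = await client.ReadDocumentCollectionAsync(
            UriFactory.CreateDocumentCollectionUri("graphdb", "Persons"));

        // Gremlin query: all 'person' vertices, sorted descending by first name.
        IDocumentQuery<dynamic> query = client.CreateGremlinQuery<dynamic>(
            graph, "g.V().hasLabel('person').order().by('firstName', decr)");

        while (query.HasMoreResults)
        {
            foreach (dynamic vertex in await query.ExecuteNextAsync<dynamic>())
            {
                log.Info(vertex.ToString());
            }
        }
    }
}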

Trigger this function and view the results of the Gremlin query in the execution log.


QR Code with Microsoft Dynamics NAV App


An interesting extension of the Microsoft Dynamics NAV Phone app has to do with the ability to retrieve pictures using the device camera, where available. This feature is duly described in MSDN: How to: Implement the Camera in C/AL

If you would then like to scan QR codes and Data Matrix codes in order to get an overview of the item assigned to a code, this can easily be implemented with a process close to the following:

  • Take a picture
  • The picture is saved server side
  • A .NET assembly (QR code reader) scans the QR code and returns the decoded string
  • The decoded string is now actionable (e.g. open the Item card)

How can this be done in practice?

1. Develop your own .NET Assembly to read QR Code

In this example, I have used the well-known ZXING library available on github.com.
See references below for documentation and usage disclaimer:
https://github.com/zxing/zxing
https://www.nuget.org/packages/ZXing.Net

Just create a simple Visual Studio solution that references the zxing and zxing.presentation libraries, according to the target .NET Framework for your solution (e.g. .NET 4.5).

Then code against them to decode an image, as in the following sketch.
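Since the decoding code is shown as a screenshot in the original post, here is a minimal sketch of what such a helper could look like with ZXing.Net; the class and method names are my own, only the ZXing types come from the library.

using System.Drawing;
using ZXing;

namespace QRCodeReader
{
    public class QRDecoder
    {
        // Decodes a QR code / Data Matrix image from disk and returns the text,
        // or an empty string if nothing could be decoded.
        public static string DecodeImage(string imagePath)
        {
            using (var bitmap = (Bitmap)Image.FromFile(imagePath))
            {
                var reader = new BarcodeReader
                {
                    AutoRotate = true,
                    Options = new ZXing.Common.DecodingOptions { TryHarder = true }
                };

                Result result = reader.Decode(bitmap);
                return result == null ? string.Empty : result.Text;
            }
        }
    }
}

The codeunit in step 3 then only needs a DotNet variable for this class and a call to DecodeImage with the path of the picture saved server side.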

2. Deploy your .NET assemblies

Deploy the assemblies both server side and – for development only – client side in the Add-Ins folder. These are the relevant files that you have to place in the Add-Ins folders:

3. Create a QR Code Management codeunit

In the Development Environment, create a new codeunit called "QR Code Management" – or whatever you like – and add the following global function. The DotNet data type runs server side by default.

4. Add QR Code to the Item table

Add a field in Table 27 “Item” called “QR Code” and map that field to both Page 30 “Item Card” and Page 31 “Item List”

5. Code!

Now, it is time to implement the MSDN code to take a picture and store it server side.
Add the following code to the Page 31 "Item List" OnOpenPage trigger

And create an action named “ScanQR” with the following properties and code

Last, but not least, when the PictureAvailable event is fired, implement some code that moves the picture into a specific folder – for better handling – scans the QR code content, and runs Page 30 "Item Card" if the item is recognized.

NOTE: images are never erased from the C:\TEMP folder. Prepare your own batch routine or PowerShell script for an easy clean-up of the directory. If you have deployed an IaaS solution using Microsoft Azure VMs, you might consider moving the images to the temporary storage disk (D:).

Have a good scan!

These postings are provided "AS IS" with no warranties and confer no rights. You assume all risk for your use.
Duilio Tacconi (dtacconi)
Microsoft Dynamics Italy
Microsoft Customer Service and Support (CSS) EMEA

Performance Issues with Visual Studio Team Services in West Europe - 07/21 - Investigating


Initial Update: Friday, July 21st 2017 09:22 UTC

We are investigating performance issues affecting users of the service in the West Europe region. The impact has currently subsided, having self-healed in ~15 minutes; we continue to closely monitor the situation to confirm complete mitigation. Users might have experienced degraded performance or slow page load times.

  • Next Update: Before Friday, July 21st 2017 10:30 UTC

Sincerely,
Vamsi

PowerShell: Convert ConversationHistory from UserCreated to a Default Folder


Time for a new article, this time talking about Conversation History Retention Policies.

I have recently been working with some customers who reported that Retention Policies applied to Conversation History would not get applied. After a lengthy investigation, it appeared the issue was that the "Conversation History" folder was not of type "CommunicatorHistory" but rather "User Created".

This folder is usually created by Lync or Skype for Business the first time the client connects to the mailbox.

 

This is what a broken Conversation History folder looks like:

Get-Mailbox Blog_User | Get-MailboxFolderStatistics | ?{$_.Name -like "Conversation*"} | Select Name, FolderType

Name                                   FolderType
----                                   ----------
Conversation History                   User Created

 

This is what a correctly configured Conversation History folder looks like:

Get-Mailbox Blog_User | Get-MailboxFolderStatistics | ?{$_.Name -like "Conversation*"} | Select Name, FolderType

Name                                   FolderType
----                                   ----------
Conversation History                   CommunicatorHistory

 

What does the script do, then?

The script checks for the presence of the property 0x35E90102 on the Inbox. This MAPI property contains a link to the UniqueId of the Conversation History folder.

When this property is populated with the correct UniqueId, the Conversation History folder appears with the well-known FolderType. Conversely, if this property has an incorrect UniqueId (i.e. the Conversation History folder has been deleted and recreated) or the property is missing (it has never been set or has been accidentally deleted), then the folder appears as User Created.

 

In this very last scenario, Retention Policies targeting the Conversation History would not be correctly applied and items in the folder would not be aged out.

Overall, the work stream is the following:

  1. Locate the EWS Managed API location (overridden if -EWSManagedApiPath "<location>" is specified)
  2. Locate the EWS endpoint (overridden if -EwsUrl "<url>" is specified)
  3. Connect to the target mailbox(es) via EWS (can leverage Owner Logon, Delegate Access or Impersonation)
  4. Retrieve the mailbox folders data (Inbox and Conversation History)
  5. Update the property  0x35E90102 (unless -Simulate is used)
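
For reference, here is a minimal C# sketch of the EWS Managed API calls behind steps 4 and 5; the published script drives the same API from PowerShell, and the URL and credentials below simply reuse the example values shown later in this post.

using System;
using Microsoft.Exchange.WebServices.Data;

class ConvHistoryCheck
{
    static void Main()
    {
        // Inbox property that links to the Conversation History folder:
        // property tag 0x35E9, type 0x0102 (Binary).
        var convHistoryLink = new ExtendedPropertyDefinition(0x35E9, MapiPropertyType.Binary);

        var service = new ExchangeService(ExchangeVersion.Exchange2013_SP1)
        {
            Credentials = new WebCredentials("Admin@Contoso.com", "AdminPassword"),
            Url = new Uri("https://outlook.office365.com/EWS/Exchange.asmx")
        };

        // Step 4: read the Inbox together with the extended property.
        var props = new PropertySet(BasePropertySet.IdOnly, convHistoryLink);
        Folder inbox = Folder.Bind(service, WellKnownFolderName.Inbox, props);

        byte[] currentValue;
        if (inbox.TryGetProperty(convHistoryLink, out currentValue))
        {
            Console.WriteLine("0x35E90102 is present on the Inbox.");
        }
        else
        {
            // Step 5 (what the script does, unless -Simulate is used): stamp the
            // property with the entry id of the Conversation History folder and
            // save it back, e.g.
            //   inbox.SetExtendedProperty(convHistoryLink, entryIdBytes);
            //   inbox.Update();
            Console.WriteLine("0x35E90102 is missing - the folder shows as 'User Created'.");
        }
    }
}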

 

The script can process a list of mailboxes provided either via a CSV file or as comma-separated email addresses, a single mailbox given its primary SMTP address, or the logged-on user's mailbox if the -Mailbox parameter is omitted.

 

How to run it?

This is rather easy.

The script comes with detailed help documentation which can be invoked directly via PowerShell.

Get-Help .Convert-ConvHistoryFromUserCreated.ps1 -Detailed

 

Please note that the "Mailbox" parameter expects an email address; this can be the PrimarySmtpAddress, the UserPrincipalName or the WindowsEmailAddress.

Do not try using an alias or a mailbox GUID as these would not work.

 

Examples? Here you go.

Here an end user runs the script on their own, without admin intervention. The script is run from a domain-joined client, as the logged-on user's email address is retrieved via a query to AD.

.Convert-ConvHistoryFromUserCreated.ps1 -LogFile .LogFile.txt

 

In this example you can see the administrator Admin@Contoso.com is running the script against Test@Contoso.com. At this time Full Access permission is leveraged.

$Cred = Get-Credential
.Convert-ConvHistoryFromUserCreated.ps1 -Mailbox "Test@Contoso.com" -Credentials $Cred -LogFile .LogFile.txt

 

In this other example the administrator Admin@Contoso.com is still running the script against Test@Contoso.com. At this time however Application Impersonation is leveraged.

$Cred = New-Object System.Management.Automation.PSCredential -ArgumentList ('Admin@Contoso.com'), (ConvertTo-SecureString -AsPlainText 'AdminPassword' -Force)
.Convert-ConvHistoryFromUserCreated.ps1 -Mailbox "Test@Contoso.com" -Credentials $Cred -Impersonate -LogFile .LogFile.txt

 

In this other example the administrator Admin@Contoso.com will run the script against a list of users. Again, Application Impersonation is leveraged, as it enables the administrator to access any mailbox in scope without assigning individual mailbox permissions.

$Cred = Get-Credential -UserName Admin@Contoso.com
.Convert-ConvHistoryFromUserCreated.ps1 -Mailboxes "Test@Contoso.com","Other@Contoso.com" -Credentials $Cred -Impersonate -LogFile .LogFile.txt

 

In the last example the logged-on user will run the script against a list of users stored in a CSV file. Again, Application Impersonation is used. Administrative credentials can be provided, as demonstrated in the previous example, if desired. The CSV file may have multiple columns; however, it must contain a column named PrimarySMTPAddress in order for the script to process the mailboxes.

.Convert-ConvHistoryFromUserCreated.ps1 -CSV "C:TEMPUserList.csv" -Impersonate -LogFile .LogFile.txt

 

If you have to troubleshoot the script, you can use the -Verbose switch, which displays additional information on screen. If you wish to investigate EWS failures instead, you can use the -Trace switch, which prints the HTTP requests and responses exchanged between the client and the server.

 

The sample scripts.

Here you can download the script: Convert-ConvHistoryFromUserCreated.

 

The perils of async void



We saw last time that async void is an odd beast, because it starts doing some work, and then returns as soon as it encounters an await, with the rest of the work taking place at some unknown point in the future.

Why would you possibly want to do that?

Usually it's because you have no choice. For example, you may be subscribing to an event, and the event delegate assumes a synchronous handler. You want to do asynchronous work in the handler, so you use async void so that your handler has the correct signature, but you can still await in the function.

The catch is that only the part of the function before the first await runs in the formal event handler. The rest runs after the formal event handler has returned. This is great if the event source doesn't have requirements about what must happen before the handler returns. For example, the Button.Click event lets you know that the user clicked the button, but it doesn't care when you finish processing. It's just a notification.
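
For example, a Button.Click handler that awaits looks something like this sketch; LoadDataAsync and ResultsText are placeholder names, not part of any real API.

async void OpenButton_Click(object sender, RoutedEventArgs e)
{
    // Everything up to the first await runs inside the formal event handler.
    var text = await LoadDataAsync();

    // This line runs at some later point, after the formal event handler
    // has already returned to the event source.
    ResultsText.Text = text;
}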



On the other hand, an event like Suspending assumes that when your event handler returns, it is okay to proceed with the suspend. But that may not be the case if your handler contains an await. The handler has not logically finished executing, but it did return to the event source, because the handler returned at its first await, with the continued execution of the function deferred until the await completes.

Aha, but you can fix this by making the delegate return a Task, and the event source would await on the task before concluding that the handler is ready to proceed.



There are some problems with this plan, though.

One problem with making the event delegate return a Task is that the handler might not need to do anything asynchronous, but you force it to return a task anyway. The natural expression of this results in a compiler warning:



// Warning CS1998: This async method lacks 'await'
// operators and will run synchronously.
async Task SuspendingHandler(object sender, SuspendingEventArgs e)
{
    // no await calls here
}


To work around this, you drop the async keyword and add return Task.CompletedTask; to the end of the function, so that it returns a task that has already completed.
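
In other words, the non-asynchronous handler ends up looking like this sketch:

Task SuspendingHandler(object sender, SuspendingEventArgs e)
{
    // No 'async' keyword and nothing to await, so hand back
    // a task that is already complete.
    return Task.CompletedTask;
}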



A worse problem is that the return value from all but the last event handler is not used.

"If the delegate invocation includes output parameters or a return value, their final value will come from the invocation of the last delegate in the list."

(If there is no event handler, then attempting to raise the event results in a null reference exception.)

So if there are multiple handlers, and each returns a Task, then only the last one counts.

Which doesn't seem all that useful.
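
Here is a small illustration (mine, not from the original example) of why that is: invoking a multicast delegate hands back only the last handler's task.

// Two async handlers subscribed to the same Task-returning delegate.
Func<string, Task> handlers = null;
handlers += async s => { await Task.Delay(200); Console.WriteLine("slow handler saw " + s); };
handlers += async s => { await Task.Delay(10); Console.WriteLine("fast handler saw " + s); };

// Both handlers start, but the Task we get back is the second one's.
Task lastTaskOnly = handlers("hello");
await lastTaskOnly; // only guarantees that the *last* handler has finished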



The Windows Runtime developed a solution to this problem, known as the Deferral Pattern. The event arguments passed to the event handler include a method called GetDeferral(). This method returns a "deferral object" whose purpose in life is to keep the event handler "logically alive". When you Complete the deferral object, that tells the event source that the event handler has logically completed, and the event source can proceed.

If your handler doesn't perform any awaits, then you don't need to worry about the deferral.



void SuspendingHandler(object sender, SuspendingEventArgs e)
{
    // no await calls here
}


If you do an await, you can take a deferral and complete it when you're done.



async void SuspendingHandler(object sender, SuspendingEventArgs e)
{
    var deferral = e.SuspendingOperation.GetDeferral();

    // Even though there is an await, the suspending handler
    // is logically still active because there is a deferral.
    await SomethingAsync();

    // Completing the deferral signals that the suspending
    // handler is logically complete.
    deferral.Complete();
}



The Suspending event is a bit strange for historical reasons.

Starting in Windows 10, there is a standard Deferral object which also supports IDisposable, so that you can use the using statement to complete the deferral automatically when control leaves the block. If the Suspending event were written today, you would be able to do this:



async void SuspendingHandler(object sender, SuspendingEventArgs e)
{
    using (var deferral = e.GetDeferral()) {

        // Even though there is an await, the suspending handler
        // is logically still active because there is a deferral.
        await SomethingAsync();

    } // the deferral completes when code leaves the block
}



Alas, we don't yet have that time machine the Research division is working on, so the new using-based pattern works only for deferrals added in Windows 10. A using-friendly deferral will implement IDisposable. Fortunately, if you get it wrong and try to use using with a non-disposable deferral, the compiler will notice and report an error: "CS1674: type used in a using statement must be implicitly convertible to 'System.IDisposable'".

And that's the end of CLR We... no wait! CLR Week will continue into next week! What has the world come to!?

Here's To Finishing Off A Great Week! Have A Look At The Friday Five.


Understanding the Power of Calendars in Microsoft Project

Dale Howard is the Director of Education for Sensei Project Solutions. He is based in St. Louis, Missouri.  This year, Dale's celebrating 20 years using Microsoft Project, and 17 years using the Microsoft PPM solution. He's the co-author of 21 books on the subjects. He's married with three children and five grandchildren. Follow him on Twitter @DaleHowardMVP.

Team Sites vs. Communication Sites In SharePoint Online

Chris Johnson is a Microsoft MVP in Office 365 and SharePoint development, and has been designing and building solutions on Microsoft platforms for nearly a decade. A skilled architect and speaker, he's presented at the Microsoft Ignite and Nintex InspireX national conferences as well as at local Office 365 and SharePoint events and conferences. Chris is the Director of Business Solutions at PSC Group, LLC, based in Chicago. You can read his blog at chrisjohnson.tech and follow him on Twitter @CJohnsonO365.

MVP Feature: Watch The Game With Like-Minded Sports Fans With FanWide

Symon Perriman is FanWide’s President and Founder, and a Cloud & Datacenter MVP with a focus on virtualization and management. He has held a variety of technical and business leadership roles, including VP of Business Development & Marketing (5nine Software), Chief Architect (ScorchCenter LLC), Senior Technical Evangelist (Microsoft), and Senior Program Manager (Microsoft). He lives in Seattle, and is an active tech advisor, investor & entrepreneur. Follow him on Twitter @SymonPerriman.

Kafka in Industrial IoT

Sumant Tambe is a Sr. Software Engineer at LinkedIn and a Microsoft MVP. Sumant helps run Apache Kafka and the related streaming infrastructure at LinkedIn. He contributes code in open-source Kafka as needed. Sumant’s personal blog is Coditation and he also writes a more focused C++ Truths blog. He wrote More C++ Idioms---a wikibook on 80+ curated techniques for C++ programmers. He has a PhD in CS from Vanderbilt University. Follow him on Twitter @sutambe.

Microsoft Forms. Want To Give It A Try? Here's A Basic Demo

Haaron Gonzalez is a solutions architect and consultant devoted to delivering mission critical solutions for organizations where collaboration, communication and knowledge represent a competitive advantage. He's been recognized as an MVP in ASP/ASP.NET since 2005 and SharePoint Server since 2009. Follow him on Twitter @haarongonzalez.
