
With cloud applications, performance is money


This post is provided by App Dev Manager Reed Robison, who examines the opportunity and impact of cloud application performance.  If you are moving apps to the cloud, it has never been more important to think about performance, and as a developer, the business impact you can have may be profound.


Early in my career, developers were always talking about performance.  Efficient code was something that established credibility, but it was also an integral responsibility of every dev since resources were limited.  Thanks to Moore’s law, most developers don’t lose a lot of sleep over performance these days.  Developer time is a premium expense, so throwing hardware at a problem is commonly the least expensive way to solve a performance problem.  Combined with the pressure to deliver software faster and the ability to update frequently, performance tuning isn’t nearly as cool as it used to be.  In fact, unless you are a game dev or working in a regulated industry, I’m willing to bet that performance is an afterthought for many projects.  In the cloud, this complacency has customers throwing money away, and most don’t even realize it.  If you are moving apps to the cloud, it has never been more important to think about performance, and as a developer, the business impact you can have may be profound.

If it isn’t a current priority, it might not be obvious why you should start caring about performance in the cloud.  Don’t assume that performance and scale are all magically handled for you in this new world.

While every cloud provider offers services to help solutions scale and handle massive amounts of load, they come with a cost.  You are responsible for choosing tiers for all these services, and you pay a premium for high-end workloads (which generally sit on premium hardware).  In the initial move to the cloud, many companies settle on a cloud environment that mimics the hardware they had on premises: lift and shift.  This gets you into the cloud, but it’s only a first step toward reaping the benefits and savings.  To really see those benefits, it’s important to optimize the workloads to take advantage of features that reduce resource overhead.

In order to optimize workloads, you have to understand usage patterns: baselines, benchmarks, capacity limits, etc.  This understanding allows you to establish minimal deployments, scale to meet periods of increased load, and understand how an application behaves when it’s overwhelmed.  Some try to do this in production via monitoring, insights, and telemetry capture, but these are performance testing activities, and doing them in production is risky. Performance data gives you (the developer) insight into the service limits of cloud features so you can architect software to maximize ROI.   If you are building applications that will run in the cloud, you have an incredible opportunity to save money through performance tuning and architectural decisions.

If you quickly skim through everything else in this post, slow down here and consider the following points:

  • Moving to the cloud means you have the ability to pay ONLY for what you use.
  • Architecting for cost can ensure you use only what you need.
  • Performance Tuning can reduce overhead and minimize operational expense.

Not so long ago, it wouldn’t make any sense to re-write an app to be faster without adding features.  Customers will pay for features because they provide new value, but nobody gets a rebate on hardware they already purchased.

In the cloud, you ONLY PAY for what you use.  You are renting resources and can think of consumption like an overhead expense. It means that the more you optimize an application, the less you pay out to run it.  It’s money you instantly save.

Developers make decisions every day, as they write code, about how to programmatically achieve a result.  Today, most of those decisions are not influenced by the cost of any given operation, and that can translate into missed opportunity. For example, using something like Azure Storage in place of SQL Server for simple data needs, or caching data around expensive I/O operations, can have a profound impact on cost. Load-leveling an intense workload with a cheap queue and a low-end server might work just as well as paying for a high-end server if message processing doesn’t need to happen in real time. These are simple examples of decisions that have significant operational consequences in terms of cost.
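
To make the caching point concrete, here is a minimal, hedged PowerShell sketch of the cache-aside idea; the function names and the simulated slow lookup are illustrative placeholders, not code from any particular service.

# Stand-in for an expensive operation (a SQL query, REST call, or blob read).
function Invoke-ExpensiveLookup {
    param([string]$Key)
    Start-Sleep -Seconds 2               # simulate slow, billable I/O
    return "value-for-$Key"
}

$script:Cache = @{}                      # simple in-memory cache

function Get-DataCached {
    param([Parameter(Mandatory)][string]$Key)
    if (-not $script:Cache.ContainsKey($Key)) {
        $script:Cache[$Key] = Invoke-ExpensiveLookup -Key $Key   # pay for the I/O once
    }
    return $script:Cache[$Key]           # repeat calls are served from memory
}

Get-DataCached -Key 'customer42'         # slow: first call hits the expensive path
Get-DataCached -Key 'customer42'         # fast: served from the cache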

Simply put, if you re-write a cloud app today to run on half the resources, that can easily translate into savings of more than 50%.  If you are hosting a solution (SaaS), you can pass those savings on to your customers and be more competitive.  If you are writing applications that power your company, you might save that business a lot of money.

This is all to get to a few key points:

  • Architecting for cost is fundamental to reaping the benefits of cloud transformation.
  • Developers can have a profound impact on the operational expense of a cloud application.
  • If performance is not a priority in your cloud development, you are throwing money away.
  • If you need help, Premier Support can provide performance tuning services and testing resources to help you make the most of your cloud transition.

If you are building apps or moving workloads to the cloud, make sure you are having strategic conversations with your devs about performance and cost. If you are new to the cloud, it might not be your first priority, but it’s critical to stay competitive, minimize your expenses, and realize the full benefits of cloud transformation.


Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.


Generate and validate Flat File native instances from Flat File schemas


Update released for Logic Apps Enterprise Integration Tools for Visual Studio 2015:

With this update it is now possible to generate and validate Flat File native instances from Flat File schemas.

https://www.microsoft.com/en-us/download/confirmation.aspx?id=53016

Do note that, for these to work, if the Flat File schema was created with the BizTalk tools, the extension class in the annotation needs to be updated to match the Logic Apps tools.

<schemaeditorextension:schemainfo xmlns:schemaeditorextension="http://schemas.microsoft.com/BizTalk/2003/SchemaEditorExtensions" standardname="Flat File" extensionclass="Microsoft.BizTalk.FlatFileExtension.FlatFileExtension" namespacealias="b"></schemaeditorextension:schemainfo>

Here, the extensionclass value Microsoft.BizTalk.FlatFileExtension.FlatFileExtension should be replaced with Microsoft.Azure.Integration.DesignTools.FlatFileExtension.FlatFileExtension.

Credits go to Preetham Vinod for the fix, the release, and the extension annotation note.

Issues with MSDN/TechNet pages 3/26 - Investigating


We are experiencing issues with MSDN/TechNet pages. Users might see broken alignment and missing images when viewing https://msdn.microsoft.com or https://technet.microsoft.com.

DevOps teams are actively investigating to mitigate the issue.

We apologize for the inconvenience caused and appreciate your patience.

- MSDN Service Delivery Team

Windows Mixed Reality Headsets


Written by Natalie Afshar

Our previous blog post about Windows Mixed Reality covered lots of products that are free. If you want to give your students a completely immersive world with a Mixed Reality headset, there is a broad ecosystem of options available in the mixed reality space, at various price points.

Mid-Range: Lenovo and Dell

A mid-range option is the Windows Mixed Reality head-mounted displays from Lenovo, Dell, Acer, HP and Samsung. They are occluded, like the other tethered headsets from Vive and Oculus, i.e. they have a screen that you cannot see through, and they also feature two sensors on the front of the device, making them mixed reality tools. These sensors provide inside-out tracking, meaning that the device tracks itself, so you don’t need external power sources, extra setup, or external sensors. You plug two cables into your computer, turn on the motion controllers, and you’re ready to go. There are no age recommendations for these devices.

Premium: Microsoft HoloLens

The Microsoft HoloLens is a see-through, fully untethered device – which means it doesn’t have any cables or need to be plugged into a computer. It projects digital objects into the real world and maps the space you’re in, in real time, allowing shared experiences and instantaneous interactions with others wearing the HoloLens headset. HoloLens doesn’t have a screen, which means that you are looking through a piece of transparent glass and everything is projected into the real world. It must be noted that Microsoft recommends usage of the HoloLens for children aged 13 and above. If you want more information about developing for Microsoft HoloLens, here is a great selection of videos.

There are lots of applications for HoloLens being used in the educational space, such as LifeLiQe, which engages students with interactive 3D models, HoloStudy, HoloAnatomy, which teaches anatomy, Titan, which is an exploration of Saturn’s moon, and MyLab, which is an interactive periodic table.

So how do they fit together?

Both the HoloLens and Mixed Reality headsets from Acer, Dell, HP, Lenovo, and Samsung are part of the same Windows Mixed Reality platform. Apps available on SteamVR are interchangeable from one platform to another.

There are also some great educational apps available on Windows Mixed Reality. Read more about it here.

You can also play Minecraft in VR which builds on the existing educational experiences with Minecraft.

Activity idea for teachers

Get students to create or build something within Minecraft, then extract it from Minecraft and into Mixed Reality Viewer. You can then overlay it on the real world through the web-cam of any device with the latest Windows 10 Fall Creators Update.  Students can see in 3D what they have created, without needing a headset. Read more about Minecraft in Mixed Reality here.

What kind of Mixed Reality apps run on a Windows Mixed Reality headset?

The Windows Mixed Reality platform has its own set of games and apps, and you can also run SteamVR experiences through the Windows Mixed Reality for SteamVR app. This lets you access any of the games and apps from Oculus Rift or HTC Vive that run on SteamVR.  Read more here.

How do you set it up?

It’s actually quite easy to set up. If you already have the Fall Creators Update on your PC, you can set up any of the Mixed Reality headsets in about 10 minutes.

If you’re not sure if you already have the Fall Creators update, you can check your device to see which version of Windows 10 you’re on here.

Windows Mixed Reality just requires a compatible Windows 10 PC and headset, plus the Windows 10 Fall Creators Update; then you download an app from the app store. While the PC requirements might vary, you can see some compatible PCs here.

You can read more about the experiences available here.

You can also read more about the apps and content here.

As an IT administrator, you will need to ensure that all the PCs have the Windows 10 Fall Creators Update. You can then use the readily available Windows 10 apps such as Mixed Reality Viewer and Remix 3D on any device that has the Windows 10 Fall Creators Update, and this will work on a variety of different devices, both new and old. If you want to use a mixed reality headset, you will need compatible PCs, which can be found at the above link.

Microsoft believes the need to create has never been more important. The Windows ecosystem fosters incredible innovation, allowing users to be active creators, experimenting, learning and exploring. These tools make it easier to understand abstract concepts that are difficult to imagine in 3D, such as complex mathematical shapes and vectors or human anatomy, and are useful in a range of educational settings and for a range of age levels.

Read more about the creators update here.

 

Our mission at Microsoft is to equip and empower educators to shape and assure the success of every student. Any teacher can join our effort with free Office 365 Education, find affordable Windows devices and connect with others on the Educator Community for free training and classroom resources. Follow us on Facebook and Twitter for our latest updates.

Using the PowerShell Standard Library


This is the first in a set of posts which will help you take advantage of a new NuGet package, PowerShellStandard Library 5.1.0. This package allows developers to create modules and applications that host the PowerShell engine and are portable between Windows PowerShell version 5.1 and PowerShell Core 6.0. This means that you can create PowerShell modules and applications which run on Windows, Linux, and Mac with a single binary!

In this post, I'll walk through the steps for creating a simple binary module which has a single, (very) simple cmdlet. I will also be using the dotnet cli tools for creating everything I need.

First, we need to create a directory for our new module and create a project from a template:

PS> New-Item -Type Directory myModule

Directory: C:\Users\James\src\pwsh
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 3/26/2018 2:41 PM myModule

PS> Set-Location myModule
PS> dotnet new library
The template "Class library" was created successfully.

Processing post-creation actions...
Running 'dotnet restore' on C:\Users\James\src\pwsh\myModule\myModule.csproj...
 Restoring packages for C:\Users\James\src\pwsh\myModule\myModule.csproj...
 Generating MSBuild file C:\Users\James\src\pwsh\myModule\obj\myModule.csproj.nuget.g.props.
 Generating MSBuild file C:\Users\James\src\pwsh\myModule\obj\myModule.csproj.nuget.g.targets.
 Restore completed in 222.92 ms for C:\Users\James\src\pwsh\myModule\myModule.csproj.

Restore succeeded.

You can see that the dotnet cli has created a source file and .csproj file for my project. Right now they're quite generic, but we can add a reference to the PowerShellStandard Library very easily:

PS> dotnet add package PowerShellStandard.Library --version 5.1.0-preview-02
 Writing C:\Users\James\AppData\Local\Temp\tmp2C8D.tmp
info : Adding PackageReference for package 'PowerShellStandard.Library' into project 'C:\Users\James\src\pwsh\myModule\myModule.csproj'.
log : Restoring packages for C:\Users\James\src\pwsh\myModule\myModule.csproj...
info : Package 'PowerShellStandard.Library' is compatible with all the specified frameworks in project 'C:\Users\James\src\pwsh\myModule\myModule.csproj'.
info : PackageReference for package 'PowerShellStandard.Library' version '5.1.0-preview-02' added to file 'C:\Users\James\src\pwsh\myModule\myModule.csproj'.
PS> get-childitem

Directory: C:\Users\James\src\pwsh\myModule
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 3/26/2018 2:42 PM obj
-a---- 3/26/2018 2:42 PM 85 Class1.cs
-a---- 3/26/2018 2:42 PM 259 myModule.csproj

 

If we inspect the .csproj file, we can see the reference:

PS> Get-Content .\myModule.csproj
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
 <TargetFramework>netstandard2.0</TargetFramework>
 </PropertyGroup>
<ItemGroup>
 <PackageReference Include="PowerShellStandard.Library" Version="5.1.0-preview-02" />
 </ItemGroup>
</Project>

We can now implement our cmdlet in the Class1.cs file

PS> get-content Class1.cs
using System;
using System.Management.Automation;

namespace myModule {
  [Cmdlet("Get","Thing")]
  public class GetThingCommand : PSCmdlet {
    protected override void ProcessRecord() {
      WriteObject("GetThing");
    }
  }
}

We can now build, import the dll (as a module) and run our new cmdlet

PS> dotnet build
Microsoft (R) Build Engine version 15.5.180.51428 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.

Restoring packages for C:\Users\James\src\pwsh\myModule\myModule.csproj...
 Restore completed in 164.19 ms for C:\Users\James\src\pwsh\myModule\myModule.csproj.
 myModule -> C:\Users\James\src\pwsh\myModule\bin\Debug\netstandard2.0\myModule.dll

Build succeeded.
 0 Warning(s)
 0 Error(s)

Time Elapsed 00:00:03.16
PS> import-module .\bin\Debug\netstandard2.0\myModule.dll
PS> get-thing
GetThing

This is a completely portable module; I can load it from PowerShell 5.1 and run it without problem. Of course, we've barely scratched the surface, but you can see how this can get you started!
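
As a quick, hedged way to verify the portability claim (assuming both Windows PowerShell 5.1 and PowerShell Core 6.0 are installed on the same machine), you can import the very same DLL from each shell:

# Windows PowerShell 5.1
powershell -NoProfile -Command "Import-Module .\bin\Debug\netstandard2.0\myModule.dll; Get-Thing"

# PowerShell Core 6.0
pwsh -NoProfile -Command "Import-Module .\bin\Debug\netstandard2.0\myModule.dll; Get-Thing"

Both commands should print GetThing, confirming that the single binary loads in both engines.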

Toward GDPR Compliance for Databases


Microsoft Japan Data Platform Tech Sales Team

佐藤秀和

In a previous post we introduced the new security features of Azure SQL Database. This post stays on the theme of security and compliance and shares information about preparing databases for the GDPR.

What is the GDPR?
In the EU (European Union), a new law on the protection of personal data, the EU General Data Protection Regulation (GDPR), has been enacted and is scheduled to apply from May 25, 2018.

The impact of the GDPR
The GDPR contains requirements such as stronger privacy rights for individuals and stricter data protection obligations. It applies not only to organizations located in the EU but to every organization that does business with the EU, and organizations that fail to comply face severe penalties.

Four steps toward GDPR compliance
To comply with the GDPR, you need to put safeguards in place for the databases that handle personal data and maintain them appropriately over time.
Microsoft recommends a four-step approach to GDPR compliance, and the Microsoft data platform provides a variety of features that help you work through each of these four steps.


Discover – Identify what personal data you manage and where it resides.
• You can search for and identify personal data using queries against tables and catalog views.
• You can use full-text queries against character-based data stored in SQL Server tables.
• You can use the extended properties feature to create data classification labels and apply them to sensitive personal information to support data classification.
• Data Discovery & Classification (new) lets you discover, classify, and label personal data inside the database.

Manage – Control how personal data is used and accessed.
• Use the database's built-in authentication mechanisms to ensure that only authorized users with valid credentials can access the database server. SQL Server provides SQL authentication and integrated security with Windows authentication. Azure SQL Database and SQL Data Warehouse customers can use Azure Active Directory authentication, which also supports multi-factor authentication.
• Apply role-based access control to manage the database's authorization policies and implement the principle of separation of duties.
• Use row-level security to prevent access to rows in a table (including rows that contain sensitive information) based on the characteristics of the user attempting to access the data.
• With Microsoft SQL Server and Master Data Services, you can keep personal data complete and ensure that requests to edit, delete, or stop processing data are reliably reflected across your systems.
• With SQL Server Audit in Microsoft SQL Server and Azure SQL Database auditing in Azure SQL Database, you can review data changes that occurred in SQL Server tables.
• You can use SQL queries and statements to identify and delete the targeted data.
• For character-based data in SQL Server tables, you can use full-text, regular-expression, and general queries to identify and export the targeted personal data.

Protect – Establish security controls that prevent, detect, and respond to vulnerabilities and data breaches.
• Protect personal data at rest by encrypting it at the physical storage layer with Transparent Data Encryption (TDE); a small Azure PowerShell sketch follows this list.
• Use Always Encrypted to prevent highly privileged but unauthorized users from accessing data in transit, at rest, and in use.
• Protect personal data with row-level security and dynamic data masking.
• Use authentication to ensure that only authorized users with valid credentials can access the database server. For SQL Server, integrated Windows authentication is recommended; for SQL Database and SQL Data Warehouse, Azure Active Directory Multi-Factor Authentication is recommended.
• Enterprises that use Always On availability groups can maximize the availability of groups of user databases.
• SQL Database Threat Detection for Azure SQL Database and Azure SQL Data Warehouse detects anomalous database activity that may represent a security threat to the database.
• With SQL Server Audit in SQL Server and Azure SQL Database auditing in Azure SQL Database and Azure SQL Data Warehouse, you can understand ongoing database activity and analyze and investigate the activity history to identify potential threats, suspected abuse, and security violations.
• The vulnerability assessment service (new) for Azure SQL Database or SQL Server scans your databases for insecure configurations, externally accessible surface area, and other potential security issues.
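
As a small illustration of the encryption-at-rest item above, here is a hedged Azure PowerShell (AzureRM module) sketch that enables Transparent Data Encryption on an Azure SQL Database; the resource group, server, and database names are placeholders.

# Enable Transparent Data Encryption (TDE) on an existing Azure SQL Database.
Set-AzureRmSqlDatabaseTransparentDataEncryption `
    -ResourceGroupName 'my-rg' `
    -ServerName 'my-sqlserver' `
    -DatabaseName 'MyDatabase' `
    -State Enabled

# Verify the current TDE state of the database.
Get-AzureRmSqlDatabaseTransparentDataEncryption `
    -ResourceGroupName 'my-rg' -ServerName 'my-sqlserver' -DatabaseName 'MyDatabase'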

The protection topics above are also summarized in this article, so please refer to it as well.

Report – Retain the required documentation (reports), respond to data requests, and provide notification when a breach occurs.
• With SQL Server Audit in SQL Server and Azure SQL Database auditing in Azure SQL Database and Azure SQL Data Warehouse, you can retain audit trails.
• To achieve redundancy and implement an effective disaster recovery strategy, you can replicate data across the data centers you need using built-in disaster recovery features such as active geo-replication and geo-restore. These disaster recovery plans can also be implemented for SQL Server databases running on multiple Azure virtual machines.
• Vulnerability assessment reports can be used as a security assessment tool as part of a Data Protection Impact Assessment (DPIA).
• For Azure SQL Database, Azure Data Catalog can provide insight into data processing and support the creation of a DPIA.
• Azure SQL Database auditing (for Azure SQL Database) and SQL Server Audit (for SQL Server) can provide information that is useful for carrying out a DPIA.

GDPR-related resources
More detail on preparing databases for GDPR compliance, along with examples of Microsoft's own efforts, is collected in the following white paper.
• Guide to enhancing privacy and addressing GDPR requirements with the Microsoft SQL platform
https://docs.microsoft.com/ja-jp/sql/relational-databases/security/microsoft-sql-and-the-gdpr-requirements

Information about Microsoft's broader approach to the GDPR is gathered on the following sites.
• Microsoft Trust Center
https://www.microsoft.com/ja-jp/trustcenter
• Microsoft Trust Center - Microsoft SQL Server
https://www.microsoft.com/ja-jp/trustcenter/cloudservices/sql/GDPR

 

Make the Most of Microsoft Teams!


Microsoft Teams for Education

Microsoft Teams is a digital hub that brings conversations, content, and apps together in one place. Educators can create collaborative classrooms, connect in professional learning communities, and communicate with school staff – all from a single experience in Office 365 for Education.

The following videos break down how to get started with Microsoft Teams in your school, how to use OneNote Class Notebook in conjunction with Teams, and how to embed external content into Microsoft Teams PLUS all the courses and resources available on the Microsoft Educator Community that will help you and your colleagues learn more about Teams.

Introduction to Microsoft Teams

This session, led by Microsoft Learning Consultant Megan Townes, breaks down how to get started with Microsoft Teams in your classroom.

Complete the free online introduction to Microsoft Teams Course here

Using OneNote Class Notebook within Microsoft Teams

This session breaks down how to use OneNote Class Notebook in conjunction with Microsoft Teams.

Complete the free online OneNote Class Notebook Course here

Embedding External Content and the MEC

This session breaks down how to get started with embedding external content into Microsoft Teams and all the courses and resources available on the Microsoft Educator Community.

Visit the Microsoft Educator Community homepage here

 

Our mission at Microsoft is to equip and empower educators to shape and assure the success of every student. Any teacher can join our community and effort with free Office 365 Education, find affordable Windows devices and connect with others on the Educator Community for free training and classroom resources. Follow us on Facebook and Twitter for our latest updates.

 

Sharing TDE Encrypted Backup outside the organisation.


In order to share the TDE Encrypted Database backup with somebody outside the organisation, the below steps can be followed.

++ Restore the backup into a new temporary database in order to prepare a make-shift copy of the intended database.

RESTORE DATABASE MyEncryptedDB_Temp FROM DISK = N'C:\Temp\MyEncryptedDB.bak' WITH
MOVE N'MyEncryptedDB' TO N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\MyEncryptedDB_Temp.mdf',
MOVE N'MyEncryptedDB_log' TO N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\MyEncryptedDB_Temp_log.ldf';
GO

++ Disable TDE on the new temporary database in order to reset the certificate and the DEK and then re-enable it with the new ones.

ALTER DATABASE MyEncryptedDB_Temp SET ENCRYPTION OFF;
GO

++ Remove Original Database Encryption Key

USE MyEncryptedDB_Temp;
GO
DROP DATABASE ENCRYPTION KEY;

++ Create a new certificate to encrypt temporary database

USE master;
GO
CREATE CERTIFICATE MyTempTDECert
WITH SUBJECT='Certificate for encrypting temporary DB';
GO

++ Create new database encryption key

USE MyEncryptedDB_Temp;
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE MyTempTDECert;

++ Backup the new temporary certificate

USE master;
GO
BACKUP CERTIFICATE MyTempTDECert
TO FILE = 'C:\temp\MyTempTDECert.cer'
WITH PRIVATE KEY (file='C:\temp\MyTempTDECert.pvk',
ENCRYPTION BY PASSWORD='Provide Strong Password for Backup Here');

++ Enable TDE on the temporary Database. This time the database will pick the newly created certificate for encryption.

ALTER DATABASE MyEncryptedDB_Temp SET ENCRYPTION ON;
GO

++ Backup the new temporary database

BACKUP DATABASE MyEncryptedDB_Temp TO DISK = N'C:\temp\MyEncryptedDB_Temp.bak';

++ Provide outside organization with database backup, certificate backup, and private key backup files.

C:\temp\MyEncryptedDB_Temp.bak
C:\temp\MyTempTDECert.cer
C:\temp\MyTempTDECert.pvk
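
On the receiving side, the restore might look like the hedged sketch below (using Invoke-Sqlcmd from the SqlServer PowerShell module; the instance name, file paths, and password are placeholders and should be adjusted to the target environment):

# Run against the receiving SQL Server instance: re-create the certificate from the
# .cer/.pvk pair, then restore the shared backup.
Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
USE master;
CREATE CERTIFICATE MyTempTDECert
    FROM FILE = 'C:\temp\MyTempTDECert.cer'
    WITH PRIVATE KEY (FILE = 'C:\temp\MyTempTDECert.pvk',
                      DECRYPTION BY PASSWORD = 'Provide Strong Password for Backup Here');

RESTORE DATABASE MyEncryptedDB_Temp
    FROM DISK = N'C:\temp\MyEncryptedDB_Temp.bak'
    WITH MOVE N'MyEncryptedDB' TO N'C:\temp\MyEncryptedDB_Temp.mdf',
         MOVE N'MyEncryptedDB_log' TO N'C:\temp\MyEncryptedDB_Temp_log.ldf';
"@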

++ Now that we have achieved the required outcome of sharing the TDE encrypted database outside the organisation, we can clean up the make-shift temporary database and certificate.

EXEC msdb.dbo.sp_delete_database_backuphistory @database_name = N'MyEncryptedDB_Temp';
GO
USE master;
GO
DROP DATABASE MyEncryptedDB_Temp;
DROP CERTIFICATE MyTempTDECert;

 

Hope this helps !! Happy Sharing !!


Secure communication and work management with Microsoft Kaizala

Co-authored by Nitu Narula

In today’s mobile-centric world, instant messaging apps and chat platforms have disrupted business communication. However, when workers communicate with teammates over public mobile messaging platforms, businesses have no control over security protocols. They cannot control dissemination of sensitive data in chat groups, control group access on an individual level or verify the identity of group members. Storing data in local servers to comply with local data protection regulations and erasing data on employees’ private devices is also not possible on these platforms.

In essence, while these platforms have made it easier for teams to communicate, they also present a number of security challenges:

  • Maintaining data ownership
  • Managing and getting visibility on users
  • Preventing data leakage when an employee leaves an organization
  • Data residency compliance

Although mainstream chat platforms and traditional enterprise security solutions might be able to address cyber attacks and malware, they are unable to meet deeper business security and local compliance requirements. When we created Kaizala for real-time business communication and coordination, our focus was on securing data in the increasingly mobile-centric business landscape. To enable communication in a secure manner, crucial data needed to be protected not just from external attacks, but also from internal points of failure, device-level security gaps, and risks of human error.

Our team at Microsoft thus took a broader view of enterprise data security and ingrained comprehensive security and compliance features that protect business interactions on Kaizala at a deeper level. Here’s how Kaizala was built to be secure by design:

Compliance framework

Kaizala is built on Office 365 trust and security principles. The Office 365 compliance framework details the protocols that need to be followed while handling user and customer data. Kaizala is placed in Category A for high compliance with the framework, which means it has a strong commitment to privacy, doesn’t mine customer data for advertising purposes, and doesn’t voluntarily disclose private data to law enforcement agencies.

A range of compliance features augment Kaizala’s data protection:

  • Data is stored locally in Azure data centers to meet local data regulatory requirements. Data in messages, attachments and contacts details from Indian phone numbers, for example, is stored locally in Microsoft Azure data centers located in India. This feature is exclusive to Indian phone numbers.
  • Organization groups on Kaizala have access to a consolidated Tenant User List (TUL) that includes all users who are members of the tenant’s organization groups. Admins can request specific data for each user on a group, easily identifying them and instantly revoking access for any unauthorized users.
  • When access is revoked and a user is removed from an organization’s group, Kaizala automatically erases all the group’s data from the client device. This is a unique feature that helps secure company data when a former employee or unauthorized user is removed from an organization group.

These unique features help Kaizala go beyond the standard compliance requirements and provide augmented data protection to businesses, employees, and customers.

Security features

Kaizala resides on the Office 365 & Microsoft Azure cloud platforms, which are industry leaders in cloud security. Our team’s experience of working with large corporations as well as small enterprises has enabled us to create a highly sophisticated cloud platform that handles data responsibly.

Security and privacy features are woven into every step of the design process. Security development lifecycle (SDL) helps our engineers and designers address and resolve security concerns from the initial planning stages to the eventual launch of a product. We have created detailed diagrams for data flow and threat models to help identify security concerns while designing Kaizala.

Multiple layers of security make the service platform secure at every level, from data to physical security.

Automating processes and monitoring enhances security by eliminating human intervention. The system constantly scans for anomalies and abnormal behavior to identify threats, malware, and bugs. Anti-malware software is baked into the system, and the automated process of identifying threats makes it more secure and agile. Meanwhile, users are offered sophisticated tools to monitor activities on the platform. They can control access to every aspect of their data, prevent unauthorized access, and confirm new updates and patches to the software securely.

Our team works with an "Assume Breach" mindset. This means engineers and developers working on Kaizala assume that a data breach has already occurred, and their work is crucial in resolving the issue as early as possible. With this mindset, our team has built four pillars of security into the Kaizala system:

  • Prevent: A system of continuous monitoring and incremental improvements help keep Kaizala secure. Tools such as multi-factor authentication and DDoS detection help mitigate the risk of an attack on data.
  • Detect: A massive internal analysis system collects and collates security alerts. Machine learning is applied to these external and internal alerts to improve the way the system detects threats.
  • Respond: The system automatically responds to a data breach by shutting off access to sensitive data and applying tools to resolve the issue.
  • Recover: Once the threat has been resolved, the system updates the necessary components and brings the service back online for regular use.

Privacy and encryption

Data encryption at rest helps us secure data flowing across the Kaizala network. FIPS 140-2 (Federal Information Processing Standard) compliant hashing algorithms, approved by the Microsoft Crypto Board, are used to encrypt data and manage cryptographic keys. We regularly scan the source code to ensure long-term data security.

Communication across the Kaizala platform is carried out over the HTTPS protocol. Only TLS 1.2 is enabled, which exceeds basic standards for security. We go a step further and configure TLS 1.2 to disable TLS compression and insecure renegotiation.

Centralized secret management tools help provide discreet access to admins in the Office Services Infrastructure environment. All user data is stored on the device’s internal memory and quarantined from other apps and external storage devices through Application Sandbox.

Group security on Kaizala goes a step further. One Time Passwords (OTPs) and Azure Active Directory services are used to verify each user and allow admins to control who joins a group.  Group admins can ask for more identity details from each user and manage the way files and messages are shared, forwarded, or copied between members.

Deeper security management with Microsoft Intune

Admins can use Microsoft Intune for advanced management of Kaizala. The cloud-based mobile device management (MDM) and mobile application management (MAM) solution allows admins to control data on groups more thoroughly.

Intune enables unparalleled control of the Kaizala ecosystem. Admins can implement the need for a mobile PIN, restrict data transfer, block screen sharing, and even wipe off data on the group at regular intervals. Businesses using Intune alongside Kaizala can combine seamless communication with the highest level of data control. With these features, single-use devices can be created for an entire pool of employees. Admins retain access control for specific devices and users, which allows the team to bring their own devices to work and connect to the system without compromising data security.

Earning trust with a relentless commitment to security

The importance of security in everyday business communication cannot be overstated. With customers entrusting us with their data, building security into our apps by design is not only a vital imperative but a relentless commitment.

Kaizala has been engineered from the ground-up with compliance, security, encryption, and privacy features that set it apart from other enterprise-focused apps. The level of security built into the app by design exceeds industry standards. This is how stakeholders such as employees, customers and partners can communicate through the platform with peace of mind knowing that their most valuable asset remains protected with industry-leading best-practices.


    Invitation to the Global Azure Bootcamp Conference in Prague


    Another year has gone by, and as usual it has brought plenty of interesting news. I would like to invite you to the Global Azure Bootcamp 2018 conference, which takes place in Prague this year and which I am organizing together with Robert Haken (HAVIT).

    Together, and with the help of leading Czech experts on the Microsoft Azure platform, we have put together a program that will give you a look at the most interesting Azure services, bring you up to date on the latest news, and include talks full of practical tips and hands-on experience.

    Speakers who have accepted our invitation to present at Global Azure Bootcamp include Jiří Činčura, David Gešvindr, Tomáš Herceg, and experts from Microsoft and from leading DEV/IT companies in the Czech Republic.

    Topics

    • Overview of the Microsoft Azure platform
    • What's new in Azure PaaS
    • Azure Cosmos DB
    • Azure SQL
    • Experience with Azure CDN
    • Experience with migrating to Azure
    • Azure Bot Service
    • Power BI and open data
    • Kubernetes
    • Azure IoT Hub
    • Other alternative talks, not only about Azure

    Pre-Day Workshop

    Marek Chmel, Vladimír Mužný, and Michal Marusan will kick off GAB 2018 in style on Friday at 10 a.m. with the workshop Utilizing Azure Services for Data Science and Machine Learning. Seats for this workshop are limited and a separate registration is required.

    All the information and conference registration can be found on the official Global Azure Bootcamp 2018 conference page.

    Miroslav Holec
    March 27, 2018

    Running an ASP.Net Core 2.0 web application on a Windows Azure Virtual Machine Scale Set (VMSS)


    Introduction

    An Azure Virtual Machine Scale Set (VMSS) is a group of virtual machines (VMs) that run identical software. Scale sets are great for workloads that require high scalability and availability, and they are a natural fit whenever you need a set of identical VMs. A common example of such a workload is a stateless web front end or web API. VMSS strikes a good balance between the IaaS and PaaS models: you can still perform infrastructure operations, such as connecting to an individual VM instance over remote desktop, while provisioning the surrounding platform components such as the virtual network (VNet), load balancer, and public IP address.

    Just like individual virtual machines, a VMSS can use either Windows or Linux as its OS.

    ASP.Net Core 2.0 is an open-source, cross-platform .NET framework for building modern web applications. It provides a unified platform for building web UIs and APIs, and applications built with it can be hosted on IIS, Nginx, Apache, or Docker. This flexibility makes it an ideal choice for applications hosted on Azure.

    This post covers creating and running an ASP.Net Core 2.0 web application on a Windows VMSS, along with common issues (and their resolutions) encountered while deploying ASP.Net Core 2.0 applications to VMSS.

    Sample application code is available on GitHub at https://github.com/Mahesh-MSFT/VMSSDemo

    Deployment options

    VMSS supports 2 main deployment options. They are –

    1. VM Extension
    2. Custom VM Image

    Many developers are already familiar with VM Extension approach for post-production operations on individual VMs. It offers a great automation experience using tools such as DSC, Chef and Puppet. This same experience can be extended to VMSS as well.

    Creating a custom VM image is a more elaborate exercise than creating a VM extension. This approach is mostly used by IT teams to create "golden" copies of specialized software and to tick a checkbox in their BCDR checklist.

    The choice of option makes a difference in how quickly scaling can be performed. For example, with a VM extension, every new VM added to the VMSS has to wait for the extension to be provisioned and become active before it can take traffic, which makes scale-out operations slower. On the other hand, although creating a custom image can be an arduous task, creating a new VM from a custom image and adding it to the scale set is very fast.

    This post uses the custom VM image option to create new VMs from that image and add them to the scale set.

    Creating the Application

    ASP.Net Core 2.0 has an impressive list of features that make a web application richer. This post uses the Razor page handler feature to display the current web server name by making an AJAX call. This ties back nicely with VMSS, where the application will be running on all the VMs that make up the scale set: it will simply display the name of the web server that is serving the page.

    To create an ASP.Net Core 2.0 application, click File > New in Visual Studio. I am using Visual Studio 2017 in this post, but equivalent steps can be followed in other editors such as VS Code.

    In the subsequent dialog box, choose ASP.Net Core 2.0 as the framework and select "Web Application" as the template.

    The resulting application will have a structure as outlined in the "Solution Explorer" window below.

    In Index.cshtml, I add an AJAX call to the OnGetServerData handler.


    To add interactivity to the page, I call this function every second using JavaScript.

    The server-side handler OnGetServerData simply returns the name of the web server and the current timestamp.

    When running locally, the application loads with a home page like the one below.

    Setting up CI/CD workflow - Build

    Deploying any application to Azure should be automated using a CI/CD workflow. This helps improve availability, since no manual intervention is involved. Deploying an ASP.Net Core 2.0 application to VMSS involves the following main steps.

    1. Build web application and package it
    2. Provision Web Server (IIS)
    3. Install ASP.Net Core 2.0
    4. Deploy application using WebDeploy
    5. Configure application
    6. Create a VMSS
    7. Update VMSS (If already created)

    This post covers how each of the above steps is automated using Visual Studio Team Services (VSTS). All of these steps can be automated equally easily using other CI/CD tools such as Jenkins or TeamCity.

    Build web application and package it

    VSTS offers a pre-packaged build step for ASP.Net Core application as highlighted below.

    This template provides a standard 4 step process as shown below to build and publish the application.

    Provision Web Server (IIS)

    This step starts preparing the destination server. This post shows how to provision an IIS server; similar steps can be used to provision other web servers such as Nginx or Apache.
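
    As a minimal, hedged sketch (assuming Windows Server with the ServerManager module available), IIS can be provisioned with a couple of lines of PowerShell:

    # Install IIS together with the management tools.
    Install-WindowsFeature -Name Web-Server -IncludeManagementTools

    # Optionally add WebSocket support, which some ASP.Net Core workloads use.
    Install-WindowsFeature -Name Web-WebSockets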

    Install ASP.Net Core 2.0

    There are two components that need to be installed for IIS on Windows servers.

    1. .NET Core 2.0 SDK
    2. .NET Core 2.0 Windows Server Hosting bundle

    The following PowerShell script illustrates automating both steps. Note that the download path and installer file name change over time; the script below uses the download path and installer file name as of Feb-18.
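
    The sketch below is a hedged illustration of that automation; the download URLs are placeholders (substitute the current official links, which change over time, as noted above), and /quiet /norestart are the typical silent-install switches for these installers.

    # Download and silently install the .NET Core 2.0 SDK and the Windows Server Hosting bundle.
    $sdkUrl     = 'https://example.com/dotnet-sdk-2.0-win-x64.exe'          # placeholder URL
    $hostingUrl = 'https://example.com/DotNetCore.2.0-WindowsHosting.exe'   # placeholder URL

    Invoke-WebRequest -Uri $sdkUrl     -OutFile "$env:TEMP\dotnet-sdk.exe"
    Invoke-WebRequest -Uri $hostingUrl -OutFile "$env:TEMP\dotnet-hosting.exe"

    Start-Process "$env:TEMP\dotnet-sdk.exe"     -ArgumentList '/quiet /norestart' -Wait
    Start-Process "$env:TEMP\dotnet-hosting.exe" -ArgumentList '/quiet /norestart' -Wait

    # Restart IIS so it picks up the ASP.Net Core Module installed by the hosting bundle.
    net stop was /y
    net start w3svc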

    Deploy application using WebDeploy

    The earlier "Build web application and package it" step creates a deployment package in the form of a single zip file. This zip file can be deployed to the IIS server using WebDeploy. A PowerShell script to download the WebDeploy installer and install it is shown below.
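
    As a hedged sketch of that script (the download URL, install switches, and package path are placeholders; the exact msdeploy arguments depend on how the package was generated):

    # Download and silently install Web Deploy (MSDeploy).
    $webDeployUrl = 'https://example.com/WebDeploy_amd64_en-US.msi'   # placeholder URL
    Invoke-WebRequest -Uri $webDeployUrl -OutFile "$env:TEMP\WebDeploy.msi"
    Start-Process msiexec.exe -ArgumentList "/i `"$env:TEMP\WebDeploy.msi`" /qn ADDLOCAL=ALL" -Wait

    # Deploy the zip package produced by the build (msdeploy.exe ships with Web Deploy).
    & "$env:ProgramFiles\IIS\Microsoft Web Deploy V3\msdeploy.exe" `
        '-verb:sync' `
        '-source:package=C:\deploy\VMSSDemo.zip' `
        '-dest:auto'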

    Configure application

    When the web application is deployed using WebDeploy, it lands as a "Virtual Directory" and should be converted to a "Web Application". The script below shows how to do that.
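
    A hedged sketch of that conversion, using the WebAdministration module (the site and folder names are placeholders):

    # Convert the deployed virtual directory into an IIS web application.
    Import-Module WebAdministration
    ConvertTo-WebApplication 'IIS:\Sites\Default Web Site\VMSSDemo'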

    A consolidated script automating the steps discussed in the earlier sections is available here. It should be checked in as part of the source code for use in the build process. The updated build pipeline looks as shown below. Note that a Copy Files task copies the script from the source folder to the artifact staging directory, which is finally published as build output in the "Publish Artifact" step.


    Setting up CI/CD workflow - Release

    After the build artifacts are generated, it's time to deploy them. VSTS offers an "Azure Virtual Machine Scale Set Deployment" task that can pick up build artifacts and deploy them to a VMSS.

    Deployment Process Task

    This task uses a Packer template. After adding this task, configure it as shown below.

    Key values to be configured are –

    1. Packer Template: This should have value "Auto Generated". If you are authoring a custom template, it can be used as well. This post uses "Auto Generated".
    2. Azure Subscription: A VSTS Service endpoint for Azure.
    3. Storage Location: Pick from the Drop-down box.
    4. Storage Account: Pick from a list that appears after providing the Storage Location. Ensure that it exists before configuring it here, and that it is of the "general purpose" kind.
    5. Resource Group: Resource Group where the VMSS will be created. It should be the same as the storage account's resource group.
    6. Deployment Package: Enter path to deployment package obtained after Build process.
    7. Deployment Script: Path to script in Deployment Package.

    Build immutable machine image Task

    This task uses the Packer template configured in the step above and adds a few more parameters, as discussed below.

    Key parameters to be configured are discussed below.

    1. Base Image Source: This value can be "Gallery" or "Custom". This post uses "Gallery", which means the OS base image will be fetched from the Azure Marketplace.
    2. Base Image: Choose the Windows version.
    3. Additional Builder Parameters: Packer uses a default VM size of Standard_D3_v2 to create a VM image. This VM series may not be available in some regions. Packer's default configuration value can be changed by passing an additional builder parameter as shown above. Here a "vm_size" parameter with a value of "Standard_A2" is passed.
    4. Image URL: This variable holds the blob storage location where the actual image will be created. By using a variable, its value can be used in subsequent steps.

    Create a VMSS

    Once the immutable image is created in the above step, a script can be invoked to either create or update the VMSS. The script is available in the source code (createorupdatevmss.ps1).

    In essence, this script calls the following PowerShell cmdlets (a simplified sketch of the create-or-update logic follows the list) –

    1. Update-AzureRmVmss -ResourceGroupName $rgname -Name $vmssName -VirtualMachineScaleSet $vmss (IF VMSS already exists!)
    2. New-AzureRmVmss -ResourceGroupName $rgname -Name $vmssName -VirtualMachineScaleSet $vmss -Verbose; (IF VMSS doesn't already exist!)
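
    A simplified, hedged sketch of that create-or-update decision ($rgname, $vmssName, and $vmss are assumed to be built earlier in the script from the location, resource group, and image URL arguments):

    # Look for an existing scale set; suppress the error if it is not there yet.
    $existing = Get-AzureRmVmss -ResourceGroupName $rgname -VMScaleSetName $vmssName -ErrorAction SilentlyContinue

    if ($existing) {
        # Scale set exists: push the updated configuration (new custom image) to it.
        Update-AzureRmVmss -ResourceGroupName $rgname -Name $vmssName -VirtualMachineScaleSet $vmss
    }
    else {
        # Scale set does not exist yet: create it from the same configuration object.
        New-AzureRmVmss -ResourceGroupName $rgname -Name $vmssName -VirtualMachineScaleSet $vmss -Verbose
    }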

    The key parameter value above is "Script Arguments". Apart from the resource location and resource group name, the image URL, which holds the image location, is passed as an argument.

    When this script finishes running, a VMSS is created if it didn't exist before, or it is updated with the new image if it already existed.

    Common troubleshooting

    Below are some common troubleshooting techniques to overcome issues that may surface with the approach described above.

    Enable Packer Logging

    Packer does most of the processing to create images. Without proper debugging assistance, it's difficult to identify what is happening and what is potentially going wrong.

    The image below shows how to enable Packer logging.

    Some common examples of issues that can be captured by enabling Packer logging are –

    1. VM Size not supported in a region.
    2. Site doesn't exist
    3. The supplied password must be between 8-123 characters long and must satisfy at least 3 of password complexity requirements

    To identify the exact error description, download the build or release log file and search for the word "error".

    Use General Purpose Storage account to store VMSS image

    Ensure that the storage account used to hold the VMSS image is of type "General Purpose".

    Use Custom Image for VMSS

    Ensure that when you run the above script, the VMSS (with a Marketplace image) has not been created beforehand using the portal. The approach described above works with a VMSS created from a custom image. If you start with a pre-created VMSS that uses a Marketplace image, you'll get the following error.

    scaleset The property 'uri' cannot be found on this object. Verify that the property exists and can be set

    Use a private VSTS agent over hosted VSTS agent

    The approach explained above uses the VSTS hosted agent, which is the default VSTS agent. Note that this agent has a default timeout setting of 30 minutes, and creating a VMSS image and applying it to a new or existing scale set can easily take more than 30 minutes. Depending on the point at which the build or release process times out, you may encounter an error.

    A workaround is to manually run the last script by passing parameter values.

    vmsscreateorupdate.ps1 CreateOrUpdateScaleSet -loc '<your azure region>' -rgname '<your rg>' -imageurl "<your Image VHD Location>"

    Doing More With Functions: Taking Parameters on the Pipe


    In an earlier post, I showed you how you could use the [parameter(mandatory)] attribute to force your parameters to behave  a bit more like you'd expect from other languages. We also have a bunch of other useful attributes we can use on our parameters to enable cool features.

    Pipelining

    The pipe might feel pretty magical to you in PowerShell, as it just seems to work with our built-in cmdlets. You can add this same kind of functionality to your own tools, and we give you two options to do so.

    First of all, here is what happens if you don't use this attribute:

    function PipeValueTest
    {
        param($p1) #no pipe specified
        Write-host "$p1 was received" -ForegroundColor Green
    }
    "hello" | PipeValueTest #pipe will fail to grab data
    
     was received
    

    Pipeline By Value

    When we talk about stuff coming in "by value" we mean that you've told PowerShell which specific parameters can take data off the pipe when the values we're given match their type (or are convertible to their type, which can cause some weird behavior). We can do this using the "ValueFromPipeline" option in our [parameter()] attribute.

    Let's see a simple example in action and then see some of the things we need to be aware of:

    function PipeValueTest
    {
        param([parameter(ValueFromPipeline)]$p1) #no data type to care about
        Write-host "$p1 was received" -ForegroundColor Green
    }
    
    PipeValueTest -p1 "hi" #we can still use it normally
    "hello" | PipeValueTest #now we can pipe it
    

    Let's try adding a data type and seeing how this behaves as well:

     
    function PipeValueTest
    {
        param([parameter(ValueFromPipeline)][int]$p1) #integers only
        Write-host "$p1 was received" -ForegroundColor Green
    }
    
    1 | PipeValueTest #pipe an int
    "5" | PipeValueTest #pipe a string we can convert to int
    "frank" | PipeValueTest #pipe a data type we aren't expecting and can't convert
    

    Notice that PowerShell has a conversion step, where it tried to convert what it saw on the pipe to what we were asking for. This step is generally there for convenience, but occasionally could cause problems if it converts data in a way you didn't intend. When we gave it an easy string "5" we were all set, but when we gave it "frank" it let us know it had no idea what to do with a string on the pipe that it couldn't convert. (0 is the default value for an undefined integer in PowerShell if you're wondering why we got that back after the error).

    This conversion step can cause some odd problems with multiple parameters that use pretty basic types like string or int:

     
    function PipeValueTest
    {
        param([parameter(ValueFromPipeline)][int]$intP,
              [parameter(ValueFromPipeline)][string]$stringP)
        Write-host "Int: $intP was received" -ForegroundColor Green
        Write-host "String: $stringP was received" -ForegroundColor magenta
    }
    
    1 | PipeValueTest #pipe an int -- uh oh conversion!
    "5" | PipeValueTest #pipe a string we can convert to int -- uh oh
    "frank" | PipeValueTest #pipe a string we can't convert to an int
    

    So the moral of the story is that when you're using "By Value" you need to be careful and test. Usually you just flag your default parameter to take in stuff by value for each parameter set to avoid these situations. We do have another option for receiving data on the pipe, which gives us a lot of flexibility.

    Pipeline By Property Names

    If we ever receive something of a type we didn't flag as "by value", or that fails the conversion step, PowerShell can actually check whether the object received has properties that match our parameters. If an object property and a parameter's data type match, the parameter takes that value, or tries to convert it before ignoring it or raising an error. A cool way to see this already happening is to run the following:

    get-service ALG | get-process #this will error
    

    Now that line of code doesn't make any sense, but the error is the interesting part:

    get-process : Cannot find a process with the name "ALG". Verify the process name and
    call the cmdlet again.

    How did get-process know to look for a process named ALG? Get-Process doesn't have a single parameter expecting a service object. Well, our service objects have a "Name" property, and our Get-Process cmdlet has a -name parameter. Let's take a look at the help for it.

    Notice that it shows "Accept pipeline input? TRUE (ByPropertyName)".

    1. It received a service object it wasn't expecting
    2. It failed conversion to something it was expecting
    3. It checked the parameters flagged "ByPropertyName" against the service object properties and pulled out the name "ALG"
    4. If "ALG" weren't a string it would have tried to convert it

    Let's add this to our own code, so we can take in some custom objects on the pipe.

    function PipePropertyTest
    {
        param([parameter(ValueFromPipelineByPropertyName)][int]$intP,
              [parameter(ValueFromPipelineByPropertyName)][string]$stringP)
        Write-host "Int: $intP was received" -ForegroundColor Green
        Write-host "String: $stringP was received" -ForegroundColor magenta
    }
    
    $obj = [pscustomobject]@{
        intP = 5
        stringP = "hello"
        testP = "test"
    }
    
    $obj | PipePropertyTest
    

    This is really neat because it lets you make a suite of tools that all output a custom object type, and then you can pipe them into each other to get the same kind of "magic" functionality that the built-in tools have (i.e. Get-Service, Stop-Service, Restart-Service, etc.). It also gives you the really nice ability to just read in a CSV with Import-Csv, then pipe the results into your code as long as your parameters match the headers. I'll do a series of posts in the future about working with different data types if you're unfamiliar with this approach.

    Aliases

    Just like we can assign aliases to cmdlets, we can assign aliases for our individual parameters. This actually already exists in some places:

    get-process FAKE1
    get-process FAKE2 -ErrorAction SilentlyContinue
    get-process FAKE3 -EA SilentlyContinue #-ea is an alias for -erroraction
    

    To assign aliases to your own parameters, we use another attribute with the nice friendly name [alias()] and we simply list out all the alternate names we want inside of those parens:

    function SimpleAliasParam
    {
        param([alias("t")]$a)
        "I got $a"
    }
    
    SimpleAliasParam -a "hi"
    SimpleAliasParam -t "look the alias works"
    

    Why is this cool?

    1. If someone is using your tool already, but you want to change the name of a param (maybe to match some standard you found) then it lets you do so without breaking their code.
    2. Taking in values on the pipe by property name can become very flexible.

    Maybe you have a function that was used to read in a CSV from HR and generate a ton of AD users. You named your parameters after the headers you were given. Everything worked, you made the accounts, etc. 6 months later you need to use it again, but this time Bob from HR sent you a spreadsheet with all different headers. Instead of changing your param names, you can alias them so that either spreadsheet format will work in the future.

     
    $obj = [pscustomobject]@{ #pretend this comes in from import-csv
        Dimension = "c137"
        First = "Rick"
        Last = "Sanchez"
    }
    
    $obj2 = [pscustomobject]@{ #pretend this comes in from import-csv
        Location = "c137"
        FirstName = "Beth"
        LastName = "Smith"
    }
    
    function PipePropertyTest
    {
        param([parameter(ValueFromPipelineByPropertyName)][string]$First,
              [parameter(ValueFromPipelineByPropertyName)][string]$Last,
              [parameter(ValueFromPipelineByPropertyName)][string]$Dimension)
        Write-host "First: $first was received" -ForegroundColor Green
        Write-host "Last: $Last was received" -ForegroundColor magenta
        Write-host "Dimension: $Dimension was received" -ForegroundColor Cyan
    }
    
    
    
    $obj | PipePropertyTest
    $obj2 | PipePropertyTest #ERROR TOWN POPULATION: YOU
    
    function PipePropertyTestAlias
    {
        param([parameter(ValueFromPipelineByPropertyName)]
              [string][alias("FirstName")]$First,
              [parameter(ValueFromPipelineByPropertyName)]
              [string][alias("LastName")]$Last,
              [parameter(ValueFromPipelineByPropertyName)][string]
              [alias("Location","testAlias")]$Dimension)
        Write-host "First: $first was received" -ForegroundColor Green
        Write-host "Last: $Last was received" -ForegroundColor magenta
        Write-host "Dimension: $Dimension was received" -ForegroundColor Cyan
    }
    
    $obj | PipePropertyTestAlias
    $obj2 | PipePropertyTestAlias
    PipePropertyTestAlias -First "Jerry" -Last "Smith" -testAlias "see how I can use them inline too?"
    

     

    This got a bit longer than planned so that’s all for now. I'll do another post talking about some other attributes you have access to. Hopefully this helps you get going on your PowerShell journey and lets you level up your toolset to be as professional and feature rich as possible. If you're looking to add any specific functionality to a tool that I didn't cover, just let me know and I can add it for a future post!

    For the main series post, check back here.

    If you find this helpful don't forget to rate, comment and share 🙂

    How to train, large scale, state of the art, neural networks on the Microsoft Data Science VM


    Dr Adrian Rosebrock – Computer vision expert @ www.pyimagesearch.com , author of a popular Kickstarter deep learning for computer vision book

    Deep Learning for Computer Vision with Python book

    Adrian has posted a great series of guest blogs on how he trains large-scale, state-of-the-art neural networks on the Microsoft Data Science VM, and shares the results of his experiments using NVIDIA K80 and V100 GPUs.


    Here is the full blog series:

    · Deep Learning & Computer Vision in the Microsoft Azure Cloud

    · 22 Minutes to 2nd Place in a Kaggle Competition, with Deep Learning & Azure

    · Training state-of-the-art neural networks in the Microsoft Azure cloud

    Adrian’s review of the Azure DSVM at PyimageSearch.com https://www.pyimagesearch.com/2018/03/21/my-review-of-microsofts-deep-learning-virtual-machine/ 

    If you want to learn about deep learning and computer vision, Adrian's book and blog are highly recommended.

    TFS Code Search Does not Return Results


    If Code Search is set up on a remote server, indexing has completed, and the results come back blank when you execute a request, check the event logs for the error message.


    Detailed Message: TF400703: Unable to initialize the specified service Microsoft.VisualStudio.Services.Search.WebServer.CodeSecurityChecksService.

    Inner Exception Details:Exception Message: This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms. (type InvalidOperationException)

    Exception Stack Trace:    at System.Security.Cryptography.MD5CryptoServiceProvider..ctor()

    This means FIPS is enabled on the Code Search server and is blocking the search from returning results. TFS is not FIPS compliant and is not supported if FIPS is enabled. This has been the case since the original version of TFS, applies to the most recent production version, and will remain true for the foreseeable future of TFS.

    Resolution:

    Disable FIPS on the code search server.
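    If you are not sure whether FIPS enforcement is active on the server, a quick way to check is the registry value that backs the Group Policy setting. This is only a sketch, run from an elevated PowerShell prompt; coordinate with your security team before changing the value, and reboot afterwards:

    # Check whether FIPS enforcement is currently enabled (1 = enabled, 0 = disabled)
    Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy' -Name Enabled

    # Disable it, then reboot the server (a domain GPO may re-apply it, so check policy too)
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy' -Name Enabled -Value 0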

    Paint 3D: A tool for teaching Science


    Written by Microsoft Learning Consultant, Liz Wilson

    Paint 3D is a wonderful new tool that comes installed with the Windows 10 creators update. It has all the simplicity of the original Paint but enables you to easily create in 3D. You can draw, colour, and stamp stickers in 3D. You can texture your 3D object and turn it from wood to marble. You can even change a 2D doodle into a 3D object easily and simply.

    This simple and powerful tool is a great addition to the Science classroom, allowing both teachers and students to bring science to life in their classrooms and make some tricky new ideas and concepts far more approachable for learners.

     

    Let’s take the water cycle, for example. This is commonly drawn out and labelled as a 2D image but in reality it is far from a flat diagram. By allowing our students to create their own 3D models of this scientific process, they can explore and understand it to a far higher level. Imagine your students creating cross sections of volcanoes, diagrams of flowering plants or even a human eyeball!

     

    Paint 3D isn’t just for students to create their own models, however. Teachers can bring their own models right into the classroom for their students to explore. By selecting the Mixed Reality option, you can bring models right into the classroom. Take an animal cell – this complex structure can be put into the hands of your students to turn, zoom and explore. You can even embed models into your PowerPoint presentations!

     

    Concerned that you don’t have the time to create the models yourself? Just select Remix 3D. This is a space packed full of models that the Remix community have shared. Just select the model that you would like to use and import it into Paint 3D. Now you can edit it, add to it, or use it just as it is!


    Looking to find out more?

    Check out this great video of Microsoft Learning Consultant Jose Kingsley Davies and Creative Director, Jennifer O'Brien, exploring how Paint 3D can be used to enhance Science lessons.

    You can find out more about using Paint 3D across STEM subjects with this great resource on the Microsoft Educator Community: https://education.microsoft.com/courses-and-resources/resources/3d

     

    And if that is not enough, there are more courses on Paint 3D coming soon to the Microsoft Educator Community so keep your eyes peeled! You can also follow the new Twitter handle @Paint3Dedu to find out about all the latest and greatest happening with Paint 3D.



    Introduction to the Microsoft AI Platform


    I recently recorded an introduction to the Microsoft Artificial Intelligence suite of tools and services you can use in your organization, from what is already built into the Microsoft applications you own, through leveraging Cognitive Services and customizing AI, all the way to writing your own AI with Machine Learning, Deep Learning, and Neural Networks.

    I also explained how to run these services, or host the ones you write on Microsoft Azure using web apps, Docker, and other technologies.

    You can find the videos here:




    And I have a full article on these technologies that you can read here.

    We have a whole series of these webinars - you can find the list here.

    Deployment Pipelines For Versioned Azure Resource Manager Template Deployments


    Editor's note: The following post was written by Visual Studio and Development Technologies MVP Peter Groenewegen as part of our Technical Tuesday series. James Chambers of the Technical Committee served as the Technical Reviewer of this piece. 

    Azure Resource Manager templates offer you a declarative way of provisioning resources in the Azure cloud. Resource Manager templates define how resources should be provisioned. When provisioning resources on Azure with Azure Resource Manager, you want to be in control of which resources are deployed, and you want to control their life span. To achieve this control, you need to standardize the templates and deploy them in a repeatable way. This can be done by managing your resource creation as Infrastructure as Code.

    The characteristics of Infrastructure as Code are:

    • Declarative
    • Single source of truth
    • Increase repeatability and testability
    • Decrease provisioning time
    • Rely less on availability of persons to perform tasks
    • Use proven software development practices for deploying infrastructure
    • Idempotent provisioning and configuration.

    In this article, I will explain how to leverage a deployment pipeline and create Azure infrastructure with versioned Azure Resource Manager templates (ARM templates). For the deployments in this article, the VSTS Build and Release pipelines will be used. The code or ARM templates will be managed from a Git repository. Code in the Git repository can use similar practices as any other development project. By updating your code or templates, you can deploy, upgrade or remove your infrastructure at any time. The Azure portal will no longer be used to deploy your infrastructure because all ARM templates are deployed by deployment pipelines. This will give you full traceability and control over what is deployed into your Azure environment.

    Deployment pipelines for deploying versioned ARM templates

    If you have a large number of infrastructure resources, it is good to know what the exact footprint is. If you know this, you can easily redeploy and create test environments without the constant question: to what extent is this infrastructure the same as production? In order to obtain adequate control over your infrastructure, you can apply versioning to the deployments and their content. In this case, ARM templates in combination with VSTS can help you. When applying Infrastructure as Code this way, you can test an actual infrastructure deployment and develop new templates at the same time. To do this you need two deployment pipelines:

    • A pipeline that tests and publishes the reusable linked ARM templates to a storage account
    • A pipeline that deploys the actual Azure resources, based on those published templates

    The templates used in the second pipeline are deployed in the first pipeline. These are called linked ARM templates. A linked ARM template allows you to decompose the code that deploys your resources. The decomposed templates provide you with the benefit of testing, reusing and readability. You can link to a single linked template or to a composed one that deploys many resources like a VM, or a complete set of PaaS resources.

    Deployment pipeline for reusable linked ARM templates

    The goal of this pipeline is to deploy a set of tested linked templates (a version) to a storage account from where they can be used. Each time you perform an update to the templates (pull request), a deployment pipeline is triggered (continuous integration), and once all tests are successful, a new version is deployed and ready to use. The new deployments exist side by side with the earlier deployment. In this way the actual resource deployments can use a specific version. The following figure provides an overview of the pipeline:

    Build

    The deployment starts with a pull request to the master branch of the Git repository. Then a new build is triggered. In this build pipeline, the sources are copied to a build artifact to be used in a release. In addition, a build number is generated that can be used as version number of the released templates.

    Release

    The release has several steps to ensure that the ARM templates are tested before they are published. Templates are tested by deploying the resources. In the sample, all steps are automated. When the tests succeed, the release continues with the deployment of the templates to the storage account where they can be used for the real infrastructure deployments.

    Deploy test

    The first step is to deploy to a test location on the storage account. This test location will be used to test the ARM templates by doing test deployments. It can also help you debug errors: you can run a test deployment from your local machine and point it at the test storage container to run the templates from there. The only task in this environment is an Azure Blob File Copy. All the linked templates (the artifacts) are deployed to the Azure storage account (configure the storage account in the Azure Portal when you set up your deployment pipeline).
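    The Azure Blob File Copy task does the upload for you inside the pipeline, but if you want to reproduce the copy from your own machine while debugging, a rough sketch with the AzureRM-era storage cmdlets could look like this (the storage account, key and container names are placeholders):

    # Placeholder names: swap in your own storage account, key and container
    $storageKey = '<storage account key>'
    $ctx = New-AzureStorageContext -StorageAccountName 'mytemplatestore' -StorageAccountKey $storageKey

    # Upload every linked template to the "test" location in the container
    Get-ChildItem -Path .\templates -Filter *.json | ForEach-Object {
        Set-AzureStorageBlobContent -File $_.FullName -Container 'templates' `
            -Blob "test/$($_.Name)" -Context $ctx -Force
    }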

    Test ARM templates

    In the second step, the ARM templates can be tested. First you get a SAS token to access the storage account. The next step consists of deploying the ARM templates. This runs a test ARM template that covers the parameters. If the deployment fails, the pipeline is stopped (asserted). The last step consists of removing the resource group where you have deployed your test resources. If all steps succeed, the templates are approved for release.
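    Run by hand, these steps translate into roughly the following AzureRM cmdlets. The resource group name, location, storage URL and parameter file are placeholders, and $ctx is the storage context from the copy sketch above:

    # 1. Get a short-lived, read-only SAS token for the container holding the test templates
    $sasToken = New-AzureStorageContainerSASToken -Name 'templates' -Permission r `
        -ExpiryTime (Get-Date).AddHours(4) -Context $ctx

    # 2. Deploy a test template (and its parameters) into a throw-away resource group
    New-AzureRmResourceGroup -Name 'rg-armtest' -Location 'West Europe'
    New-AzureRmResourceGroupDeployment -ResourceGroupName 'rg-armtest' `
        -TemplateUri ("https://mytemplatestore.blob.core.windows.net/templates/test/azuredeploy.json?" + $sasToken.TrimStart('?')) `
        -TemplateParameterFile .\test.parameters.json

    # 3. Remove the resource group again so the test resources don't linger
    Remove-AzureRmResourceGroup -Name 'rg-armtest' -Force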

    You can perform these regression tests to verify that changes to your templates do not break common usages. Add a separate step for each configuration combination that you want to test. You can run these steps in parallel to save time. If you split the tests into multiple release steps, it is also immediately clear where the problem is when a specific step fails.

    Deploy the production version

    The last step will deploy the templates to a location where the build number is used in the naming convention. The Azure Blob File Copy task is the same as in the first step; only the location where the files are copied to varies, depending on the build number. In this way the templates can be referenced by using the build number in the URL, and a team that uses your templates can build and test their deployment against a specific, stable version of the templates.

    Pipeline for deploying the resources based on the reusable ARM templates

    When all linked templates are deployed, they can be used to perform the deployments of your Azure infrastructure. In the sample pipelines, I have only one test environment, but the number of test environments can be different for each pipeline. One pipeline will deploy the resources of one Azure Resource group.

    Build

    The goal of the build is to produce an artifact of the templates that can be used in the release pipeline. The deployment starts with a pull request to the master branch of the Git repository, which is a trigger for the orchestration to start a new build. In this build pipeline, the sources are copied to a build artifact to be used in a release.

    Release

    The release pipeline will validate, do a test deployment and then deploy the resources to the production environment. The templates in each environment are the same; the only difference is the parameter files. The parameter files can set the sizes of the resources deployed in the different environments. The sizes must be chosen wisely in order to represent the production environment, but keep in mind the costs of running a test environment. In the main template, you keep a build variable, which you use to point to a specific release from the previous pipeline. In this way you can control the deployed version of your shared linked templates.

    Validate templates

    The first step consists of validating the templates for all environments. This is done by running a deployment in Validation Mode. You can select this option under Deployment mode in the ARM Deployment task. Here, the template is compiled and checked to see whether it is syntactically correct and will be accepted by Azure Resource Manager. You have to do this for all environments to check whether the parameter files are correct. When this step succeeds, the deployment to test will start.
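    If you want to run the same validation from your own machine before wiring it into the release, a rough AzureRM equivalent of the task's Validation mode is the following (resource group and file names are placeholders):

    # Validates the template and parameter file against Azure Resource Manager
    # without creating any resources; returns the validation errors, if any
    Test-AzureRmResourceGroupDeployment -ResourceGroupName 'rg-myapp-test' `
        -TemplateFile .\azuredeploy.json `
        -TemplateParameterFile .\azuredeploy.parameters.test.json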

    Release test

    The goal of this step is to deploy and test the deployment of the resources. If there is a need for a gatekeeper, approvals can be added at the beginning or end of this step. If not, the deployment of your test environment starts automatically. If possible, use the Deployment Mode option “Complete”. This ensures that the resources in the Azure Resource Group are the same as those defined in the ARM template. The action consists of 2 tasks. First create a SAS token for access to the Azure Storage. The next step performs the actual deployment. If everything succeeds, you can optionally do some manual testing on the resource itself, do infrastructure testing with Pester or create some test scripts.

    Release production

    The production release starts with a gatekeeper (approval). When you are satisfied with the previous (test) resources, an approver can start the production deployment. All resources in the production resource group will be updated according to the ARM template. Try to run your deployment in Complete mode, because then you know that all resources in the resource group match what is defined in your Git repository: a true Infrastructure as Code scenario.
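    As a sketch, a Complete-mode deployment with the AzureRM cmdlets looks like this (resource group and file names are placeholders); the -Mode Complete switch is what removes resources that are no longer in the template:

    # Complete mode: anything in the resource group that is not in the template gets removed
    New-AzureRmResourceGroupDeployment -ResourceGroupName 'rg-myapp-prod' `
        -TemplateFile .\azuredeploy.json `
        -TemplateParameterFile .\azuredeploy.parameters.prod.json `
        -Mode Complete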

    Final thoughts

    Setting up deployment pipelines for your ARM templates is an up-front investment aimed at giving you control over the resources that are deployed in your Azure environment. When the pipelines are running, changes to your Azure environment are fully controlled from the code, and all changes are traceable from VSTS.


    Peter Groenewegen has been programming computers since he was six. Currently, he focusses on implementing cloud strategies on Microsoft Azure, Software Architecture, DevOps and Application Lifecycle Management. He works as an Azure Cloud Consultant at Xpirit, where he helps clients hands-on with their Cloud implementations. Peter is active on Stackoverflow, writes blogs and speaks at conferences. Follow him on Twitter @pgroene.

    Azure Application Gateway uses the Load Balancer


    I just had an interesting discussion around the use of the Azure Application Gateway (AppGW) and the built-in load balancer (LB) it has. Be aware that when you deploy the Azure Application Gateway, by default it will create two instances, as seen below:

    You may then be wondering: what's the mechanism behind the scenes that makes the traffic go to one AppGW instance or the other? Well, this is how we explain the concept of AppGW as a PaaS offering. You as an end user will consume and manage the AppGW service, not the underlying components. The truth is that Azure will use the Load Balancer to balance traffic among AppGW instances. This is all configured when you deploy the Application Gateway, and you don't need to worry about it! Behind the scenes, the Azure LB will use a hashing algorithm to decide which AppGW instance will get the packet.

    So what's the virtual server address of the Load Balancer that sends traffic to the AppGW? When you assign a frontend IP to your AppGW you are actually assigning an ALB/ILB (public/private), so the IP address in question belongs in reality to the LB.

    In summary, to describe this architecture in simple terms, we could say an Azure AppGW is a hybrid product comprising multiple parts, but exposed as one. It comprises a load balancer, a set of backend Virtual Machines and a management API. The LB is also a service built from other services, and not really a compute resource with an IP address forwarding traffic, even though that's how it looks from the outside in the majority of cases.

    AppGW and Load Balancer are part of the Azure load-balancing services, which also include Traffic Manager.

    Source: https://docs.microsoft.com/en-us/azure/traffic-manager/traffic-manager-load-balancing-azure

    Be aware that AppGW is not a superset of the Load Balancer; they are different products covering different use cases! To find more information about the differences between Azure Load Balancer and Application Gateway, see:

    https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview#load-balancer-differences

     

    Building Microservices with AKS and VSTS – Part 1


    If you happen to find yourself about to build a new application and you bump into an architect, they will tell you that it's very important that it supports a "microservices architecture". (This will of course happen before considering if it really should be built that way.)

    If you ask what tools you need to utilize to achieve this state of bliss you shouldn't be very surprised if the name Kubernetes is uttered. (It is after all almost the de facto standard for orchestrating containers these days.) It's usually shortened to k8s so I'm probably going to switch between the two terms going forward.

    If you ask around how to build a Kubernetes cluster you will get a couple of different answers. If one of them involves Azure it's possible that AKS (Azure Kubernetes Service) is mentioned, or managed Kubernetes as it's also known. Being managed, it takes away some of the grunt work involved with setting up the cluster from scratch and keeping it running. Confusingly enough there is a service called ACS (Azure Container Service) which supports a number of orchestrators, including Kubernetes, and ACS-AKS which is specific to Kubernetes.

    Wait a minute, isn't the Microsoft answer to this something called Service Fabric? Yes, that is also an option. I wouldn't call it a direct competitor to Kubernetes though. It handles more of what is required for a microservices architecture, but to take full advantage of it you might find yourself needing to write the code itself differently. This post isn't an evaluation of which of these are better suited for a given scenario; I happen to have defined that the learning point today is the Kubernetes stack 🙂

    If you want more on the theory behind microservices Gaurav is running a separate series that I can recommend:
    https://blogs.msdn.microsoft.com/azuredev/2018/03/23/introduction-to-microservices/

    Enough with the pre-amble. For some reason you have landed on wanting to use AKS, and you're wondering where to start. If you want to really learn the ins and outs of Kubernetes you can go through "Kubernetes The Hard Way on Azure" (https://github.com/ivanfioravanti/kubernetes-the-hard-way-on-azure) - impressive guide, but kind of a good example of why you would want to go the managed way as well 🙂

    So, you hit the official documentation as well as the search engines, and you start finding yourself more confused by the minute. While individual docs can be good, they often focus on a very specific part, and sometimes it feels like there's something missing for completing the picture. The cloud is supposed to be easy, isn't it? Yes, but it turns out there are a lot of moving parts involved in getting this to actually work. I'm not going to try to cover everything in detail, but I will attempt to get as much as possible up and running on the Microsoft stack of services.

    Here's the high level agenda of what we want to cover:
    - Build a sample app in Visual Studio locally.
    - Dockerize it locally.
    - Check the code into a repository that also handles Continuous Integration (CI) and Continuous Delivery (CD).
    - Push images into a container registry.
    - Pull those images into an AKS cluster.
    - Expose the services outside the cluster.
    - Automate DNS.
    - Automate SSL/TLS.

    Scope
    This is about how you get your code from your IDE to a production-ish state. ("ish" because you might want to bolt on some extra things around availability and scale if you're making money from your code.) The developer "inner loop", what you do before you push to prod, is not covered extensively here.

    There are a number of ways you can architect and implement a Kubernetes cluster, so this isn't the ultimate or necessarily best approach for you. It is however an approach that should bring you to a working setup, and give you an understanding of the basics of using Kubernetes for your microservices.

    Building a web app
    Since this isn't about how to build a web app in general you can technically just step through a wizard in Visual Studio to get a hello world type app. I will be using my API-Playground app, and since it is available on GitHub you can follow along and use that too if you like:
    https://github.com/ahelland/api-playground

    Before we move things to the cloud you should make sure that this actually builds and runs locally. (Restore packages, build, and F5.)

    If things build as expected, and you can actually test things on localhost, you will want to add Docker support. If you don't have it up and running already I recommend downloading Docker for Windows, and installing it. You need Windows 10, and you need virtualization support in your CPU for enabling Hyper-V. (If you don't have this you can still follow along; you're just not able to test the image locally.)

    Stable vs Edge
    There are two types of builds of Docker; the Stable and the Edge (beta). If you run the standard builds of Windows 10 you can go with Stable. However I have found that if you run Windows Insider builds things occasionally break, and when the fix arrives it comes to the Edge release first. So with beta Windows go for beta Docker 🙂 (Edge can of course be installed on regular Windows builds too.)

    Setting up Visual Studio Team Services (VSTS)
    Visual Studio supports a number of options for deploying code directly from your laptop to a hosted environment, be it on-premises or cloud. However even for your hobby projects it is easier to check code into a repository and handle things from there. Why am I not using GitHub in this article? I use GitHub for a lot of my personal code, but VSTS offers some integration points to Azure that GitHub doesn't. (If you like you can use GitHub for the source, and just use VSTS for CI/CD, but let's not complicate the scenario further here.)

    I will assume that you already have an account for VSTS; there's no cost for your one-man personal projects, and since they are private there's less risk of messing up with secrets and stuff. (Which is nice compared to GitHub where only public repos are free.) If you use the same account for signing in to VSTS, and Azure, that will simplify things later on.

    Create a new project:

    There are different approaches for the initial setup depending on whether you're starting from scratch or not, but I will check in what we already have. (I did not clone my own repo, and just downloaded and extracted the .zip file which means it is not a repo yet.)
    Go to the command line and do a git init in the root directory.

    VSTS will tell you how to push your existing repo:

    Which might look similar to this when executing on the command line:
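    In text form, and with a placeholder URL in place of your own VSTS account and repository, that boils down to something like:

    git remote add origin https://myaccount.visualstudio.com/_git/api-playground
    git push -u origin --all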

    If we head back to Visual Studio, and the Team Explorer tab, selecting Changes should look like this:

    So commit these changes, choose Sync, followed by Push:

    Returning to VSTS in the browser, and going to the Code tab there should be a bunch of files available:

    There are things that could differ between my setup and what you have running on your computer, so I can't really guarantee things will be 100 percent like the above. Maybe you didn't install Git when installing Visual Studio, maybe you haven't logged in to Visual Studio with the same account as VSTS. I'm just hoping the above process is clear enough that you can sort out minor snags on your own 🙂 (That's promising for a tutorial/walkthrough I guess.)

    Ok, so if everything went to plan we have code locally, and we have code in the cloud. This is a good start.

    Now you can add Docker support for the project. Right-click the project in Visual Studio (API-Playground)->Add->Docker Support

    Choose Linux as the operating system. (Yes, Linux. It is not a typo.)

    You should now have a Dockerfile, and a docker-compose section added.

    To verify that all is good on your devbox you should try out the F5 experience. By default you're probably deploying to IIS Express, and this should now have changed to Docker as the deploy target. (The default image should contain the necessary pieces to host web components.)

    This doesn't technically mean we have built a microservice yet. What we have is a monolithic app that has been primed for promotion to a microservice when the time is right.

    Let's pause for a moment here. (And forgive me if this is basic stuff you are already familiar with.) This image is a basic building block for proceeding. If I move this code to a different computer where Docker is installed I can have this up and running in a matter of minutes. (Visual Studio isn't needed either.) So it means we have removed the "it works on my machine" hurdle often preventing mass deployment. (Realistically there are still things that might trip you up, like a proxy component on the network level, different Docker version on the host, well - you catch my drift. But in general you now have a module that can be moved across environments.)
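    If you want to prove that point to yourself without Visual Studio, the image can be built and run with plain Docker commands from the folder containing the Dockerfile. The image name and port mapping below are just examples, and depending on how the generated Dockerfile stages the build output you may need to publish the project first:

    # Build the image from the Dockerfile in the current directory and tag it
    docker build -t api-playground .

    # Run it detached and map the container's port 80 to port 8080 on the host
    docker run -d -p 8080:80 api-playground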

    If you start splitting up your code into different images you could have www.contoso.com in one, and api.contoso.com in another. This leads you to setting up a shared Docker host that developers push images to, and you can replace the www part without affecting the api part. This is great for separating components, but you will run into new challenges.
    Host goes down => services go down.
    Service A needs to communicate with Service B => how do they do that?
    Who takes ownership of the Docker host, opening firewall ports, mapping DNS and the like => developers, or operations?

    Dockerfiles just describe one individual service, and if you have 5 services you have 5 Dockerfiles to maintain. Docker provides another abstraction layer where you define docker-compose files which describe relationships between containers. We will use these in our further setup. The new concern is that while managing 10 monolithic services can be daunting, managing 100 microservices isn't necessarily less work. Which is why you need something to "herd" your services. We often refer to this as an orchestrator, and k8s is just one of several options. At the risk of repeating myself; we have already decided that AKS is our choice for this purpose so we're still not comparing orchestrators, or different Kubernetes configurations 🙂

    Make sure you save your code, and check it in to VSTS.

    I think that rounds up this initial part of our exploration. So tune in to the next installment - same bat time, same bat channel, for more microservice goodness.

    Configuring C++ IntelliSense and Browsing


    Whether you are creating a new (or modifying an existing) C++ project using a Wizard, or importing a project into Visual Studio from another IDE, it’s important to configure the project correctly for the IntelliSense and Browsing features to provide accurate information.  This article provides some tips on configuring the projects and describes a few ways that you can investigate configuration problems.

    Include Paths and Preprocessor Macros

    The two settings that have the greatest effect on the accuracy of IntelliSense and Browsing operations are the Include Paths and the Preprocessor macros.  This is especially important for projects that are built outside of Visual Studio: such a project may build without any errors, but show squiggles in Visual Studio IDE.

    To check the project’s configuration, open the Properties for your project.  By default, All Configurations and All Platforms will be selected, so that the changes will be applied to all build configurations:

    If some configurations do not have the same values as the rest, then you will see <different options>. If your project is a Makefile project, then you will see the following properties dialog. In this case, the settings controlling IntelliSense and Browsing will be under NMake property page, IntelliSense category:

    Error List

    If IntelliSense is showing incorrect information (or fails to show anything at all), the first place to check is the Error List window.  It could happen that earlier errors are preventing IntelliSense from working correctly.  To see all the errors for the current source file together with all included header files, enable showing IntelliSense Errors in the Error List Window by making this selection in the dropdown:

    Error List IntelliSense Dropdown

    IntelliSense limits the number of errors it produces to 1000. If there are over 1000 errors in the header files included by a source file, then the source file will show only a single error squiggle at the very start of the source file.

    Validating Project Settings via Diagnostic Logging

    To check whether IntelliSense compiler is using correct compiler options, including Include Paths and Preprocessor macros, turn on Diagnostic Logging of IntelliSense command lines in Tools > Options > Text Editor > C/C++ > Advanced > Diagnostic Logging. Set Enable Logging to True, Logging Level to 5 (most verbose), and Logging Filter to 8 (IntelliSense logging):

    Enabling Diagnostic Logging in Tools > Options > Text Editor > C/C++ > Advanced

    The Output Window will now show the command lines that are passed to the IntelliSense compiler. Here is a sample output that you may see:

     [IntelliSense] Configuration Name: Debug|Win32
     [IntelliSense] Toolset IntelliSense Identifier:
     [IntelliSense] command line options:
     /c
     /I.
     /IC:\Repo\Includes
     /DWIN32
     /DDEBUG
     /D_DEBUG
     /Zc:wchar_t-
     /Zc:forScope
     /Yustdafx.h

    This information may be useful in understanding why IntelliSense is providing inaccurate information. One example is unevaluated project properties. If your project’s Include directory contains $(MyVariable)\Include, and the diagnostic log shows /I\Include as an Include path, it means that $(MyVariable) wasn’t evaluated, and was removed from the final include path.

    IntelliSense Build

    In order to evaluate the command lines used by the IntelliSense compiler, Visual Studio launches an IntelliSense-only build of each project in the solution. MSBuild performs the same steps as the project build, but stops short of executing any of the build commands: it only collects the full command line.

    If your project contains some custom .props or .targets files, it’s possible for the IntelliSense-only build to fail before it finishes computing the command lines. Starting with Visual Studio 2017 15.6, errors with IntelliSense-only build are logged to the Output window, Solution Pane.

    Output Window, Solution Pane

    An example error you may see is:
     error: Designtime build failed for project 'E:\src\MyProject\MyProject.vcxproj',
     configuration 'Debug|x64'. IntelliSense might be unavailable.
     Set environment variable TRACEDESIGNTIME=true and restart
     Visual Studio to investigate.

    If you set the environment variable TRACEDESIGNTIME to true and restart Visual Studio, you will see a log file in the %TEMP% directory which will help diagnose this error:

    C:\Users\me\AppData\Local\Temp\MyProject.designtime.log :
     error : Designtime build failed for project 'E:\src\MyProject\MyProject.vcxproj',
     configuration 'Debug|x64'. IntelliSense might be unavailable.

    To learn more about TRACEDESIGNTIME environment variable, please see the articles from the Roslyn and Common Project System projects. C++ Project System is based on the Common Project System, so the information from the articles is applicable to all C++ projects.
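    One way to set the variable is from a PowerShell prompt, after which you restart Visual Studio. This sketch sets it for the current user; you could also set it system-wide:

    # Persist TRACEDESIGNTIME=true for the current user, then restart Visual Studio
    [Environment]::SetEnvironmentVariable('TRACEDESIGNTIME', 'true', 'User')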

    Single File IntelliSense

    Visual Studio allows you to take advantage of IntelliSense and Browsing support of files that are not part of any existing projects. By default, files opened in this mode will not display any error squiggles but will still provide IntelliSense; so if you don’t see any error squiggles under incorrect code, or if some expected preprocessor macros are not defined, check whether a file is opened in Single-File mode. To do so, look at the Project node in the Navigation Bar: the project name will be Miscellaneous Files:

    Navigation Bar showing Miscellaneous Files project

    Investigating Open Folder Issues

    Open Folder is a new command in Visual Studio 2017 that allows you to open a collection of source files that doesn’t contain any Project or Solution files recognized by Visual Studio. To help configure IntelliSense and browsing for code opened in this mode, we’ve introduced a configuration file CppProperties.json. Please refer to this article for more information.

    CppProperties.json Syntax Error

    If you mistakenly introduce a syntax error into the CppProperties.json file, IntelliSense in the affected files will be incorrect. Visual Studio will display the error in the Output Window, so be sure to check there.

    Project Configurations

    In Open Folder mode, different configurations may be selected using the Project Configurations toolbar.

    Project Configurations Dropdown

    Please note that if multiple CppProperties.json files provide differently-named configurations, then the selected configuration may not be applicable to the currently-opened source file. To check which configuration is being used, turn on Diagnostic Logging to check for IntelliSense switches.

    Single-File IntelliSense

    When a solution is open, Visual Studio will provide IntelliSense for files that are not part of the solution using the Single-File mode.  Similarly, in Open Folder mode, Single-File IntelliSense will be used for all files outside of the directory cone.  Check the Project name in the Navigation Bar to see whether the Single-File mode is used instead of CppProperties.json to provide IntelliSense for your source code.

    Investigating Tag Parser Issues

    Tag Parser is a ‘fuzzy’ parser of C++ code, used for Browsing and Navigation.  (Please check out this blog post for more information.)

    Because the Tag Parser doesn’t evaluate preprocessor macros, it may stumble when parsing code that makes heavy use of them. When the Tag Parser encounters an unfamiliar code construct, it may skip a large region of code.

    There are two common ways for this problem to manifest itself in Visual Studio. The first way is by affecting the results shown in the Navigation Bar. If instead of the enclosing function, the Navigation Bar shows an innermost macro, then the current function definition was skipped:

    Navigation Bar shows incorrect scope

    The second way the problem manifests is by showing a suggestion to create a function definition for a function that is already defined:

    Spurious Green Squiggle

    In order to help the parser understand the content of macros, we have introduced the concept of hint files. (Please see the documentation for more information.) Place a file named cpp.hint in the root of your solution directory, add to it all the code-altering preprocessor definitions (i.e. #define do_if(condition) if(condition)) and invoke the Rescan Solution command, as shown below, to help the Tag Parser correctly understand your code.

    Coming soon: Tag Parser errors will start to appear in the Error List window. Stay tuned!

    Scanning for Library Updates

    Visual Studio periodically checks whether files in the solution have been changed on disk by other programs.  As an example, when a ‘git pull’ or ‘git checkout’ command completes, it may take up to an hour before Visual Studio becomes aware of any new files and starts providing up-to-date information.  In order to force a rescan of all the files in the solution, select the Rescan Solution command from the context menu:

    Rescan Solution Context Menu

    The Rescan File command, seen in the screenshot above, should be used as the last diagnostic step.  In the rare instance that the IntelliSense engine loses track of changes and stops providing correct information, the Rescan File command will restart the engine for the current file.

    Send us Feedback!

    We hope that these starting points will help you diagnose any issues you encounter with IntelliSense and Browsing operations in Visual Studio. For all issues you discover, please report them by using the Help > Send Feedback > Report A Problem command. All reported issues can be viewed at the Developer Community.
