
Announcing TypeScript 2.8


TypeScript 2.8 is here and brings a few features that we think you'll love unconditionally!

If you're not familiar with TypeScript, it's a language that adds optional static types to JavaScript. Those static types help make guarantees about your code to avoid typos and other silly errors. They can also help provide nice things like code completions and easier project navigation thanks to tooling built around those types. When your code is run through the TypeScript compiler, you're left with clean, readable, and standards-compliant JavaScript code, potentially rewritten to support much older browsers that only support ECMAScript 5 or even ECMAScript 3. To learn more about TypeScript, check out our documentation.

If you can't wait any longer, you can download TypeScript via NuGet or by running

npm install -g typescript

You can also get editor support for Visual Studio 2017, Visual Studio 2015, Visual Studio Code, and Sublime Text.

Other editors may have different update schedules, but should all have excellent TypeScript support soon as well.

To get a quick glance at what we're shipping in this release, we put this handy list together to navigate our blog post:

  • Conditional types
  • Declaration-only emit
  • @jsx pragma comments
  • JSX is resolved via the JSX Factory
  • Granular control on mapped type modifiers
  • Organize imports
  • Fixing uninitialized properties

We also have some minor breaking changes that you should keep in mind if upgrading.

But otherwise, let's look at what new features come with TypeScript 2.8!

Conditional types

Conditional types are a new construct in TypeScript that allow us to choose types based on other types. They take the form

A extends B ? C : D

where A, B, C, and D are all types. You should read that as "when the type A is assignable to B, then this type is C; otherwise, it's D." If you've used conditional syntax in JavaScript, this will feel familiar to you.

Let's take two specific examples:

interface Animal {
    live(): void;
}
interface Dog extends Animal {
    woof(): void;
}

// Has type 'number'
type Foo = Dog extends Animal ? number : string;

// Has type 'string'
type Bar = RegExp extends Dog ? number : string;

You might wonder why this is immediately useful. We can tell that Foo will be number, and Bar will be string, so we might as well write that out explicitly. But the real power of conditional types comes from using them with generics.

For example, let's take the following function:

interface Id { id: number, /* other fields */ }
interface Name { name: string, /* other fields */ }

declare function createLabel(id: number): Id;
declare function createLabel(name: string): Name;
declare function createLabel(name: string | number): Id | Name;

These overloads for createLabel describe a single JavaScript function that makes a choice based on the types of its inputs. Note two things:

  1. If a library has to make the same sort of choice over and over throughout its API, this becomes cumbersome.
  2. We have to create three overloads: one for each case when we're sure of the type, and one for the most general case. For every other case we'd have to handle, the number of overloads would grow exponentially.

Instead, we can use a conditional type to smoosh both of our overloads down to one, and create a type alias so that we can reuse that logic.

type IdOrName<T extends number | string> =
    T extends number ? Id : Name;

declare function createLabel<T extends number | string>(idOrName: T):
    T extends number ? Id : Name;

let a = createLabel("typescript");   // Name
let b = createLabel(2.8);            // Id
let c = createLabel("" as any);      // Id | Name
let d = createLabel("" as never);    // never

Just like how JavaScript can make decisions at runtime based on the characteristics of a value, conditional types let TypeScript make decisions in the type system based on the characteristics of other types.

As another example, we could also write a type called Flatten that flattens array types to their element types, but leaves them alone otherwise:

// If we have an array, get the type when we index with a 'number'.
// Otherwise, leave the type alone.
type Flatten<T> = T extends any[] ? T[number] : T;
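
For instance, here is how Flatten behaves on a couple of types of our own:

type Strs = Flatten<string[]>; // string
type Num = Flatten<number>;    // number (not an array, so left alone)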

Inferring within conditional types

Conditional types also provide us with a way to infer from types we compare against in the true branch using the infer keyword. For example, we could have inferred the element type in Flatten instead of fetching it out manually:

// We could also have used '(infer U)[]' instead of 'Array<infer U>'
type Flatten<T> = T extends Array<infer U> ? U : T;

Here, we've declaratively introduced a new generic type variable named U instead of specifying how to retrieve the element type of T. This frees us from having to think about how to get the types we're interested in.
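As another illustration (Unpacked here is an alias of our own, not something built into TypeScript), infer can pull the resolved type out of a Promise:

type Unpacked<T> = T extends Promise<infer U> ? U : T;

type A = Unpacked<Promise<string>>; // string
type B = Unpacked<number>;          // number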

Distributing on unions with conditionals

When conditional types act on a single type parameter, they distribute across unions. So in the following example, Bar has the type string[] | number[] because Foo is applied to the union type string | number.

type Foo<T> = T extends any ? T[] : never;

/**
 * Foo distributes on 'string | number' to the type
 *
 *    (string extends any ? string[] : never) |
 *    (number extends any ? number[] : never)
 * 
 * which boils down to
 *
 *    string[] | number[]
 */
type Bar = Foo<string | number>;

In case you ever need to avoid distributing on unions, you can surround each side of the extends keyword with square brackets:

type Foo<T> = [T] extends [any] ? T[] : never;

// Boils down to Array<string | number>
type Bar = Foo<string | number>;
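
This distribution is also what makes filtering a union possible. As a quick sketch with an alias of our own:

type NonString<T> = T extends string ? never : T;

// Distributes to: never | number | boolean, which collapses to number | boolean
type T1 = NonString<string | number | boolean>;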

While conditional types can be a little intimidating at first, we believe they'll bring a ton of flexibility for moments when you need to push the type system a little further to get accurate types.

New built-in helpers

TypeScript 2.8 provides several new type aliases in lib.d.ts that take advantage of conditional types:

// These are all now built into lib.d.ts!

/**
 * Exclude from T those types that are assignable to U
 */
type Exclude<T, U> = T extends U ? never : T;

/**
 * Extract from T those types that are assignable to U
 */
type Extract<T, U> = T extends U ? T : never;

/**
 * Exclude null and undefined from T
 */
type NonNullable<T> = T extends null | undefined ? never : T;

/**
 * Obtain the return type of a function type
 */
type ReturnType<T extends (...args: any[]) => any> = T extends (...args: any[]) => infer R ? R : any;

/**
 * Obtain the return type of a constructor function type
 */
type InstanceType<T extends new (...args: any[]) => any> = T extends new (...args: any[]) => infer R ? R : any;

While NonNullable, ReturnType, and InstanceType are relatively self-explanatory, Exclude and Extract are a bit more interesting.
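For instance, ReturnType and InstanceType combine nicely with typeof (the declarations here are just for illustration):

declare function getUser(): { name: string; age: number };
type User = ReturnType<typeof getUser>; // { name: string; age: number }

class Widget { id = 0; }
type W = InstanceType<typeof Widget>;   // Widget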

Extract selects types from its first argument that are assignable to its second argument:

// string[] | number[]
type Foo = Extract<boolean | string[] | number[], any[]>;

Exclude does the opposite; it removes from its first argument the types that are assignable to its second:

// boolean
type Bar = Exclude<boolean | string[] | number[], any[]>;
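
One common pattern built on top of Exclude is removing keys from an object type. The Omit alias below is our own; it is not part of lib.d.ts in 2.8, though Pick is:

type Omit<T, K extends keyof T> = Pick<T, Exclude<keyof T, K>>;

interface Point { x: number; y: number; z: number; }
type FlatPoint = Omit<Point, "z">; // { x: number; y: number }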

Declaration-only emit

Thanks to a pull request from Manoj Patel, TypeScript now features an --emitDeclarationOnly flag which can be used for cases when you have an alternative build step for emitting JavaScript files, but need to emit declaration files separately. Under this mode no JavaScript files nor sourcemap files will be generated; just .d.ts files that can be used for library consumers.
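
For instance, assuming declaration output is enabled for your project, the invocation might look like the following (the same flags can equally be set in tsconfig.json):

tsc --declaration --emitDeclarationOnly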

One use-case for this is when using alternate compilers for TypeScript such as Babel 7. For an example of repositories taking advantage of this flag, check out urql from Formidable Labs, or take a look at our Babel starter repo.

@jsx pragma comments

Typically, users of JSX expect to have their JSX tags rewritten to React.createElement. However, if you're using libraries that have a React-like factory API, such as Preact, Stencil, Inferno, Cycle, and others, you might want to tweak that emit slightly.

Previously, TypeScript only allowed users to control the emit for JSX at a global level using the jsxFactory option (as well as the deprecated reactNamespace option). However, if you needed to mix any of these libraries in the same application, you'd have been out of luck using JSX for both.

Luckily, TypeScript 2.8 now allows you to set your JSX factory on a file-by-file basis by adding a /** @jsx */ pragma comment at the top of your file. If you've used the same functionality in Babel, this should feel familiar.

/** @jsx dom */
import { dom } from "./renderer"
<h></h>

The above sample imports a function named dom, and uses the jsx pragma to select dom as the factory for all JSX expressions in the file. TypeScript 2.8 will rewrite it to the following when compiling to CommonJS and ES5:

var renderer_1 = require("./renderer");
renderer_1.dom("h", null);

JSX is resolved via the JSX Factory

Currently, when TypeScript uses JSX, it consults a global JSX namespace to look up certain types (e.g. "what's the type of a JSX component?"). In TypeScript 2.8, the compiler will try to look up the JSX namespace based on the location of your JSX factory. For example, if your JSX factory is React.createElement, TypeScript will first try to resolve React.JSX, and then fall back to resolving JSX from within the current scope.

This can be helpful when mixing and matching different libraries (e.g. React and Preact) or different versions of a specific library (e.g. React 0.14 and React 16), as placing the JSX namespace in the global scope can cause issues.

Going forward, we recommend that new JSX-oriented libraries avoid placing JSX in the global scope, and instead export it from the same location as the respective factory function. However, for backward compatibility, TypeScript will continue falling back to the global scope when necessary.
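
For example, a factory module might co-locate its JSX namespace like this. This is an illustrative sketch building on the dom factory from the earlier sample, not a specific library's actual typings:

// renderer.ts
export function dom(type: string, props: any, ...children: any[]): any {
    // ...create and return an element
}

// Merge a JSX namespace onto the factory so TypeScript can resolve 'dom.JSX'
// instead of looking in the global scope.
export namespace dom {
    export namespace JSX {
        export interface Element {}
        export interface IntrinsicElements { [tagName: string]: any; }
    }
}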

Granular control on mapped type modifiers

TypeScript's mapped object types are an incredibly powerful construct. One handy feature is that they allow users to create new types that have modifiers set for all their properties. For example, the following type creates a new type based on T where every property in T becomes readonly and optional (?).

// Creates a type with all the properties in T,
// but marked both readonly and optional.
type ReadonlyAndPartial<T> = {
    readonly [P in keyof T]?: T[P]
}

So mapped object types can add modifiers, but up until this point, there was no way to remove modifiers from T.

TypeScript 2.8 provides a new syntax for removing modifiers in mapped types with the - operator, and a new more explicit syntax for adding modifiers with the + operator. For example,

type Mutable<T> = {
    -readonly [P in keyof T]: T[P]
}

interface Foo {
    readonly abc: number;
    def?: string;
}

// 'abc' is no longer read-only, but 'def' is still optional.
type TotallyMutableFoo = Mutable<Foo>

In the above, Mutable removes readonly from each property of the type that it maps over.

Similarly, TypeScript now provides a new Required type in lib.d.ts that removes optionality from each property:

/**
 * Make all properties in T required
 */
type Required<T> = {
    [P in keyof T]-?: T[P];
}
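
For example, applied to a type of our own with optional members:

interface Settings {
    theme?: string;
    fontSize?: number;
}

// Both properties are now required.
type FullSettings = Required<Settings>;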

The + operator can be handy when you want to call out that a mapped type is adding modifiers. For example, our ReadonlyAndPartial from above could be defined as follows:

type ReadonlyAndPartial<T> = {
    +readonly [P in keyof T]+?: T[P];
}

Organize imports

TypeScript's language service now provides functionality to organize imports. This feature will remove any unused imports, sort existing imports by file paths, and sort named imports as well.

Fixing uninitialized properties

TypeScript 2.7 introduced extra checking for uninitialized properties in classes. Thanks to a pull request by Wenlu Wang, TypeScript 2.8 brings some helpful quick fixes that make it easier to adopt this checking in your codebase.

Breaking changes

Unused type parameters are checked under --noUnusedParameters

Unused type parameters were previously reported under --noUnusedLocals, but are now instead reported under --noUnusedParameters.

HTMLObjectElement no longer has an alt attribute

Such behavior is not covered by the WHATWG standard.

What's next?

We hope that TypeScript 2.8 pushes the envelope further to provide a type system that can truly represent the nature of JavaScript as a language. With that, we believe we can provide you with an experience that continues to make you more productive and happier as you code.

Over the next few weeks, we'll have a clearer picture of what's in store for TypeScript 2.9, but as always, you can keep an eye on the TypeScript roadmap to see what we're working on for our next release. You can also use our nightly releases to try out the future today! For example, generic JSX elements are already out in TypeScript's recent nightly releases!

Let us know what you think of this release over on Twitter or in the comments below, and feel free to report issues and suggestions by filing a GitHub issue.

Happy Hacking!


Microsoft Drivers 5.2.0 for PHP for SQL Server Released!


Hi all,

We are excited to announce the production-ready release of the Microsoft Drivers 5.2.0 for PHP for SQL Server. The drivers now support basic select/insert/update/delete functionality with the Always Encrypted feature. The drivers enable access to SQL Server, Azure SQL Database, and Azure SQL DW from PHP 7.0-7.2 applications on Linux, Windows, and macOS.

Notable items about 5.2.0 since 4.3.0:

Added

  • Added support for Always Encrypted (see Features)
    • Support for Windows Certificate Store
    • Support for inserting into and modifying an encrypted column
    • Support for fetching from an encrypted column
  • Added support for PHP 7.2
  • Added support for Microsoft ODBC Driver 17 for SQL Server
  • Added support for Ubuntu 17 (requires Microsoft ODBC Driver 17 for SQL Server)
  • Added support for Debian 9 (requires Microsoft ODBC Driver 17 for SQL Server)
  • Added support for SUSE 12
  • Added Driver option to specify the Microsoft ODBC driver
    • Valid options are "ODBC Driver 17 for SQL Server", "ODBC Driver 13 for SQL Server", and "ODBC Driver 11 for SQL Server"
    • The default driver is ODBC Driver 17 for SQL Server

Changed

  • Implementation of PDO::lastInsertId($name) to return the last inserted sequence number if the sequence name is supplied to the function (lastInsertId)

Fixed

  • Issue #555 - Hebrew strings truncation (requires Microsoft ODBC Driver 17)
  • Adjusted precisions for numeric/decimal inputs with Always Encrypted
  • Support for non-UTF8 locales in Linux and macOS
  • Fixed crash caused by executing an invalid query in a transaction (Issue #434)
  • Added error handling for using PDO::SQLSRV_ATTR_DIRECT_QUERY or PDO::ATTR_EMULATE_PREPARES in a Column Encryption enabled connection
  • Added error handling for binding TEXT, NTEXT or IMAGE as output parameter (Issue #231)
  • PDO::quote with string containing ASCII NUL character (Issue #538)
  • Decimal types with no decimals are correctly handled when Always Encrypted is enabled (PR #544)
  • BIGINT as an output param no longer results in value out of range exception when the returned value is larger than a maximum integer (PR #567)

Removed

  • Dropped support for Ubuntu 15
  • Supplying a table name to PDO::lastInsertId($name) no longer returns the last inserted row (lastInsertId)

Limitations

  • Always Encrypted is not supported in Linux and macOS
  • In Linux and macOS, setlocale() only takes effect if it is invoked before the first connection. Attempting to set the locale after connection will not work
  • Always Encrypted functionalities are only supported using MS ODBC Driver 17
  • Always Encrypted limitations
  • When using sqlsrv_query with Always Encrypted feature, SQL type has to be specified for each input (see here)
  • No support for inout / output params when using sql_variant type

Known Issues

  • Connection pooling on Linux doesn't work properly when using Microsoft ODBC Driver 17
  • When pooling is enabled in Linux or macOS
    • unixODBC <= 2.3.4 (Linux and macOS) might not return proper diagnostics information, such as error messages, warnings and informative messages
    • due to this unixODBC bug, fetch large data (such as xml, binary) as streams as a workaround. See the examples here
  • Connection with Connection Resiliency enabled does not resume properly with Connection Pooling (Issue #678)
  • With ColumnEncryption enabled, calling stored procedure with XML parameter does not work (Issue #674)
  • Cannot connect with both Connection Resiliency enabled and ColumnEncryption enabled (Issue #577)
  • With ColumnEncryption enabled, retrieving a negative decimal value as output parameter causes truncation of the last digit (Issue #705)
  • With ColumnEncryption enabled, cannot insert a double into a decimal column with precision and scale of (38, 38) (Issue #706)
  • With ColumnEncryption enabled, when fetching decimals as output parameters bound to PDO::PARAM_BOOL or PDO::PARAM_INT, floats are returned, not integers (Issue #707)

Survey

Let us know how we are doing and how you use our driver by taking our pulse survey: https://aka.ms/mssqlphpsurvey

Get Started

Direct downloads can be found on the Github release tag.

David Engel

Audit SQL Server stop, start, restart


In this article, Application Development Manager Steve Keeler outlines an approach for determining the domain identity of a user who has initiated a stop, start, or restart request on SQL Server services. Although SQL Server contains server and database auditing functionality as part of the product, this cannot be used to determine the identity of a user changing the service state of a SQL Server instance since that operation is occurring at the system rather than database level.


I recently worked with a customer to help resolve an issue with Team Foundation Server where collection databases were not coming online following a service restart. While working on this issue, the following question was posed: "how can we identify the user(s) responsible for stopping, starting, and restarting SQL Server services?".

Preliminary investigation into using SQL Server audit functionality yielded no solution. Microsoft database specialists confirmed that the closest auditing SQL Server could provide for service restarts would tie that operation to the privileged 'sa' identity. Liju Varghese, one of Microsoft's Premier Field Engineers specializing in security, provided a quick overview of how to implement service-level auditing.

The following sections provide details on the settings required at the operating system level to audit service management operations, in this case for the SQL Server service. This is done using group policy objects.


Continue reading here.


Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

Get Community driven Docker images for Web app for Containers


You can now find community-driven Docker images to try out on Web App for Containers on GitHub. These images follow best practices for Web App for Containers and include SSH for debugging purposes.

How to deploy

These Docker images can be found on Docker Hub at hub.docker.com/r/appsvcorg. Use the latest tag for the most recent version of the image when deploying it to a web app on Azure.

To deploy your application using these community images, follow the steps below:

  • Log in to the Azure portal
  • Create a new web app from the Web App for Containers template
  • Under Configure container, select
    • Image source: Docker Hub
    • Repository access: public
    • Docker image name in this format: appsvcorg/django-python:0.1 or appsvcorg/django-python:latest
  • Click Create to start the deployment
  • View the README.md for the Docker image you selected on GitHub to see whether additional configuration is needed after the web app is created.

How to contribute

To make sure your Docker image is included, please follow these guidelines. Any Docker image that is out of compliance will be added to the blacklist and removed from the Docker Hub repository https://hub.docker.com/r/appsvcorg.

Here is the end-to-end process flow for contributing to this Docker Hub repository.

When you contribute a new Docker image, as the owner of that image your responsibilities include:

  • reviewing issues reported against the Docker image on GitHub
  • fixing and resolving bugs or compliance issues with the Docker image
  • keeping the Docker image up to date

Get started on how to contribute to the GitHub repository.

How to remove a Docker image from Docker Hub

Docker images can be removed from the Docker Hub and GitHub repositories when either of the two cases below applies:

  • The owner/primary maintainer no longer supports the Docker image and does not wish to maintain it. Please report this to appgal@microsoft.com to have the image removed from the Docker Hub and GitHub repositories.
  • The Docker image is outdated, or has bugs that have remained unresolved for more than 3 months.

How to report issues

If you want to report issues, please open an issue here. Provide all the necessary information as shown in this template so we can help resolve the issue.

Unit Testing Your JavaScript Code


In a recent post from his blog, Premier Developer Consultant Jim Blizzard discusses how to set up Visual Studio 2017 to run JavaScript-based unit tests.


This week, I demonstrated to a client how they could write unit tests in JavaScript to test their JavaScript code by leveraging Karma, Jasmine, and Chutzpah. The unit tests show up in Test Explorer just like unit tests written in C# do. Setting things up isn’t very difficult and can be completed in just a few minutes.

Let’s take a look at how you can do it in your environment while we start to create a JavaScript library that calculates the score of a bowling game.

Continue reading more on Jim’s blog post.

Important Updates About Us and Access Control Service


UPDATE: Access Control Service will be deprecated on November 7, 2018 and the ability to create new Access Control Service namespaces will be stopped on May 1, 2018 as part of this deprecation process. This includes previously whitelisted user Subscription IDs.

For additional information please take a look at this blog post.

For Service Bus, Event Hubs, and Relay customers that currently use Access Control Service (ACS) namespaces, you can get guidance on how to migrate in the following articles:

Other migration guidance about ACS can be found here.

We highly suggest you begin planning and executing on a migration strategy today.

PIX 1803.25 – GPU Occupancy, CPU sampling, automatic shader PDB resolution, and more


Today we released PIX 1803.25 which includes numerous new and updated features:

  • GPU Occupancy provides detailed insight into how shader workloads execute on the GPU. As announced at GDC we have collaborated with NVIDIA to bring console-level performance details about how shaders execute on the hardware to PIX on Windows. Many thanks to our partners at NVIDIA for helping us enable this great feature in PIX on Windows.
    • While the Execution Duration timeline lane shows the total execution time for draws and other rendering operations, the new Occupancy lane shows the associated work as it moves through the GPU’s rendering pipeline. You’ll see how work is broken into vertex shader work, pixel shader work, and so forth giving you a much more accurate view of how the hardware handles your rendering.
    • The Occupancy lane shows VS, HS, DS, GS, PS, and CS work corresponding to the different pipeline stages as well as a stage labeled Internal which allows GPU vendors to account for work that doesn’t map to any of the conventional pipeline stages.
    • To collect GPU occupancy data you have to Collect Timing Data first. Once the timing data is available, click the Enable button in the Occupancy lane to collect the data and populate the view.
    • This feature is currently only available on NVIDIA GPUs and it requires an updated driver. Please make sure to get version 391.35 or later to use this feature.
    • We’re working on surfacing this information for other GPUs as well, so stay tuned for updates.
  • Timing Captures now include an option to collect and analyze CPU samples.
    • PIX now includes a CPU sampling profiler that can optionally be run when taking a timing capture. Viewing CPU samples is useful for determining what code is running on a thread or core for portions of your title that either have been sparsely instrumented with PIX events or not instrumented at all. The integration of samples into timing captures improves the iteration time of your profiling tasks because you can now get a sense of what code is executing at any point in time without having to go back and add additional instrumentation to your title.
  • Timing captures now include the ability to visualize the execution times of individual functions in your title.
    • Timing captures now allow you to track the execution of your title’s functions. You can select which functions to track by viewing callstacks for samples, by analyzing an aggregate view of the samples, or by selecting functions from your title’s PDBs. Each call to the functions you track is displayed on an additional lane per thread (or core) in the capture. As with the ability to collect CPU samples, the ability to track functions improves your iteration time by drastically reducing the need to add instrumentation to your title, rebuild, and redeploy.
  • Automatic shader PDB resolution. We’ve improved how PIX resolves shader debug information (shader PDBs) and described the process for how you can set up your build system to support this. For more information see the documentation for this feature.
    • Your build system can use a suggested, unique name for each shader’s PDB data, and can then strip and store the debug information in a dedicated file. If you follow this process PIX can automatically discover and resolve your shader PDBs and thus provide much better support for inspecting, editing, and debugging shaders.
  • The GPU Memory Usage tab in timing captures now tracks Pipeline States, Command Allocators, and Descriptor Heaps as well.
  • Improved TDR debugging support.
    • We have fixed issues for titles using ExecuteIndirect.
  • Improvements to Pipeline view and resource history.
    • The Pipeline view is now much faster to use. PIX tracks resource access for the entire capture the first time you access the Pipeline or Resource History views. Once this is done these views update near-instantaneously when selecting a new event.
    • Resource history now accurately reflects which bound resources were accessed by shaders.
    • Shader access tracking now also supports DXIL shaders.
  • The Shader Debugger now supports debugging of geometry shaders.
  • Support for buffered IO in the File IO profiler
    • The file IO profiler now displays file accesses from your title even if they’ve been satisfied by the Windows disk cache. A new IO Event Type column in the event list lets you choose between viewing buffered events, non-buffered events or both.
  • DirectX Raytracing (DXR) support. This release also adds enhancements to the support for the new experimental DXR features we released to support the recent GDC announcement. Please make sure to check the dedicated release note and documentation for details. New features in this release:
    • You can now view samplers for DispatchRays calls.
    • Fixed global root parameters for DispatchRays calls when a graphics PSO is bound.
  • Fixed several bugs. Thanks to all of you out there who reported issues!

As always, please let us know if you have feedback on any of the features in PIX on Windows.

GPU occupancy view (screenshot)

The screenshot above shows how PIX on Windows visualizes GPU occupancy.

  1. Vertex shader work for this event.
  2. Pixel shader work for events other than this one.
  3. Pixel shader work for this event.

CPU sampling in a timing capture (screenshot)

Power BI Tricks, Tips and Tools from the owners of PowerBI.Tips Mike Carlo and Seth Bauer


Power Tricks, Tips and Tools from the owners of PowerBI.Tips

In this very special webinar, the owners of PowerBI.Tips and Power BI MVPs, Seth Bauer and Mike Carlo, will share their huge grab bag of Power Tricks, Tips and Tools they have published to http://PowerBI.Tips over the last 18 months.
Demos include their theme generator, adding data types within the query editor, and their latest offering, Power BI layouts (with a tour of their latest layout, "Cool Blue").

When: 3/28/2018 10AM PST

Where: https://www.youtube.com/watch?v=fnj1_e3HXow


Installing the HoloLens RS4 Preview


This article is a rough translation of https://docs.microsoft.com/ja-jp/windows/mixed-reality/hololens-rs4-preview.

A HoloLens preview of RS4, the next version of Windows 10, has been released. Since this is still a preview build, install it at your own risk.

By downloading and using the HoloLens RS4 Preview, you accept the HoloLens RS4 Preview End User License Agreement (EULA).

Installing this preview erases all content and apps on the HoloLens and returns it to factory condition. Being a preview build, it may also contain bugs. Use it only if you know HoloLens well and are comfortable applying further updates later.

Before installing this preview, download the latest version of the Windows Device Recovery Tool and enroll your HoloLens in the Windows Insider Preview program.

HoloLens RS4 Preview package

Download it here. Extracting the archive yields two files:

  • rs4_release_svc_analog.retail.10.0.17123.1004.ffu: the HoloLens RS4 Preview image
  • HoloLens 2018 Preview - End User License Agreement (EULA).pdf: the license agreement (EULA)

Installing the preview

  1. Start your retail HoloLens (Windows Holographic 10.0.14393) and opt the device in to Insider Preview builds so that the RS4 preview can be applied.
    • Open the Settings app -> Update & Security -> Get Insider Preview builds -> Get started.
    • Select Restart to apply the Insider Preview build, restart the device, and wait for it to come back up.
    • If anything is unclear, see the Reset & Recovery instructions.
  2. Install the Windows Device Recovery Tool (WDRT) from https://aka.ms/wdrt. The version should be 3.14.07501 (or later).
  3. Flash the preview OS image using the Windows Device Recovery Tool.
    1. Launch the Windows Device Recovery Tool from the Start menu or its desktop shortcut.
      WDRT shortcut
    2. Connect the HoloLens over USB, and once the Windows Device Recovery Tool detects it, select Microsoft HoloLens.
    3. Select Manual Package Selection at the bottom of the screen and choose the downloaded .ffu OS image (so the downloaded image is flashed directly).
    4. Verify that the Local Package version is 10.0.17123.1003 (or later), then press the Install Software button to start installing the OS.
    5. A WARNING appears stating that this process erases everything on the HoloLens device; press Continue to accept.
    6. Installation takes several minutes, during which a progress bar is displayed.

      Note that the display then changes from the progress bar to "Waiting for Device to boot", and the HoloLens itself shows spinning gears while it updates.
    7. Once installation finishes, the device restarts; press the Finish button to end the process.
    8. To check the new OS version, select the device again in the Recovery Tool and look at the Device Info page.
  4. During HoloLens setup, sign in with your personal or work account and enjoy the new features.

New features

  • Automatic placement of 2D and 3D content at launch (no more one-tap placement on startup)
  • Windows can be moved, rotated, and resized without entering Adjust mode
  • 2D windows can be enlarged horizontally only
  • Expanded voice commands (Go to Start, Move this)
  • Updated Holograms and Photos apps
  • Improved Mixed Reality Capture (hold volume up + down for 3 seconds to start recording)
  • Audio improvements
  • File Explorer can be used inside HoloLens
  • Easier access to HoloLens photos, videos, and documents from a desktop PC
  • Support for browser-based wireless LAN authentication during setup

Updates for developers

  • Improved Spatial Mapping
  • Automatic focus point selection using the depth buffer
  • Holographic Reprojection can now be disabled
  • APIs report in more detail whether an app is running on HoloLens or on an immersive MR device

New features for enterprises

  • Multiple Azure AD accounts can be used
  • The Wi-Fi network can be changed at sign-in
  • Work accounts can be added to a personal Microsoft account more easily
  • Mail can sync without an MDM environment

Information for IT Pros

  • The new OS name for the Commercial Suite is Windows Holographic for Business
  • Setup is configurable (calibration can be hidden during initial setup)
  • Windows Configuration Designer
  • Support for bulk Azure AD tokens
  • Provisioning packages can be created for the Developer CSP
  • Assigned access for kiosk mode
  • Setup log collection
  • Relaxed handling of local account password expiration
  • Improved MDM sync status and details

Known issues

  • Several users have reported a problem with the Windows Insider Program settings. If it happens to you, capture the bug in the Feedback Hub and reflash the device.

For developers

Please send feedback and report issues

  • Please send feedback and report problems through the Feedback Hub on the HoloLens. The Feedback Hub can also report detailed internal diagnostics, which helps problems get resolved quickly.
  • Note that the Feedback Hub will ask for permission to use your Documents folder; please answer "Yes".

Questions and support

Leveraging the new Time Travel Trace API in Debugging tools to find when one or more SharePoint event happened


In my previous post, I showed a proof-of-concept script to list all occasions on which a process opened a file. JavaScript is easy to program and works for most cases; however, on some occasions you need to access resources not available from JavaScript, and only a full-fledged debugging extension will do. In this post I will show some highlights of a debugging extension using the new Time Travel Debugging and the new API, which at the time of writing was still in preview and may change before final release. The extension contains some other commands, as I wrote this for training purposes and decided to leave it "as is", so if you are only interested in writing your own extension, you can get some ideas from it. The command for Time Travel Trace (TTD) is "!idnauls". The idea is to list all occasions when SharePoint logs something and show the time, event id (tag), TTD position, category, severity, and message of the log entries. It can also filter by tag, category, and message. The logic behind this is explained in a previous post. This command only works with TTD dumps (.run) and requires the new debugging tool, which is also in preview. There is no need to use this extension for any end other than learning; NetExt includes a similar command (!widnauls) for this purpose. Special thanks to Ken Sykes and Bill Messmer for their help with the new API.

Download project here.

The command to enumerate all calls to SharePoint log

Knowing that most ULS logging is done by calling one of these two exported functions, "onetnative!ULSSendFormattedTrace" and "Microsoft_Office_Server_Native!ULSSendFormattedTrace", and that tag ai108 translates to 0x21b6a2 per my previous post, this command does the trick:

dx /g @$cursession.TTD.Calls("onetnative!ULSSendFormattedTrace", "Microsoft_Office_Server_Native!ULSSendFormattedTrace").Where(c => c.Parameters[0] == 0x21B6A2).Select(c=> new { param1 = c.Parameters[0], param2 = c.Parameters[1], param3 = c.Parameters[2], param4 = c.Parameters[3], start = c.TimeStart, sequence = c.TimeStart.Sequence, steps = c.TimeStart.Steps } )

Parameters

  • Parameters[0] – Contains the tag in numeric form and is used to filter
  • Parameters[1] – Contains the id of the product/category. This information is in the SPLogLevel object, and I used this snippet to get the code id and description:

Add-PSSnapin Microsoft.SharePoint.Powershell -ErrorAction SilentlyContinue

$logLevels = (Get-SPLogLevel | Sort Id)
$lines = New-Object 'System.Collections.Generic.List[string]';

foreach($level in $logLevels)
{
    $lines.Add("`tcatMap[0x$($level.Id.ToString(""x""))]=""$($level.Area)|$($level.Name.Replace(""|"", "":""))"";");
    Write-Host "`tcatMap[0x$($level.Id.ToString(""x""))]=""$($level.Area)|$($level.Name.Replace(""|"", "":""))"";";
}

$lines | Out-File spcat.txt

  • Parameters[2] – Contains the severity, which is defined here.
  • Parameters[3] – Contains the pointer to the message string. There is a caveat here. A TTD string is only resolved when the context moves to the time of occurrence.

Logic in a nutshell

  • Run the command to list all the times when one of the two functions is called.
  • Go to each position and retrieve the message, since it can only be read once the context has moved to that position.
  • Calculate the current time (on the target machine).
  • Translate the product/category id into a string using a simple table (this does not require moving the context)
  • Translate the severity id into a string (also no context change)
  • Print the result with the TTD position

Extension command code

#define IfFailedReturn(x) if(FAILED(x)) return E_FAIL;

void DisplayFound(std::string &FullMessage, std::string &Part)
{
    if (Part.size() == 0)
    {
        g_ExtInstancePtr->Out("%s", FullMessage.c_str());
        return;
    }
    size_t i = 0;
    size_t p = 0;
    while (i < FullMessage.size() - Part.size())
    {
        p = FullMessage.find(Part, i);
        if (p == std::string::npos)
        {
            g_ExtInstancePtr->Out("%s", FullMessage.substr(i).c_str());
            break;
        }
        g_ExtInstancePtr->Out("%s", FullMessage.substr(i, p - i).c_str());
        g_ExtInstancePtr->Dml("<col fg=\"wbg\" bg=\"srccmnt\">%s</col>", Part.c_str());
        i = p + Part.size();
    }
}

HRESULT MoveTo(IModelObject *spStart)
{
    //
    // SeekTo is a key on the object just like anything else. The value of the key is a method.
    //
    CComPtr<IModelObject> spSeekToMethod;
    IfFailedReturn(spStart->GetKey(L"SeekTo", &spSeekToMethod, nullptr));

    //
    // Before we arbitrarily go about using it as a method, do some basic validation.
    //
    ModelObjectKind mk;
    IfFailedReturn(spSeekToMethod->GetKind(&mk));
    if (mk != ObjectMethod)
    {
        return E_FAIL;
    }

    //
    // ObjectMethod indicates that it is an IModelMethod packed into punkVal. You can QI to be extra
    // safe if desired.
    //
    VARIANT vtMethod;
    IfFailedReturn(spSeekToMethod->GetIntrinsicValue(&vtMethod));
    //ASSERT(vtMethod.vt == VT_UNKNOWN); // guaranteed by ObjectMethod
    CComPtr<IModelMethod> spMethod; // or whatever mechanism you want to guarantee the variant gets cleared. variant_ptr, ...
    spMethod.Attach(static_cast<IModelMethod *>(vtMethod.punkVal));

    //
    // Call the method (passing no arguments). The result here is likely to be ObjectNoValue (there is no return value).
    //
    CComPtr<IModelObject> spCallResult;
    IfFailedReturn(spMethod->Call(spStart, 0, nullptr, &spCallResult, nullptr));
    return S_OK;
}

EXT_COMMAND(idnauls,
    "Command to list ULS position and tag and can be filtered by message or category",
    "{nomessage;b,o;;Do not show the ULS log message (faster processing).}"
    "{tag;s,r;;Tag to search for (e.g.: -tag b4ly). Use * for all tags. Required}"
    "{category;b,o;;Search text in Category or Product and not in message (e.g. -category Claims). Faster processing. Severity not searched}"
    "{message;b,o;;Search text in message and not in category or product (e.g. -message disk is full). Slower processing.}"
    "{;x,o;;Optional filter for message (-message) or category (-category) (e.g.: -message disk is full). Must be the last parameter}")
{
    wasInterrupted = false;
    UINT64 startTime, endTime;

    std::string tag = GetArgStr("tag");
    std::string mess;
    if (HasUnnamedArg(0))
        mess = GetUnnamedArgStr(0);
    bool nomess = HasArg("nomessage");
    bool message = HasArg("message");
    bool catonly = HasArg("category");
    if (catonly && message)
    {
        Out("Error: You have to use either -category or -message. Never both\n\n");
        Out("No search was performed\n");
        return;
    }
    if (message && tag == "*")
    {
        Out("Warning: When you combine -tag * and -message, it creates a very inefficient query.\n");
        Out(" -tag * will retrieve all ULS log entries and -message will require a move to an iDNA position every time.\n");
        Out(" -tag <tag> will only retrieve the ULS logs with this tag and then move to the position to retrieve the message.\n");
        Out("Information: Notice that -tag * and -category is ok and still very fast as category is also filtered without moving to the position.\n\n");
    }
    if ((catonly || message) && mess.size() == 0)
    {
        Out("Error: -category or -message require a filter pattern\n");
        Out("Example: !idnauls -tag ag9cq -message User was authenticated\n");
        Dml("3dbba3:1254\tSharePoint Foundation\tClaims Authentication\tHigh\tag9cq\t<b>User was authenticated</b>. Checking permissions.\n\n");
        Out("Example: !idnauls -tag * -category Web Content Management\n");
        Dml("3de9ae:14c0\t<b>Web Content Management</b>\tPublishing Cache\tHigh\tag0ld\n");
        Out("No search was performed\n");
        return;
    }
    map<int, int> catVector;
    if (catonly)
    {
        catVector = SPCategories::GetListAreaName(mess);
        if (catVector.size() == 0)
        {
            Out("No category/product contains '%s'\n", mess.c_str());
            Out("No search was performed\n");
            return;
        }
        Out("Warning: The string '%s' will only be searched on Category/Product, not in message\n", mess.c_str());
        mess.clear();
    }
    if (tag != "*" && (tag.size() < 4 || tag.size() > 5))
    {
        Out("Tag: '%s' is invalid\n", tag.c_str());
        Out("It can be either '*' for all or be between 4 and 5 bytes\n");
        return;
    }
    unsigned int tagBin = StrToTag(tag);
    if (tagBin == 0 && tag != "*")
    {
        Out("Tag: '%s' is invalid\n", tag.c_str());
        Out("It does not contain a valid tag sequence\n");
        return;
    }
    if (tag == "*")
    {
        swprintf_s(Buffer, MAX_MTNAME, L"@$cursession.TTD.Calls(\"onetnative!ULSSendFormattedTrace\", \"Microsoft_Office_Server_Native!ULSSendFormattedTrace\").Select(c=> new { param1 = c.Parameters[0], param2 = c.Parameters[1], param3 = c.Parameters[2], param4 = c.Parameters[3], start = c.TimeStart, sequence = c.TimeStart.Sequence, steps = c.TimeStart.Steps } )");
    }
    else
    {
        swprintf_s(Buffer, MAX_MTNAME, L"@$cursession.TTD.Calls(\"onetnative!ULSSendFormattedTrace\", \"Microsoft_Office_Server_Native!ULSSendFormattedTrace\").Where(c => c.Parameters[0] == 0x%x).Select(c=> new { param1 = c.Parameters[0], param2 = c.Parameters[1], param3 = c.Parameters[2], param4 = c.Parameters[3], start = c.TimeStart, sequence = c.TimeStart.Sequence, steps = c.TimeStart.Steps } )", tagBin);
    }
    std::wstring query(Buffer);
#if _DEBUG
    Out("%s = ", tag.c_str());
    Out("%x\n", tagBin);
    Out("dx %S\n", query.c_str());
#endif
    CComPtr<IHostDataModelAccess> client;
    HRESULT Status;
    REQ_IF(IHostDataModelAccess, client);
    CComPtr<IDebugHost> pHost;
    CComPtr<IDataModelManager> pManager;
    if (FAILED(client->GetDataModel(&pManager, &pHost)))
    {
        Out("Data Model could not be acquired\n");
        return;
    }
    CComPtr<IDebugHostEvaluator2> hostEval;
    CComPtr<IModelObject> spObject;
    pHost->QueryInterface(IID_PPV_ARGS(&hostEval));
    startTime = GetTickCount64();
    if (!SUCCEEDED(hostEval->EvaluateExtendedExpression(USE_CURRENT_HOST_CONTEXT, query.c_str(), nullptr, &spObject, nullptr)))
    {
        Out("Expression could not be evaluated\n");
        return;
    }
    CComPtr<IModelObject> pListOfBreaks;
    CComPtr<IIterableConcept> spIterable;
    if (SUCCEEDED(spObject->GetConcept(__uuidof(IIterableConcept), (IUnknown**)&spIterable, nullptr)))
    {
        CComPtr<IModelIterator> spIterator;
        if (SUCCEEDED(spIterable->GetIterator(spObject, &spIterator)))
        {
            //
            // We have an iterator. Error codes have semantic meaning here. E_BOUNDS indicates the end of iteration. E_ABORT indicates that
            // the debugger host or application is trying to abort whatever operation is occurring. Anything else indicates
            // some other error (e.g.: memory read failure) where the iterator MIGHT still produce values.
            //
            std::vector<UlsInstance> queryResult; // It will store the list of parameters
            UINT32 Index = 0;
            for (;;)
            {
                CComPtr<IModelObject> pBreakItem;
                CComPtr<IKeyStore> spContainedMetadata;
                HRESULT hr = spIterator->GetNext(&pBreakItem, 0, nullptr, &spContainedMetadata);
                if (hr == E_BOUNDS || hr == E_ABORT)
                {
                    break;
                }
                if (FAILED(hr))
                {
                    Out("There was a failure at an Item\n");
                    continue;
                    //
                    // Decide how to deal with failure to fetch an element. Note that pBreakItem *MAY* contain an error object
                    // which has detailed information about why the failure occurred (e.g.: failure to read memory at address X).
                    //
                }
                //
                // Read the values
                //
                CComPtr<IModelObject> tag;
                CComPtr<IModelObject> SevLevel;
                CComPtr<IModelObject> Category;
                CComPtr<IModelObject> Message;
                CComPtr<IModelObject> Start;
                CComPtr<IModelObject> Sequence;
                CComPtr<IModelObject> Steps;
                VARIANT vt_tag, vt_sevlevel, vt_category, vt_message, vt_sequence, vt_steps;
                if (FAILED(hr = pBreakItem->GetKeyValue(L"param1", &tag, NULL /* &spContainedMetadata */))) { Out("Error reading param1"); continue; }
                hr = tag->GetIntrinsicValue(&vt_tag);
                //hr = tag->GetIntrinsicValueAs(VT_INT_PTR, &vt_tag);
                if (FAILED(hr = pBreakItem->GetKeyValue(L"param2", &Category, NULL /* &spContainedMetadata */))) { Out("Error reading param2"); continue; }
                hr = Category->GetIntrinsicValue(&vt_category);
                if (FAILED(hr = pBreakItem->GetKeyValue(L"param3", &SevLevel, NULL /* &spContainedMetadata */))) { Out("Error reading param3"); continue; }
                hr = SevLevel->GetIntrinsicValue(&vt_sevlevel);
                if (FAILED(hr = pBreakItem->GetKeyValue(L"param4", &Message, NULL /* &spContainedMetadata */))) { Out("Error reading param4"); continue; }
                hr = Message->GetIntrinsicValue(&vt_message);
                if (FAILED(hr = pBreakItem->GetKeyValue(L"start", &Start, NULL /* &spContainedMetadata */))) { Out("Error reading start"); continue; }
                //hr = Start->GetIntrinsicValue(&vt_start); // It fails here because the type is ObjectSynthetic
                if (FAILED(hr = pBreakItem->GetKeyValue(L"sequence", &Sequence, NULL /* &spContainedMetadata */))) { Out("Error reading sequence"); continue; }
                hr = Sequence->GetIntrinsicValue(&vt_sequence);
                if (FAILED(hr = pBreakItem->GetKeyValue(L"steps", &Steps, NULL /* &spContainedMetadata */))) { Out("Error reading steps"); continue; }
                hr = Steps->GetIntrinsicValue(&vt_steps);
                if (IsInterrupted())
                {
                    break;
                }
                UlsInstance obj;
                obj.Category = vt_category.uintVal;
                obj.Message = vt_message.llVal;
                obj.SevLevel = vt_sevlevel.uintVal;
                obj.Sequence = vt_sequence.uintVal;
                obj.Steps = vt_steps.llVal;
                obj.tag = vt_tag.uintVal;
                obj.Index = Index;
                if (catVector.size() > 0)
                {
                    if (catVector.find((int)obj.Category) == catVector.end())
                    {
                        continue;
                    }
                }
                // Only move if necessary
                bool show = true;
                string fullMess;
                if ((mess.size() > 0 || !nomess) && SUCCEEDED(MoveTo(Start)))
                {
                    CComBSTR stringConv;
                    CComPtr<IDebugHostContext> context;
                    CComPtr<IDebugHostMemory> memory;
                    if (SUCCEEDED(hr = pHost->QueryInterface(__uuidof(IDebugHostMemory), (void**)&memory)))
                    {
                        ULONG64 BytesRead = 0;
                        Location loc;
                        loc.HostDefined = 0;
                        loc.Offset = obj.Message;
                        if (SUCCEEDED(hr = memory->ReadBytes(USE_CURRENT_HOST_CONTEXT, loc, Buffer, MAX_MTNAME * 2, &BytesRead)))
                        {
                            Buffer[MAX_MTNAME - 1] = L'\0';
                            fullMess = CW2A(Buffer);
                            if (mess.size() > 0)
                            {
                                show = fullMess.find(mess) != std::string::npos;
                            }
                        }
                    }
                    if (hr != S_OK)
                    {
                        fullMess = "*** Unable to read memory ***";
                        show = true;
                    }
                }
                if (show)
                {
                    Index++;
                    Dml("<link cmd=\"!tt %S\">%S</link>\t", obj.IDnaPosition().c_str(), obj.IDnaPosition().c_str());
                    string area;
                    string prod;
                    string sev = SPCategories::GetSevLevel(static_cast<int>(obj.SevLevel));
                    SPCategories::GetAreaName(obj.Category, area, prod);
                    if (!nomess)
                    {
                        SYSTEMTIME time;
                        if (!GetTime(time, true))
                        {
                            Out("??/??/???? ??:??:??.??\t");
                        }
                        else
                        {
                            Out("%02i/%02i/%04i %02i:%02i:%02i.%02i\t", time.wMonth, time.wDay, time.wYear, time.wHour,
                                time.wMinute, time.wSecond, time.wMilliseconds / 10);
                        }
                    }
                    Out("%s\t", area.c_str());
                    Out("%s\t", prod.c_str());
                    Out("%s\t", sev.c_str());
                    Out("%s\t", TagToStr(obj.tag).c_str());
                    if (!nomess)
                    {
                        if (mess.size() > 0 && !catonly)
                            DisplayFound(fullMess, mess);
                        else
                            Out("%s", fullMess.c_str());
                    }
                    queryResult.push_back(obj);
                    Out("\n");
                }
            }
            Out("%u Instances\n", Index);
            endTime = GetTickCount64();
            Out("Search took %f seconds\n", (float)(((float)endTime - (float)startTime) / (float)1000));
        }
    }
}

Example 1 – Looking for a particular tag and message

Example 2 – Listing all instances

Experiencing Data Access Issue in Azure and OMS portal for Azure Log Analytics – Fairfax – 03/27 – Investigating

Initial Update: Tuesday, 27 March 2018 23:54 UTC

We are aware of issues with NPM data in the OMS portal and the Fairfax Azure portal for Azure Log Analytics and are actively investigating. All customers may experience issues while accessing NPM data in the OMS portal and Azure portal.

The following data types are affected: NPM data in the OMS portal and Azure portal.
Work Around: None

  • Next Update: Before 03/28 03:00 UTC

We are working hard to resolve this issue and apologize for any inconvenience.
-Rahim

Microsoft Imagine Cup 2018 – Regional Final Schedule


Dear Imagine Cup Participants

 

Thank you for your submission. We have come up with a presentation schedule for teams who are interested in going to one of our 4 onsite locations to present their projects in 15-minute slots. Below you will find this schedule for every team that has registered.

 

Please, if you find your team name mentioned in a region that is too far for you (for example, you are from Islamabad or Lahore and the region assigned to you is Karachi), get in touch with Muhammad Sohaib (v-musoha@microsoft.com / 0343-3555716) ASAP so that you can be assigned a new location.

 

Regional contact names, host locations, and their details are given below for each region.

 

In case of any general queries or concerns, please reach out immediately to Muhammad Sohaib at the contact details provided. For region-specific queries, please reach out to the regional contacts given below. Please note, these timings are for ONSITE presentations. If a team is not able to come for an onsite presentation, their online submitted video and other deliverables will be judged and marked accordingly.

Note: If you don't see your team listed below, that means your submission was incomplete. Please get in touch with Sohaib; we are looking at how we can accommodate such teams at this point in time. When you reach out, please MAKE SURE YOU LET US KNOW WHICH REGION / UNIVERSITY you belong to.

Contact Names & Host Locations:

 

Karachi:

Higher Education Regional Center Karachi – Muhammad Sohaib: 0343-3555716 / v-musoha@microsoft.com

 

Lahore:

Higher Education Regional Center Lahore – Sheikh Rizwan: 0312-5166755 / v-shrizw@microsoft.com

 

Peshawar:

Pearl Continental Hotel, Peshawar – Sheikh Rizwan: 0312-5166755 / v-shrizw@microsoft.com

 

Quetta:

Serena Hotel, Quetta – Muhammad Sohaib: 0343-3555716 / v-musoha@microsoft.com

 

Team List & Schedule:

 

Karachi:

Team Name | Regional Final Region | Timings | Date
3 Coders | Karachi | 10:15am | 29th March
Ali Ahmed | Karachi | 10:30am | 29th March
BEAMS | Karachi | 10:45am | 29th March
E-Henna | Karachi | 11:00am | 29th March
Faaiz-ul-Hassan | Karachi | 11:15am | 29th March
Factotum | Karachi | 11:30am | 29th March
Hammad ur Rehman, Sohaib Nadeem, Nazneen Kausar | Karachi | 11:45am | 29th March
imaginecodeR | Karachi | 12:00pm | 29th March
IoT Solutions | Karachi | 12:15pm | 29th March
ISU Robotics | Karachi | 12:30pm | 29th March
LimeLite | Karachi | 12:45pm | 29th March
MAAS | Karachi | 1:00pm | 29th March
Mars games | Karachi | 1:15pm | 29th March
ProRecruit | Karachi | 2:30pm | 29th March
Psycaria | Karachi | 2:45pm | 29th March
Reigning Tech(Order Booking System For Masses) | Karachi | 3:00pm | 29th March
Softaych | Karachi | 3:15pm | 29th March
Tabhouse | Karachi | 3:30pm | 29th March
Team atmotech | Karachi | 3:45pm | 29th March
TechyTeam | Karachi | 10:15am | 30th March
charming | Karachi | 10:30am | 30th March
DSU Game Developers | Karachi | 10:45am | 30th March
Team-HYPER-REC | Karachi | 11:00am | 30th March
Code Clone Finders | Karachi | 11:15am | 30th March
Mern Blazers | Karachi | 11:30am | 30th March
Team WSP | Karachi | 11:45am | 30th March
TechUnion | Karachi | 12:00pm | 30th March
Brain Busters | Karachi | 12:15pm | 30th March
Devil's Whisper | Karachi | 12:30pm | 30th March
Logical Processors | Karachi | 2:00pm | 30th March
NeuroSquad | Karachi | 2:15pm | 30th March
Pak Agile | Karachi | 2:30pm | 30th March
Team Ninja | Karachi | 2:45pm | 30th March
TechTor | Karachi | 3:00pm | 30th March
Fast track | Karachi | 3:15pm | 30th March
Humachines | Karachi | 3:30pm | 30th March

 

Lahore:

Team Name | Regional Final Region | Timings | Date
HNS | Lahore | 11:00am | 29th March
Syed Chaos | Lahore | 11:15am | 29th March
The MRI | Lahore | 11:30am | 29th March
Mehar's... | Lahore | 11:45am | 29th March
Queen Bees | Lahore | 12:00pm | 29th March
Star Girls | Lahore | 12:15pm | 29th March
The G Power | Lahore | 12:30pm | 29th March
Camelion | Lahore | 12:45pm | 29th March
Cyber Bullies | Lahore | 1:00pm | 29th March
cybertise | Lahore | 1:15pm | 29th March
Game Developers | Lahore | 2:30pm | 29th March
Garrisonian | Lahore | 2:45pm | 29th March
H & N | Lahore | 3:00pm | 29th March
hotel LGU | Lahore | 3:15pm | 29th March
LGU | Lahore | 3:30pm | 29th March
Lions Heart | Lahore | 3:45pm | 29th March
Power Puff | Lahore | 4:00pm | 29th March
SH | Lahore | 11:00am | 30th March
Spectrum Finders | Lahore | 11:15am | 30th March
Team Farhan | Lahore | 11:30am | 30th March
Team Girls | Lahore | 11:45am | 30th March
TMK | Lahore | 12:00pm | 30th March
Team MNS-UAM | Lahore | 12:15pm | 30th March
Team zee | Lahore | 12:30pm | 30th March
SPARTANS_NTU | Lahore | 2:00pm | 30th March
Tech ninjas | Lahore | 2:15pm | 30th March
The Amigos | Lahore | 2:30pm | 30th March
Gameplay2050 | Lahore | 2:45pm | 30th March
CESTINO - Smart Waste Managment | Lahore | 3:00pm | 30th March
Chaser Express | Lahore | 3:15pm | 30th March
IT Bugs | Lahore | 3:30pm | 30th March
ITI Solutions | Lahore | 3:45pm | 30th March

 

Peshawar:

Team Name | Regional Final Region | Timings | Date
Addonexus | Peshawar | 11:00am | 2nd April
XTECH 10 | Peshawar | 11:15am | 2nd April
Code 4 life | Peshawar | 11:30am | 2nd April
Cusit Data Warriors | Peshawar | 11:45am | 2nd April
IntrecX | Peshawar | 12:00pm | 2nd April
ALI AZIZ | Peshawar | 12:15pm | 2nd April
Civil Rocks 3 | Peshawar | 12:30pm | 2nd April
Fe Amaan | Peshawar | 12:45pm | 2nd April
Lublin Pakistan | Peshawar | 1:00pm | 2nd April
SAINT-NUST | Peshawar | 11:00am | 3rd April
techwork | Peshawar | 11:15am | 3rd April
Usman nazir | Peshawar | 11:30am | 3rd April
CodeDetectives | Peshawar | 11:45am | 3rd April
alpha 10 | Peshawar | 12:00pm | 3rd April
Wec Snake team | Peshawar | 12:15pm | 3rd April

 

Quetta:

Team Name | Regional Final Region | Timings | Date
--- | --- | --- | ---
Abdullah Sabir | Quetta | 11:00am | 2nd April
BUITEMS Computer Engineers | Quetta | 11:15am | 2nd April
ChildBook | Quetta | 11:30am | 2nd April
computer engineer | Quetta | 11:45am | 2nd April
CONE | Quetta | 12:00pm | 2nd April
crime maculation team | Quetta | 12:15pm | 2nd April
Genymotion | Quetta | 12:30pm | 2nd April
Markhors | Quetta | 12:45pm | 2nd April
Nerd Herd | Quetta | 1:00pm | 2nd April
Project cars | Quetta | 11:00am | 3rd April
Team ASB | Quetta | 11:15am | 3rd April
Zalmis | Quetta | 11:30am | 3rd April
Tehreem shfiq, kinza ishfaq, laraib ali | Quetta | 11:45am | 3rd April

 

Experiencing Data Access Issue for Azure Log Analytics – 03/28 – Resolved


Final Update: Wednesday, 28 March 2018 01:22 UTC

We've confirmed that all systems are back to normal, with no customer impact, as of 03/28 01:22 UTC. Our logs show the incident started on 03/27/2018 21:59 UTC, and that during the 3 hours 33 minutes it took to resolve the issue, customers who had configured the Network Monitoring solution in the Fairfax region would have experienced issues, with no recent data being reflected in the dashboard. Impact was limited to the Network Monitoring solution.


Root Cause: This issue was caused by a configuration change in our services.
Lessons Learned: We understand the issue completely, and additional steps have been taken to avoid such occurrences in the future.



We understand that customers rely on Network Performance Monitoring as a critical service, and we apologize for any impact this incident has caused.


-Rahim


[Skype for Business for iOS/Android] – Duplicate entries in missed-call and call history


Good evening. This is the Japan Skype for Business Support Team.

On the Skype for Business for iOS/Android mobile clients, missed-call and call history entries can appear twice. This happens because the entry recorded locally when the client itself receives a call and the entry retrieved via EWS from the notification mail delivered to the Exchange mailbox cannot be matched as the same call, so the two are never merged. The following two environments/scenarios are currently known to trigger this, and in the current version this is the behavior as implemented.

  • Missed-call history saved by the PC client
    This occurs when you are signed in to both the PC client and the mobile client at the same time.
    When Exchange integration is enabled and Unified Messaging is disabled, the PC client saves the missed-call history mail.
    The mobile client retrieves this missed-call history mail via EWS and displays it separately from the locally recorded entry.

  • Call history saved by SSCH (Server Side Conversation History)
    SSCH is a feature added in Skype for Business Server 2015 and Skype for Business Online.
    The UCWA service that serves the mobile client delivers the call history information to the user's mailbox.
    The mobile client retrieves this history mail via EWS and displays it separately from the locally recorded entry.

< Example: duplicated missed-call history >
・The entries are recorded almost simultaneously, so their timestamps match
・The same number is displayed

< Example: duplicated SSCH call history >
・Because SSCH delivery lags, the timestamps differ by a few minutes
・Depending on the PSTN gateway configuration, the SIP domain is appended

If Unified Messaging is enabled so that the missed-call notification mail is delivered by Exchange Server / Exchange Online, the mail is not retrieved via EWS and no duplicate entries appear. On an on-premises Skype for Business Server 2015 it is also possible not to deliver SSCH, which works around the duplicate display. Note that SSCH cannot currently be disabled in Skype for Business Online.

Disclaimer:
The contents of this article (including attachments, linked pages, and so on) are current as of the date of writing and are subject to change without notice.

Tuesday Featured Post: Hesitating? Don't Be Afraid to Ask Questions


"He who asks a question remains a fool for five minutes. He who does not ask remains a fool forever."

Good Day All!

We are back with Tuesday's Featured Post, where we discuss a forum post or thread from the MSDN or TechNet forums and highlight the value it added.

Among the various interesting posts in the forums, my pick is About Unit Testing, asked by Sakura Data in the SQL Server forum. In this post, the Original Poster wanted to perform unit testing on the SQL Server platform and asked which tools are needed to get started.

What grabbed my attention is that there are a lot of people in the community who feel shy about asking simple questions. They think it will look bad and make them seem dumb in the community. I think this forum post is an inspiration for the silent majority in the community to raise their voice and break that myth.

The thread was answered gracefully by Visakh16, who pointed to an article that describes unit testing inside a database project with a step-by-step explanation.

Sometimes community members lack the courage to ask a simple question, and this forum post is a good example for the silent majority in the community to break the myth. Remember, asking "dumb" questions helps you develop courage. Courage is the ability to do something that scares you. As with most fears, the more we face them, the smaller they become.

"You never know the truth. You know 'a' truth."

This forum post shows that, whether a question is simple or not, we community members are always here to help each other find a solution.

Thank You

-Ninja Sabah



Use Microsoft Forms with your favourite and familiar apps


Microsoft Forms, a relatively new app within Office 365, has undergone rapid development and has become a firm favourite as both a classroom and an admin tool for educators using Office 365. Did you know that it's now even easier to use Forms with your colleagues and students seamlessly? Through integration with the Office family, Microsoft Forms can easily collect information from your favourite and familiar apps. Check out the information below from the Forms Team.



Forms for Excel

Forms for Excel, powered by Microsoft Forms, has replaced Excel Survey and builds a live data connection between Microsoft Forms and Excel. The responses you collect in your form will show up, in real time, in your Excel workbook.


 


Forms in Microsoft Teams

You can now access Microsoft Forms directly in Microsoft Teams. Set up a Forms tab to create a new form or insert an existing one, create notifications for your form via a connector, or conduct a quick poll using the Forms bot.


 


Forms web part for SharePoint

SharePoint has been widely used to share ideas and collect feedback. You can now use a Microsoft Forms web part on your SharePoint pages to collect responses or show survey results right on your site.



Find your group forms in portal

The forms you create in Microsoft Teams or SharePoint team sites belong to the Office 365 group. The Forms portal has a new feature, "Recent group forms", which lets you quickly access the group forms you have used recently.



Integrating Microsoft Forms into PowerPoint (under development)

Microsoft Forms' new integration with Microsoft PowerPoint will allow a teacher to easily insert a quiz into a PowerPoint deck. Click the Forms icon in the PowerPoint ribbon and the list of your forms will be shown in the task pane. You can select a pre-created form and embed it in the current slide. Students who view the presentation can fill in the form and submit it without leaving PowerPoint.


Forms integration in PowerPoint is currently being developed and will be available to desktop users of PowerPoint in a few months.

The content above has been repurposed from the Forms blog site; check it out here.


Interested in using Microsoft Forms and want to know how? Complete this course on the Microsoft Educator Community to get started.

Upgrade of SSRS from SQL 2008 R2 to SQL 2012

$
0
0

Yesterday, I encountered a weird scenario in which the SSRS component failed to upgrade from SQL Server 2008 R2 to SQL Server 2012 even though the SQL database engine and all the other components upgraded successfully.

The SSRS component upgrade failed with the error below:

TITLE: Microsoft SQL Server 2012 Setup
------------------------------

The following error has occurred:

A Secure Sockets Layer (SSL) certificate is not configured on the Web site.

 

------------------------------

On checking the summary logs for the upgrade, I found the following entries for the SSRS component:

Feature: Reporting Services - Native
Status: Failed: see logs for details
Reason for failure: An error occurred during the setup process of the feature.
Next Step: The upgrade process for SQL Server failed. Use the following information to resolve the error, and then repair your installation by using this command line: setup /action=repair /instancename=MSSQLSERVER
Component name: SQL Server Reporting Services
Component error code: 0x80131500
Error description: A Secure Sockets Layer (SSL) certificate is not configured on the Web site.

As suggested, I tried to repair the SQL Server instance, and even the repair failed with the error below:

TITLE: Microsoft SQL Server 2012 Setup
------------------------------

The following error has occurred:

The Report Server WMI provider cannot create the virtual directory. This error occurs when you call SetVirtualDirectory and the UrlString is already reserved. To continue, clear all URL reservations by calling RemoveURL and then try again.

 

------------------------------

Now this was a more explanatory error: it indicated that certain URL strings were already reserved before the SetVirtualDirectory function call occurred.

So I opened a command prompt with admin privileges and ran the command netsh http show urlacl to list the reserved URLs.

From the reserved URLs, I removed all those related to Reports and ReportServer using the commands netsh http delete urlacl url=https://....../Reports/ and netsh http delete urlacl url=https://....../ReportServer/.

I then listed the remaining URLs using the same command, netsh http show urlacl, and it no longer listed any URLs related to Reports or ReportServer.
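
For reference, the whole cleanup boils down to the commands below, run from an elevated prompt. The host name here is a placeholder of my own; delete exactly the reserved URLs that the show command prints for your report server:

netsh http show urlacl
netsh http delete urlacl url=https://myserver:443/Reports/
netsh http delete urlacl url=https://myserver:443/ReportServer/
netsh http show urlacl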

When we then tried to repair the SSRS component, the repair succeeded, and the component was eventually upgraded to the SQL Server 2012 build.

 

Hope this helps !! Happy Reporting !!

VSTS/TFS Continuous Deployment to App Service Environment (ASE) after Disabling TLS 1.0

$
0
0

If you are a regular reader of my blog, you will have noticed that I have been spending some time working with Azure App Service Environment (ASE). It is available in Azure Government and should be the technology of choice for government web apps. In a previous blog post, I described how to do CI/CD with ASE, and I have also recommended that you disable TLS 1.0 for your ASE. If you have tried to do a web app deployment from Visual Studio Team Services (VSTS) or Team Foundation Server (TFS) into an ASE after disabling TLS 1.0, you may have noticed that it fails. The problem is that MSDeploy (running on your build agent) tries to use TLS 1.0 to deploy your application, and it fails. In this blog, I will describe this problem so that you can recognize it, and I will also show you how to fix it.

If you deploy an ASE into a virtual network along with a build agent, you can use that build agent from VSTS or TFS to deploy to a Web App in the ASE. The local build agent is needed since the ASE cannot be reached from the hosted build agents in VSTS. The configuration is described here, and it would look something like this:

The JumpBox in the diagram above is only needed to test the setup if you have no other VMs or on-premises machines with access to the virtual network (through VPN or Express Route). If you try to use the agent to deploy without making any modifications to it, you will get an error that looks something like this:

 

 

The specific error text is repeated here:

2018-03-23T17:39:21.4813236Z [command]"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" -verb:sync -source:package='C:\agent\_work\r1\a\dotnetcore-example-ASP.NET Core-CI\drop\s.zip' -dest:contentPath='ase-site',ComputerName='https://ase-site.scm.cloudynerd.us:443/msdeploy.axd?site=ase-site',UserName='$ase-site',Password='********',AuthType='Basic' -enableRule:AppOffline -enableRule:DoNotDeleteRule -userAgent:VSTS_94a19df8-3720-4cbc-8661-facce05aa290_release_1_1_1_1
2018-03-23T17:39:21.9926944Z Info: Using ID 'e97b4322-ee2a-4c6c-9777-04582963a0fe' for connections to the remote server.
2018-03-23T17:39:23.2433126Z ##[error]Failed to deploy web package to App Service.
2018-03-23T17:39:23.2435199Z ##[error]Error: Could not complete the request to remote agent URL 'https://ase-site.scm.cloudynerd.us/msdeploy.axd?site=ase-site'.
Error: The underlying connection was closed: An unexpected error occurred on a send.
Error: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
Error: An existing connection was forcibly closed by the remote host
Error count: 1.

The problem is, as indicated above, that msdeploy.exe is trying to use TLS 1.0. You can fix that by forcing the .NET Framework used by msdeploy.exe to use the "Strong Crypto" option. To do this, create a file, e.g. called strong-crypto.reg, with the following content:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319]
"SchUseStrongCrypto"=dword:00000001

[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319]
"SchUseStrongCrypto"=dword:00000001

Then right-click the file and choose "Merge"; this modifies the registry accordingly. If you then repeat the deployment, it should complete successfully.
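
If you would rather script the change than merge a .reg file by hand (for example, when preparing several build agents), here is a minimal C# sketch of the same edit. This is my own illustration rather than part of the original setup; it assumes the process runs elevated, and Registry.SetValue creates the value if it does not exist yet:

using Microsoft.Win32;

class EnableStrongCrypto
{
    static void Main()
    {
        // The same two keys that strong-crypto.reg touches: the 64-bit and
        // the 32-bit (Wow6432Node) .NET Framework 4.x configuration.
        string[] keys =
        {
            @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319",
            @"HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319"
        };

        foreach (var key in keys)
        {
            // Creates or overwrites the DWORD value SchUseStrongCrypto = 1.
            Registry.SetValue(key, "SchUseStrongCrypto", 1, RegistryValueKind.DWord);
        }
    }
}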

I have published a template for DevOps with ASE, which includes a build agent template. Since the ASE deployed in this scenario has TLS 1.0 disabled, I have modified the configuration of the build agent so that the registry edits are made automatically. This is accomplished in the ConfigureASEBuildAgent.ps1 script, specifically these lines:

        Registry StrongCrypto1
        {
            Ensure      = "Present"
            Key         = "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319"
            ValueName   = "SchUseStrongCrypto"
            ValueType   = "Dword"
            ValueData   = "00000001"
        }

        Registry StrongCrypto2
        {
            Ensure      = "Present"
            Key         = "HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319"
            ValueName   = "SchUseStrongCrypto"
            ValueType   = "Dword"
            ValueData   = "00000001"
        }

In conclusion, disabling TLS 1.0 on an ASE will cause automated deployments with msdeploy.exe to fail. The solution is to enforce the "Strong Crypto" option, and we can achieve that with some registry edits. Let me know if you have questions/comments/suggestions.

 

Curious case of IISExpress error “Failed to register URL” when working with OWINHost

$
0
0

While working with an application built from a bunch of microservices, I ran into this error with IIS Express:

Failed to register URL "http://localhost:3001/" for site "Contoso.Web" application "/". Error description: Cannot create a file when that file already exists. (0x800700b7)

C:\Program Files\IIS Express>iisexpress /config:D:\PROJECTS\api\.vs\config\applicationhost.config /trace:error
Starting IIS Express ...
Initializing the W3 Server Started CTC = -1673430150
W3 Server initializing WinSock. CTC = -1673430134
W3 Server WinSock initialized. CTC = -1673430118
W3 Server ThreadPool initialized (ipm has signalled). CTC = -1673430118
Start listenerChannel http:0
Failed to register URL "http://localhost:3001/" for site "Contoso.Web" application "/". Error description: Cannot create a file when that file already exists. (0x800700b7)
Failed to initialize site bindings
Error initializing ULATQ. hr = 800700b7
Terminating W3_SERVER object
InitComplete event signalled
Process Model Shutdown called
Unable to start iisexpress.

Cannot create a file when that file already exists.
For more information about the error, run iisexpress.exe with the tracing switch enabled (/trace:error).

So IIS Express is complaining that another process is already using port 3001. But I was sure that no other process was using port 3001; I had checked using all the methods explained in my own post, how-to-fix-the-error-port-is-currently-used-by-another-application.

But IIS Express was still not able to bind to port 3001. I did know one thing: this port was also used by one of our microservices. We had a total of four services; one of them ran inside IIS, and the other three ran as OWIN hosts inside different Windows services (using Topshelf).

When I tested my OWIN service on the same port, it was able to bind fine. You can see the OWIN host startup code below:


WebApp.Start(hostAddress, appBuilder =>
{
    // application initialization code
});

This code uses Microsoft.Owin.Hosting to start an OWIN web server at hostAddress; in our case hostAddress was http://localhost:3001.

So it seemed that OWIN was able to bind but IIS Express was not. I tried many things:

  • Ran both IIS Express and the OWIN services as administrator
  • Added a urlacl for http://localhost:3001:

netsh http add urlacl url=http://localhost:3001/ user=Everyone

  • Removed the urlacl:

netsh http delete urlacl http://+:3001/

None of this fixed the IIS Express error. Then it struck me: even though we started the OWIN host using WebApp.Start, it was never stopped anywhere in the code. The process in which the OWIN host was running had exited, but the URL registration never got removed from http.sys.

 

So in order to stop an OWIN host, what you have to do is keep the IDisposable that WebApp.Start returns:


IDisposable webhost = WebApp.Start(hostAddress, appBuilder =>
{
    // application initialization code
});

 

Then, to stop the web host, we have to call Dispose. Since we had this code inside a Windows service, Dispose was called when an exception occurred or when the service was stopped:


webhost.Dispose();

Or rather, the correct way to use an OWIN host is with a using block, as explained here; a minimal sketch follows below.
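
Here is that pattern as a small, self-contained console host. This is my own sketch rather than code from our services; hostAddress and the startup lambda mirror the snippets above:

using System;
using Microsoft.Owin.Hosting;

class Program
{
    static void Main()
    {
        const string hostAddress = "http://localhost:3001";

        // WebApp.Start returns an IDisposable host; the using block
        // guarantees Dispose runs, so the http.sys URL registration is
        // released even if the code inside throws.
        using (WebApp.Start(hostAddress, appBuilder =>
        {
            // application initialization code
        }))
        {
            Console.WriteLine("Listening on " + hostAddress + " - press Enter to stop.");
            Console.ReadLine();
        }
    }
}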

So IIS Express was not able to register the URL because the OWIN service had registered it but had never released it before its process exited.
