Channel: MSDN Blogs

Release Notes for Resource Scheduling Optimization (v2.0.17335.1) – Dynamics 365


Applies to: Field Service and Project Service Automation on Dynamics 365 9.0.x and Dynamics 365 8.2.x

We’re pleased to announce the latest update to Resource Scheduling Optimization (v2.0.17335.1). This blog post lists the new capabilities and bug fixes included in this release.

This release is compatible with both Dynamics 365 8.2.x and Dynamics 365 9.0.x. To apply the update, visit the Applications page in the Admin Center for Dynamics 365 online.

Resource Scheduling Optimization (RSO) engine

Improvements

  • Performance improvements for end-to-end optimization

Bugs

  • Fixed: On Dynamics 365 9.0.x organizations, the Refresh button on the Optimization Request grid on the RSO scope-related schedule board doesn't refresh the grid
  • Fixed: On Dynamics 365 9.0.x organizations, the RSO scope-related schedule board lock icon isn't showing on Edge
  • Fixed: Optimization request fails with the error message “System failed to modify some bookings” even though no bookings are modified during the optimization run
  • Fixed: On Dynamics 365 9.0.x organizations, booking details for optimization don't show unchanged bookings

Deployment app for Resource Scheduling Optimization

Improvements

  • Manual Link step is no longer needed. The admin doesn’t have to explicitly click the Link button after a first-time deployment or when an upgrade finishes. The system automatically links the RSO instance to the selected Dynamics 365 organization. If the link fails or any other exception occurs, the admin can still retry by clicking Link.
  • Automatic rollback if system upgrade fails. The system automatically performs a rollback to the previous version if an upgrade to a newer version fails.

 

For more information:

 

Feifei Qiu

Program Manager

Dynamics 365, Field & Project Service Team


L2TP/IPsec Client Access VPN with Pre-shared key is not working in TMG


Recently I worked on an issue where L2TP/IPsec client access VPN was not working through TMG. The TMG server was configured to use a pre-shared key for the IPsec policy, but the VPN clients were unable to establish a session with the TMG server.

In the Event Viewer, we got the below error:

The following error occurred in the Point to Point Protocol module on port: VPN2-220, UserName: testuser@xxx.com. The connection was prevented because of a policy configured on your RAS/VPN server. Specifically, the authentication method used by the server to verify your username and password may not match the authentication method configured in your connection profile. Please contact the Administrator of the RAS server and notify them of this error.

We verified all the configuration settings on the TMG and client machines but couldn’t find anything wrong. Then we collected the IKE traces from the TMG server and found the below interesting entry in the log:

5020 Auth methods: 1

5021 -- 0 --

5022   Type: Certificate

 

The Type was showing as Certificate instead of Pre-Shared key. Further checking in the RRAS settings, we found that the TMG was unable to save the Pre-Shared key in the RRAS configuration. In RRAS settings, the Pre-shared key was not even showing as enabled.

So we investigated further by collecting various trace logs and finally figured out the TMG server had the below registry key enabled.

HKLM\System\CurrentControlSet\Services\RasMan\Parameters\DisableSavePassword

The value of DisableSavePassword was set to 1, and this was preventing TMG from saving the pre-shared key. After disabling this setting (changing the value back to 0), TMG was able to configure the pre-shared key in RRAS, and VPN clients were able to connect successfully.
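
If you hit the same symptom, a quick way to inspect and reset this value is a short PowerShell snippet like the one below. This is only a sketch: the path and value name are the ones identified above, but run it in an elevated prompt on the TMG server, and the service restart (the RRAS service name is assumed to be RemoteAccess here) should be planned for a maintenance window.

# Registry path and value name identified in the trace analysis above
$regPath = 'HKLM:\System\CurrentControlSet\Services\RasMan\Parameters'

# Show the current value, if it exists
Get-ItemProperty -Path $regPath -Name 'DisableSavePassword' -ErrorAction SilentlyContinue

# Set DisableSavePassword to 0 so TMG can store the IPsec pre-shared key in RRAS
Set-ItemProperty -Path $regPath -Name 'DisableSavePassword' -Value 0 -Type DWord

# Restart RRAS so the change takes effect (assumed service name: RemoteAccess)
Restart-Service -Name 'RemoteAccess'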

Author:

ANIL GEORGE 
Microsoft Security Support Escalation Engineer

Reviewer:

KARTHIK DIVAKARAN 
Microsoft Windows Escalation Engineer

The Intel Management Engine Vulnerability and Surface Devices


Hello, this is Surface commercial support.

 

Intel recently published information about a vulnerability identified in 6th-generation and later Core processors, among other products.

Our Surface engineering team has published the following information on the status of its response to this issue, and this article introduces that content.

 

(Article from our engineering team)

Intel Management Engine Vulnerability and Surface Devices

 

(Summary)

Microsoft is aware of the Intel Management Engine vulnerability (Intel-SA-00086). Intel's vulnerability detection tool currently lists Microsoft Surface devices as vulnerable to this security advisory.

Microsoft has investigated the issue and confirmed the following:

 

  1. Exploiting this vulnerability remotely requires Intel Active Management Technology (AMT). Current Surface devices do not run AMT, so they never allow remote connections to the Management Engine.
  2. Exploiting this vulnerability locally requires the Direct Connect Interface (DCI) over USB, which Surface devices do not provide.

 

For these reasons, we believe that attacks exploiting this vulnerability are significantly mitigated on Surface devices.

We care deeply about ensuring that our devices are trustworthy and secure, and we are working with Intel to develop an update for currently shipping devices. We expect to release this update in the near future.

Cloud computing guide for researchers – Understanding shipping behaviour with Microsoft Azure


Keeping track of global shipping has previously suffered from a lack of data. Current tracking technology has transformed the problem into one of an overabundance of information, as huge amounts of vessel tracking data are slowly becoming available, mostly due to the Automatic Identification System (AIS). Due to the volume of this data, traditional data mining and machine learning approaches are challenged when called upon to decipher the complexity of these environments. Scientists from the Department of Product and Systems Design Engineering at the University of the Aegean and MarineTraffic (the leading vessel tracking website) are using data science in the cloud to transform billions of records of spatiotemporal (AIS) data into information for understanding the patterns of global trade by adopting distributed processing approaches. The research team, led by Assoc. Prof. Dimitris Zissis, is leveraging novel algorithmic approaches and Microsoft Azure to transform Big Data into actionable information for a multitude of maritime stakeholders. Members of the team include: Giannis Spiliopoulos, Researcher at MarineTraffic; Konstantinos Chatzikokolakis, Senior Researcher at MarineTraffic; Elias Xidias, Senior Researcher at the University of the Aegean.

Why?

A well-developed understanding of maritime traffic patterns and trade routes is critical to all seafarers and maritime stakeholders. Unlike roads, shipping lanes are not carved in stone. Their size, boundaries and content vary over space and time, under the influence of external sources. Today we only have a vague understanding of the specific routes vessels follow when travelling between ports, which is an essential metric for calculating any valid maritime statistics and indicators (e.g. trade indicators, emissions and others). From a security perspective, it is necessary for understanding areas of high congestion, so that smaller vessels can avoid collisions with bigger ships. Moreover, an understanding of vessel patterns at scale can assist in the identification of anomalous behaviours and help predict the future location of vessels. Additionally, by combining ship routes with a model that estimates vessel emissions (which depend on travel distance, speed, draught, weather conditions and characteristics of the vessel itself), emissions of e.g. CO2 and NOx can be estimated per ship and per national territory.

From data to knowledge

Data is received through the MarineTraffic system, and for the purposes of this work we use a dataset of approximately 5 billion messages recorded from January to December 2016. As the amount of available spatiotemporal data grows to massive scales, it is becoming clear that applying traditional techniques to AIS data processing can lead to processing times of several days when applied to global datasets of considerable size (the current dataset is over 500 GB). Because of this, for the distributed processing tasks we rely on an HDInsight Azure Spark (version 2.1.0) cluster made up of 6 worker nodes (D4 v2 Azure nodes), each equipped with 8 processing cores and 28 GB RAM, and 2 head nodes (D12 v2 Azure nodes), each equipped with 4 processing cores and 28 GB RAM, summing up to a total of 56 computing cores and 224 GB RAM.

The work is described in detail in Giannis Spiliopoulos, Dimitrios Zissis, Konstantinos Chatzikokolakis, A Big Data Driven Approach to Extracting Global Trade Patterns, Mobility Analytics for Spatio-temporal and Social Data with VLDB 2017, Munich 2017.

Global scale data analysis for researchers

Today, the growing number of distributed sensors and tracking systems is generating overwhelming amounts of high-velocity spatiotemporal data. Processing these datasets is a highly complex task. Traditional state-of-the-art techniques and technologies have proven incapable of dealing with such volumes of loosely structured spatiotemporal data. Towards building more efficient systems, methods of distributing processing and storage across a cluster of computers have been proposed. The trend towards more data (big data) is leading to changes in the computing paradigm, and in particular in where computation happens. A transition is currently taking place from a computing-centric model, where data live on disk and are moved to a central processing unit for computational tasks, towards a data-centric model, where computation is moved to the data.

Using services from Microsoft Azure, the project harnesses the power of the cloud to perform complex analytics on these datasets. Thanks to Microsoft Azure, the researchers are able to reduce the processing time from several days to just a few hours. Find out more in our Cloud Computing Guide for Researchers.

Need access to Microsoft Azure?

There are several ways you can get access to Microsoft Azure for your research. Your university may already make Azure available to you, so first port of call is to speak to your research computing department. There are also other ways for you to start experimenting with the cloud:

Migrating SAP ASE database to Microsoft SQL Server using SSMA


Hi all,

This blog covers the process of migrating SAP ASE databases to Microsoft SQL Server using the SQL Server Migration Assistant (SSMA) tool.

 

A SAP Adaptive Server Enterprise (ASE, formerly Sybase ASE) database can be migrated to Microsoft SQL Server 2008/2008 R2/2012/2014/2016/2017 (on Windows or Linux) or Azure SQL Database using the Microsoft SQL Server Migration Assistant (SSMA) for SAP Adaptive Server Enterprise (ASE) tool. SSMA for Sybase converts ASE database objects to SQL Server database objects, creates those objects in SQL Server, and then migrates data from ASE to SQL Server or Azure SQL Database.

 

The SSMA tool consists of the following:

  • SSMA for SAP ASE client (SSMA Client), which needs to be installed on the client machine
  • SSMA extension pack for the Sybase component, which needs to be installed on the SQL Server machine

 

You install the client application on the computer from which you will perform the migration steps. You must install the extension pack files on the computer that is running SQL Server where migrated databases will be hosted.

 

In this blog, we are covering the below:

  • Prerequisites for Installing SSMA tool
  • Installing the SSMA for Sybase (Client Component and the extension pack)
  • Creating SSMA project for Migration

 

Prerequisites for Installing SSMA tool:

SSMA is designed to work with ASE 11.9.2 or later versions and all editions of SQL Server. Before you install SSMA, make sure that the computer meets the following requirements:

  • Windows 7 or later versions, or Windows Server 2008 or later versions.
  • Microsoft Windows Installer 3.1 or a later version.
  • The Microsoft .NET Framework version 4.0 or a later version. The .NET Framework version 4.0 is available on the SQL Server product media.
  • The Sybase OLEDB/ADO.Net/ODBC provider and connectivity to the Sybase ASE database server that contains the databases you want to migrate.
  • Access to and sufficient permissions on the computer that hosts the target instance of SQL Server where you will be migrating database objects and data.

 

 

Installing the SSMA for Sybase Client Component:

Before installing the SSMA for Sybase client component, ensure that the Sybase provider components are installed on the client machine. The following instructions provide the basic steps for installing the Sybase providers. The exact instructions will differ depending on the version of the Sybase Setup program.

  1. Run the Sybase ASE Setup program.
  2. Select custom setup.
  3. On the feature selection page, select the ODBC, OLE DB and ADO.NET data providers.
  4. Verify the selected features, and then click Finish to install the data providers.

 

 

Once the Sybase provider components are installed, download the latest version of the SSMA client tool; refer to the SQL Server Migration Assistant download page.

To install the SSMA client, launch the .msi file.

If the Sybase components are not installed, the below “required component missing” window appears. Ensure that the Sybase provider components are installed prior to the SSMA client tool installation.

 

 

Installing the SSMA for Sybase extension pack:

Once the installation of the client tool is complete on the client machine, to use server-side data migration you must also install components on the computer that is running SQL Server. These components include the SSMA extension pack, which supports data migration, and the Sybase providers, which enable server-to-server connectivity.

When you migrate data from ASE to SQL Server, the data moves directly between ASE and SQL Server. It does not go through the SSMA client, because that would slow down the data migration.

Before installing the Sybase extension pack on the SQL Server, ensure that the Sybase provider components are installed on the SQL Server machine as well.

Once the Sybase provider components are installed, double click on the Sybase Extension pack.msi file:

 

During the Sybase extension pack installation, setup prompts for the SQL Server instance where the extension pack databases will be created.

 

 

 

 

The extension pack installs the utility databases on the SQL Server instance specified in the previous step.

 

The installation creates the below databases:

  • Sysdb: contains the tables and stored procedures that are required to migrate data.

  • Ssmatesterdb_syb: contains the schema ssma_sybase_utilities, in which the objects (tables, triggers, views) used by the SSMA tester component are created.

 

 

Once the extension pack is installed, Control Panel > Add or Remove Programs shows the SSMA for Sybase extension pack.

 

 

Create an SSMA Project for Schema/Data Migration:

Launch the Microsoft SQL Server Migration Assistant for Sybase:

 

 

Click File > New Project and specify the destination SQL Server version.

 

The next step is to connect to the Sybase data source:

After connecting to the Sybase database source, select the databases/objects to be migrated to SQL Server.

 

Connect to the destination SQL Server instance:

 

Ensure that the Sybase database and the SQL Server instance have the same compatibility settings. If they differ, the below warning message is displayed.

 

After connecting to both Sybase and SQL Server, the SSMA tool automatically maps the data types from Sybase to SQL Server, as highlighted in the screenshot below:

 

Create the conversion report by selecting the Sybase schema and clicking the “Create Report” option:

 

The report is created in HTML format and is stored at the location specified during project creation. The report gives an inventory of the Sybase schema and the effort needed to convert it to a SQL Server schema.

A sample report looks like the one below and can be drilled down further:

 

 

The next step is to “Convert Schema”, after adjusting the default type mappings selected by the SSMA tool if they are not preferred:

 

The Output pane shows the schema conversion status:

 

Converting the schema creates a database in the local metadata only; it is not reflected on SQL Server yet, which can be verified by connecting to the SQL Server instance using SSMS.

To replicate the schema of the Sybase objects on SQL Server, right-click the local metadata database in the SSMA tool and click “Synchronize with Database”, which creates the database in SQL Server and applies the schema.

 

 

 

Note that only the schema is replicated and not the data.

 

The next step is to migrate the data using the “Migrate Data” option.

 

Once the data migration is complete, the migration report can be reviewed to check the total and migrated rows and the success rate of the migration.

 

The Output pane also displays the status of the migration operation.

Now, the row count in SQL Server reflects the number of rows migrated to SQL Server.
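
As an optional sanity check, the row counts on the target can also be compared against the migration report from PowerShell. The sketch below is illustrative only: the server, database, and table names are placeholders, and it assumes the SqlServer (or SQLPS) module providing Invoke-Sqlcmd is available on the machine.

# Placeholder names: replace with your target server, database, and a migrated table
$server   = 'SQLTARGET01'
$database = 'MigratedDb'
$table    = 'dbo.Orders'

# Count the rows that landed in SQL Server and compare with the SSMA migration report
Invoke-Sqlcmd -ServerInstance $server -Database $database `
    -Query "SELECT COUNT(*) AS MigratedRows FROM $table;"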

Refer to the below article to perform incremental data migration using SSMA.

https://blogs.msdn.microsoft.com/ssma/2010/10/04/how-to-perform-incremental-data-migration-using-ssma/

Once the data is successfully loaded into SQL Server, the next step is to point the application connection strings to SQL Server and access the data stored there.

 

Hope the above steps help you in the migration process.

Please share your feedback, questions and/or suggestions.

Thanks,

Don Castelino | Premier Field Engineer | Microsoft

 

Disclaimer: All posts are provided AS IS with no warranties and confer no rights. Additionally, views expressed here are my own and not those of my employer, Microsoft.

MIEE Spotlight- Andrew Stansfield



Today's MIEE Spotlight highlights the work of Andrew Stansfield, Technical Lead at Durham University with 20 years of experience using Microsoft Technologies.

 

Andrew was Technical Lead for Collaboration on a New World Programme at Durham University, where he was responsible for Office 365, including Skype for Business and Exchange, and for hardware provision. Throughout the project Andrew has introduced the technology, driven adoption and created support models for all members of the University as they undertook this digital transformation. Impacting over 15,000 students and 4,000 staff, Andrew has revolutionised the way they work at Durham University, with more changes to come as they utilise more Office 365 apps and explore the use of the cloud.

Since the adoption of these 'New World' Microsoft technologies, Durham is now at the forefront with technology enabling Staff to adopt agile working practices, reducing their carbon footprint and also ensuring their graduates are ready for the workplace. Through the passion and dedication of MIEExpert Andrew, we are sure they will continue to develop and do great things in Education.

You can follow @ajstan67 on Twitter to keep up to date with the amazing work he is doing using Microsoft technologies at Durham University.


Interact with the Sway below to see, in his own words, how Andrew is winning with Microsoft technologies!


Follow in the footsteps of our fantastic MIEEs and learn more about how Microsoft can transform your classroom with the Microsoft Educator Community.

 

 




[Advent Calendar 2017 Day 12] Getting Started with API Management


This article was written as part of the Microsoft Azure Tech Advent Calendar 2017 project published on Qiita. Please see here for the full list of articles in the calendar.

Hello everyone. This is Kataoka from the Azure support team.
In today's post, I will introduce API Management, a service placed in front of a set of Web APIs published with web services such as App Service.
It is not a headline service like App Service, but it plays a useful role in making Web APIs easier to manage, and demand for it is growing.

Covering everything would make this post very long, so this introductory article covers the basic features; I will continue to introduce customization features and more in future posts.
The steps below use the publisher portal, but API Management is currently being migrated to the Azure portal, and essentially the same settings can be configured in the Azure portal as well.

1. What is API Management?

In a word, it is a service that acts as a gateway for Web APIs.
In recent web services, it is common for the web pages with a UI that users actually access to call backend Web APIs.
If there is only one Web API, it is easy to manage, but calling multiple Web APIs, especially multiple Web APIs on different domains, makes management correspondingly more complex.
API Management helps with Web API management by providing features such as the following:

- A management portal
- A unified domain
- User management and access key issuance for Web APIs
- Customization of HTTP requests/responses, such as HTTP headers

Let's take a look at each of these features.

2. Management portals

When you deploy API Management, two management portals, the publisher portal and the developer portal, are deployed along with it.
The publisher portal provides features such as a dashboard showing the number of Web API calls, Web API registration, and user management, while the developer portal provides features such as testing actual Web API calls.
Both portals can be accessed directly from the Azure portal.

3. Unified domain

A single web service provides many features, so it may call correspondingly many Web APIs.
Because there are multiple Web APIs, it is also common for their domains to differ, but API Management lets you unify all Web APIs under the same domain and distinguish them by path alone.
APIs can be registered from the publisher portal as follows.

3-1) In the publisher portal, click [APIs] – [ADD API].

3-2) Enter the URL of the actual Web API and other details. In the example below, the URL of the actual Web API is http://contoso.azurewebsites.net/, but it is unified under https://apim.ykataoka.com/, the API Management domain.
All APIs registered to the same API Management service share the same domain name and are distinguished by the path that follows it.

3-3) Finally, select the Web API, click the "Operations" tab, and register an "Operation". Enter the HTTP method such as GET or POST, the actual access path (http://contoso.azurewebsites.net/todolist in the example below), and other information, and the Web API registration is complete.

A Web API registered with the steps above can be tested in the developer portal.
The image below shows the page for the Echo API that is created by default when an API Management instance is created; clicking the "Try it" button lets you check its behavior.

4. User management and access key issuance for Web APIs

In API Management, access keys are issued by registering users and products, which allows registered users to call the Web APIs.
A product is a concept that groups APIs together with the policies applied to them, and each user must subscribe to a product. For example, products are used for controls such as "users of product A may make at most 5 calls per minute" or "users of product B may make unlimited calls".
Every Web API must be registered to at least one product, and calls are allowed according to that product's rules. Both users and products are registered from the publisher portal.

4-1) User registration

4-2) Product registration

5. Customizing HTTP requests/responses, such as HTTP headers

Although this section is titled customization of HTTP headers and the like, the feature is called policies, and it is the core feature of API Management: it allows all kinds of customization, far more than can be covered in a blog post, including customizing HTTP requests and responses, changing HTTP response codes, and adding logging.
Policies can be configured in the Azure portal or the publisher portal and are defined in XML. The basic configuration steps and the frequently used access restriction policies are described here:

How to set or edit Azure API Management policies
https://docs.microsoft.com/ja-jp/azure/api-management/set-edit-policies

API Management access restriction policies
https://docs.microsoft.com/ja-jp/azure/api-management/api-management-access-restriction-policies

Besides access restriction policies, a wide variety of other behavior can be implemented; for example, processing based on the contents of the HTTP request headers, or customizing the HTTP response status code.
Those policies, and the policy expressions that can be used inside the XML, are described below. Policy expressions can use classes available in the .NET Framework, such as System.DateTime and System.Text, so they use variables in a relatively familiar form.

API Management advanced policies
https://docs.microsoft.com/ja-jp/azure/api-management/api-management-advanced-policies

API Management policy expressions
https://docs.microsoft.com/ja-jp/azure/api-management/api-management-policy-expressions

A policy consists of four tags, <inbound>, <backend>, <outbound>, and <on-error>, and you define the policy inside the tag corresponding to what you want to control (<on-error> can be omitted).
For example, to control the HTTP request that API Management receives from the client, define the control in the <inbound> tag; to control the HTTP response returned from the backend API, define it in the <outbound> tag.
If an error occurs during that processing (for example, access from an IP address that is not allowed), the processing in the <on-error> tag is executed.

As an example, the following policy restricts access by IP address and returns an HTTP 599 error for access from IP addresses that are not allowed:

~~~~~
<policies>
    <inbound>
        <ip-filter action="allow">
            <address>IP address</address>
        </ip-filter>
        <base />
    </inbound>
    <on-error>
        <return-response>
            <set-status code="599" reason="AccessDenied" />
            <set-body>Unallowed IP address.</set-body>
        </return-response>
    </on-error>
</policies>
~~~~~
* The <backend> and <outbound> sections are omitted here.

Many other customizations are possible as well, so I hope you will find API Management useful when working with Web APIs!

Web Hack Wednesday Series 3


I'm very happy to announce that after quite a break, series 3 of Web Hack Wednesday will be launching on Wednesday 13th December 2017 at 12:00 mid-day (UK time).

This series will be published on YouTube where you can also find our full archive of older episodes too.

We have 6 brand new episodes for you which cover the following exciting topics:

  • How GigSeekr have used advanced LUIS techniques in their bot (will be released at 12:00 on Wednesday 13th December)
  • Tips and Tricks for Visual Studio Code
  • How JD Sports used the Cortana Intelligence Recommendation solution
  • A look at the new CSS grid specification
  • What is new in Visual Studio 2017 for web developers
  • A look at the new WebP image specification

Each episode will be published at 12:00 mid-day for the next 6 weeks.

For this series we have released show notes and samples on GitHub so you can try out what we've covered in each episode yourself.

Those of you who follow the show may also know that my wing-man Martin Beeby has since left Microsoft and started a position working for Oracle. Series 3 was all recorded before Martin's departure and is being released with his consent. Going forward I'm planning to do a new show following the same idea as Web Hack Wednesday with some other colleagues.

I'll generally tweet announcements about the videos and associated show notes each Wednesday, so watch @MartinKearn and #WebHackWednesday for updates.

I hope you enjoy the series, please let me know about any feedback, topics you'd like to see covered or general comments via Twitter.


North Bay Python 2017 Recap


Bliss, the default background from Windows XP

Last week I had the privilege to attend the inaugural North Bay Python conference, held in Petaluma, California in the USA. Being part of any community-run conference is always enjoyable, and to help launch a new one was very exciting. In this post, I'm going to briefly tell you about the conference and help you find recordings of some of the best sessions (and also the session that I presented).

Petaluma is a small city in Sonoma County, about one hour north of San Francisco. Known for its food and wine, the area was a surprising location to find a conference, including for many locals who got to attend their first Python event.

If the photo to the right looks familiar, you probably remember it as the default Windows XP background image. It was taken in the area, inspired the North Bay Python logo, and doesn't actually look all that different from the hills surrounding Petaluma today.

Nearly 250 attendees converged on a beautiful old theatre to hear from twenty-two speakers. Topics ranged from serious subjects such as web application accessibility and inclusiveness, through to lighthearted talks on machine learning and Django, and the absolutely hilarious process of implementing merge sort using the import statement. All the videos can be found on the North Bay Python YouTube channel.

George London (@rogueleaderr) presenting merge sort implemented using import

Recently I have been spending some of my time working on a proposal to add security enhancements to Python, similar to those already in PowerShell. While Microsoft is known for being highly invested in security, not everyone shares the paranoia. I used my twenty-five minute session to raise awareness of how modern malware attacks play out, and to show how PEP 551 can enable security teams to better defend their networks.

Steve Dower (@zooba) presenting on PEP 551

(Image credit: VM Brasseur, CC-BY 2.0)

While I have a general policy of not uploading my presentation (slides are for speaking, not reading), here are the important links and content that you may be interested in:

Overall, the conference was a fantastic success. Many thanks to the organizing committee, Software Freedom Conservancy, and the sponsors who made it possible, and I am looking forward to attending in 2018.

The North Bay Python committee on stage at the end of the conference

Until the next North Bay Python though, we would love to have a chance to meet you at the events that you are at. Let us know in the comments what your favorite Python events are and the ones you would most like to have people from our Python team come and speak at.

Blog Moved

#PowerBI #DirectQuery with #Azure #HDInsight Interactive Query Cluster


Quick video on how to use Azure HDInsight Interactive Query with PowerBI DirectQuery

 

Performance Degradation in South Central US – 12/12 – Mitigated


Final Update: Tuesday, December 12th 2017 21:15 UTC

We’ve confirmed that all systems are back to normal as of December 12th 20:47 UTC. Our logs show the incident started on December 12th 20:32 UTC, and that during the 15 minutes it took to resolve the issue, customers experienced slow performance and failures while using Visual Studio Team Services in South Central US. Sorry for any inconvenience this may have caused.

  • Root Cause: The issue self-mitigated during investigation. Initial investigation points to contention in one of our backend databases. We continue to investigate towards a full understanding of the root cause.
  • Chance of Re-occurrence: High
  • Incident Timeline: 15 minutes – 20:32 UTC through 20:47 UTC on December 12th 2017.

Sincerely,
Manohar


Initial Update: Tuesday, December 12th 2017 20:58 UTC

We're investigating Performance Degradation in South Central US.

  • Next Update: Before Tuesday, December 12th 2017 21:30 UTC

Sincerely,
Manohar

Copy production database to stage or test environment using PowerShell or TSQL


Scenario: when your test environment needs a fresh copy of the data from production, or in any other scenario where fresh data is needed in another database, you might find yourself seeking a quick and easy solution.

Solution Using TSQL:
When you are connected to any of the databases on the server, you can copy a database to another database on the same server:

-- Create a fresh copy of the production database
CREATE DATABASE [ProdDB_TSQLFresh] AS COPY OF ProdDB;
-- Once the copy completes, drop the previous copy and rename the fresh copy to take its place
DROP DATABASE ProdDB_TSQLCopy;
ALTER DATABASE ProdDB_TSQLFresh MODIFY NAME = ProdDB_TSQLCopy;

Solution Using PowerShell:
"Removing old copy..."
Remove-AzureRmSqlDatabase -ResourceGroupName "ResourceGroupName" -ServerName "ServerName" -DatabaseName "DatabaseName"
"Start Copy... Please wait ... "
New-AzureRmSqlDatabaseCopy -ResourceGroupName "ResourceGroupName" `
-ServerName "ServerName" `
-DatabaseName "SourceDatabaseName" `
-CopyResourceGroupName "DestinationResourceGroupName" `
-CopyServerName "DestinationServerName" `
-CopyDatabaseName "DestinationDatabaseName"
" !Done! "

Automation:
How to automate the process to run on a regular basis?

We recommend using Azure Automation to schedule these commands to run.

Follow the instructions here to run it as a PowerShell runbook, or use Invoke-Sqlcmd to run the T-SQL statements (a sketch of that option follows).
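
If you go with the Invoke-Sqlcmd route, a minimal sketch of the runbook body might look like the following. The server name and the credential asset name are placeholders, the SqlServer module must be imported into the Automation account, and only the initial copy statement is shown; the drop/rename steps from the T-SQL solution above should run once the copy has completed.

# Placeholder values: replace with your Azure SQL server and a credential asset from the Automation account
$serverFqdn = 'yourserver.database.windows.net'
$credential = Get-AutomationPSCredential -Name 'SqlAdminCredential'

# Start the database copy (runs against the master database; the copy continues asynchronously)
Invoke-Sqlcmd -ServerInstance $serverFqdn -Database 'master' `
    -Username $credential.UserName -Password $credential.GetNetworkCredential().Password `
    -Query "CREATE DATABASE [ProdDB_TSQLFresh] AS COPY OF ProdDB;"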

If you choose the PowerShell option, use this piece of code to authenticate with your Run As account
(this code is included in the sample runbooks of your newly created Automation account):

$connectionName = "AzureRunAsConnection"
try
{
# Get the connection "AzureRunAsConnection "
$servicePrincipalConnection=Get-AutomationConnection -Name $connectionName
"Logging in to Azure..."
Add-AzureRmAccount `
-ServicePrincipal `
-TenantId $servicePrincipalConnection.TenantId `
-ApplicationId $servicePrincipalConnection.ApplicationId `
-CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
}
catch {
if (!$servicePrincipalConnection)
{
$ErrorMessage = "Connection $connectionName not found."
throw $ErrorMessage
} else{
Write-Error -Message $_.Exception
throw $_.Exception
}
}

Last Two Weeks on DirectX Shader Compiler (2017-12-12)


What's new?

  • Lots of SPIR-V work.
  • 16-bit tests are coming along! We're leveraging the scripts under /utils/hct more and more.
  • Shaved a few minutes off AppVeyor to keep it under control.
  • Some dndxc UI improvements, mostly better perf when colorizing disassembly.
  • A number of fixes for lib_ compilation targets.

Also, I'm going to be taking a few days off, so updates will be a bit more irregular until the new year.

Enjoy!

C# 7 Series, Part 7: Ref Returns


C# 7 Series

Part 1: Value Tuples
Part 2: Async Main
Part 3: Default Literals
Part 4: Discards
Part 5: Private Protected
Part 6: Read-only structs
Part 7: (This post) Ref Returns

Background

There are two ways to pass a value into a method:

  1. Pass by value. When an argument is passed into a method, a copy of the argument (if it is a value type) or a copy of the argument reference (if it is a reference type) is passed. When you change the argument in the method, the changes (single or compound assignments) are reflected in the copy of the argument or argument reference, not in the argument or argument reference itself. This is the default way in .NET languages (ByVal in Visual Basic).
  2. Pass by reference. When an argument is passed into a method, either the argument itself (if it is a value type) or the argument reference (if it is a reference type) is passed directly; no copies are made. When you change the argument in the method, the changes (single or compound assignments) are reflected in the argument or argument reference itself. This is indicated by the ref keyword in C# or the ByRef keyword in Visual Basic.

For example, the FCL’s Array.Resize<T>() method takes a ref parameter for the array; this ref value is modified in the method implementation to point to the new storage allocated for the array after the resize. You will be able to continue using the same variable for that array and access the new storage:

byte[] data = new byte[10];
Array.Resize(ref data, 20);
Console.WriteLine($"New array size is: {data.Length}");

A method can also return a value to the caller, but before C# 7.0, all returns were by value.

With C# 7.0 or later, ref returns are supported.

Ref Returns

Ref returns are method return values passed by reference. Similar to ref values passed as method arguments, a ref return can be modified by the caller, and any changes (assignments) to the value are reflected in the original value returned from the method.

In C#, you can make a ref return using the return ref keywords.

Please see the following example.

private static ref T ElementAt<T>(ref T[] array, int position)
{
    if (array == null)
    {
        throw new ArgumentNullException(nameof(array));
    }

    if (position < 0 || position >= array.Length)
    {
        throw new ArgumentOutOfRangeException(nameof(position));
    }

    return ref array[position];
}

The purpose of the above method is to obtain a reference to the element at the specific position of an array. Later you can use this reference to change the value of the element; because it is a ref value, the changes will apply to the original value in the array.

To use this method, use ref locals:

private static void Main(string[] args)
{
    int[] data = new int[10];
    Console.WriteLine($"Before change, element at 2 is: {data[2]}");
    ref int value = ref ElementAt(ref data, 2);
    // Change the ref value.
    value = 5;
    Console.WriteLine($"After change, element at 2 is: {data[2]}");
}

Visual Studio IntelliSense will indicate that the called method is a ref return method.

image

The output of this code is as below:

image

Call Methods with Ref Returns

As in the previous example, to get a reference to a ref return value, you need to use a ref local and also put the ref keyword in front of the method call (ref on both the left and the right):

ref int value = ref ElementAt(ref data, 2);

You can also call this method without the ref keyword, making the value returned by value.

int value = ElementAt(ref data, 2);

In this case, the program output will be as the following:

image

However, you need to either specify ref on both sides or omit it on both sides; you cannot specify ref on one side only.

Additionally, the following code also works:

ElementAt(ref data, 2) = 5;

You will get the same output as in the first example. The element at position 2 is changed from 0 (the default value) to 5. This works because a ref return can appear as an lvalue.

Restrictions

Certain restrictions apply:

  1. The value to be ref returned in a method must be a ref local; the source of this ref local can be an actual ref/out argument of this method, or a field in the type where the method is declared.
  2. You cannot ref return a void type.
  3. You cannot ref return a null literal. But you can ref return a ref local whose value is null.
  4. You cannot ref return from an async method, because the return value might be uncertain by the time the async method returns.
  5. Constants and enums are not allowed to be ref returned.

Conclusion

Ref returns are an extension of the C# language that can improve the performance of code by reducing the copying of values returned from methods. This is useful for low-level programming components such as interoperability and cross-platform code, or resource-constrained scenarios (mobile, IoT, etc.). This feature requires no CLR changes because similar concepts already exist in the runtime. To use this feature, you need C# language version 7.0 or later, by upgrading Visual Studio to 2017 or later.


Next LCS release planned for January 2018


We are skipping the second December release of LCS. The next release of LCS will be available on January 8, 2018.

Application Insights – Advisory 12/12

Between 04 December 2017, 20:40 UTC - 20:55 UTC, and between 05 December 2017, 20:15 UTC - 20:30 UTC, some customers may have experienced very short availability web test gaps in the Zurich location only. Our team has mitigated the issue, and availability web test data is flowing in the Zurich location with no impact.


We apologize for any inconvenience.


-Deepesh

How to Register Multiple Devices with Azure IoT Hub


There are scenarios where you want to register tens of devices or more with Azure IoT Hub and obtain the key information needed to connect to the IoT Hub. In such cases, doing this one device at a time with Device Explorer in the Azure portal is tedious.

 

clip_image002

 

 

In such cases, running the following sample as-is lets you register multiple devices at once and obtain the key information needed to connect to the IoT Hub.

 

  How to read events from an IoT Hub with the Service Bus Explorer

  https://code.msdn.microsoft.com/How-to-read-events-from-an-1641eb1b

 

   Create a new device in the device identity registry using the AddDeviceAsync method of the RegistryManager class (Microsoft.Azure.Devices.RegistryManager.AddDeviceAsync(Microsoft.Azure.Devices.Device)).

 

An example of the steps is as follows.

 

<Steps>

1. Click "C# (4.8MB)" to the right of "Download" on the page above to download How to read events from an IoT Hub with the Service Bus Explorer.zip, and extract it.

2. In Visual Studio 2015, open DeviceEmulator.sln in the C# folder.

3. Build the solution in Solution Explorer.

 

clip_image004

 

4. DeviceEmulator.exe is generated in the C#\DeviceEmulator\bin\Debug folder; launch it.

5. In the Azure portal, select the IoT Hub resource, click the [iothubowner] policy under [Shared access policies], and check the IoT Hub connection string in the [iothubowner] window.

 

clip_image006

 

6. Enter the connection string confirmed in step 5 into the ConnectionString field of DeviceEmulator.exe.

7. Enter the number of devices you want to create in Device Count.

8. Click Create; the devices are created and their keys can be obtained.

 

clip_image008

 

 

The implementation corresponding to the above is InitializeDevicesAsync() in MainForm.cs.

In the for loop below, you can see that devices are registered with the AddDeviceAsync method, once for each device up to Device Count:

 

            for (var i = 1; i <= txtDeviceCount.IntegerValue; i++)
            {
                ...
                // try to create the device in the device identity registry
                device = await registryManager.AddDeviceAsync(new Device(deviceId));
                ...
            }

 

The primary key can also be obtained on the next line:

 

                        WriteToLog($"Device [{deviceId}] successfully created in the device identity registry with key
[{device.Authentication.SymmetricKey.PrimaryKey}]");

 

 

When implementing the above in other languages, the "Create a device identity" section of the following documents may help you check what corresponds to AddDeviceAsync() in each language.

 

Getting started tutorial for Azure IoT Hub with a simulated device

https://docs.microsoft.com/ja-jp/azure/iot-hub/iot-hub-get-started-simulated

 

.NET

https://docs.microsoft.com/ja-jp/azure/iot-hub/iot-hub-csharp-csharp-getstarted#create-a-device-identity

 

Java

https://docs.microsoft.com/ja-jp/azure/iot-hub/iot-hub-java-java-getstarted#create-a-device-identity

 

Node.js

https://docs.microsoft.com/ja-jp/azure/iot-hub/iot-hub-node-node-getstarted#create-a-device-identity

 

Python

https://docs.microsoft.com/ja-jp/azure/iot-hub/iot-hub-python-getstarted#create-a-device-identity

 

For the REST API, see the following:

 

REST API

https://docs.microsoft.com/en-us/rest/api/iothub/DeviceApi/PutDevice

 

 

We hope this information is helpful.

 

Azure IoT Developer Support Team, Tsuda

 

The Technical Documentation Is Now More Convenient


Microsoft Japan Business Intelligence Tech Sales Team, Ito

 

https://docs.microsoft.com is the home for Microsoft technical documentation, API references, code samples, quickstarts, and tutorials for developers and IT professionals. Information that used to be provided separately for each product is being consolidated there. The link on https://powerbi.microsoft.com/ja-jp/ has not been updated yet, but the Power BI documentation is also published there. The same is true for other products, but this post uses the Power BI documentation as an example.

image

 

Clicking "Power BI" displays the page below.

image

If you already know which component you want to look up, that is fine, but when you want to look up something that exists in both the Power BI service and Power BI Desktop, such as charts, it used to be hard to find information because you could not search in Japanese. With the new documentation site, you can now search in Japanese across components. Click the magnifying glass icon at the top right of the page and enter a keyword in the search box to find pages whose body contains that word. For example, searching for 「書式」 (formatting) returns 58 hits at the time of writing.

image

Another way is to navigate to a page such as "Power BI Service" and then enter a keyword in the filter in the left pane. This narrows the list down to pages whose titles contain the keyword.

image

What makes this convenient is that you can search across components while the results are still displayed in a way that shows how they are categorized.

In addition, a purple bar is displayed at the top of each page; clicking the [Enable] button there displays the English text for the sentence under the cursor.

image

With this feature you can check what a given feature is called in English, which is useful for checking whether a related request has already been posted on the Feedback site and voting for it (see the blog post "Power BI についてフィードバックをしてみよう" for how to submit feedback), or for searching English-language blogs. Incidentally, you can switch the entire page to English by changing "ja-jp" in the URL to "en-us".

Finally, a pop-up asking "Was this page helpful?" is displayed at the bottom right of each page; if you click [No], you can send suggestions for improving the documentation. If you notice anything about the documentation itself, please send feedback from there.

image

Impact of the Windows 10 Fall Creators Update on Credential Providers


Hello, this is the Windows SDK support team. An OS specification change in the Windows 10 Fall Creators Update can affect credential providers. The details are described below.

In the Windows 10 Fall Creators Update, Winlogon Automatic Restart Sign-On (ARSO) was enhanced to provide a better power cycle experience. So that update processing can run automatically after a shutdown/reboot once the user has signed in, ARSO performs an automatic sign-in followed by a lock.

Winlogon Automatic Restart Sign-On (ARSO)
https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/manage/component-updates/winlogon-automatic-restart-sign-on--arso-

Because of this behavior, the automatic sign-in above is no longer captured by ICredentialProvider::SetUsageScenario in a credential provider. As a result, control is passed to the credential provider at unlock time, so even the first call arrives with the lock status.

- Workaround

Setting the following policy or registry value disables ARSO.
If you are developing a credential provider that captures and handles the first logon, please consider using the following workaround; a PowerShell sketch for setting the value follows the details below.

Policy Registry Location: HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System

Type: DWORD

Registry Name: DisableAutomaticRestartSignOn

Value: 0 or 1

0 = Enabled

1 = Disabled
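
As a sketch, the value can also be set with PowerShell. The path and value name are the ones listed above; run this from an elevated prompt, and keep in mind that it changes sign-in behavior for the whole machine.

# Policy location listed above; DisableAutomaticRestartSignOn = 1 disables ARSO
$regPath = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System'

# Create the value if it does not exist, otherwise overwrite it with 1 (Disabled)
New-ItemProperty -Path $regPath -Name 'DisableAutomaticRestartSignOn' `
    -PropertyType DWord -Value 1 -Force | Out-Null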

