
Resolving Azure ARM REST API Versions Conflict In ARM Templates


This post is from Premier Developer consultant Adel Ghabboun.


Usually, if you use the Azure Portal Automation Script feature to generate an ARM template and copy it into a new Visual Studio Azure Resource Group project, the Visual Studio editor will complain about the API version used in the template and suggest a new one. It looks like this:

[Screenshot: Visual Studio editor warning about the template API version]

If you press Ctrl + Space, Visual Studio will show you all the possible values:

[Screenshot: list of API version values suggested by Visual Studio]

Now, if you change the value and try to deploy this template to your Azure resource group, the deployment will fail with an error:

"error": {
"code": "NoRegisteredProviderFound",
"message": "No registered resource provider found for location 'eastus' and API version '2016-08-01' for type 'components'. The supported api-versions are '2014-04-01, 2014-08-01, 2015-05-01, 2014-12-01-preview'.

Now, which one should we use?

Let's use the Azure PowerShell cmdlet Get-AzureRmResourceProvider to retrieve the API version information by following the steps below. (Note that the Azure PowerShell module must be installed on your machine before running the commands below; see Install and Configure Azure PowerShell.)

  1. Open Windows PowerShell ISE as an administrator
  2. Run this command to sign in to your Azure account
    Login-AzureRmAccount
  3. Sign in to your account using your Azure credentials
  4. Run the following command (assuming Application Insights is the resource we want to create)
    (Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Insights).ResourceTypes | Where {$_.ResourceTypeName -eq 'components'} | Select -ExpandProperty ApiVersions
    2015-05-01
    2014-12-01-preview
    2014-08-01
    2014-04-01

Using one of the API versions returned by the PowerShell command will resolve the issue.
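If you script your deployments, you can also have PowerShell pick the version for you. Here is a minimal sketch (it assumes the same AzureRM module used above and that you are already signed in; the helper function name is ours) that returns the latest non-preview API version for a resource type:

# Hypothetical helper that returns the newest non-preview API version for a resource type
function Get-LatestApiVersion {
    param(
        [string]$ProviderNamespace = 'Microsoft.Insights',
        [string]$ResourceTypeName  = 'components'
    )

    $versions = (Get-AzureRmResourceProvider -ProviderNamespace $ProviderNamespace).ResourceTypes |
        Where-Object { $_.ResourceTypeName -eq $ResourceTypeName } |
        Select-Object -ExpandProperty ApiVersions

    # API versions are yyyy-MM-dd strings, so a descending string sort puts the newest first
    $versions | Where-Object { $_ -notlike '*preview*' } | Sort-Object -Descending | Select-Object -First 1
}

Get-LatestApiVersion   # returns 2015-05-01 for Microsoft.Insights/components in the example above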

Conclusion:

As a best practice, and to avoid deployment issues regardless of whether you are using Visual Studio or other tools to deploy your Azure templates, always run the PowerShell command mentioned above to retrieve the supported API versions and use the latest one.


GPUs in the task manager


The below posting is from Steve Pronovost, our lead engineer responsible for the GPU scheduler and memory manager.

GPUs in the Task Manager

We're excited to introduce support for GPU performance data in the Task Manager. This is one of the features you have often requested, and we listened. The GPU is finally making its debut in this venerable performance tool.

To understand all the GPU performance data, it's helpful to know how Windows uses GPUs. This blog dives into these details and explains how the Task Manager's GPU performance data comes alive. This blog is going to be a bit long, but we hope you enjoy it nonetheless.

System Requirements

In Windows, the GPU is exposed through the Windows Display Driver Model (WDDM). At the heart of WDDM is the Graphics Kernel, which is responsible for abstracting, managing, and sharing the GPU among all running processes (each application has one or more processes). The Graphics Kernel includes a GPU scheduler (VidSch) as well as a video memory manager (VidMm). VidSch is responsible for scheduling the various engines of the GPU to processes wanting to use them and to arbitrate and prioritize access among them. VidMm is responsible for managing all memory used by the GPU, including both VRAM (the memory on your graphics card) as well as pages of main DRAM (system memory) directly accessed by the GPU. An instance of VidMm and VidSch is instantiated for each GPU in your system.

The data in the Task Manager is gathered directly from VidSch and VidMm. As such, performance data for the GPU is available no matter what API is being used, whether it be the Microsoft DirectX API, OpenGL, OpenCL, Vulkan or even proprietary APIs such as AMD's Mantle or Nvidia's CUDA. Further, because VidMm and VidSch are the actual agents making decisions about using GPU resources, the data in the Task Manager will be more accurate than that of many other utilities, which often do their best to make intelligent guesses since they do not have access to the actual data.

The Task Manager's GPU performance data requires a GPU driver that supports WDDM version 2.0 or above. WDDMv2 was introduced with the original release of Windows 10 and is supported by roughly 70% of the Windows 10 population. If you are unsure of the WDDM version your GPU driver is using, you can use the dxdiag utility that ships as part of Windows to find out. To launch dxdiag, open the Start menu and simply type dxdiag.exe. Look under the Display tab, in the Drivers section, for the Driver Model. Unfortunately, if you are running on an older WDDMv1.x GPU, the Task Manager will not display GPU data for you.

Performance Tab

Under the Performance tab you'll find performance data, aggregated across all processes, for all of your WDDMv2 capable GPUs.

GPUs and Links

On the left panel, you'll see the list of GPUs in your system. The GPU # is a Task Manager concept and is used in other parts of the Task Manager UI to reference a specific GPU in a concise way. So instead of having to say Intel(R) HD Graphics 530 to reference the Intel GPU in the above screenshot, we can simply say GPU 0. When multiple GPUs are present, they are ordered by their physical location (PCI bus/device/function).

Windows supports linking multiple GPUs together to create a larger and more powerful logical GPU. Linked GPUs share a single instance of VidMm and VidSch, and as a result, can cooperate very closely, including reading and writing to each other's VRAM. You'll probably be more familiar with our partners' commercial name for linking, namely Nvidia SLI and AMD Crossfire. When GPUs are linked together, the Task Manager will assign a Link # for each link and identify the GPUs which are part of it. Task Manager lets you inspect the state of each physical GPU in a link allowing you to observe how well your game is taking advantage of each GPU.

GPU Utilization

At the top of the right panel you'll find utilization information about the various GPU engines.

A GPU engine represents an independent unit of silicon on the GPU that can be scheduled and can operate in parallel with one another. For example, a copy engine may be used to transfer data around while a 3D engine is used for 3D rendering. While the 3D engine can also be used to move data around, simple data transfers can be offloaded to the copy engine, allowing the 3D engine to work on more complex tasks, improving overall performance. In this case both the copy engine and the 3D engine would operate in parallel.

VidSch is responsible for arbitrating, prioritizing and scheduling each of these GPU engines across the various processes wanting to use them.

It's important to distinguish GPU engines from GPU cores. GPU engines are made up of GPU cores. The 3D engine, for instance, might have 1000s of cores, but these cores are grouped together in an entity called an engine and are scheduled as a group. When a process gets a time slice of an engine, it gets to use all of that engine's underlying cores.

Some GPUs support multiple engines mapping to the same underlying set of cores. While these engines can also be scheduled in parallel, they end up sharing the underlying cores. This is conceptually similar to hyper-threading on the CPU. For example, a 3D engine and a compute engine may in fact be relying on the same set of unified cores. In such a scenario, the cores are either spatially or temporally partitioned between engines when executing.

The figure below illustrates engines and cores of a hypothetical GPU.

By default, the Task Manager will pick 4 engines to be displayed. The Task Manager will pick the engines it thinks are the most interesting. However, you can decide which engine you want to observe by clicking on the engine name and choosing another one from the list of engines exposed by the GPU.

The number of engines and the use of these engines will vary between GPUs. A GPU driver may decide to decode a particular media clip using the video decode engine, while another clip, using a different video format, might rely on the compute engine or even a combination of multiple engines. Using the new Task Manager, you can run a workload on the GPU and then observe which engines get to process it.

In the left pane under the GPU name and at the bottom of the right pane, you'll notice an aggregated utilization percentage for the GPU. Here we had a few different choices on how we could aggregate utilization across engines. The average utilization across engines felt misleading since a GPU with 10 engines, for example, running a game fully saturating the 3D engine, would have aggregated to a 10% overall utilization! This is definitely not what gamers want to see. We could also have picked the 3D Engine to represent the GPU as a whole since it is typically the most prominent and used engine, but this could also have misled users. For example, playing a video under some circumstances may not use the 3D engine at all in which case the aggregated utilization on the GPU would have been reported as 0% while the video is playing! Instead we opted to pick the percentage utilization of the busiest engine as a representative of the overall GPU usage.

Video Memory

Below the engines graphs are the video memory utilization graphs and summary. Video memory is broken into two big categories: dedicated and shared.

Dedicated memory represents memory that is exclusively reserved for use by the GPU and is managed by VidMm. On discrete GPUs this is your VRAM, the memory that sits on your graphics card. On integrated GPUs, this is the amount of system memory that is reserved for graphics. Many integrated GPUs avoid reserving memory for exclusive graphics use and instead opt to rely purely on memory shared with the CPU, which is more efficient.

This small amount of driver reserved memory is represented by the Hardware Reserved Memory.

For integrated GPUs, it's more complicated. Some integrated GPUs will have dedicated memory while others won't. Some integrated GPUs reserve memory in the firmware (or during driver initialization) from main DRAM. Although this memory is allocated from DRAM shared with the CPU, it is taken away from Windows and out of the control of the Windows memory manager (Mm) and managed exclusively by VidMm. This type of reservation is typically discouraged in favor of shared memory which is more flexible, but some GPUs currently need it.

The amount of dedicated memory under the performance tab represents the number of bytes currently consumed across all processes, unlike many existing utilities which show the memory requested by a process.

Shared memory represents normal system memory that can be used by either the GPU or the CPU. This memory is flexible and can be used in either way, and can even switch back and forth as needed by the user workload. Both discrete and integrated GPUs can make use of shared memory.

Windows has a policy whereby the GPU is only allowed to use half of physical memory at any given instant. This is to ensure that the rest of the system has enough memory to continue operating properly. On a 16GB system the GPU is allowed to use up to 8GB of that DRAM at any instant. It is possible for applications to allocate much more video memory than this.  As a matter of fact, video memory is fully virtualized on Windows and is only limited by the total system commit limit (i.e. total DRAM installed + size of the page file on disk). VidMm will ensure that the GPU doesn't go over its half of DRAM budget by locking and releasing DRAM pages dynamically. Similarly, when surfaces aren't in use, VidMm will release memory pages back to Mm over time, such that they may be repurposed if necessary. The amount of shared memory consumed under the performance tab essentially represents the amount of such shared system memory the GPU is currently consuming against this limit.

Processes Tab

Under the process tab you'll find an aggregated summary of GPU utilization broken down by processes.

It's worth discussing how the aggregation works in this view. As we've seen previously, a PC can have multiple GPUs, and each of these GPUs will typically have several engines. Adding a column for each GPU and engine combination would lead to dozens of new columns on a typical PC, making the view unwieldy. The processes tab is meant to give users a quick and simple glance at how their system resources are being utilized across the various running processes, so we wanted to keep it clean and simple, while still providing useful information about the GPU.

The solution we decided to go with is to display the utilization of the busiest engine, across all GPUs, for that process as representing its overall GPU utilization. But if that's all we did, things would still have been confusing. One application might be saturating the 3D engine at 100% while another saturates the video engine at 100%. In this case, both applications would have reported an overall utilization of 100%, which would have been confusing. To address this problem, we added a second column, which indicates which GPU and Engine combination the utilization being shown corresponds to. We would like to hear what you think about this design choice.

Similarly, the utilization summary at the top of the column is the maximum of the utilization across all GPUs. The calculation here is the same as the overall GPU utilization displayed under the performance tab.

Details Tab

Under the details tab there is no information about the GPU by default. But you can right-click on the column header, choose "Select columns", and add either GPU utilization counters (the same one as described above) or video memory usage counters.

There are a few things that are important to note about these video memory usage counters. The counters represent the total amount of dedicated and shared video memory currently in use by that process. This includes both private memory (i.e., memory that is used exclusively by that process) as well as cross-process shared memory (i.e., memory that is shared with other processes, not to be confused with memory shared between the CPU and the GPU).

As a result of this, adding the memory utilized by each individual process will sum up to an amount of memory larger than that utilized by the GPU since memory shared across processes will be counted multiple times. The per process breakdown is useful to understand how much video memory a particular process is currently using, but to understand how much overall memory is used by a GPU, one should look under the performance tab for a summation that properly takes into account shared memory.

Another interesting consequence of this is that some system processes, in particular dwm.exe and csrss.exe, that share a lot of memory with other processes will appear much larger than they really are. For example, when an application creates a top-level window, video memory will be allocated to hold the content of that window. That video memory surface is created by csrss.exe on behalf of the application, possibly mapped into the application process itself, and shared with the desktop window manager (dwm.exe) such that the window can be composed onto the desktop. The video memory is allocated only once but is accessible from possibly all three processes and appears against their individual memory utilization. Similarly, an application's DirectX swapchains or DCOMP visuals (XAML) are shared with the desktop compositor. Most of the video memory appearing against these two processes is really the result of an application creating something that is shared with them, as they allocate very little by themselves. This is also why you will see these grow as your desktop gets busy, but keep in mind that they aren't really consuming all of your resources.

We could have decided to show a per-process private memory breakdown instead and ignore shared memory. However, this would have made many applications look much smaller than they really are, since we make significant use of shared memory in Windows. In particular, with universal applications it's typical for an application to have a complex visual tree that is entirely shared with the desktop compositor, as this allows the compositor a smarter and more efficient way of rendering the application only when needed and results in overall better performance for the system. We didn't think that hiding shared memory was the right answer. We could also have opted to show private+shared for regular processes but only private for csrss.exe and dwm.exe, but that also felt like hiding useful information from power users.

This added complexity is one of the reasons we don't display this information in the default view and reserve it for power users who will know how to find it. In the end, we decided to go with transparency and went with a breakdown that includes both private and cross-process shared memory. This is an area where we're particularly interested in feedback, and we look forward to hearing your thoughts.

Closing thought

We hope you found this information useful and that it will help you get the most out of the new Task Manager GPU performance data.

Rest assured that the team behind this work will be closely monitoring your constructive feedback and suggestions, so keep them coming! The best way to provide feedback is through the Feedback Hub. To launch the Feedback Hub, use the keyboard shortcut Windows key + F. Submit your feedback (and send us upvotes) under the category Desktop Environment -> Task Manager.

General availability of Storage Service Encryption and "Secure transfer required"


Today, we are excited to announce the general availability of Storage Service Encryption for Azure File Storage, as well as support for the "Secure transfer required" feature in Azure Government storage accounts.

Storage Service Encryption for Azure File Storage

Azure File Storage is a fully managed service providing distributed and cross-platform storage. IT organizations can lift and shift on-premises file shares to the cloud using Azure Files by simply pointing their applications to the Azure file share path. Thus, agencies can start leveraging the cloud without having to incur development costs to adopt cloud storage. Azure File Storage is now the first fully managed file service offering encryption of data at rest.

Azure Government customers already benefit from Storage Service Encryption for Azure Blob Storage. Encryption support for Azure Tables and Queues will be coming by June.

Microsoft handles all the encryption, decryption and key management in a fully transparent fashion. All data is encrypted using 256-bit AES encryption, also known as AES-256, one of the strongest block ciphers available. Customers can enable this feature on all available redundancy types of Azure File Storage – LRS and GRS. There is no additional charge for enabling this feature.

You can enable this feature on any Azure Resource Manager storage account using the Azure Portal, Azure PowerShell, Azure CLI or the Microsoft Azure Storage Resource Provider API.
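For example, enabling encryption for the File service with Azure PowerShell could look like the following sketch (the resource group and account names are placeholders, and it assumes a recent AzureRM.Storage module that exposes the -EnableEncryptionService parameter):

# Enable Storage Service Encryption for the File service on an existing ARM storage account
Set-AzureRmStorageAccount -ResourceGroupName "myResourceGroup" `
                          -Name "mystorageaccount" `
                          -EnableEncryptionService File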

Find out more about Storage Service Encryption with Service Managed Keys.

"Secure transfer required"

This feature enhances the security of your storage account by requiring that all requests to your account be made over a secure connection.

Please note: This feature is disabled by default. For more details, see the article "Require secure transfer".
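Enabling it from Azure PowerShell could look like this sketch (placeholder names again; it assumes an AzureRM.Storage version that supports the -EnableHttpsTrafficOnly parameter):

# Require HTTPS for all requests to the storage account
Set-AzureRmStorageAccount -ResourceGroupName "myResourceGroup" `
                          -Name "mystorageaccount" `
                          -EnableHttpsTrafficOnly $true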

We welcome your comments and suggestions to help us continually improve your Azure Government experience. To stay up to date on all things Azure Government, be sure to subscribe to our RSS feed and to receive emails, click "Subscribe by Email!" on the Azure Government Blog. To experience the power of Azure Government for your organization, sign up for an Azure Government Trial.

HealthVault S67 release


The HealthVault S67 release has been deployed to PPE and will be available in the production environment next week. There are no changes impacting HealthVault developers. Please use the MSDN Forum to report any issues.

AX 2012 - Posting definitions are required for year-end general ledger close ‘Select the Use posting definitions’ checkbox on General ledger parameters


INTRODUCTION

Consider that at the end of a fiscal year, after each period has been closed, you must run the year-end close process to transfer opening balances to the new year. You can run the year-end close process as many times as needed as adjustments are made.

 

FISCAL YEAR CLOSE

On some occasions, the 'Opening transactions' form (path: General ledger > Periodic > Fiscal year close > Opening transactions) used to run the fiscal year close process may look different from the one you commonly know, including fields such as 'Balances through fiscal period' and 'Post to fiscal period', among others.


Likewise, you may see the following warning message: 'Posting definitions are required for year end general ledger close. Select the 'Use posting definitions' checkbox on General ledger parameters.'

 

These fields are enabled by the Public Sector solution.

If the business processes you are running do not require this solution, you can disable them using the following options, depending on your version:

 

AX 2012 RTM with Feature Pack

Disable the Public Sector functionality by following the procedure recommended in the blog post: How to disable the Public Sector solution when using Microsoft Dynamics AX 2012 Feature Pack.

https://blogs.msdn.microsoft.com/axsupport/2012/08/06/how-to-disable-the-public-sector-solution-when-using-microsoft-dynamics-ax-2012-feature-pack/

 

AX 2012 R2 and AX 2012 R3

Keep in mind that in these versions the Public Sector solution is in the SYP layer, not in the FPK layer as was the case in AX 2012 RTM FP, so the procedure mentioned above should not be performed. Instead, you can clear the fields 'Use fund dimension for year-end transactions' and 'Use fund dimension for transferring purchase orders'.

These fields are in the General ledger parameters, in the Fiscal year close section (path: General ledger > Setup > General ledger parameters > Ledger section > Fiscal year close tab).


Once those fields have been cleared, the 'Opening transactions' form will appear as you know it, and the warning message mentioned above will no longer be displayed.




 

What we've learned from .NET Core SDK Telemetry


We are releasing .NET Core SDK usage data that has been collected by the .NET Core CLI. We have been using this data to determine the most common CLI scenarios, the distribution of operating systems and to answer other questions we've had, as described below.

As an open source application platform that collects usage data via an SDK, it is important that all developers that work on the project have access to usage data in order to fully participate in and understand design choices and propose product changes. This is now the case with .NET Core.

.NET Core telemetry was first announced in the .NET Core 1.0 RC2 and .NET Core 1.0 RTW blog announcements. It is also documented in .NET Core telemetry docs.

We will release new data on a quarterly schedule going forward. The data is licensed with the Open Data Commons Attribution License.

.NET Core SDK Usage Data

.NET Core SDK usage data is available by quarter in TSV (tab-separated values) format:

The Data

.NET Core has two primary distributions: the .NET Core SDK for development and build scenarios and the .NET Core Runtime for running apps in production. The .NET Core SDK collects usage data while the .NET Core Runtime does not.

The SDK collects the following pieces of data:

  • The command being used (for example, build, restore).
  • The ExitCode of the command.
  • The test runner being used, for test projects.
  • The timestamp of invocation.
  • Whether runtime IDs are present in the runtimes node.
  • The CLI version being used.
  • Operating system version.

The data collected does not contain personal information.

The data does not include Visual Studio usage since Visual Studio uses MSBuild directly and not the higher-level .NET Core CLI tools (which is where data collection is implemented).

You can opt-out of telemetry by setting the DOTNET_CLI_TELEMETRY_OPTOUT variable, as described in .NET Core documentation.
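For example, a quick PowerShell sketch for opting out on a development machine (the same variable can be set in any shell or in your CI configuration) is:

# Opt out for the current PowerShell session only
$env:DOTNET_CLI_TELEMETRY_OPTOUT = "1"

# Or persist the opt-out for the current user (takes effect in new sessions)
[Environment]::SetEnvironmentVariable("DOTNET_CLI_TELEMETRY_OPTOUT", "1", "User")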

Shape of the Data

The following is an example of the data you will find in the TSV (tab separated values) files.

You will notice misspellings, like "bulid". That's what the user typed. It's information. Maybe we should implement the same kind of "Did you mean this?" experience that git has. Food for thought.

C:\dotnet-core-cli-data>more dotnet-cli-usage-2016-q3.tsv
Timestamp       Occurences      Command Geography       OSFamily        RuntimeID       OSVersion       SDKVersion
9/1/2016 12:00:00 AM    1       bulid   India   Windows win7-x86        6.1.7601        1.0.0-preview1-002702
9/8/2016 12:00:00 AM    1       bulid   Republic of Korea       Windows win81-x64       6.3.9600        1.0.0-preview2-003121
9/19/2016 12:00:00 AM   1       bulid   United States   Windows win81-x64       6.3.9600        1.0.0-preview2-003121
9/12/2016 12:00:00 AM   1       bulid   Ukraine Windows win81-x64       6.3.9600        1.0.0-preview2-003121
8/12/2016 12:00:00 AM   2       bulid   Netherlands     Windows win10-x64       10.0.10240      1.0.0-preview1-002702
9/14/2016 12:00:00 AM   1       debug   Hong Kong       Windows win10-x64       10.0.14393      1.0.0-preview2-003121
9/14/2016 12:00:00 AM   1       debug   United States   Linux   ubuntu.16.04-x64        16.04   1.0.0-preview2-003121
8/27/2016 12:00:00 AM   1       debug   Belarus Windows win10-x64       10.0.10586      1.0.0-preview2-003121
9/16/2016 12:00:00 AM   1       debug   India   Darwin  osx.10.11-x64   10.11   1.0.0-preview2-003131
8/31/2016 12:00:00 AM   1       debug   Sweden  Windows win10-x64       10.0.10586      1.0.0-preview2-003121
8/26/2016 12:00:00 AM   1       debug   Netherlands     Windows win10-x64       10.0.10586      1.0.0-preview2-003121
9/27/2016 12:00:00 AM   2       debug   United States   Windows win10-x64       10.0.10586      1.0.0-preview2-003121
8/2/2016 12:00:00 AM    1       debug   Ireland Linux   ubuntu.16.04-x64        16.04   1.0.0-preview2-003121
8/10/2016 12:00:00 AM   1       debug   United States   Windows win7-x64        6.1.7601        1.0.0-preview1-002702
8/18/2016 12:00:00 AM   1       debug   United States   Linux   ubuntu.16.04-x64        16.04   1.0.0-preview2-003121
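Since the files are plain tab-separated text, they are easy to slice yourself. As a rough sketch, the following PowerShell (assuming you have downloaded one of the quarterly TSV files shown above) lists the ten most frequently used commands:

# Sum the Occurences column per command and show the top ten
# (the 'Occurences' spelling matches the column header in the released files)
Import-Csv -Path .\dotnet-cli-usage-2016-q3.tsv -Delimiter "`t" |
    Group-Object Command |
    Select-Object Name, @{ Name = 'Occurences'; Expression = { ($_.Group | Measure-Object Occurences -Sum).Sum } } |
    Sort-Object Occurences -Descending |
    Select-Object -First 10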

Data for .NET Core 2.0

The data that has been collected with the .NET Core SDK has demonstrated some important gaps in our understanding of how the product is being used. The following additional data points are planned for .NET Core SDK 2.0.

  • dotnet command arguments and options -- Determine more detailed product usage. For example, for dotnet new, collect the template name. For dotnet build --framework netstandard2.0, collect the framework specified. Only known arguments and options will be collected (not arbitrary strings).
  • Containers -- Determine if the SDK is running in a container. Useful to help prioritize container-related investments.
  • Command duration -- Determine how long a command runs. Useful to identify performance problems that should be investigated.
  • Target Framework(s) -- Determine which target frameworks are used and whether multiple are specified. Useful to understand which .NET Standard versions are the most popular and whether new guidance should be written, for example.
  • Hashed MAC address -- Determine a cryptographically (SHA256) anonymous and unique ID for a machine. Useful to determine the aggregate number of machines that use .NET Core.

Product Findings and Decisions

This data has been very useful to the .NET Core team for a year now. In some cases, like looking at overall usage or at the usage of specific commands, we are very reliant on this data to make decisions. For more specific decisions, like the case of removing the OpenSSL dependency on macOS, we used the data as secondary evidence to user feedback.

Here are some interesting findings that we have made based on this data:

  • .NET Core usage is growing -- >10% month over month.
  • .NET Core usage is geographically diverse -- used in 100s of countries and all continents.
  • .NET Core CLI tools are a very important part of the overall .NET Core experience -- relative to .NET Framework, the CLI tools are novel.
  • Developers do not use the .NET Core SDK the same way on Windows, macOS and Linux -- the popular commands are different per OS.
  • The publishing model for .NET Core apps is likely confusing some people -- the difference in the popular commands suggests a use of .NET Core that differs from our guidance (more investigation needed).
  • We have more work to do to reach out to the Linux and macOS communities -- we would like to see increased use of .NET Core on those OSes.
  • Our approach to supporting Linux (one build per distro) isn't providing broad enough support -- .NET Core was used on high 10s of Linux distros yet it only works well on 10-20 distros.
  • There are gaps in the data that limit our understanding -- we would like to know if the SDK is running in a container, for example.

We made the following changes in .NET Core 2.0 based on this data:

  • .NET Core 2.0 ships with a single Linux build, making it easier to use .NET Core on Linux. .NET Core 1.x has nearly a dozen Linux builds for specific distros (for example, RHEL, Debian and Ubuntu are all separate) and limits support to those distros.
  • .NET Core 2.0 does not require OpenSSL on macOS, with the intention of increasing adoption on macOS.
  • .NET Core 2.0 will be easily buildable from source so that Linux distros can include .NET Core in their package repository/archive/collection. We are talking to distros about that now.
  • We will attend and/or encourage local experts to participate in more conferences (globally) to talk about .NET Core.

More forward-looking:

  • Fix the build and publishing model for .NET Core -- the differences between run, build and publish are likely confusing people.
  • Enable more CLI scenarios -- enable distribution of tools, possibly like the way NPM does global installs.

.NET Core SDK Insights

The data reveals interesting trends about .NET Core SDK usage. Let's take a look at historical data (since June 2016):

Note: this data is just from direct use of the CLI. There is of course a significant amount of .NET Core usage via Visual Studio, as well.

Command Variations by Operating System

There are some interesting and surprising differences in command usage between operating systems. We can see that build is by far the leading command on Windows, run on Linux, and restore on macOS. I'd interpret this to say that we're seeing a lot of application development on Windows, maybe more "kicking the tires" applications on macOS scaffolded using Yeoman (since dotnet new usage is low on macOS), while Linux is primarily being used to host applications.

Note: The chart says "OSX", which is the old name for macOS.

Weekly Trends

.NET Core usage is growing over time. You can see that there's an obvious cycle that follows the work week.

Geographic Distributions

It's interesting to take a look at the geographic variations in operating system usage. Most geographies have a mix, but you can see that some areas run predominantly on a single operating system, at least as it relates to .NET Core usage.

This data and visualization is based on the IP address seen on the server. It is not collected by the CLI. The IP address is not stored, but converted to a 3-octet IP address, which is effectively a city-level representation of that data.

Overall Operating System Distribution

Given .NET's roots, it's not surprising to see a large Windows following. It's exciting to see substantial Linux and Darwin (macOS) usage as well.

Operating System Version Distribution

It looks like .NET Core is running mostly on the newest operating system versions at this point. This aligns with our expectation that .NET Core has been adopted mostly by "early adopters" to this point. In 2-3 years, we expect that the operating system distribution will be more varied.

More to Come

We will continue to make this data available to you in a timely manner, and we're going to look into making it possible for you to visualize the kinds of trends we're seeing (like in the images above). For now, we're making the raw data available to you.

Thanks to everyone that has been using .NET Core. The community engagement on the project has been amazing and we are making a great product together. This information is helping us make the product better and will become even more useful in the future. We are now doing our part to make the data collected publicly available. This makes good on a statement that we made at the start of the project, that we would release the data. We now look forward to other developers reasoning about this data and using it as part of project decision making.

Diagnostic Improvements in Visual Studio 2017 15.3.0


This post, as well as the diagnostics described in it, benefited significantly from the feedback of Mark, Xiang, Stephan, Marian, Gabriel, Ulzii, Steve and Andrew.

Visual Studio 2017 15.3.0 release comes with a number of improvements to the Microsoft Visual C++ compiler's diagnostics. Most of these improvements are in response to the diagnostics improvements survey we shared with you at the beginning of the 15.3 development cycle. Below you will find some of the new or improved diagnostic messages in the areas of member initialization, enumerations, dealing with precompiled headers, conditionals, and more. We will continue this work throughout VS 2017.

Order of Members Initialization

A constructor won't initialize members in the order their initializers are listed in code, but in the order the members are declared in the class. There are multiple potential problems that can stem from assuming that the actual initialization order will match the code. A typical scenario is when a member initialized earlier in the list is used in a later initializer, while the actual initialization happens in the reverse order because of the order of declaration of those members.

// Compile with /w15038 to enable the warning
struct A { A(int) {} };      // minimal base class so the example is self-contained
struct B : A
{
    B(int n) : b(n), A(b) {} // warning C5038: data member 'B::b' will be initialized after base class 'A'
    int b;
};

The above warning is off by default in the current release due to the amount of code it breaks in numerous projects that treat warnings as errors. We plan to enable the warning by default in a subsequent release, so we recommend trying to enable it early.

Constant Conditionals

There were a few suggestions to adopt the practice popularized by Clang of suppressing certain warnings when extra parentheses are used. We looked at it in the context of one bug report suggesting that we should suppress "warning C4127: conditional expression is constant" when the user puts extra () (note that Clang itself doesn't apply the practice to this case). While we discussed the possibility, we decided this would be a disservice to good programming practices in the context of this warning as the language and our implementation now supports the 'if constexpr' statement. Instead, we now recommend using 'if constexpr'.

    if ((sizeof(T) < sizeof(U))) …
        // warning C4127 : conditional expression is constant
        // note : consider using 'if constexpr' statement instead

Scoped Enumerations

One reason scoped enumerations (aka enum classes) are preferred is because they have stricter type-checking rules than unscoped enumerations and thus provide better type safety. We were breaking that type safety in switch statements by allowing developers to accidentally mix enumeration types. This often resulted in unexpected runtime behavior:

enum class A { a1, a2 };
enum class B { baz, foo, a2 };
int f(A a) {
    switch (a)
    {
    case B::baz: return 1;
    case B::a2:  return 2;
    }
    return 0;
}

In /permissive- mode (again, due to the amount of code this broke) we now emit errors:

error C2440: 'type cast': cannot convert from 'int' to 'A'
note: Conversion to enumeration type requires an explicit cast (static_cast, C-style cast or function-style cast)
error C2046: illegal case

The error will also be emitted on pointer conversions in a switch statement.

Empty Declarations

We used to ignore empty declarations without any diagnostics, assuming they were pretty harmless. Then we came across a couple of usage examples where users used empty declarations on templates in some complicated template-metaprogramming code with the assumption that those would lead to instantiations of the type of the empty declaration. This was never the case and thus was worth notifying about. In this update we reused the warning that was already happening in similar contexts, but in the next update we'll change it to its own warning.

struct A { … };
A; // warning C4091 : '' : ignored on left of 'A' when no variable is declared

Precompiled Headers

We had a number of issues arising from the use of precompiled headers on very large projects. The issues weren't compiler-specific per se, but rather depended on processes happening in the operating system. Unfortunately, our one-error-fits-all approach for this scenario was inadequate for users to troubleshoot the problem and come up with a suitable workaround. We expanded the information that the errors contain in these cases in order to better identify the specific scenario that could have led to the error and advise users on ways to address the issue.

error C3859: virtual memory range for PCH exceeded; please recompile with a command line option of '-Zm13' or greater
note: PCH: Unable to get the requested block of memory
note: System returned code 1455: The paging file is too small for this operation to complete
note: please visit https://aka.ms/pch-help for more details
fatal error C1076: compiler limit: internal heap limit reached; use /Zm to specify a higher limit

The broader issue is discussed in greater details in our earlier blog post: Precompiled Header (PCH) issues and recommendations

Conditional Operator

The last group of new diagnostic messages are all related to our improvements to the conformance of the conditional operator ?:. These changes are also opt-in and are guarded by the switch /Zc:ternary (implied by /permissive-) due to the amount of code they broke. In particular, the compiler used to accept arguments in the conditional operator ?: that are considered ambiguous by the standard (see section [expr.cond]). We no longer accept them under /Zc:ternary or /permissive- and you might see new errors appearing in source code that compiles clean without these flags.

The typical code pattern this change breaks is when some class U both provides a constructor from another type T and a conversion operator to type T (both non-explicit). In this case both the conversion of the 2nd argument to the type of the 3rd and the conversion of the 3rd argument to the type of the 2nd are valid conversions, which is ambiguous according to the standard.

struct A
{
	A(int);
	operator int() const;
};

A a(42);
auto x = cond ? 7 : a; // A: old permissive behavior prefers A(7) over (int)a.
                       // The non-permissive behavior issues:
                       //     error C2445: result type of conditional expression is ambiguous: types 'int' and 'A' can be converted to multiple common types
                       //     note: could be 'int'
                       //     note: or       'A'

To fix the code, simply cast one of the arguments explicitly to the type of the other.

There is one important exception to this common pattern when T represents one of the null-terminated string types (e.g. const char*, const char16_t* etc., but you can also reproduce this with array types and the pointer types they decay to) and the actual argument to ?: is a string literal of corresponding type. C++17 has changed the wording, which led to change in semantics from C++14 (see CWG defect 1805). As a result, the code in the following example is accepted under /std:c++14 and rejected under /std:c++17:

struct MyString
{
	MyString(const char* s = "") noexcept; // from const char*
	operator const char*() const noexcept; //   to const char*
};
MyString s;
auto x = cond ? "A" : s; // MyString: permissive behavior prefers MyString("A") over (const char*)s

The fix is again to cast one of the arguments explicitly.

In the original example that triggered our conditional operator conformance work, we were giving an error where the user was not expecting it, without describing why we give an error:

auto p1 = [](int a, int b) { return a > b; };
auto p2 = [](int a, int b) { return a > b; };
auto p3 = x ? p1 : p2; // This line used to emit an obscure error:
error C2446: ':': no conversion from 'foo::<lambda_f6cd18702c42f6cd636bfee362b37033>' to 'foo::<lambda_717fca3fc65510deea10bc47e2b06be4>'
note: No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called

With /Zc:ternary the reason for failure becomes clear even though some people might still not like that we chose not to give preference to any particular (implementation-defined) calling convention on architectures where we support multiple:

error C2593: 'operator ?' is ambiguous
note: could be 'built-in C++ operator?(bool (__cdecl *)(int,int), bool (__cdecl *)(int,int))'
note: or       'built-in C++ operator?(bool (__stdcall *)(int,int), bool (__stdcall *)(int,int))'
note: or       'built-in C++ operator?(bool (__fastcall *)(int,int), bool (__fastcall *)(int,int))'
note: or       'built-in C++ operator?(bool (__vectorcall *)(int,int), bool (__vectorcall *)(int,int))'
note: while trying to match the argument list '(foo::<lambda_717fca3fc65510deea10bc47e2b06be4>, foo::<lambda_f6cd18702c42f6cd636bfee362b37033>)'

Another scenario where one would encounter errors under /Zc:ternary are conditional operators with only one of the arguments being of type void (while the other is not a throw expression). A common use of these in our experience of fixing the source code this change broke was in ASSERT-like macros:

void myassert(const char* text, const char* file, int line);
#define ASSERT(ex) (void)((ex) ? 0 : myassert(#ex, __FILE__, __LINE__))

error C3447: third operand to the conditional operator ?: is of type 'void', but the second operand is neither a throw-expression nor of type 'void'

The typical solution is to simply replace the non-void argument with void().

A bigger source of problems related to /Zc:ternary might be coming from the use of the conditional operator in template meta-programming as some of the result types would change under this switch. The following example demonstrates change of conditional expression's result type in a non-meta-programming context:

      char  a = 'A';
const char  b = 'B';
decltype(auto) x = cond ? a : b; // char without, const char& with /Zc:ternary
const char(&z)[2] = argc > 3 ? "A" : "B"; // const char* without /Zc:ternary

The typical resolution in such cases would be to apply a std::remove_reference trait on top of the result type where needed in order to preserve the old behavior.

In Closing

You can try these improvements today by downloading Visual Studio 2017 15.3.0 Preview. As always, we welcome your feedback – it helps us prioritize our work and helps the rest of the community resolve similar issues. Feel free to send any comments through e-mail at visualcpp@microsoft.com, Twitter @visualc, or Facebook at Microsoft Visual Cpp. If you haven't done so yet, please also check our previous post in the series documenting our progress on improving compiler diagnostics.

If you encounter other problems with MSVC in VS 2017 please let us know via the Report a Problem option, either from the installer or the Visual Studio IDE itself. For suggestions, let us know through UserVoice.

Thank you!
Yuriy

SQL Server is using only one processor/CPU


Recently I have collaborated on some cases with the same symptom: the customer reported that they were using SQL Server and noticed that only one CPU on the machine hit 100% utilization, while the other CPUs didn't have sustained usage of more than 10%.

The scenarios had the same characteristic: SQL Server Express running in a virtualized environment.

DBAs might be thinking about affinity masks and hard-binding behavior, but that is not the case here, and the explanation is much simpler than that: a product limit.

If you check the documentation, for instance, Features Supported by the Editions of SQL Server 2014 (https://msdn.microsoft.com/en-us/library/cc645993(v=SQL.120).aspx), you will notice:

Maximum Compute Capacity Used by a Single Instance (SQL Server Database Engine) -> Express -> Limited to lesser of 1 Socket or 4 cores

SQL Server Express supports up to 4 cores in the SAME socket, so the virtual machine definition matters a lot.
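A quick way to confirm you are hitting this limit is to check how many schedulers SQL Server actually brought online. A sketch using Invoke-Sqlcmd (assuming the SQL Server PowerShell tools are installed; the instance name is a placeholder) is:

# Compare schedulers that are online with those SQL Server sees but leaves offline
Invoke-Sqlcmd -ServerInstance ".\SQLEXPRESS" -Query @"
SELECT status, COUNT(*) AS scheduler_count
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255   -- exclude hidden/internal schedulers
GROUP BY status;
"@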

In the following image you can see that the virtual machine is being defined by having 12 sockets with a single core per socket, so a SQL Server Express in this machine would only use one core.


This is really common in deployments that use SQL Server Express, like Skype for Business (formerly Lync); because these are smaller VMs, sometimes we don't give proper attention to their configuration as we would for a VM that hosts SQL Server Enterprise.

A similar scenario can also happen with SQL Server Standard, but in that case using only 4 processors in a 16-socket virtual machine (limited to the lesser of 4 sockets or 16 cores).

To avoid performance issues related to NUMA / vNUMA definition, our recommendation is to have the same virtual layout as the one defined in your physical processor architecture.

This scenario should be really straightforward for DBAs, but sometimes the organization doesn't have a DBA involved when using SQL Server Express, so hopefully this post can help someone.

Luciano Caixeta Moreira -  [ Luti ]
luticm79@hotmail.com
http://www.linkedin.com/in/luticm
www.twitter.com/luticm


GPUs in my task manager


Bryan Langley has posted about GPUs in the task manager. Definitely worth reading, as there are a number of reasonable but non-obvious decisions on how to best display aggregate data, and when shared resources are counted multiple times or not.

Also having the driver version and DirectX version handily under the charts is very convenient, I think.

Enjoy!

A few performance tips for using the OneNote API


Hello world,

On Stack Overflow and Twitter, we often hear questions about how to make queries to the API faster. Here are a few recommendations:

Use $select to select the minimum set of properties you need

When querying for a resource (say for example, sections inside a notebook), you make a request like:

GET ~/notebooks/{id}/sections

This retrieves all the properties of the sections - but perhaps you don't really need them all, maybe you only need the id and the name of the section... in that case, it is better to say:

GET ~/notebooks/{id}/sections?$select=id,name

The same applies to other APIs - the fewer properties you select, the better.

Use $expand instead of making multiple API calls

Suppose you want to retrieve all of the user’s notebooks, sections, and section groups in a hierarchical view.

It can be tempting to do the following:

  1. Call GET ~/notebooks to get the list of notebooks
  2. For every retrieved notebook, call GET ~/notebooks/{notebookId}/sections to retrieve the list of sections
  3. For every retrieved notebook, call GET ~/notebooks/{notebookId}/sectionGroups to retrieve the list of section groups
  4. … optionally recursively iterate through section groups

While this will work (with a few extra sequential roundtrips to our service), there is a much better alternative. See our blog post about $expand for more details.

GET ~/api/v1.0/me/notes/notebooks?$expand=sections,sectionGroups($expand=sections)

This will yield the same results in one network roundtrip with way better performance – yay!
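As a rough sketch, issuing that call from PowerShell could look like this (acquiring the bearer token is out of scope here, so $accessToken is a placeholder):

# Fetch notebooks together with their sections and section groups in a single roundtrip
# (single quotes keep $expand and $select from being treated as PowerShell variables)
$uri = 'https://www.onenote.com/api/v1.0/me/notes/notebooks?$expand=sections,sectionGroups($expand=sections)'
$response = Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Bearer $accessToken" }
$response.value | ForEach-Object { $_.name }   # list the notebook names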

When getting all pages for a user, do so for each section separately

While we do expose an endpoint to retrieve all pages, this isn't the best way to get all the pages the user has access to - when the user has too many sections, this can lead to timeouts and bad performance. It is better to iterate over the sections, getting the pages for each one separately.

So it is significantly better to make several calls like this (especially if you don't need all sections):

GET ~/sections/{id}/pages

than a single call like this (this API is paged, so you won't be able to fetch all pages at once anyway):

GET ~/pages
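A minimal sketch of that per-section iteration in PowerShell (again with $accessToken as a placeholder) might be:

# Get the id and name of each section, then fetch that section's pages separately
$headers  = @{ Authorization = "Bearer $accessToken" }
$sections = Invoke-RestMethod -Headers $headers `
    -Uri 'https://www.onenote.com/api/v1.0/me/notes/sections?$select=id,name'

foreach ($section in $sections.value) {
    $pages = Invoke-RestMethod -Headers $headers `
        -Uri ('https://www.onenote.com/api/v1.0/me/notes/sections/{0}/pages' -f $section.id)
    Write-Output ("{0}: {1} page(s)" -f $section.name, $pages.value.Count)
}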

When getting page metadata, override default lastModifiedDate ordering

It is faster for us to get pages when we don't have to sort them by lastModifiedDate (which is the default ordering) - you can achieve this by sorting by any other property:

GET ~/sections/{id}/pages?$select=id,name,creationDate&$orderby=creationDate

Happy coding,
Jorge

nbgrader to automate assessment and grading with JupyterHub on Azure Data Science VM

$
0
0

Jupyter notebook on Microsoft Data Science Virtual Machine

The Anaconda distribution on the Microsoft Data Science VM comes with a Jupyter notebook, an environment to share code and analysis. The Jupyter notebook is accessed through JupyterHub. You sign in using your local Linux user name and password.

The Jupyter notebook server has been pre-configured with Python 2, Python 3, and R kernels. There is a desktop icon named "Jupyter Notebook" to launch the browser to access the notebook server. If you are on the VM via SSH or X2Go client, you can also visit https://localhost:8000/ to access the Jupyter notebook server.

Note

Continue if you get any certificate warnings.

You can access the Jupyter notebook server from any host. Just type https://<VM DNS name or IP Address>:8000/

Note

Port 8000 is opened in the firewall by default when the VM is provisioned.

We have packaged sample notebooks--one in Python and one in R. You can see the link to the samples on the notebook home page after you authenticate to the Jupyter notebook by using your local Linux user name and password. You can create a new notebook by selecting New, and then the appropriate language kernel. If you don't see the New button, click the Jupyter icon on the top left to go to the home page of the notebook.

In the video below you can see nbgrader, a tool developed by and for instructors to create and grade rich, interactive assignments in a Jupyter notebook.

[Video: creating an assignment with nbgrader]

Standalone Usage

On its own, nbgrader provides functionality for creating assignments and then for both automatic and manual grading of submissions collected via email or a VLE (virtual learning environment) service.

JupyterHub Usage

When combined with JupyterHub, it supports the full grading pipeline: creating assignments, releasing them to students, collecting submissions, grading, and generating personalized feedback.

To demonstrate the use of nbgrader, the following video will walk you through the basics of the tool and then outline two example workflows: one using standalone nbgrader, and one using nbgrader with JupyterHub. In doing so, I will illustrate to instructors how they can create their own assignments in the notebook using nbgrader.
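If you want to try nbgrader on your own Data Science VM, a minimal sketch of the installation (based on the nbgrader installation instructions linked in the Resources section below; run it in the Python environment that backs your Jupyter kernels) is:

# Install nbgrader and enable its notebook and server extensions
pip install nbgrader
jupyter nbextension install --sys-prefix --py nbgrader --overwrite
jupyter nbextension enable --sys-prefix --py nbgrader
jupyter serverextension enable --sys-prefix --py nbgrader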


Resources

http://kristenthyng.com/blog/2016/09/07/jupyterhub+nbgrader/ How to use JupyterHub with nbgrader

https://nbgrader.readthedocs.io/en/latest/user_guide/installation.html NBgrader Installation instructions

https://github.com/jupyter/nbgrader NBGrader on Github

https://docs.microsoft.com/en-us/azure/machine-learning/machine-learning-data-science-virtual-machine-overview Data Science VM Overview

https://developer.rackspace.com/blog/deploying-jupyterhub-for-education/ Using Jupyterhub on Docker

Application Insights Planned Maintenance - 07/13 - Final Update

Final Update: Friday, 21 July 2017 23:52 UTC

Maintenance has been completed on the infrastructure for the Application Insights Availability Web Test feature.
Necessary updates were installed successfully on all nodes supporting the Availability Web Test feature.

-Deepesh

Planned Maintenance: 17:00 UTC, 18 July 2017 – 00:00 UTC, 22 July 2017

The Application Insights team will be performing planned maintenance on the Availability Web Test feature. During the maintenance window we will be installing necessary updates on the underlying infrastructure.

During this timeframe some customers may experience very short availability data gaps in one test location at a time. We will make every effort to limit the amount of impact to customer availability tests, but customers should ensure their availability tests are running from at least three locations to ensure redundant coverage through maintenance. Please refer to the following article on how to configure availability web tests: https://azure.microsoft.com/en-us/documentation/articles/app-insights-monitor-web-app-availability/

We apologize for any inconvenience.


-Deepesh

Top stories from the VSTS community - 2017.07.21


Here are top stories we found in our streams this week related to DevOps, VSTS, TFS and other interesting topics.

TOP STORIES

VIDEOS

  • VS Team Services - Test Case Explorer v2 - ALM Rangers
    Anthony Borton provides an overview of the Test Case Explorer v2 extension, and walks through how you can use it. Created by Mathias Olausson and Mattias Skold, the extension enables users to manage their test cases and clone test plans and suites.
  • Advanced MSBuild Extensibility with Nate McMaster - Nate McMaster
    Nate McMaster shows how to write your own MSBuild tasks in C#. Since you're developing your tasks in C#, you can use standard .NET libraries, debug your code, etc.
  • Sharing MSBuild Tasks as NuGet Packages with Nate McMaster - Nate McMaster
    Nate McMaster shows how to publish an MSBuild task as a NuGet package, allowing you to share it and reuse it in your own projects.

TIP: If you want to get your VSTS news in audio form, then be sure to subscribe to RadioTFS.

FEEDBACK

What do you think? How could we do this series better?
Here are some ways to connect with us:

  • Add a comment below
  • Use the #VSTS hashtag if you have articles you would like to see included

Use Microsoft Graph API to reach on-premises, cloud users of hybrid Exchange 2016


Le Café Central de DeVa - Microsoft Graph API

As you are aware, Office 365 and Exchange Online provide a new way to work with email, calendars, and contacts. The Mail, Calendar, and Contacts REST APIs provide a powerful, easy-to-use way to access and manipulate Exchange data.

In this video, Venkat, a Principal Program Manager lead, will walk you through how to use Microsoft Graph to reach on-premises and cloud users of hybrid Exchange 2016 deployments, in addition to Office 365 and Outlook.com. He'll discuss how your application can handle versions of servers on-premises and in the cloud, and how on-premises Exchange 2016 is set up to support Microsoft Graph and OAuth.

For related documentation, you can get started at http://graph.microsoft.io/en-us/docs/overview/hybrid_rest_support

Hope this helps.

Rendering a large report in SharePoint mode fails with maximum message size quota exceeded error message

$
0
0

Recently, I was working on a scenario where Reporting Services 2012 was configured in SharePoint 2013 integrated mode. We were exporting a report that was ~240 MB in size. When we did that, the report failed with the following exception:

07/19/2017 13:37:57.46  w3wp.exe (0x2C80)                        0x0834 SQL Server Reporting Services  Service Application Proxy      00000 Monitorable Notified the load balancer and raising RecoverableException for exception: System.ServiceModel.CommunicationException: The maximum message size quota for incoming messages (115343360) has been exceeded. To increase the quota, use the MaxReceivedMessageSize property on the appropriate binding element. ---> System.ServiceModel.QuotaExceededException: The maximum message size quota for incoming messages (115343360) has been exceeded. To increase the quota, use the MaxReceivedMessageSize property on the appropriate binding element.     --- End of inner exception stack trace ---    Server stack trace:      at System.ServiceModel.Channels.MaxMessageSizeStream.PrepareRead(Int32 bytesToRead)     at System.ServiceModel.Channels.MaxMessageSizeStream.Read(Byte[] buffer, Int32 offset, Int32 count)    ... 00c1069e-df14-5005-0000-0ec04d857de3

To resolve the issue, you need to modify a few config files where the quota value 115343360 (~115 MB) is defined. This is a two-step process.

1. First, you need to go to the SharePoint app servers that host the Reporting Services service application. These can be identified as follows:

Go to SharePoint Central Administration -> System Settings -> Manage Servers in Farm, and make a note of the servers that are hosting "SQL Server Reporting Services Service".

2. On each of the machines identified in #1, go to the web.config file located under:

"C:Program FilesCommon Filesmicrosoft sharedWeb Server Extensions15WebServicesReporting"

3. Take a backup of the file.

4. Open it in a text editor and locate the <bindings> section.

5. Replace all occurrences of 115343360 with 2147483647 (~2 GB).

6. Save and close the file. Here is how the modified version should look; if any other values in this section differ, adjust them to match the sample below.

<customBinding>
  <binding name="http" sendTimeout="01:00:00" receiveTimeout="01:00:00">
    <security authenticationMode="IssuedTokenOverTransport" allowInsecureTransport="true" />
    <binaryMessageEncoding>
      <readerQuotas maxStringContentLength="2147483647" maxArrayLength="2147483647" />
    </binaryMessageEncoding>
    <httpTransport maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647" useDefaultWebProxy="false" transferMode="Streamed" authenticationScheme="Anonymous" />
  </binding>
  <binding name="https" sendTimeout="01:00:00" receiveTimeout="01:00:00">
    <security authenticationMode="IssuedTokenOverTransport" />
    <binaryMessageEncoding>
      <readerQuotas maxStringContentLength="2147483647" maxArrayLength="2147483647" />
    </binaryMessageEncoding>
    <httpsTransport maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647" useDefaultWebProxy="false" transferMode="Streamed" authenticationScheme="Anonymous" />
  </binding>
</customBinding>

7. Perform an IISRESET.

8. Remember, steps #2 through #7 have to be done on all the machines identified in step #1.

9. Second, go to each of the SharePoint WFE servers, as well as the app servers hosting Reporting Services (identified in step #1), and locate the client.config file in the following location:

"C:Program FilesCommon Filesmicrosoft sharedWeb Server Extensions15WebClientsReporting"

10. Take a backup of the file.

11. Open it in a text editor and locate the <bindings> section.

12. Replace all occurrences of 115343360 with 2147483647 (~2 GB).

13. Save and close the file. Here is how the modified version should look; if any other values in this section differ, adjust them to match the sample below.

<customBinding>
  <!-- These are the HTTP and HTTPS bindings used by all endpoints except the streaming endpoints -->
  <binding name="http" sendTimeout="01:00:00" receiveTimeout="01:00:00">
    <security authenticationMode="IssuedTokenOverTransport" allowInsecureTransport="true" />
    <binaryMessageEncoding>
      <readerQuotas maxStringContentLength="2147483647" maxArrayLength="2147483647" />
    </binaryMessageEncoding>
    <httpTransport maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647" useDefaultWebProxy="false" transferMode="Streamed" authenticationScheme="Anonymous" />
  </binding>
  <binding name="https" sendTimeout="01:00:00" receiveTimeout="01:00:00">
    <security authenticationMode="IssuedTokenOverTransport" />
    <binaryMessageEncoding>
      <readerQuotas maxStringContentLength="2147483647" maxArrayLength="2147483647" />
    </binaryMessageEncoding>
    <httpsTransport maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647" useDefaultWebProxy="false" transferMode="Streamed" authenticationScheme="Anonymous" />
  </binding>
  <!--
  These are the HTTP and HTTPS bindings used ONLY by the streaming endpoints.

  Details:
  1) The only difference between these bindings and the ones above is that these include long
     running operations causing the security timestamp in the header to become stale.
     In order to avoid staleness errors, the maxClockSkew is set to 1 hour.
  2) Any changes made to the above bindings should probably be reflected below too.
  -->
  <binding name="httpStreaming" sendTimeout="01:00:00" receiveTimeout="01:00:00">
    <security authenticationMode="IssuedTokenOverTransport" allowInsecureTransport="true">
      <localClientSettings maxClockSkew="01:00:00" />
    </security>
    <binaryMessageEncoding>
      <readerQuotas maxStringContentLength="2147483647" maxArrayLength="2147483647" />
    </binaryMessageEncoding>
    <httpTransport maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647" useDefaultWebProxy="false" transferMode="Streamed" authenticationScheme="Anonymous" />
  </binding>
  <binding name="httpsStreaming" sendTimeout="01:00:00" receiveTimeout="01:00:00">
    <security authenticationMode="IssuedTokenOverTransport">
      <localClientSettings maxClockSkew="01:00:00" />
    </security>
    <binaryMessageEncoding>
      <readerQuotas maxStringContentLength="2147483647" maxArrayLength="2147483647" />
    </binaryMessageEncoding>
    <httpsTransport maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647" useDefaultWebProxy="false" transferMode="Streamed" authenticationScheme="Anonymous" />
  </binding>
</customBinding>

14. Perform an IISRESET.

15. Remember, steps #9 through #14 have to be done on all the WFE machines as well as the app servers hosting Reporting Services (identified in step #1).
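
If you have many servers to patch, a small console sketch like the following can apply the replacement consistently. It is only a minimal illustration of the edit described above (the path shown is the client.config from step #9; point it at the web.config from step #2 as well, and adjust the hive number to your SharePoint version), so take backups and run it elevated.

using System;
using System.IO;

class PatchReportingConfig
{
    static void Main()
    {
        // Path from step #9; adjust the hive number (14/15/16) to your SharePoint version.
        string path = @"C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\WebClients\Reporting\client.config";

        // Keep a backup next to the original before changing anything.
        File.Copy(path, path + ".bak", overwrite: true);

        // Replace the 115343360 (~115 MB) quota with 2147483647 (~2 GB).
        string config = File.ReadAllText(path);
        File.WriteAllText(path, config.Replace("115343360", "2147483647"));

        Console.WriteLine("Patched " + path);
    }
}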

Now the large report that you are trying to export should come up fine, whether from within SharePoint or from any custom application.

This article applies to all Reporting Services versions from SSRS 2012 through SSRS 2016 and to SharePoint 2010 through SharePoint 2016. The hive number in the paths referenced in steps #2 and #9 varies between 14 / 15 / 16 according to the SharePoint version (2010 / 2013 / 2016 respectively).

Hope this helps!

Selva.

[All posts are AS-IS with no warranty and support]


Installation of SSDT for BI on SQL 2014


A few days back, I was working with one of our partners who needed to install SQL Server Data Tools (SSDT) for Business Intelligence on their SQL Server 2014 instance. In SQL Server 2012, installing SSDT is a straightforward procedure: you simply select it on the Feature Selection page.

I also found a lot of partners seeking assistance with installing SSDT for BI on a SQL Server 2014 instance, and browsing through various discussion forums yielded nothing. To address this requirement of the partner pool, here is a blog post providing a workaround for the installation of SSDT.

The root cause of the complexity in the installation process is that SSDT is not listed on the Feature Selection page of the SQL Server 2014 media.

Thus, we have to install the Data Tools using an additional .exe file, which can be downloaded for free from: https://www.microsoft.com/en-in/download/details.aspx?id=42313
After launching Setup.exe from the downloaded SSDT-BI media, it may seem natural to select the “Add features to an existing instance of SQL server 2014” option on the Installation Type page, since we are only adding a feature to an existing SQL instance, as shown:

This, however, results in an error on the “Feature Configuration Rules” page, as shown below:

The reason behind this error is that the SSDT media for SQL 2014, downloaded as an additional feature, is an x86 (32-bit) package, which does not match our x64 SQL Server instance.

The workaround for this issue is to select “Perform a new installation of SQL server 2014” on the Installation Type page instead of “Add features to an existing instance of SQL server 2014”.

On the next Feature Selection page, select SQL Server Data Tools - Business Intelligence for Visual Studio 2013; this will ensure a successful installation of SSDT for BI alongside the SQL Server 2014 instance.

Happy installing!!!

 

Troubleshooting SQL Server Upgrade Issues


Recently, one of my partners was facing issues upgrading SQL Server 2008 from Service Pack 2 to Service Pack 3.

On checking the summary.txt in the setup bootstrap logs, I found the following error information:

------------------

Final result: The patch installer has failed to update the shared features. To determine the reason for failure, review the log files.
Exit code (Decimal): -2068578304
Exit facility code: 1204
Exit error code: 0
Exit message: The INSTALLSHAREDWOWDIR command line value is not valid. Please ensure the specified path is valid and different than the INSTALLSHAREDDIR path.

------------------

And in the detail.txt log I could see the following error information:

Slp: Validation for setting 'InstallSharedWowDir' failed. Error message: The INSTALLSHAREDWOWDIR command line value is not valid. Please ensure the specified path is valid and different than the INSTALLSHAREDDIR path.
Slp: Error: Action "Microsoft.SqlServer.Configuration.SetupExtension.ValidateFeatureSettingsAction" threw an exception during execution.
Slp: Microsoft.SqlServer.Setup.Chainer.Workflow.ActionExecutionException: The INSTALLSHAREDWOWDIR command line value is not valid. Please ensure the specified path is valid and different than the INSTALLSHAREDDIR path.

------------------

Looking at it, I could narrow the issue down to two possible suspects: a permissions issue or registry key corruption.

++ Checking permissions: the user was a local administrator and had all the required permissions.

++ So I captured a Process Monitor trace during the next launch of the upgrade, to figure out which registry key was being read when the process failed.

On analyzing the trace, I found that setup.exe was checking the registry value HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Microsoft SQL Server\100\VerSpecificRootDir, and could figure out that the 32-bit (Wow6432Node) installation folder for SQL Server was not set correctly.

I checked the value of the string 'B1D55012528AA294F86D6C035CEAC33B' at the registry key path HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Components\C90BFAC020D87EA46811C836AD3C507F, and it was found to be "C:\Program Files (x86)\Microsoft SQL Server".

As a precaution, I backed up the registry keys.

I then modified the value of the registry entry 'VerSpecificRootDir' under HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Microsoft SQL Server\100 from "C:\Program Files (x86)\Microsoft SQL Server\100" to "C:\Program Files (x86)\Microsoft SQL Server".
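
For reference, the same edit can be scripted. Below is a minimal C# sketch of that idea; it assumes you run it elevated and have already backed up the key, and it writes the corrected value used above by opening the 32-bit (Wow6432Node) view of the registry explicitly.

using Microsoft.Win32;

class FixVerSpecificRootDir
{
    static void Main()
    {
        // Open the 32-bit (Wow6432Node) view of HKLM explicitly, regardless of process bitness.
        using (var hklm32 = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, RegistryView.Registry32))
        using (var key = hklm32.OpenSubKey(@"SOFTWARE\Microsoft\Microsoft SQL Server\100", writable: true))
        {
            if (key != null)
            {
                // Corrected value expected by the patch installer (was ...\Microsoft SQL Server\100).
                key.SetValue("VerSpecificRootDir", @"C:\Program Files (x86)\Microsoft SQL Server", RegistryValueKind.String);
            }
        }
    }
}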

After rebooting the server once, the upgrade went through seamlessly, and SQL Server was upgraded from Service Pack 2 to Service Pack 3.

Hope this helps. Happy troubleshooting!!

Using Azure Functions, Cosmos DB and Powerapps to build, deploy and consume Serverless Apps


Azure Functions can be used to quickly build applications as microservices, complete with turnkey integration with other Azure services like Cosmos DB, queues, and blobs through the use of input and output bindings. Built-in tools can be used to generate Swagger definitions for these services, publish them, and consume them in client-side applications running across device platforms.

In this article, an Azure Function App comprising two Functions that perform CRUD operations on data residing in Azure Cosmos DB will be created. The Function App is exposed as a REST-callable endpoint that is consumed by a Microsoft PowerApps application. This use case does not require an IDE for development; it can be built entirely from the Azure portal and the browser.

[The PowerApps app file, C# script files, and the YAML file for the Open API specs created for this article can be downloaded from the GitHub location here]

  1. Creation of a DocumentDB database in Azure Cosmos DB

Use the Azure portal to create a DocumentDB database. For the use case described in this article, a collection (expensescol) is created that stores project expense details, comprising the attributes shown below.

2. Creation of a Function App that implements the Business Logic in Service

Two Functions are created in this Function App using C# Scripts.

  • GetAllProjectExpenses that returns all the Project Expenses Data from the collection in Cosmos DB
  • CreateProjectExpense that creates a Project Expense Record in Cosmos DB

a) Function GetAllProjectExpenses ->

The Input and output Binding configured for this Function:

Apart from the HttpTrigger input binding for the incoming request, an additional input binding for Cosmos DB is configured that retrieves all the expense records from the database. Because of this binding, all the expense records are available to the Run method through the 'documents' input parameter; see the C# script used in this Function, below.
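
Since the original screenshot is not reproduced in this text version, below is a minimal run.csx sketch of what such a script can look like; it assumes the HTTP trigger parameter is named 'req' and the Cosmos DB input binding is named 'documents' (the actual scripts are in the GitHub download linked above).

using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;

// 'documents' is populated by the Cosmos DB (DocumentDB) input binding configured on this function.
public static HttpResponseMessage Run(HttpRequestMessage req, IEnumerable<dynamic> documents, TraceWriter log)
{
    log.Info($"Returning {documents.Count()} project expense records");
    return req.CreateResponse(HttpStatusCode.OK, documents);
}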

[Note: The scripts provided here are only meant to illustrate the point, and do not handle best practices, Exceptions, etc]

Refer to the Azure Documentation for detailed guidance on configuring Bindings in Azure Functions, for HTTPTriggers and Azure CosmosDB

b) Function CreateProjectExpense ->

The binding configuration used for this Function is:

Notice that there are two output bindings here: one for the HTTP response, and the other the binding to Cosmos DB that inserts the expense record.

[Note: When the Run method of a Function is asynchronous, we cannot use an 'out' parameter for the Cosmos DB binding together with an 'out' parameter for the HTTP response. In such cases, we need to add the document meant for insertion to an IAsyncCollector object reference, 'collector' in this case. Note that the parameter 'collector' is used in the output binding to Cosmos DB shown above. Refer to the documentation here for more info on scenarios with multiple output parameters]
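
To make that note concrete, here is a minimal run.csx sketch of the create function, assuming the HTTP trigger parameter is named 'req' and the Cosmos DB output binding is bound to the IAsyncCollector parameter named 'collector' referenced above; the real script is in the GitHub download linked earlier.

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, IAsyncCollector<object> collector, TraceWriter log)
{
    // Read the expense record from the request body.
    dynamic expense = await req.Content.ReadAsAsync<object>();
    if (expense == null)
    {
        return req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a project expense record in the request body");
    }

    // Queue the document for insertion through the Cosmos DB output binding.
    await collector.AddAsync(expense);
    return req.CreateResponse(HttpStatusCode.Created, expense);
}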

3. Test the Functions created 

Use Postman to ensure both Functions work without errors. The HttpTrigger URL can be obtained from the C# script editor view of the Function.
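
If you prefer testing from code rather than Postman, a short console sketch like the one below exercises both endpoints. The URLs, the 'code' key, and the sample JSON fields are placeholders, since the collection's exact attributes appear only in the earlier screenshot.

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class FunctionSmokeTest
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Placeholder URLs: copy the real function URLs (including the ?code= key) from the portal.
            var getUrl = "https://<yourapp>.azurewebsites.net/api/GetAllProjectExpenses?code=<key>";
            var postUrl = "https://<yourapp>.azurewebsites.net/api/CreateProjectExpense?code=<key>";

            Console.WriteLine(await client.GetStringAsync(getUrl));

            // Hypothetical expense document; use the attribute names defined in your collection.
            var body = new StringContent("{\"project\":\"Contoso\",\"amount\":100}", Encoding.UTF8, "application/json");
            var response = await client.PostAsync(postUrl, body);
            Console.WriteLine(response.StatusCode);
        }
    }
}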

4. Generating an OpenAPI (Swagger) Definition for the Function App

A Function App can contain several Functions, each of which could potentially be written in a different programming language. All of these Functions, or individual 'microservices', are exposed through a single base endpoint that represents the Function App. From the Application Settings, navigate to the 'API Endpoint' tab.

Click the 'Generate API definition template' button to generate a base Swagger definition. However, it lacks the elements required to fully describe the Functions, so the definition, written in YAML format, has to be edited manually in the editor pane. The YAML file created for this Function App is available along with the other artefacts in this blog post.

Refer to this, this, and this link, which provide guidance on working with YAML to create the Swagger definitions, or on other options to create them.

[Note: The samples considered in the links above use simple primitive types as parameters in the Method calls. The scenario in this article however deals with Collections, and needs more work to get the Yaml right. Refer to the artefacts download link in this article to view the Yaml that was created for the scenario in this blog post]

[Note: For simplicity in this article, I have considered the option provided by Functions to add the API key in the Request URL, under the key 'code'.  For more secure ways to deal with it, use Azure AD integration or other options]

After the YAML is created and the definition is complete, test the requests from the test console on the web page and ensure that the Functions work without errors. Once tested, click the 'Export to Power Apps and Flow' button to export the Swagger definition and create a custom connector there.

5. Create a new custom connection in powerapps.microsoft.com from the connector registered in the previous step. Embed the security code (API key) for the Function App; this gets stored with the connection and is automatically included in requests made by PowerApps to the REST services deployed on Azure Functions.

6. Create a new Powerapps App that would consume the REST Services exposed by Azure Functions in the earlier steps

While you could start with a blank template, it takes some work to create the different forms required in the app for the 'Display All', 'Edit', and 'Browse All' use cases. PowerApps can automatically generate all these forms and provide a complete app when you select a data source like OneDrive, SharePoint/Office 365 lists, and many others. Since the 'ProjExpensesAPI' connector we have created is a custom one, this wizard is not available to create the app automatically.

To work around this, I created a custom list in Office 365 that has the same fields as the expense data returned by the Function App. I used the wizard to generate a complete app based on that custom list, and then changed all the data source references from it to the 'ProjExpensesAPI' connection.

 

Note in the screenshot above how the logged-in user context can be passed through Excel-like functions to the search box. The data is filtered after it is received from the REST APIs. Notice below how our custom API is invoked and how the data returned is filtered using the expression shown.

Screenshots of the app with each of the forms are shown below. The app can be run on Windows, Android, or iOS mobile devices.

Test the app to ensure that all the REST API operations, such as the GetAllProjectExpenses and CreateProjectExpense requests, work from the app. It can then be published by the user and shared with others in the organization.

The powerapps App file is also provided along with the other artefacts in this article.

 

Microsoft Azure - Artificial Intelligence Data Science Stack


Microsoft now has an amazing offering for building and hosting your AI/Data Science solution.

AI Stack Overview

Infrastructure

Azure Batch https://azure.microsoft.com/en-us/services/batch/

Docker Images for AI/Data Science  https://github.com/Azure/batch-shipyard/tree/master/recipes

Data Platforms

SQL Database/SQL Server https://docs.microsoft.com/en-us/sql/sql-server/what-s-new-in-sql-server-2017

Azure Datalake https://azure.microsoft.com/en-us/solutions/data-lake/

Azure Analysis Services https://azure.microsoft.com/en-us/services/analysis-services/

Azure Cosmos DB https://azure.microsoft.com/en-us/services/cosmos-db/

Hardware

FPGA https://azure.microsoft.com/en-gb/resources/videos/build-2017-inside-the-microsoft-fpga-based-configurable-cloud/

GPU http://gpu.azure.com/

Processing

Process data in Azure DataScience VM https://docs.microsoft.com/en-us/azure/machine-learning/machine-learning-data-science-virtual-machine-overview#whats-included-in-the-data-science-vm

Azure Batch AI Training https://batchaitraining.azure.com/

Azure ML Experimentation & Management https://docs.microsoft.com/en-us/azure/machine-learning/

Azure Jupyter Notebooks http://notebooks.azure.com

Frameworks

All these come as standard on the Azure Data Science VM which is available on Windows, Ubuntu or Centos see https://docs.microsoft.com/en-us/azure/machine-learning/machine-learning-data-science-virtual-machine-overview or can be run under containers using the Azure Batch Shipyard Images https://github.com/Azure/batch-shipyard/tree/master/recipes

CNTK https://docs.microsoft.com/en-us/cognitive-toolkit/cntk-on-azure
Tensorflow
R Server
Torch
Theano
Scikit
Keras
Nvidia Digits
CUDA, CUDNN
Spark
Hadoop
 

Services

Machine Learning and Toolkits https://docs.microsoft.com/en-us/azure/machine-learning/

Cognitive Services https://azure.microsoft.com/en-gb/services/cognitive-services/

Bot Framework https://dev.botframework.com

REST APIs intelligence in the cloud http://aka.ms/cognitive

 

Resources

Microsoft AI News

Azure DataScience VM

Container images https://github.com/Azure/batch-shipyard/tree/master/recipes

Announcing AI for Earth: Microsoft’s new program to put AI to work for the future of our planet

Microsoft Build 2017: Microsoft AI – Amplify human ingenuity

Microsoft AI products and services

Project Prague: What is it and why should you care?


Guest blog from Charlie Crisp, Microsoft Student Partner at the University of Cambridge


Charlie has been a Microsoft Student Partner at the University of Cambridge

This year at Build, Microsoft announced a ton of cool new stuff, including the new Cognitive Services Labs – a service which allows developers to get their hands on the newest and most experimental tools which are being developed by Microsoft.

Particularly exciting was the announcement of Project Prague, which aims to empower developers to make use of advanced gesture recognition within their applications without even needing to write a single line of code.


And why should you care? Well aside from being ridiculously cool, this is the sort of stuff that even your non-techie friends will want to hear about. So let me set the scene…

The use of keyboards dates back to the 1870s, when they were used to type and transmit stock market text data across telegraph lines, which was then immediately printed onto ticker tape. Mice, on the other hand, took a lot longer to come about, and it wasn’t until 1946 that the world was first introduced to the ‘Trackball’ – a pointing device used as an input for a radar system developed by the British Royal Navy.

Ever since, computers have been used primarily with a keyboard and mouse (or trackpad), and advances in technologies such as intelligent pens and gesture control have done little to change this. Navigating through different right-click menus and keyboard shortcuts, however, can be very cumbersome and time-consuming.

Gesture control can provide a great alternative way of interacting with a computer in a natural and intuitive way. Whether it’s moving and rotating pictures, navigating through tabs, or inserting emojis, Project Prague allows developers to recognise and react to any gesture which a user can make with their hands.

But the coolest part of this is not how easy this technology is for users, but how easy it is for developers.

Gestures are defined as a series of different hand positions, and this can be done either in code or by using Microsoft’s visual interface. Then it is as simple as adding an event listener which will be triggered whenever that gesture is recognized. Microsoft will even automatically generate visual graphics which will show the user what gestures are available to use in any given program, and what the effects of these gestures are.

If you are even half as excited as I am about this, then I would urge you to check out aka.ms/gestures, which has more information and a series of awesome demos that are well worth a watch. You can even sign up to test out the technology for yourself thanks to the wonders of Cognitive Services Labs! At the very least, it’s a great way of really freaking out your grandparents!

If you have found this interesting and want to learn more, then I strongly suggest that you check out https://labs.cognitive.microsoft.com/en-us/project-prague which has documentation, code samples and the SDK.
