
Issue Mitigated: Login page not displaying properly in Australia


Earlier today, some users in Australia were unable to render Azure AD B2C’s sign-in pages because a local CDN (aspnetcdn.com) was returning incorrect content. The issue has been mitigated. Specifically, the issue was caused by an incorrect content-type being returned when the client fetched jQuery. The engineering team deployed a fix to mitigate the issue and is continuing to monitor the situation. If you are still facing this issue, please contact Azure Support.



Windows Server 2016: ServerManager.exe fails to open with error 0xc0000135


Problem description:

============

After IIS and the .NET Framework were removed from Windows Server 2016, Server Manager, Event Viewer, and other tools could no longer be opened.

The error message is shown below:

Solution:

============

Installing the .NET Framework first with the DISM command allows Server Manager to open normally again:

dism /online /enable-feature /featurename:NetFx4
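
A hedged equivalent using the DISM PowerShell cmdlets, assuming the same feature name as the command above:

# Check the current state of the .NET Framework 4 feature
Get-WindowsOptionalFeature -Online -FeatureName NetFx4

# Enable it (the PowerShell counterpart of the dism command above)
Enable-WindowsOptionalFeature -Online -FeatureName NetFx4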

 

 HTH. Jacky


A look under the hood of patching for platform services (PaaS) in Azure


Platform services are great. You get an SLA on the service and leave the patching of the OS and of the environment or service to Microsoft. You don't have to deal with it. But are you wondering how Azure actually does it? Let's gather information from publicly available blogs and take a look under the hood.

The platform service, OS, and runtime/service are Microsoft's responsibility

Let's recall the fundamental advantage of PaaS. The whole service is, of course, secure and up to date, both from the OS perspective and from the perspective of the runtime or environment (for example, the PHP stack in App Service or the SQL service in Azure SQL DB). It honors the given SLA, and patching is no excuse: it is, of course, counted toward it. For example, SQL DB comes with a 99.99% SLA on the service, that is, on working access to your database, and failures of servers, racks, and data centers, as well as things like patching, all have to fit within it. Curious how Azure does it?

First example: Azure App Service (web applications)

Let's first look at the App Service offering. It is a platform solution for your web applications. It works by having certain control resources (code push, monitoring, load balancing, TLS encryption, and so on), while your own applications run in a so-called Service Plan. This is essentially one or more VMs that are allocated to you alone but are fully managed by Microsoft. Inside these VMs, isolated spaces are created for your web applications (think of them as a kind of IIS container). Patching concerns two things: the operating system underneath, typically Windows, and the application environment, for example the PHP stack. The platform handles both for you automatically (it should be noted that for the application environment these are typically minor patches that do not affect existing code; a generational change of the runtime to a new major version is usually handled by it appearing as a new runtime that you can migrate to at your own discretion, as you test your application; some of the oldest runtimes may eventually leave the platform, but you are informed about that well in advance).

Fine, so how is it actually done? It usually happens in a regular monthly cycle that correlates with the release of Windows patches, although in the case of critical bugs patches are deployed as needed even outside the cycle. The first step is that the test team thoroughly tests everything. What happens next?

The second step is deployment to so-called canary regions. This is something you don't see in the portal, and most customers have no idea these regions exist. They are test regions, a laboratory of sorts for testing changes in Azure before they are released to production regions. Some large customers can request access to a canary region. The biggest users do so and run their own tests there (and may also gain access to prototypes of future Azure features; these are usually first available in canary as a private preview for a very limited audience, then as a private preview in a normal region for a wider but still limited audience, then as a public preview available to everyone but without an SLA, and then the feature goes to General Availability). Patches are thus deployed in the canary region, and metrics are continuously collected both from a huge number of test applications and from the test environments of "internal customers" (Xbox, Office 365, etc.) and external customers (the aforementioned large companies that gain canary access). Only when everything is fine there does the rollout proceed.

Continue reading

Investigating issues with hosted build agents in South Central US – 03/29 – Mitigated


Final Update: Thursday, March 29th 2018 08:35 UTC

We’ve confirmed that all systems are back to normal as of March 29th 2018 08:00 UTC. We collected the necessary logs and recycled the web role instances, which mitigated the issue. Our logs show the incident started on 03/28 14:30 UTC.
Sorry for any inconvenience this may have caused.

Sincerely,
Rakesh Reddy


Initial Update: Thursday, March 29th 2018 07:03 UTC

We are investigating issues with hosted build agents in the South Central US region, where customers may experience builds being stuck for a long duration and eventually failing. Customers should retry queueing the build. We are working with our DevOps team to mitigate the issue.

We apologize for the inconvenience caused to the impacted customers.

  • Next Update: Before Thursday, March 29th 2018 08:35 UTC

Sincerely,
Rakesh Reddy

E2 Spotlight- Dan Bunker



Here is the first in a series of blogs which will share and highlight the great stories of our amazing MIEEs who attended the E2 Global Education Exchange in Singapore! We asked them to share their experiences of becoming an MIEE as well as some of their personal highlights from the Road to E2! First up in the series is Dan Bunker, Education Consultant at United Learning Group, who attended as one of our MIE Experts.


Tell us about yourself and your MIEE journey so far

I work across the primary schools in a multi-academy trust, and my role as an educational technologist is to support them with consultancy, training and guidance on all things related to educational technology. As a group, we have adopted Office 365 across all our schools, with each school managing its own separate tenancy. Having attended a few Microsoft Classroom events at showcase schools, I was introduced to the Microsoft Educator Community. Being able to access courses in your own time and gain badges based on their successful completion really appealed to me and became quite addictive! I'm still trying to catch up with some of the other UK MIEEs in terms of number of badges and points etc., and that slight competitive element is definitely a motivator for me. Talking to other teachers and MIEEs, I know I'm not alone on this one!

Through the MEC, I found out about the MIE Expert programme and decided to apply, as I had built up my expertise across a range of Office 365, computing and 21st Century Learning Design areas by undertaking the courses and relating this to my own role out in the field. I am so pleased to have been accepted as an MIE Expert, as it has opened up a whole new world of collaboration for me, especially via the E2 experience.


What were you looking to get out of the E2 experience?

I didn’t have too many preconceptions of the event. I was hoping to get some insights into the Microsoft product roadmaps and was looking forward to hearing the inspiring stories from the keynote speakers. I was certainly looking forward to meeting the MIEEs from around the globe and working with the UK team, #E2TeamUK. Of course, I was also really excited to be able to visit Singapore. I had been before, about 12 years ago, and I never thought I'd get the chance to go again.


Three highlights of your E2 experience were:

Number one has to be working with a great team of UK MIEEs. A number of us supported and cajoled each other into taking the Microsoft Certified Educator (MCE) exam, achieving a 100% success rate across the team.  I think importantly, we really gelled as a team and are already collaborating and connecting with each other. We even have our own special Teams area that utilises the guest access functionality, so we can share insights, resources and just keep the conversation growing.

Secondly, the opportunity to work with MIEEs from different countries on the Educator Challenge activity gave a real global perspective to the event. We had to hack a lesson plan, add an element of computational thinking to the lesson, and create a video to explain and extol the virtues of the activity and show a range of resources/apps we would use in the lesson. Working with people across different languages certainly brought its challenges, but we were really proud of our lesson and supporting video and couldn't believe we didn't win the major prize! Fortunately, fellow UK MIEE, Susan Sey, kept the honour of #E2TeamUK high by leading the winning team in the algorithms category.


The educator challenge involved a global dimension

Thirdly, Singapore itself had a real starring role in the trip. The hospitality shown by Microsoft, the venue and the Singaporeans in general was amazing, and we had such a memorable time exploring Marina Bay by night and walking along the Singapore River. Some of us even found time to visit the Altitude rooftop bar to experience the view and enjoy the odd Singapore Sling.

Team UK Exploring Marina Bay



How will this experience impact on your role back in your institution?

I think, and hope, it will broaden my horizons and bring in expertise from outside our group, either directly on occasion or certainly by having an expert group to call upon for additional advice, ideas and resources. The trip has certainly opened my eyes to the opportunities out there for global collaboration. Emma Nääs’ keynote about her use of Skype certainly got us all thinking. I’ve already followed up with Microsoft and other members of the team to put in place support visits for one of our schools looking for reference sites for their use of Office 365 cloud storage. I’m sure there will be lots more to follow.

I’ve tried to capture some event highlights in the Sway blog below:

 


Follow @DCJBunker on Twitter to keep up to date with the great things he is doing using technology.



Coding for biology using Jupyter Notebooks


Guest post by Dr Benjamin Hall, University of Cambridge


Part of the new world of biology is understanding how best to use computers to do research. Whilst the field of computational biology isn't new, novel ideas in computer science and mathematics are constantly opening up new ways of working and of addressing problems. Adopting these techniques can be daunting without some examples to work from, and that's a big part of what my classes are about.

As part of the systems biology course at the University of Cambridge, I talk about how to write simulation engines. That is, software tools that explore how cells grow and die, or how molecules move in space, without doing experiments in a laboratory. In lectures I discuss the motivations and caveats of developing your own code: when to build bespoke systems, and when to use other people's. We talk about balancing risks and benefits and, working from examples, how you can understand complex datasets.

During the associated computing-laboratory practical, I introduce two concepts the students haven't seen before: functional programming in F#, and constraint solving using the theorem prover Z3. They've indirectly used both of these in previous studies; in an earlier practical they use the BioModelAnalyzer, which is written with both F# and Z3.

I teach them these less common approaches to show some of the advantages and opportunities that come from solving problems in a different way. One example is type checking and units of measure in F#; these features effectively rule out some bugs by preventing code that produces the wrong final units. This is massively useful in physical simulators, where the units can be checked after transposing complex functions. Similarly, variable immutability in functional programming closes another class of bugs. These examples and others each show how writing code differently can offer unexpected advantages, insights that can be reused in future programming.

I updated the practical to run in Jupyter Notebooks, and made the underlying code freely available under the MIT license.

This practical is intended as a brief introduction to the F# programming language and the SMT solver Z3. In the course of this practical you will perform two types of biological simulation: you will write a small Gillespie simulator for the single-progenitor model of epithelial stem cells, and you will edit and explore logical models of small biological networks. This practical builds on the discussions of F# and Z3 in lecture 4 and the demonstrations in the associated supervision. The goal is to let you see how you can model different systems using a functional programming language (F#) and formal logic (using Z3). The final questions in each section are more open-ended, so aim to spend about 1.5 hours on each component.

Parts of this tutorial are available as an Azure Notebook, or you can download and install the notebooks from GitHub on a Microsoft Data Science Virtual Machine. Click the button below to launch them in Notebooks:

Azure Notebooks

Jupyter Notebooks are a new way of writing code in which code, detailed comments (using Markdown and LaTeX for formatting) and images coexist in a single page, accessed through a web browser. It's used by researchers in my lab to write code and generate visualisations of datasets, and is an increasingly popular way of coding and of teaching coding.

The primary advantage of Jupyter for me was that the documentation for the course was embedded around the code. This includes LaTeX-formatted maths symbols, and allowed the documents to stand alone without handouts: the students could sit down, open a web browser, and get started. A further benefit was the portability; installing Anaconda and the F# kernel takes minutes and makes it easy to work outside of the lab. This further encourages keen students to tinker with the parts of the code that interest them most once they've left. This portability can be taken even further with Azure Notebooks, a cloud-based Jupyter instance, which reduces the barriers yet again: all a student needs is a browser and an internet connection.
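
For local use, a minimal sketch of getting a notebook server running (assuming Python/Anaconda is already installed; the F# kernel, IfSharp, is a separate install not shown here):

# Install Jupyter into the current Python environment, then start the server
pip install jupyter
jupyter notebook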

This is the first year I've run the practical using Jupyter Notebooks, and it's been a big hit. With strong feedback from the students and researchers, I'm excited to see how we will continue to use the technology in the future!

Biology in the Cloud

http://biomodelanalyzer.org/

https://news.microsoft.com/en-gb/2016/10/31/cancer-versus-computer-science/

Github Resources for the course

https://github.com/hallba/WritingSimulators

Getting started with F#

https://notebooks.azure.com/Microsoft/libraries/samples/html/FSharp%20for%20Azure%20Notebooks.ipynb

Reference for using F# in Jupyter notebooks.

Getting started with Azure Notebooks

https://azure.microsoft.com/en-us/resources/videos/azure-friday-azure-notebooks/

http://notebooks.azure.com/faq

How to undelete releases of a release definition?


One of our customers reported that they accidentally deleted a few releases of a given release definition and wanted a way to bulk-undelete them. A release can be undeleted within 15 days of the soft delete; after that, releases are permanently deleted. The undelete can be done using either PowerShell or a C# program.

Using PowerShell:

===============

param (
    [Parameter(Mandatory=$true)]
    [ValidateNotNullOrEmpty()]
    [string] $token,

    [int32] $definitionId
)

## Construct a basic auth header using the PAT
function BasicAuthHeader()
{
    param([string]$authtoken)

    $ba = (":{0}" -f $authtoken)
    $ba = [System.Text.Encoding]::UTF8.GetBytes($ba)
    $ba = [System.Convert]::ToBase64String($ba)
    # Note the space after "Basic"; without it the Authorization header is invalid
    $h = @{Authorization=("Basic {0}" -f $ba); ContentType="application/json"}

    return $h
}

# Fill in your account name and project name
$accountName = "anjani1"
$projectName = "TFVCProj"

# Find the ids of the soft-deleted releases of the given definition
$deletedReleasesUri = "https://$accountName.vsrm.visualstudio.com/$projectName/_apis/Release/releases?api-version=4.1-preview.6&isDeleted=true&definitionId=$definitionId"
$h = BasicAuthHeader $token

$deletedReleases = Invoke-RestMethod -Uri $deletedReleasesUri -Headers $h -ContentType "application/json" -Method Get
$deletedReleasesIds = $deletedReleases.value.id

Write-Host "Found $($deletedReleasesIds.Count) deleted releases"

For ($i = 0; $i -lt $deletedReleasesIds.Count; $i++)
{
    $deletedReleaseId = $deletedReleasesIds[$i]
    Write-Host "Found the deleted id: $deletedReleaseId"

    # Recover the deleted release
    $undeleteReason = '{ "Comment" : "Deleted by mistake" }'
    $undeleteReleaseUri = "https://$accountName.vsrm.visualstudio.com/$projectName/_apis/Release/releases/$deletedReleaseId`?comment=$undeleteReason&api-version=4.1-preview.6"
    $undeletedDefinition = Invoke-RestMethod -Uri $undeleteReleaseUri -Headers $h -ContentType "application/json" -Method Put -Body $undeleteReason

    Write-Host "Release id: $deletedReleaseId recovered successfully"
}
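
A hypothetical invocation of the script above, assuming it is saved as undeleteReleases.ps1 (the PAT and definition id are placeholders):

.\undeleteReleases.ps1 -token "<PAT with release read/write scope>" -definitionId 1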

Using C# (refer to this blog to learn how to use the ReleaseManagementClient):

==============

using System;
using System.Diagnostics;
using Microsoft.VisualStudio.Services.Client;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.ReleaseManagement.WebApi;
using Microsoft.VisualStudio.Services.ReleaseManagement.WebApi.Clients;
using Microsoft.VisualStudio.Services.ReleaseManagement.WebApi.Contracts;
using Microsoft.VisualStudio.Services.WebApi;

namespace ReleaseHttpClientSample
{
    class Program
    {
        static void Main(string[] args)
        {
            Uri serverUrl = new Uri("https://anjani1.visualstudio.com");
            var project = "TFVCProj";

            // Use the locally cached credentials of the signed-in user
            VssCredentials credentials = new VssClientCredentials();
            credentials.Storage = new VssClientCredentialStorage();

            VssConnection connection = new VssConnection(serverUrl, credentials);
            ReleaseHttpClient rmClient = connection.GetClient<ReleaseHttpClient>();

            // List the soft-deleted releases of the definition, then undelete each one
            var releases = rmClient.GetReleasesAsync(project, definitionId: 1, isDeleted: true).Result;
            foreach (var release in releases)
            {
                rmClient.UndeleteReleaseAsync(project, release.Id, "Deleted by mistake").SyncResult();
            }
        }
    }
}

Experiencing Data Loss Issue in OMS portal – 03/29 – Investigating

Update: Thursday, 29 March 2018 10:21 UTC

We are investigating a daily quota limit issue within the OMS Portal. The root cause is not fully understood at this time. 79 customers who are signed up for the standalone tier continue to experience a restriction at the 512 MB daily quota limit. We are working to establish the start time for the issue; initial findings indicate that the problem began at 03/29 ~07:15 UTC.
  • Work Around: None
  • Next Update: Before 03/29 12:30 UTC

-Mohini

Lesson Learned #35: How to transfer the login and user from OnPremise to Azure SQL Database


In some situations, we need to transfer logins and users (or contained users) from our on-premises server to Azure SQL Database.

In previous versions of SQL Server, running the script included in this URL completes successfully, but when you try to execute it against Azure SQL Database you will face the following errors at step 2:

  • Msg 262, Level 14, State 18, Procedure sp_hexadecimal, Line 1 [Batch Start Line 5] CREATE PROCEDURE permission denied in database 'master'.
  • Msg 262, Level 14, State 18, Procedure sp_help_revlogin, Line 1 [Batch Start Line 37] CREATE PROCEDURE permission denied in database 'master'.

Unfortunately, this script is not supported on Azure SQL Database, due to several differences in the engine. Windows users are not supported in Azure SQL Database either; only SQL logins and Azure Active Directory principals are. Also, some other user properties, like DEFAULT_LANGUAGE or DEFAULT_DATABASE, are not supported.

It is also very important to know that Azure SQL Database currently doesn't support Windows logins, so if you need to export your database to Azure SQL Database, please follow these steps:

  • Create a new database from a backup of your database
  • Remove the Windows Logins
  • Create the export
  • Import the database.

As we cannot use a native-format backup, you need to generate a bacpac, which will carry the login, password, and user.
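
As a rough sketch of that export step (server names and paths are placeholders; assumes the SqlPackage utility is installed):

# Export the source database to a bacpac file with SqlPackage
& sqlpackage /Action:Export `
    "/SourceServerName:myonpremserver" `
    "/SourceDatabaseName:MyDb" `
    "/TargetFile:C:\temp\MyDb.bacpac"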

  • If you are not using a contained database and you try to create the user directly, for example using Generate Scripts/Export Data from SQL Server Management Studio, the generated script will look like the following, and the user will be created:

CREATE USER [User1] FOR LOGIN [User1] WITH DEFAULT_SCHEMA=[dbo]
GO
ALTER ROLE [db_owner] ADD MEMBER [User1]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[Table_1](
[Id] [int] NOT NULL,
[Name] [varchar](20) NOT NULL,
CONSTRAINT [PK_Table_1] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

  • So, you only need to create the login before executing the script: CREATE LOGIN [User1] WITH PASSWORD='<PASSWORD123>'
  • The SID cannot be transferred from the on-premises environment to Azure SQL Database. For example, if I try to execute the following, I will get an error:

CREATE LOGIN [User1]
WITH PASSWORD=N'8AHisc8cRB5dBpDaOuA7irbHP4CD',
SID = 0x54E338B637F93B4BAF54855EE70
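
Instead, a hedged PowerShell sketch of pre-creating the login (without the SID) on the target logical server before importing the bacpac (server name, credentials, and password are placeholders; assumes the SqlServer module is installed):

# Logins live in master on Azure SQL Database, so connect there
$server = "targetserver.database.windows.net"   # hypothetical target server
Invoke-Sqlcmd -ServerInstance $server -Database master `
    -Username "serveradmin" -Password "<admin password>" `
    -Query "CREATE LOGIN [User1] WITH PASSWORD = '<PASSWORD123>'"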

On the other hand, if you are transferring logins from one Azure SQL Database to another, there are 4 different scenarios:

  • If you have created the login, review this link: Geo-Replication
  • If you have contained users, there is nothing to do.
  • If you have the login and transferred the data using a bacpac, there is nothing to do.
  • If you have the login and the data was transferred using CREATE DATABASE .. AS COPY OF from a different server, follow the same steps as on the geo-replication page.

Compare two text files with PowerShell


Many times, we need to compare two code files that are nearly identical except for some minor differences, mainly due to spelling/typing mistakes.

This script addresses that need by comparing the two files line by line, ignoring white space, tabs and line breaks.

Here is sample output:

.\compareTextFiles.ps1 -inputFilePath $leftFilePath -compareFilePath $rightFilePath
********************************************
Line number 17
table is for CRUD methods
table is for CRUD methods
********************************************

********************************************
Line number 117
', '.join('?', * len(values))
', '.join('?' * len(values))
********************************************

********************************************
Line number 135
values = [rec[v] for v in klist] # a list of values ordered by key
values = [rec[v] for v in klist] # a list of values ordered by key
********************************************

As we can see from the above output, on line 117 the input file has an extra ",".
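
The full script is available at the download link below. As a minimal sketch of the core idea (not the complete script; parameter names are assumed to match the invocation above):

param (
    [string] $inputFilePath,
    [string] $compareFilePath
)

# Strip spaces and tabs so only the remaining content is compared
function Normalize([string] $line) { return ($line -replace '\s', '') }

$left  = Get-Content $inputFilePath
$right = Get-Content $compareFilePath
$max   = [Math]::Max($left.Count, $right.Count)

for ($i = 0; $i -lt $max; $i++) {
    if ((Normalize $left[$i]) -ne (Normalize $right[$i])) {
        Write-Host ('*' * 44)
        Write-Host "Line number $($i + 1)"
        Write-Host $left[$i]
        Write-Host $right[$i]
        Write-Host ('*' * 44)
    }
}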

 

Download Link:

compareTextFiles

X++ the most extensible language on the planet!


Disclaimer: I have no idea if the title is true or not. There are no world cups for languages competing against each other for the trophy. What I do know is that X++ has taken some gigantic leaps forward lately in terms of extensibility – if someone decides to host a world cup; I'd be happy to sign up X++ for the contest! And by the way: Isn't X++ the perfect name for the most eXtensible language?

"Extensibility" is an overloaded term. In this context it means the ability to extend code from another library without editing their source code – in a manner that allows multiple extenders to live side-by-side without risk of collision.

Here is a top-ten list of the extensibility features in X++:

The innovation outlined above is driven by the need to deprecate source code editing of others' libraries (overlayering is just glorified source code editing). This has only been possible because we are in full control of the language and compiler. A strong justification for keeping a proprietary language alive and kicking.

For visitors not familiar with X++, let me mention that X++ is a .NET language; the compiler emits IL code; the CLR is the runtime. The language is used to write business applications in Dynamics 365 for Finance and Operations.

Happy extending.

Azure Content Spotlight – Virtual Machine Serial Console (Preview)


Welcome to another Azure Content Spotlight! These articles are used to highlight items in Azure that could be more visible to the Azure community.

One of the most requested features has now finally been added to Azure! Virtual Machine Serial Console provides a debugging and maintenance tool for virtual machines. It can be used for network issues and OS configuration issues on your Linux and Windows VMs.

If you want more information about this cool new feature, you can refer to the following websites:

 


Dynatrace Managed Instance now available for Azure Government


As teams across government agencies are hard at work modernizing their IT portfolios, it's helpful to have the best tools for optimizing application performance in the cloud.

Built with AI technology, Dynatrace provides full-stack, all-in-one monitoring and operations analytics for the public sector at massive scale, in the largest government environments. Dynatrace utilizes artificial intelligence to understand the complexities of performance in hyper-dynamic cloud environments, and it's now available for Azure Government.

“The availability of Dynatrace for Azure Government addresses the growing demand for application performance management as government entities continue to modernize and migrate applications to the cloud. Our public sector customers can now provision a Dynatrace managed cluster while maintaining compliance with government security regulations.” - Bob Dorch, Director of Federal Sales for Dynatrace

Dynatrace natively integrates with Microsoft Azure, making it the most powerful performance management solution available for Azure. Through Azure’s extension technology, Dynatrace natively embeds its OneAgent technology directly into the Azure infrastructure. One-click deployment via the Azure Government portal delivers the full picture of the entire IT environment within 5 minutes.

Dynatrace builds on top of the productivity, intelligence and hybrid capabilities of Azure, supporting mutual customers with enhanced container and application performance monitoring across their organization. It gives end-to-end performance insights across the entire cloud environment, from infrastructure, through containers, up to the end user level, thus providing actionable insights to business stakeholders, operations staff, and development teams. Dynatrace is unique in its field because it goes beyond APM and, due to its API and AI capabilities, it becomes a strategic platform for digital organizations.

Dynatrace dashboard

Azure dashboard in Dynatrace

 

Launch your Dynatrace Managed instance at the Azure Government Marketplace and get started today! Or visit the Dynatrace website to read more about Dynatrace's Azure Monitoring capabilities. 

 

We welcome your comments and suggestions to help us improve your Azure Government experience. To stay up to date on all things Azure Government, be sure to subscribe to our RSS feed and to receive emails by clicking “Subscribe by Email!” on the Azure Government Blog.

 

LCS Service Degradation Restored | Unable to view Microsoft Dynamics AX 2012 projects


User Experience: LCS users may have been unable to view their AX 2012 projects

Final Status: We determined that the authentication errors were triggered when the service responsible for viewing AX 2012 projects attempted to connect to a group of servers that were in an unhealthy state. We restarted the impacted servers, restoring functionality.

Update 1: After reviewing logs, we have identified authentication errors within a service responsible for viewing AX 2012 projects. We are investigating the cause of the authentication errors.

Incident Start Time: Thursday, March 29, 2018, 4:22 PM UTC

Incident End Time: Thursday, March 29, 2018, 7:41 PM UTC

Preliminary Root Cause: The service responsible for viewing AX 2012 projects experienced authentication errors when attempting to connect to unhealthy infrastructure.

Next Steps: We are reviewing telemetry and system logs from this incident to identify and resolve the cause of the unhealthy servers. In addition, we will implement monitoring for these types of authentication errors in order to take proactive mitigation steps prior to service impact.

This is the final update on the incident.

Windows 10 RS4 Preview for HoloLens and ONNX offline Machine Learning


Recently we announced that Windows 10 now includes the ability to run Open Neural Network Exchange (ONNX) models natively with hardware acceleration.

This announcement now brings 100s of millions of Windows devices, ranging from IoT edge devices to HoloLens to 2-in-1s and desktop PCs, into the ONNX ecosystem.

Yesterday we released the Windows 10 RS4 preview for HoloLens, so data scientists and developers creating AI models will now be able to deploy their innovations to this large user base. From an academic perspective, I see lots of HoloLens developers wanting to build apps that use AI models offline to deliver more powerful and engaging experiences.

So what is ONNX?

ONNX is an open source model representation for interoperability and innovation in the AI ecosystem.

Microsoft actually helped start ONNX in September 2017, and with support from many other companies, ONNX v1 launched in December 2017.

Thanks to ONNX-ML, Windows supports both classic machine learning and deep learning, enabling a spectrum of AI models and scenarios. Developers can obtain ONNX models to include in their apps in several ways:

  • Create and train ONNX models using any of the many frameworks that currently support generating ONNX models.
  • Convert models from various sources, including Scikit-learn and Core ML, into ONNX models by using ONNXMLTools.
  • Obtain pre-trained ONNX models from the growing ONNX model zoo.

You can learn more about how to integrate ONNX models into Windows apps here.

If you're interested in adding ONNX to your HoloLens apps, see my colleague Mike Taulty's blogs:

First Experiment with Image Classification on Windows ML from UWP

Second Experiment with Image Classification on Windows ML from UWP (on HoloLens)

Third Experiment with Image Classification on Windows ML from UWP (on HoloLens in Unity)

Get Image Metadata (Part 3)


Continuing from Get Image Metadata (Part 2), in this post we will build and compile the code we wrote, and then we will test it with a few images.


In the Error List, verify that all errors are gone before building your code.

Image1

If you don't have errors, continue to build and compile the code. If for some reason you have errors, or an error pops up when you try to build, locate the error and step back through the blog post to see where the issue exists.

Click on Build | Rebuild Solution; if the solution builds successfully, you should see “Rebuild All: 1 Succeeded”.

image2

Now, at the top of Visual Studio, click on the green Start button.

image3

Now your User Interface should pop up

image4

In the UI, click on the three dots (…) in the little box.

image5

Browse for an image, select the image you wish to analyze, and click Open.

Image6

Click on the Analyze button

Review the Results

image7

Repeat the test with several images.

Accessing a .NET Bot’s State via Dependency Injection


The .NET BotBuilder SDK has deprecated the StateClient class, leaving some to wonder how they should access their bot’s state. Premier Developer Consultant Ben Williams shares an example of using dependency injection in your BotFramework projects.


When using the .NET BotBuilder SDK’s dialog system, you can access the bot state using the dialog context. However, what if you don’t have the dialog context handy?

You have two options:

  1. Pass the context around all the time
  2. Use the built-in IoC container to get the bot state

With v3 of the Microsoft Bot Builder SDK, the documentation describes how to configure your bot to store its bot state data in either Azure Table storage or in a Cosmos DB. Notice that in the documentation, you configure the storage providers via Autofac registration (which Bot Builder uses internally to manage its various services). You should be able to get them back from the Autofac container via dependency injection.

You can read more of Ben’s post here.
