
Webinar: What's New in Microsoft Azure from the Microsoft Ignite Conference


What's New in Microsoft Azure from the Microsoft Ignite Conference

Language: Czech
Date: 18 October 2018
Time: 15:00 – 16:00

Do you want to know all the major Microsoft Azure news presented at the Microsoft Ignite conference? Join us for an online webinar (not just for partner companies) led by technical specialist Lukáš Patka, who will give you a comprehensive overview.

Agenda:

  • Windows Virtual Desktop in Azure
  • New categories of virtual machines
  • Standard, Premium, Ultra… which storage should you choose?
  • Permission management in Azure Files
  • Microsoft's native CDN
  • Azure Blueprint
  • and more

Register HERE


Buri


Why doesn’t GetTextExtentPoint return the correct extent for strings containing tabs?



A customer reported that the GetTextExtentPoint and GetTextExtentPoint32 functions do not return the correct extents for strings that contain tabs. The documentation does say that they do not support carriage return and linefeed, but nothing about tabs.



The TextOut and GetTextExtentPoint functions do not interpret control characters. They take the string you pass, convert the code points to glyphs, string the glyphs together, and display or measure the result.



They don't move the virtual carriage to the "left margin" when they encounter a U+000D CARRIAGE RETURN, or move it down by the "line height" when they encounter a U+000A LINE FEED, or forward to the next "tab stop" when they encounter a U+0009 CHARACTER TABULATION, or to the left by "some distance" when they encounter a U+0008 BACKSPACE,¹ or clear the "screen" when they encounter a U+000C FORM FEED, or change the "typewriter ribbon color" when they encounter U+000E SHIFT IN and U+000F SHIFT OUT, or beep the speaker when they encounter a U+0007 BELL.



At best, you'll get the graphics for the various control characters, like ␉ for the horizontal tab, but more likely you'll get ugly black boxes.



If you want to render text with tabs, use TabbedTextOut. If you want to measure text with tabs, use GetTabbedTextExtent. The DrawText function can both render and measure, and it also supports carriage returns and line feeds.
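
To make the difference concrete, here is a minimal sketch of measuring a tab-containing string both ways. This is my own illustration, not from the original post; it assumes Python with ctypes on Windows and uses the screen DC with its default font.

import ctypes
from ctypes import wintypes

user32 = ctypes.WinDLL("user32")
gdi32 = ctypes.WinDLL("gdi32")

class SIZE(ctypes.Structure):
    _fields_ = [("cx", wintypes.LONG), ("cy", wintypes.LONG)]

text = "one\ttwo\tthree"
hdc = user32.GetDC(None)  # screen DC with the default font

# GetTextExtentPoint32 does not interpret the tab; it just measures its glyph (or black box).
size = SIZE()
gdi32.GetTextExtentPoint32W(hdc, text, len(text), ctypes.byref(size))
print("GetTextExtentPoint32:", size.cx, "x", size.cy)

# GetTabbedTextExtent expands tabs; the low word is the width, the high word is the height.
# Passing 0/None uses the default tab stops (eight average character widths).
extent = user32.GetTabbedTextExtentW(hdc, text, len(text), 0, None)
print("GetTabbedTextExtent:", extent & 0xFFFF, "x", (extent >> 16) & 0xFFFF)

user32.ReleaseDC(None, hdc)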



Still no luck with backspace, changing the typewriter ribbon color, clearing the screen, or beeping the speaker, though. For those you're on your own.



¹ What would that even mean if you backspaced beyond the start of the string? Does this mean you could have a string whose extent is negative?

So, um, what are we looking at?



Some of my relatives from Vancouver, British Columbia enjoy going on cruises. During one cruise along the Alaska coast, they found a large number of passengers gathered on the viewing deck and gazing toward the shore in awe and amazement.



My relatives looked out toward shore but couldn't identify what it was that drew everyone's attention. Was something interesting happening on shore? Was there a whale in the ocean?



After failing to identify the source of the awe and amazement, one of them asked one of the awe-filled passengers, "So, um, what are we looking at?"



The other passenger replied, "Why, the snow-covered mountains, of course. Aren't they gorgeous?"



My relatives were less impressed, because they live in Vancouver, where a view of snow-covered mountains is what you get every winter.



They surmised that most of the people on the cruise were from parts of the world where they don't get a view of snow-capped mountains on their daily commute.



I'll be on my first cruise next week. The blog will be running on autopilot. This means that I won't be around to approve comments manually. Only comments from validated users who have previously commented on the blog will get through automatically.

Troubleshooting Query timeouts in SQL Server


Hello all,

 

While working with SQL Server, one of the common issues application teams encounter is timeouts. In this article, we cover an approach to troubleshooting SQL command/query timeouts, along with the data collection and data analysis steps needed to isolate the issue further.

There are two types of timeouts:

  1. Command Timeout
  2. Connection Timeout

For more information on connection timeout issues, refer here:

 

 

Simulating Command timeout:

 

For demonstration purposes, I am using a sample web application that connects to the AdventureWorks2014 SQL database. The application connects to the database to retrieve customer/product details.

To simulate the timeout error, click on the Customers menu:

 

From the exception stack trace, it's evident that the application is trying to retrieve data and the System.Data.SqlClient.SqlCommand.ExecuteReader method is called, but because the ExecuteReader call did not complete within the command timeout value (30 seconds), the application reports a command timeout.

 

Exception Details: System.ComponentModel.Win32Exception: The wait operation timed out

Stack Trace:

[Win32Exception (0x80004005): The wait operation timed out]
[SqlException (0x80131904): Execution Timeout Expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.]
System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) +3305692
System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose) +736
System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady) +4061
System.Data.SqlClient.SqlDataReader.TryConsumeMetaData() +90
System.Data.SqlClient.SqlDataReader.get_MetaData() +99
System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString, Boolean isInternal, Boolean forDescribeParameterEncryption, Boolean shouldCacheForAlwaysEncrypted) +604
System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async, Int32 timeout, Task& task, Boolean asyncWrite, Boolean inRetry, SqlDataReader ds, Boolean describeParameterEncryptionRequest) +3303
System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, TaskCompletionSource`1 completion, Int32 timeout, Task& task, Boolean& usedCache, Boolean asyncWrite, Boolean inRetry) +667
System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method) +83
System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method) +301
System.Data.Entity.Infrastructure.Interception.InternalDispatcher`1.Dispatch(TTarget target, Func`3 operation, TInterceptionContext interceptionContext, Action`3 executing, Action`3 executed) +104
System.Data.Entity.Infrastructure.Interception.DbCommandDispatcher.Reader(DbCommand command, DbCommandInterceptionContext interceptionContext) +499
System.Data.Entity.Core.EntityClient.Internal.EntityCommandDefinition.ExecuteStoreCommands(EntityCommand entityCommand, CommandBehavior behavior) +36


Identifying SQL Queries which are Timing Out:

The next step is to identify the SQL queries that are timing out. The easiest approach is to check with the application team on the queries being executed. Using SQL Server tools such as Extended Events and SQL Profiler, we can also identify the queries timing out by capturing the Attention event.

If you are using Extended Events, collect events such as sql_batch_starting and sql_batch_completed along with the attention event to identify the queries timing out.

 

 

 

DATA COLLECTION FOR QUERY TIMEOUTS:

An easier approach to collecting the data is to use the PSSDIAG Configuration Manager to gather the relevant data such as performance statistics DMV data, Profiler/Extended Events data, blocking data, and so on.

 

PSSDIAG data collection utility can be downloaded from: https://github.com/Microsoft/DiagManager

Once you launch the PSSDIAG configuration utility, fill in the necessary details such as instance name, SQL build, scenario, and Profiler/XEvent options.

While capturing the data to track query timeouts, ensure that Attention events are selected in Errors section.

Once the events are selected, save the pssd.zip file and extract the contents of the zip file.

 

To initiate the data collection, navigate to the PSSDIAG location using a command prompt (Run as Administrator) and launch pssdiag.cmd.

Once "SQLDIAG collection started" message appears, reproduce the timeout issue. Once the issue is reproduced, stop the data collection using CTRL + C.

 

 

 

DATA ANALYSIS:

To analyze the PSSDIAG data collected, you can use the SQL Nexus tool, which can be downloaded from: https://github.com/Microsoft/SqlNexus

Installation details for the SQL Nexus tool are at: https://github.com/Microsoft/SqlNexus/wiki/Installation

Once the installation is complete, launch the tool to import the PSSDIAG data collected earlier.

 

Select the database to host the Nexus data.

Click on Import and point to the PSSDIAG output folder:

 

 

An interesting feature of SQL Nexus is that it can import the SQL Profiler trace data collected and output the trace files by SPID, which helps when you want to examine the Profiler data at the SPID level.

 

Once the data import is successful:

Click on the ReadTrace reports, which are the starting point to review all batch/statement-level queries. To review the timeouts:

From the above screenshot, it's evident that the query executed by session ID 53 has timed out.

Open the captured SQL Profiler trace and verify that SPID 53 timed out after 30 seconds, as highlighted below. Review the CPU time as well; in our scenario, the CPU time is in milliseconds, which indicates that the query is not CPU bound. Hence, you need to identify the waits associated with session ID 53.

Reviewing the blocking details, we can see that session ID 53 is waiting throughout on the OBJECT:5:997578592:0 wait resource with wait type LCK_M_IS. Since its execution didn't complete within 30 seconds, it timed out and the application raised an attention (ABORT) with SQL Server.

 

The next step is to identify the head blocker query and check whether it is a runner or a waiter.

 

SUMMARY:

We can use the PSSDIAG and SQL Nexus tools to identify the queries timing out and the root cause of the timeout issue. Capture a SQL Profiler trace/XEvent session with the Attention event, which will help narrow down the SPIDs that are timing out. Also, note that PSSDIAG and SQL Nexus can be used to collect data for slow query performance, general bottleneck analysis, and wait reviews, and to address other SQL Server performance issues as well.

 

Hope this blog helps in isolating query timeout issues.

 

Please share your feedback, questions and/or suggestions.

Thanks,
Don Castelino | Premier Field Engineer | Microsoft

Disclaimer: All posts are provided AS IS with no warranties and confer no rights. Additionally, views expressed here are my own and not those of my employer, Microsoft.

[Resolved] Classroom lab users may not be able to access their labs.


Final update: We would like to inform you that this issue is fixed now and classroom lab users should be able to sign in successfully into their labs. Please reach out to us for any questions.

We would like to inform you that we are currently experiencing an issue at our end which is blocking classroom lab users from accessing their labs. The bug is in the sign in experience and we are actively working on fixing it. We will post an update on this blog as soon as the fix is out.

We apologize for the inconvenience and thank you for your patience.

-Azure Lab Services Team

Utilizing PaaS services and ARM deployment templates on Azure Government


There are many Platform as a Service (PaaS) offerings available in Azure Government that you can easily leverage to build compelling applications. It is extremely easy to get up and running with PaaS services, and PaaS services offer a quicker path to compliance. Other than some different endpoints, developing with PaaS services in Azure Government is virtually identical to the developer experience in Azure public.

In this post, we’ll cover the Azure Government PaaS sample application. We will walk through setting up a simple CRUD (Create/Read/Update/Delete) web application that interacts with Cosmos DB, Redis Cache, and a Storage Queue, and that authenticates with Azure AD. We will also leverage ARM templates to deploy and configure applications in Azure Government. We’ll also show videos for this walkthrough so you can see how easy it is to do step by step.

Traffic Case Application:

Our PaaS sample consists of a Traffic Case App, which is a web application running in Azure Government that allows users to manage a repository of traffic violation cases. The user can file a new case, edit an existing one, and view cases that have been closed. We go through each PaaS service included in our sample:

  • Web App: Our sample is an ASP.NET Core 2.1 Azure Web App running in Azure Government.
  • AAD Authentication: When the user opens the app, they are directed to the Azure AD login page. The Traffic Case app is configured to reference a specific Azure AD app registration, which is used to authenticate users against a specific Azure AD tenant. The video shows how AAD authentication is integrated.
  • Cosmos DB: All the CRUD operations such as creating, editing, and deleting a case consist of interacting with a Cosmos DB. The cases are written as individual JSON documents to a collection in the database. There are also SDKs available to easily interact with Cosmos DB, which you can read about here. Developing with Cosmos DB is exactly the same in Azure Government as in Azure public – just point your connection string to the instance in Azure Gov (see the sketch after this list).
  • Storage Queue: When the status of a case is set to “Closed”, the case is written to a storage queue. The app reads from this queue to then display the closed cases for the user. The video will show this implementation.
  • Redis Cache: To take load off our database and increase performance and throughput, it’s common to cache static data that rarely changes. We’ll utilize a Redis cache for this purpose. A Redis cache is read from to populate a static list of statuses for a case. We implemented a cache-aside pattern, so that when a user creates or edits a case, the list of statuses will be read from the Redis Cache. There is a console in the Azure portal where users can query their cache directly; this is all shown in the screencast.
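
As promised in the Cosmos DB bullet above, here is a minimal sketch of that point. It assumes the Python azure-cosmos SDK rather than the sample's ASP.NET Core code, and the account, database, container, and partition key values are placeholders; the only Azure Government-specific detail is the *.documents.azure.us endpoint suffix.

from azure.cosmos import CosmosClient

# Cosmos DB accounts in Azure Government use the *.documents.azure.us suffix;
# everything else works exactly as in Azure public.
endpoint = "https://mytrafficcases.documents.azure.us:443/"
key = "<primary-key>"

client = CosmosClient(endpoint, credential=key)
container = client.get_database_client("TrafficCaseDb").get_container_client("Cases")

# Create or update a case document, then read it back by id and partition key.
container.upsert_item({"id": "case-001", "status": "Open", "violation": "Speeding"})
case = container.read_item(item="case-001", partition_key="case-001")
print(case["status"])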

How to run the Azure Government PaaS sample (Video)

ARM Templates:

The PaaS application consists of many Azure resources. Deploying each of these resources manually would be time consuming. Fortunately, these deployments can be automated with Azure Resource Manager (ARM). The repository includes an ARM template that not only deploys all of the resources needed for the Traffic Case App, but also dynamically configures connections to each service for the web application!

Best practices for ARM templates include never hardcoding anything. This ensures that the template works across different environments. For example, if you hard-code an endpoint for a service that is in Azure public, that same ARM deployment wouldn’t work for Azure Government. In our ARM template we are dynamically setting all locations to the resource group location, so when this is deployed in Azure Government the location will be set to an Azure Government region. In order to grab config settings from the PaaS services such as Cosmos DB, Azure Storage, and Redis Cache we are calling a method that lists the keys when that service has been deployed. We can see that when we run this ARM deployment template, all the connection strings and keys have already been added to the application settings for our web app. This fully automated template ensures a quick and painless deployment, allowing users to get started using the application immediately!
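
As a side note, the same template can also be deployed programmatically against the Azure Government management endpoint. The sketch below is not part of the original sample; it assumes the Python azure-identity and azure-mgmt-resource packages, and the template file name, resource group, and deployment name are placeholders.

import json

from azure.identity import AzureAuthorityHosts, DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Azure Government uses its own sign-in authority and ARM endpoint;
# once those are set, the deployment call is identical to Azure public.
credential = DefaultAzureCredential(authority=AzureAuthorityHosts.AZURE_GOVERNMENT)
client = ResourceManagementClient(
    credential,
    subscription_id="<subscription-id>",
    base_url="https://management.usgovcloudapi.net",
    credential_scopes=["https://management.usgovcloudapi.net/.default"],
)

with open("azuredeploy.json") as f:
    template = json.load(f)

# Incremental mode adds or updates the template's resources without touching
# anything else in the resource group. Parameters are omitted here for brevity;
# supply values for any required template parameters.
poller = client.deployments.begin_create_or_update(
    "traffic-case-rg",
    "traffic-case-deployment",
    {"properties": {"mode": "Incremental", "template": template, "parameters": {}}},
)
print(poller.result().properties.provisioning_state)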

Check out the video of the ARM deployment here:

How to deploy the Azure Resource Manager template (Video)

We walked through setting up a basic CRUD application in Azure Government and configuring it to interact with various PaaS services. Please check out the code for the PaaS sample – as well as other samples in our Azure Gov GitHub library – and go through these PaaS and ARM template screencasts, which walk through the topics above in more depth!

Also, be sure to subscribe to the Microsoft Azure YouTube Channel to see the latest videos on the Azure Government playlist.

 

Top Stories from the Microsoft DevOps Community – 2018.10.12


I'm back! One of the great privileges of my job is that I get to spend a lot of time talking to customers about DevOps. But that often means a lot of time on the road, and stepping away from drinking at the firehose of great content coming from the Azure DevOps community. But now that I'm back in the office, I've found a great bunch of DevOps links:

Five Things about Azure DevOps
I'm not sure that Burke understands how therapy animals work, but I am sure that Damian and Burke understand Azure DevOps. Not only do you get to see bunnies that attack, and Burke attempting a terrible Australian accent, but you get to see how we bleeped our then-secret name change in front of the audience at Microsoft Build.

Performance Tuning an Azure DevOps Build Configuration
Having fast continuous integration build and test pipelines is one of the keys to increased velocity and faster delivery. Jeffrey Palermo has a few tips to keep your builds fast using Azure Pipelines.

Azure DevOps – Never manually create a Docker container again!
Azure DevOps isn't just for developers - it's useful for your infrastructure team, too. Justin Paul shows you how to build a continuous integration pipeline to save time building Docker containers. You can take the newest code and deliver a docker image to a registry so that you can start consuming it immediately.

Azure DevOps Task Group
Wanting to reuse a sequence of tasks in an Azure Pipeline? Panu Oksala shows you how to create a Task Group, encapsulating several task steps so that they can be added to another build or release pipeline, like any other task.

Azure DevOps Podcast Episode 5: Dave McKinstry on Integrating Azure DevOps and the Culture of DevOps
Dave McKinstry, a Program Manager for Azure DevOps, joins Jeffrey to talk about his journey through the DevOps industry, how to move forward with automated deployment, what a modern skillset looks like in today's DevOps environment, and more.

Azure DevOps Pipelines and Sonar Cloud gives free analysis to your OS project
Azure Pipelines helps you build your open source project freely, without installing any software - and SonarCloud offers free cloud-hosted static code analysis for your open source project, also without having to install any software. Gian Maria Ricci shows you the value in combining these two tools.

Did you find some great content about Azure DevOps or DevOps on the Microsoft platform? Drop me a line on Twitter - I'm @ethomson.

Step by step video on how to fix the SharePoint Workflow issue caused by .NET patch


If Windows Update sent you Intel Audio Controller version 9.21.0.3755 by mistake, uninstall it


An Intel audio driver was incorrectly pushed to devices via Windows Update for a short period of time earlier this week.  After receiving reports from users that their audio no longer works, we immediately removed it and started investigating.  If your audio broke recently, and you're running Windows 10 version 1803 or above, please check to see if the incorrect driver was installed. To regain audio, we recommend you uninstall the driver.

  1. Type Device Manager in the search box
  2. Find and expand Sound, video, and game controllers
  3. Look for a Realtek device, or a device that has a yellow triangle with an exclamation point
  4. Click on the device to select it
  5. From the View menu, choose Devices by connection
  6. Look at the parent device - this will be called something like "Intel SST Audio Controller" (Intel Smart Sound Technology driver version 09.21.00.3755)
  7. Right-click the controller device
  8. Choose Properties
  9. You should get a dialog like below. Click on the Driver tab as shown.
  10. If the driver version is 9.21.0.3755, you have the driver that was sent to you incorrectly.
  11. Click Uninstall Device. You will get a popup asking if you want to remove the driver too, as shown.
  12. Check the checkbox as shown, so the driver will be removed.
  13. Click Uninstall.
  14. When the driver is uninstalled, reboot your system.
  15. Your audio (speakers and headphones) should now work.

Restrict access to login for the WordPress running on Azure web app container


For WordPress sites running on an Apache server in Azure Web App for Containers, here is sample code to restrict access to the login pages, such as wp-login.php or wp-admin.

1. FTP to the site files under /home/site/wwwroot and find the file ".htaccess" (create one if it does not exist).

2. Add the code below to ".htaccess", replacing the IP "xx.xx.xx.xx" with the address that you want to allow to access wp-login.php:

<Files wp-login.php>
# Deny everyone by default
Order Deny,Allow
Deny from all
# The front end forwards the original client address in the X-Client-IP header;
# requests from the allowed IP get the AllowAccess environment variable set
SetEnvIf X-Client-IP "xx.xx.xx.xx" AllowAccess
# Only requests carrying AllowAccess are let through to wp-login.php
Allow from env=AllowAccess
</Files>

Note: Since /wp-admin redirects to wp-login.php, there is no need to define a separate rule for wp-admin.

Known Issue with applying binary updates to an 8.0 environment


If you are trying to apply binary updates to a Finance and Operations version 8.0 environment, you will see the following error message incorrectly showing up.

"Modules on the environment do not match with modules in the package. Please verify if all modules on the environment exist in package or not."

We are actively investigating this issue and will hotfix Lifecycle Services to remove this check by end of day today, October 12, 2018. This blog post will be updated once this issue is fixed.

Known Issue with database refresh requests submitted through Lifecycle Services


[Update as of 12th October 2018, 6 PM PST - The hotfix containing the fixes for the issues causing database refresh request failures in the last 3 days has been rolled out to Production. As mentioned below, please trigger a rollback if your environment is in a failed state and then resubmit a new request to refresh your sandbox with production data.]

 

Starting on Monday, October 8th 2018, there were technical issues that caused some Database Refresh requests to fail. We have investigated the failures and are releasing a Lifecycle Services hotfix that will address all the known issues by Oct 12, 5 PM PST.

What you can do
If you have a Database Refresh request that failed for any reason that is outside of your control (anything that is not data related), we have added a new Rollback button to your target environment. Please roll back your environment to the last known good state and then resubmit a new Database Refresh request.

We understand that timely refreshes of data are imperative for your own diagnostic, testing, and training needs. We'll continue to actively monitor issues. If you still see issues after the hotfix, please raise a support request. We will update this blog post once the hotfix is released.

New feature – Sandbox to Production Service Requests


With the LCS release on Monday October 8th 2018, we are pleased to announce the general availability of a new Service request type that targets the Go Live experience.

Many customers and partners use what is commonly referred to as a “Golden Configuration” environment, which is a database populated with basic data elements and configurations for a business. In addition to that data, data migration is performed and final smoke tests are executed as part of a typical go-live.

To get a golden configuration database imported in to your Sandbox environments (Tier-2 Standard Acceptance Test or higher), follow the steps in this topic, Copy Finance and Operations databases from SQL Server to production Azure SQL Database environments.

Previously, to copy the prepared sandbox into the production environment, one of the following processes needed to be performed:
• Support request of type “Other” -  This was needed to explain Go Live and timing.
• Support ticket - This required internal transfer from Microsoft Customer Support teams to Dynamics Service Engineering (SRE) teams.

Now, with this dedicated service request type, we hope to reduce go-live issues and make this last step in your deployment a smooth one. To use this feature, click the Support menu item within the project and select the Service Requests tab. When you click Add, you will see a new request type, 'Sandbox to Production', that you can use to move the golden configuration database to Production as part of the go-live flow.

Experiencing Latency and Data Loss issue in Azure Portal for Many Data Types in East US – 10/13 – Resolved

Final Update: Saturday, 13 October 2018 05:50 UTC

We've confirmed that all systems are back to normal with no customer impact as of 10/13, 05:30 UTC. Our logs show the incident started on 10/13, 01:50 UTC, and that during the 3 hours 40 minutes that it took to resolve the issue, some customers in the East US region may have experienced data access issues in the Azure portal, around 6% data loss in metrics, and possible data loss in continuous export. Service Map, VM Insights, and Container Insights may have also seen data access issues during the impacted window.
  • Root Cause: The failure was due to an Azure Storage outage in the East US region.
  • Incident Timeline: 3 Hours & 40 minutes - 10/13, 01:50 UTC through 10/13, 05:30 UTC

We understand that customers rely on Application Insights, Service Map, VM Insights and Container Insights as critical services and apologize for any impact this incident caused.

-Mohini Nikam


Initial Update: Saturday, 13 October 2018 04:01 UTC

We are aware of issues within Application Insights starting at Saturday, 12 October 2018 6:50 PST and are actively investigating. We are seeing ingestion delays for data, along with 6% data loss for metrics and possible data loss in continuous export.

Service Map, VM Insights, and Container Insights also experienced ingestion delays in the region.




  • Work Around: none
  • Next Update: Before 10/13 06:30 UTC

We are working hard to resolve this issue and apologize for any inconvenience.
-Suraj



Experiencing Data Access Issue in Azure and OMS portal for Log Analytics – 10/13 – Resolved

Final Update: Saturday, 13 October 2018 06:25 UTC

We've confirmed that all systems are back to normal with no customer impact as of 10/13, 05:30 UTC. Our logs show the incident started on 10/13, 01:50 UTC, and that during the 3 hours 40 minutes that it took to resolve the issue, some customers in the East US region may have experienced data access issues in the Azure and OMS portals for Log Analytics.
  • Root Cause: The failure was due to an Azure Storage outage in the East US region.
  • Incident Timeline: 3 Hours & 40 minutes - 10/13, 01:50 UTC through 10/13, 05:30 UTC

We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.

-Mohini Nikam


Initial Update: Saturday, 13 October 2018 04:23 UTC

We are aware of issues within Log Analytics and are actively investigating. Customers in East US may experience ingestion delays starting from Saturday, 13 October 2018 6:50 PST.
  • Work Around:none
  • Next Update: Before 10/13 06:30 UTC

We are working hard to resolve this issue and apologize for any inconvenience.
-Suraj




Microsoft Flow – Copy files from SharePoint to a local PC


Microsoft Flow – Copy files from SharePoint Online to a local PC

 

I recently assisted a customer with an automated workflow through Microsoft Flow that copies files from a SharePoint Online site library to a file system share on a local machine being used as an archive server. I found several forum discussions and blog posts with a similar workflow, but nothing that helped solve the issues I was hitting. As a result, the goal of this post is to walk through this scenario including screenshots and descriptions of where I was getting stuck.

Log in to Flow (https://us.flow.microsoft.com).

In the top right, click on the settings/gear icon and navigate to the Connections site.

Microsoft Flow Connections

 

We will be creating a connection in Office 365 that leverages the on-premises data gateway. This connection will then later be used in our Flow scenario to establish the connection to the file share from Office 365 (or Azure).

In the Connections screen, click on the New Connection button to add a new connection.

In the search bar on the top right, enter “File System” and hit the search button. This will allow us to add the file system connection where you’ll see the following screenshot.

On Premises Data Gateway File System

 

Enter the details for the root folder in the form of a local drive (C:\test) or a network share (\\server1\folder) and supply credentials to access this location. After supplying this information, you need to supply the valid on-premises data gateway, which acts as a bridge between on-premises data and Azure cloud services, such as Microsoft Flow, PowerApps, Power BI, and Logic Apps. If you haven’t yet set up the on-prem data gateway, it is very quick and straightforward following the documentation.

Now that we have the connection established, we will create the Flow. There are several methods to create a new Flow leveraging existing templates. In my scenario, I created it within the SharePoint library.

SharePoint Flow

 

By initiating from this location, we see context-based suggestions based on the SharePoint items. Select the template titled “When a new file is added in SharePoint, complete a custom action”.

Microsoft Flow Templates

 

When you open the new Flow template, you will see a single task titled “When a file is created (properties only)”. One important thing to notice here is that the task only collects the metadata for the entry as opposed to the deprecated version which collected the actual file. This leads to one additional step to get the file contents. In this initial task, you can either provide the site address, library name, and folder where the files will be created or you can also set up another connection through the on-prem data gateway with this information that you can use in this step.

SharePoint Flow

 

Next, we’ll add the step to get the file contents. Click the + button under the existing step and search for “Get file content” in the action search window. Select the highlighted step below to get file contents from SharePoint.

Get File Content 

 

In this task, provide the SharePoint site address and select the file Identifier.

SharePoint File Identifier

 

Now that we have the file contents, the final step is to create the task to copy the file. Add a new step and select the File System task to create a file.

Flow File System

 

Since you have already created the connection through the on-prem data gateway, this step is very easy to configure. Select the folder path from the folder icon. This will reference the path you created in the gateway connection. For the file name and contents, select the dynamic content from the previous steps to carry forward the name and contents. Notice here that you could modify the name for archival purposes if needed.

Create file in Microsoft Flow

 

With that, you’ll have a 3-step flow that is initiated with the creation of an entry into a SharePoint library that copies the file to a local file share. Note that there are several very helpful templates in Flow that allow you to automate many other actions. This is a simple example to help get you started.

 

Microsoft Flow SharePoint to PC

 

Enjoy,
Sam Lester (MSFT)

 

Ingest Azure Redis Cache messages into Elasticsearch, Logstash and Kibana cluster deployed in Azure Kubernetes Service (AKS)


This is third article on the series on deploying Elasticsearch, Logstash and Kibana (ELK) in Azure Kubernetes Service (AKS) cluster. The first article covered deploying non-SSL ELK to AKS and consuming messages from Azure Event Hub. The second article described how to secure communications in ELK and use Azure AD SAML based SSO for Kibana and Elasticsearch. In this article I am going to share steps needed to ingest Azure Redis Cache messages into Elasticsearch using Logstash's Redis plugin.

Azure Redis Cache is based on the popular open-source Redis cache. It is typically used as a cache to improve the performance and scalability of systems that rely heavily on backend data-stores. Logstash's Redis plugin will read events from Redis instance. I will create a Logstash event processing pipeline where I will define Redis as input and Elasticsearch as output. The component diagram has been updated to add Azure Redis Cache integration.

The dev tools used to develop these components are Visual Studio for Mac/VS Code, the AKS Dashboard, kubectl, bash, and openssl. The code snippets in this article are mostly YAML snippets and are included for reference only, as formatting may get distorted; please refer to the GitHub repository for properly formatted resources.

Create Azure Redis Cache

Create Azure Redis Cache using Portal or Azure CLI command az redis create

Navigate to this resource and take note of the settings listed below, as you are going to need them in subsequent sections:

  1. Host Name: You will need to specify the host name in the Logstash pipeline.
  2. Secondary Access Key: You will need to specify the password in the Logstash pipeline.
  3. StackExchange.Redis connection string: You will need to specify the connection string in AzureRedisCacheSample in order to publish messages to Redis.
  4. Port: The default non-SSL port is 6379. By default, the non-SSL port is disabled, so the port to which the Logstash pipeline will connect is 6380.

Logstash Redis input plugin

As mentioned previously, I will create a Logstash pipeline which will use the Logstash Redis input plugin. This input reads events from a Redis instance; it supports both Redis channels and lists. You can read more about the Redis input plugin. You can get the list of installed plugins by following the steps listed below:

  • Run the command kubectl exec -ti {Logstash_Pod_Name} bash to connect to the Logstash pod.
  • Run the command bin/logstash-plugin list to see the installed plugins.

Deploy Logstash to Azure Kubernetes Service

Logstash is a data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to Elasticsearch. Logstash will use the Azure Event Hubs plugin and the Redis input plugin to ingest data into Elasticsearch. In this article, I am going to share the main pointers about the changes needed to the Logstash resources, i.e. the ConfigMap and Deployment, in order to subscribe to Azure Redis Cache. You can refer to the first two parts of this series for more details.

The steps needed to deploy Logstash to AKS are listed below

Create a Kubernetes ConfigMap

Create a new pipeline in Logstash for Azure Redis Cache integration. The main pointers about changes needed to this resource compared to earlier parts of this series are:

  • A new pipeline has been defined: azureredis is the pipeline id and azureredis.cfg is the path of the configuration file for Redis Cache integration. The azureredis.cfg file will be mounted from the ConfigMap. The Logstash event processing pipeline has three stages: inputs → filters → outputs. This file defines the Logstash pipeline for Redis Cache.
    • Specify host => "{YOUR_REDIS_HOST_NAME}" based on your Redis instance host name.
    • The sample Redis client publishes messages to a channel, so the data_type is channel: data_type => "channel".
    • The channel name I have specified is 'messages'. Update the channel name key => "messages" based on your specified channel name.
    • Specify the password as per your Redis Cache secondary key value: password => "{YOUR_REDIS_SECONDARY_KEY}".
    • Since SSL is enabled, I have specified port 6380: port => 6380. The default setting is the non-SSL port, i.e. 6379.
    • SSL is enabled: ssl => true.
    • The output is Elasticsearch and the index name is defined as index => "azureredis-%{+YYYY.MM.dd}".

The YAML snippet to create this resource is displayed below:

apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-logstash-configmap
  namespace: default
data:
  logstash.yml: |
    xpack.monitoring.elasticsearch.url: https://sample-elasticsearch:9200
    dead_letter_queue.enable: true
    xpack.monitoring.enabled: true
    xpack.monitoring.elasticsearch.username: logstash_system
    xpack.monitoring.elasticsearch.password: Password1$
    xpack.monitoring.elasticsearch.ssl.ca: "/usr/share/logstash/config/elastic-stack-ca.pem"
  pipelines.yml: |
    - pipeline.id: azureeventhubs
      path.config: "/usr/share/logstash/azureeventhubs.cfg"
    - pipeline.id: azureredis
      path.config: "/usr/share/logstash/azureredis.cfg"
  azureeventhubs.cfg: |
    input {
      azure_event_hubs {
        event_hub_connections => ["{AZURE_EVENT_HUB_CONNECTION_STRING};EntityPath=logstash"]
        threads => 2
        decorate_events => true
        consumer_group => "$Default"
        storage_connection => "{STORAGE_ACCOUNT_CONNECTION_STRING}"
        storage_container => "logstash"
      }
    }
    filter {
    }
    output {
      elasticsearch {
        hosts => ["sample-elasticsearch:9200"]
        user => "elastic"
        password => "Password1$"
        index => "azureeventhub-%{+YYYY.MM.dd}"
        ssl => true
        cacert => "/usr/share/logstash/config/elastic-stack-ca.pem"
      }
    }
  azureredis.cfg: |
    input {
      redis {
        host => "{YOUR_REDIS_HOST_NAME}"
        key => "messages"
        data_type => "channel"
        password => "{YOUR_REDIS_SECONDARY_KEY}"
        port => 6380
        ssl => true
      }
    }
    filter {
    }
    output {
      elasticsearch {
        hosts => ["sample-elasticsearch:9200"]
        user => "elastic"
        password => "Password1$"
        index => "azureredis-%{+YYYY.MM.dd}"
        ssl => true
        cacert => "/usr/share/logstash/config/elastic-stack-ca.pem"
      }
    }
  logstash.conf: |

Create a Kubernetes Deployment

The only change needed to Logstash_Deployment.yaml is to mount the new pipeline configuration for Redis i.e. azureredis.cfg. You can refer to previous articles of this series for further details about this resource.

volumeMounts:
  - name: sample-logstash-configmap
    mountPath: /usr/share/logstash/azureredis.cfg
    subPath: azureredis.cfg

Deploy Logstash resources to AKS

Deploy the Logstash resources to AKS. Once deployed, run AzureRedisCacheSample to publish messages to Redis. This is a sample client (.NET Core 2.1) that sends messages to a Redis channel. The sample also subscribes to the Redis channel locally and prints the messages. You need to update the StackExchange.Redis connection string and the channel name.
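
If you want a quick way to test the pipeline without building the .NET sample, a roughly equivalent publisher can be written with Python and the redis-py package (my own sketch; the host name and key are placeholders for your cache's values, and the channel name must match the key => "messages" setting in azureredis.cfg):

import redis

# Connect to Azure Redis Cache over SSL on port 6380, matching the Logstash pipeline settings.
r = redis.Redis(
    host="{YOUR_REDIS_HOST_NAME}",          # e.g. mycache.redis.cache.windows.net
    port=6380,
    password="{YOUR_REDIS_SECONDARY_KEY}",
    ssl=True,
)

# PUBLISH delivers each message to every subscriber of the channel,
# including the Logstash redis input configured with data_type => "channel".
for i in range(5):
    r.publish("messages", '{"sample": "message %d from python"}' % i)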

After sending a few messages, navigate to the Kibana endpoint and you will see events received and events emitted in the Monitoring section.

You can also create an index pattern in Kibana, and the messages sent to Redis will be displayed.

 

This completes the article on ingesting Azure Redis Cache messages into Elasticsearch using Logstash's Redis plugin. The sample code for this article can be downloaded from GitHub.

 

 

Intelligent Cloud Architect Deep Dive – Registration Open!


Partner Technical Deep Dives are an intense 4-day immersive training experience that offers Microsoft Partners critical deep technical skilling to build and grow their practices. The training is focused on core technical scenarios and solutions via whiteboard design sessions and hands-on labs delivered by cloud experts. The Deep Dive has 4 tracks: Infrastructure and Management, Application Development, Data, and Artificial Intelligence.

Upon completion, participants will be better able to:

  • Position the Microsoft Azure Platform and Services
  • Design and architect solutions based on our Microsoft cloud and data platform
  • Develop and present impactful and substantial solutions to their customers
  • Confidently respond to customer objections

Learning Experience:

Four-day, in-person immersion L300-L400 learning experience includes:

  • 25% Session presentations
  • 25% Whiteboard design sessions
  • 50% Hands-on labs

When: November 26 - 29, 2018

Where: Sydney Harbour Marriott Hotel at Circular Quay

Registration Fee: $999 AUD per attendee for admittance to the Deep Dive.

Attendees are responsible for all other costs associated with their attendance, including travel and accommodation costs.

Register for Deep Dive Now

What can you expect?

Each specialized track will focus on a theme for the day and progress from presentations that include vision, roadmap and updates; followed by designing solutions in whiteboard design sessions and building solutions in a practical hands-on lab. The sessions and labs are all designed with our Engineering teams, based on real-world solutions, which highlight best practices.

Track themes:

  • Azure Infrastructure and Management: Migrating to Azure, Azure Governance, Security and Management, Optimization
  • Application Development: Application Modernization, Container and Microservices, Serverless, DevOps
  • Data: Database Modernization, Hybrid Data Base, Big Data and Visualization, Intelligent Analytics
  • Artificial Intelligence: Modernize Application, Intelligent Bots, Custom AI in Azure, Implementing Custom AI

 

DigiLearn: Building community, sharing practice and recognising achievement


Today's guest blog post comes from one of our MIE Experts Chris Melia, a Learning Technologist at the University of Central Lancashire. Read his account of his MIEE journey below and how he is working to recognise and share digital achievements.


Introduction

My name is Chris Melia and I am a Learning Technologist at the University of Central Lancashire. I work closely with our Faculty of Health and Wellbeing as digital learning lead, to effectively embed technology into learning and teaching practice. I am passionate about my work and find the collaboration between academic colleagues and technologists to be particularly rewarding.

MIE Journey

I became an MIE Expert last August after becoming more involved with the Microsoft Educator Community. At our institution, we have invested heavily in Microsoft technology solutions. This has seen us equip every member of academic staff with a Surface Pro and transform our learning spaces to support more innovative teaching methods. Office 365 has provided a suite of applications we have successfully embedded across the learner experience to setup e-portfolios, start online communities and create more engaging and interactive learning materials. In March of this year, I was fortunate enough to present at the Microsoft ‘Transformational Technologies’ conference hosted at UCLan, and more recently at the ‘Association for Learning Technology’ conference in Manchester. These case study sessions looked at the impact of using Microsoft Teams to create active learner communities within the School of Nursing. Fantastic feedback following the events highlighted that colleagues from other institutions have since been inspired to start running their own pilots with Teams. As an MIE Expert, I aim to showcase the work that we do, share good practice and hopefully inspire others to adopt new and innovative digital approaches.

 


 

DigiLearn: Building community, sharing practice and recognising achievement

How do we recognise and share the digital achievements of our academic colleagues?

At the University of Central Lancashire, ‘DigiLearn’ is rapidly becoming the solution.

It all starts with a community of practice…

As a Faculty Learning Technologist, one of my key objectives has been the fostering of a community of practice around digital approaches to learning, teaching and assessment. This community facilitates the sharing of digitally inspired discussion and best practice examples, as well as connecting colleagues across five disciplines, in a large Faculty of over 400.

Microsoft Teams provided the ideal solution to host the community, with its very intuitive and collaborative interface. Additionally, the accessibility of the Teams environment perfectly meets the flexibility and mobility of Microsoft Surface devices, with which all our academics are now equipped.

Over time, the Teams space and subsequent digital learning events have become known simply as ‘DigiLearn’. This Faculty community now thrives with case studies and conversations, whilst also providing the ideal environment for me to disseminate the latest technology related innovations and initiatives.

 

 

Discovery of the MEC - gamification

The discovery of the Microsoft Educator Community (MEC) is a real eye opener. The wealth of resources available is staggering, particularly as our University now drives forward the use of Office 365 tools across learning and teaching.

Our colleagues are able to earn badges on completion of courses, and work towards professional accreditation and recognition as Microsoft Innovative Educators (MIEs) and Microsoft Innovative Educator Experts (MIEEs). Gamification has clearly played its part in staff engagement with these programmes. There is a real sense of healthy competition amongst our academics to earn the associated points and badges to gain visible recognition.

The DigiLearn community has given our colleagues somewhere to share their MEC achievements and MIEE Sway submissions have helped colleagues in reflecting on their digital practice. All our MIEEs have now shared their presentations across the Faculty to encourage and inspire others in taking part.

 

UCLan DigiLearn recognition programme

Building on the success of the DigiLearn Teams community and MIE/MIEE recognition, a broader concept has now been developed. This new programme sits around a wider institutional recognition framework, that enables and empowers our colleagues in sharing their digital approaches, reflecting on practice and celebrating success.

 

The framework is fundamentally defined around three levels of award. Importantly, these titles are clearly identifiable nouns, best representing the level of achievement involved.

 

Participating colleagues must start with the Practitioner route and progress upwards through the programme, with each stage acting as a pre-requisite for the next.

 

There are three core strands embedded across each level of award:

1. Engagement with the DigiLearn community in Microsoft Teams

2. Effective use of Microsoft Surface technology

3. Achievements on the Microsoft Educator Community (MIE/MIEE)

 

Along with these elements, each level holds its own set of unique additional criteria around sharing practice, initially at an internal faculty level (Practitioner) moving onto university level (Advocate) and finally, externally (Champion). Required evidence includes a combination of blog posts, written and video case studies, presentations and publications.

 

 

Each criterion across the framework is aligned to one or more of the five themes that define our institution’s Learning and Teaching Strategy (PLACE).

 

 

 

Submissions of evidence are handled through Microsoft Forms, with each award level having its own entry page. Evidence generally takes the form of written statements or weblinks related to the achievement, publication or event referenced in the submission. These are then assessed and ratified by members of the University’s Technology Enabled Learning and Teaching (TELT) team.

 

 

 

If further evidence is required colleagues are given the opportunity to submit additional material and where successful, are awarded recognition at Faculty, University and external level.


Certificates are presented by the Faculty Executive Team at development days, where achievements can be shared and celebrated amongst colleagues. Lanyards have proved particularly popular and feature the awarded DigiLearn title and corresponding stripes. They provide a visual form of recognition which colleagues wear with pride. The Open Badges have been added alongside existing MEC achievements to our colleagues’ LinkedIn profiles, as well as making regular appearances on email signatures.

The DigiLearn recognition programme pilot is only just underway in our Faculty of Health and Wellbeing, but the impact and momentum generated so far is extremely positive. There are already numerous award achievers across the Faculty, with a significant number of colleagues currently working towards their submissions.

Here is some feedback from colleagues who have successfully progressed through the programme to achieve DigiLearn Champion status.

 


 

“Assembling my evidence to meet criteria at the different levels was a useful and self-affirming process to go through. I am sure that many lecturing staff will already have achievements to meet the criteria.

Each level on the DigiLearn scheme helps to expand digital learning skills and deepen reflection on practice.

I hope that it will bring greater recognition of the potential of digital learning technologies and techniques. I think that it will help staff not only to appreciate the fantastic work that they are already doing, but also to identify further innovations to add to their practice.”

Hazel Partington – Senior Lecturer (School of Community Health and Midwifery)

 

 

 


 

 

 

“The DigiLearn recognition programme has not only motivated me to upskill and develop my technology enhanced teaching, it has also given credence to my long-established teaching practice.

I would strongly recommend that all colleagues at the university take up these opportunities. DigiLearn is celebratory in its nature and progressive in its purpose.”

Andrew Sprake – Lecturer (School of Sport and Wellbeing)

 

 

 


 

DigiLearn is a perpetually evolving initiative, as more colleagues achieve recognition for their capabilities and worthwhile enhancements are identified and embedded. A bespoke professional academic practice framework will support the implementation phase of our ambitious Learning and Teaching Strategy and digital capabilities are fundamental to its success. The DigiLearn initiative has already become a recognised measure of personal achievement for academic colleagues alongside the benefits acknowledged by our students of being part of an active, digitally enabled learning community.

