Quantcast
Channel: MSDN Blogs
Viewing all 29128 articles
Browse latest View live

Service Fabric with OMS Log Analytics

$
0
0

Introduction

We wanted to give you a quick intro to how you can use Log Analytics to monitor your Service Fabric cluster with the updated Service Fabric Analytics solution in OMS Log Analytics for Windows clusters that rolled out at the end of last month.   The solution for Service Fabric has been enhanced to combine key Container and Platform level events into one comprehensive dashboard. This is particularly useful for users who have multiple applications on their cluster using either reliable services or containers or applications using both. We've now created a single dashboard that helps you to see what’s happening in your cluster, the underlying infrastructure, your applications, and in your containers. Let's dig into how you can set up your cluster to be monitored with Service Fabric Analytics and OMS Log Analytics.

Setup

Existing OMS users:

Users who already have the Service Fabric Analytics solution set up do not need to take any action as the solution will auto update. If you want to monitor containers or collect performance counters from your cluster, you can add the container solution and OMS Agent to your nodes, respectively.

New users:

Before adding the solution, make sure you have the WAD extension configured correctly to ensure that Service Fabric events and other required diagnostics data is collected from your cluster.

The new Service Fabric Analytics solution is available via the Azure Marketplace just like before; check out the instructions to add the solution here. This will show you how to configure your solution to read from the tables the WAD extension sends data to.

Next up, set up the OMS Agent to collect performance counters for your nodes. This is the recommended way to collect perf counters / metrics from your machines because of the ease with which you can pick and modify the counters you want collected. Add the OMS Agent to your nodes via the Azure CLI or by updating your cluster’s ARM template – see instructions here.

If you are running containers, you will also need to add the Container Monitoring Solution to your workspace and the OMS Agent  to your nodes. This is because the current design of the OMS Agent requires that the Container Monitoring Solution is added to the Log Analytics workspace for it to start collecting docker logs and stats. This dependency on the containers solution is temporary, and we are working to add the same requirements to the Service Fabrics Analytics solution.

Using OMS Log Analytics

Once you have the resource(s) added, you can view the Service Fabric Analytics dashboard from within the Azure portal itself by clicking the tile under Summary on solution's blade.

OMS solution resource in Azure Portal

This takes you to the consolidated dashboard with cluster and container data. Scroll to the right to view container deployments and performance metrics.

You can also run custom queries and create alerts. Click on one of the tiles or graph to navigate to "Log Search" in the workspace. For example, to alert on all events related to a cluster rolling back an upgrade, search for the following query and click “New Alert Rule”.

ServiceFabricOperationalEvent
| where EventId > 29628 and EventId < 29631

You can also configure the OMS Agent to collect specific performance counters. Navigate to the OMS Workspace’s page in the portal – from the solution’s page the workspace tab is on the left menu.

Once you’re on the workspace’s page, click on “Advanced settings” in the same left menu.

Then click on Data > Windows Performance Counters to start collecting specific counters from your nodes via the OMS Agent. Here is a list of perf counters that we recommend you collect. For those of you that are using Reliable Services or Actors in your applications, add the Service Fabric Actor, Actor Method, Service, and Service Method counters as well. More information can be found on these at Diagnostics and performance monitoring for Reliable Actors and Diagnostics and performance monitoring for Reliable Service Remoting.

This will allow you to see how your infrastructure is handling your workloads, and set relevant alerts based on resource utilization. For example – you may want to set an alert if the total Processor utilization (Processor(_Total)% Processor Time) goes above 80% or below 5%. The counter name you would use for this is “% Processor Time.” You could do this by creating an alert rule for the following query:

Perf
| where CounterName == "% Processor Time" and InstanceName == "_Total"
| where CounterValue >= 80 or CounterValue <= 5.

Recommendations

We recommend using the WAD extension for Windows clusters to get platform, reliable services, and reliable actors events flowing into this solution. WAD is also extensible to send data to other solutions via ‘sinks’ such as Application Insights or Event Hubs.

For infrastructure and container monitoring, we recommend using the OMS agent to collect any performance counters (including Service Fabric specific counters) and container metrics.


Announcing TypeScript 2.9 RC

$
0
0

Today we're excited to announce and get some early feedback with TypeScript 2.9's Release Candidate. To get started with the RC, you can access it through NuGet, or use npm with the following command:

npm install -g typescript@rc

You can also get editor support by

Let's jump into some highlights of the Release Candidate!

Support for symbols and numeric literals in keyof and mapped object types

TypeScript's keyof operator is a useful way to query the property names of an existing type.

interface Person {
    name: string;
    age: number;
}

// Equivalent to the type
//  "name" | "age"
type PersonPropertiesNames = keyof Person;

Unfortunately, because keyof predates TypeScript's ability to reason about unique symbol types, keyof never recognized symbolic keys.

const baz = Symbol("baz");

interface Thing {
    foo: string;
    bar: number;
    [baz]: boolean; // this is a computed property type
}

// Error in TypeScript 2.8 and earlier!
// `typeof baz` isn't assignable to `"foo" | "bar"`
let x: keyof Thing = baz;

TypeScript 2.9 changes the behavior of keyof to factor in both unique symbols as well as numeric literal types. As such, the above example now compiles as expected. keyof Thing now boils down to the type "foo" | "bar" | typeof baz.

With this functionality, mapped object types like Partial, Required, or Readonly also recognize symbolic and numeric property keys, and no longer drop properties named by symbols:

type Partial<T> = {
    [K in keyof T]: T[K]
}

interface Thing {
    foo: string;
    bar: number;
    [baz]: boolean;
}

type PartialThing = Partial<Thing>;

// This now works correctly and is equivalent to
//
//   interface PartialThing {
//       foo?: string;
//       bar?: number;
//       [baz]?: boolean;
//   }

Unfortunately this is a breaking change for any usage where users believed that for any type T, keyof T would always be assignable to a string. Because symbol- and numeric-named properties invalidate this assumption, we expect some minor breaks which we believe to be easy to catch. In such cases, there are several possible workarounds.

If you have code that's really meant to only operate on string properties, you can use Extract<keyof T, string> to restrict symbol and number inputs:

function useKey<T, K extends Extract<keyof T, string>>(obj: T, k: K) {
    let propName: string = k;
    // ...
}

If you have code that's more broadly applicable and can handle more than just strings, you should be able to substitute string with string | number | symbol, or use the built-in type alias PropertyKey.

function useKey<T, K extends keyof T>(obj: T, k: K) {
    let propName: string | number | symbol = k; 
    // ...
}

Alternatively, users can revert to the old behavior under the --keyofStringsOnly compiler flag, but this is meant to be used as a transitionary flag.

import() types

One long-running pain-point in TypeScript has been the inability to reference a type in another module, or the type of the module itself, without including an import at the top of the file.

In some cases, this is just a matter of convenience - you might not want to add an import at the top of your file just to describe a single type's usage. For example, to reference the type of a module at an arbitrary location, here's what you'd have to write before TypeScript 2.9:

import * as _foo from "foo";

export async function bar() {
    let foo: typeof _foo = await import("foo");
}

In other cases, there are simply things that users can't achieve today - for example, referencing a type within a module in the global scope is impossible today. This is because a file with any imports or exports is considered a module, so adding an import for a type in a global script file will automatically turn that file into a module, which drastically changes things like scoping rules and strict module within that file.

That's why TypeScript 2.9 is introducing the new import(...) type syntax. Much like ECMAScript's proposed import(...) expressions, import types use the same syntax, and provide a convenient way to reference the type of a module, or the types which a module contains.

// foo.ts
export interface Person {
    name: string;
    age: number;
}

// bar.ts
export function greet(p: import("./foo").Person) {
    return `
        Hello, I'm ${p.name}, and I'm ${p.age} years old.
    `;
}

Notice we didn't need to add a top-level import specify the type of p. We could also rewrite our example from above where we awkwardly needed to reference the type of a module:

export async function bar() {
    let foo: typeof import("./foo") = await import("./foo");
}

Of course, in this specific example foo could have been inferred, but this might be more useful with something like the TypeScript language server plugin API.

Breaking changes

keyof types include symbolic/numeric properties

As mentioned above, key queries/keyof types now include names that are symbols and numbers, which can break some code that assumes keyof T is assignable to string. Users can avoid this by using the --keyofStringsOnly compiler option:

// tsconfig.json
{
    "compilerOptions": {
        "keyofStringsOnly": true
    }
}

Trailing commas not allowed on rest parameters

#22262
This break was added for conformance with ECMAScript, as trailing commas are not allowed to follow rest parameters in the specification.

Unconstrained type parameters are no longer assignable to object in strictNullChecks

#24013
The following code now errors:

function f<T>(x: T) {
    const y: object | null | undefined = x;
}

Since generic type parameters can be substituted with any primitive type, this is a precaution TypeScript has added under strictNullChecks. To fix this, you can add a constraint on object:

// We can add an upper-bound constraint here.
//           vvvvvvvvvvvvvvv
function f<T extends object>(x: T) {
    const y: object | null | undefined = x;
}

never can no longer be iterated over

#22964

Values of type never can no longer be iterated over, which may catch a good class of bugs. Users can avoid this behavior by using a type assertion to cast to the type any (i.e. foo as any).

What's next?

We try to keep our plans easily discoverable on the TypeScript roadmap for everything else that's coming in 2.9 and beyond. TypeScript 2.9 proper should arrive towards the end of the month, but to make that successful, we need all the help we can get, so download the RC today and let us know what you think!

Feel free to drop us a line on GitHub if you run into any problems, and let others know how you feel about this RC on Twitter and in the comments below!

Webinar 5/17 A look at the Common Data Service for Apps, Common Data Service for Analytics and Power BI Insights Apps

$
0
0

On March 21st the Business Applications Group announced a couple of new technologies: Common Data Service for Apps, Common Data Service for Analytics and Power BI Insights Apps.

In this demo heavy webinar Charles Sterling and Matthew Roche will take a tour of these new and soon-to-be offerings.  Demos to include creating PowerApps Canvas based application that put data into the Common Data Service, a Model Based PowerApps application that is built on top of the Common Data Service, creating a Common Data Service Analytics Data Pool with online Power Query, creating reports with Power BI Desktop against a Common Data Service Analytics Data Pool and finally showing how to get instant value from Common Data Service Analytics using Power BI Insight Apps.

 

When:  5/17/2018 10AM PST

Where: https://www.youtube.com/watch?v=1lYcHgDllxE 

Matthew Roche

Presented by Matthew Roche and Charles Sterling

Matthew Roche is an experienced program manager, data architect, software developer, trainer and mentor with over two decades of experience in the Microsoft data platform and developer ecosystem. His current role as Senior Program Manager on the Microsoft Cloud & Enterprise team allows him to extend the features and influence the direction of Microsoft Business Intelligence, Data Governance, and Information Management products and services. 

Before joining Microsoft in 2008, Matthew was a Microsoft Most Valuable Professional (MVP) for SQL Server. Matthew holds a wide range of professional certifications including Microsoft Certified Trainer, Microsoft Certified Database Administrator, Microsoft Certified Solution Developer, Microsoft Certified Technology Specialist, Microsoft Certified Professional Developer, Microsoft Certified IT Professional and Oracle Certified Professional.

Specialties: Storytelling, Business Intelligence, Data Governance, Data Warehousing, Information Management, ETL

 

Updated IIS FTP Service Extensibility References

$
0
0

It's hard to believe that it has already been six years since I wrote my Extensibility Updates in the FTP 8.0 Service blog, and it has been nine years since I wrote my FTP 7.5 Service Extensibility References blog. (Wow… where has all that time gone?) In any event, those blogs introduced several of the APIs that were added to the IIS FTP service which allow developers to provide custom pre-processing and post-processing functionality. (For example, automatically unzipping a compressed file after it has been uploaded, or implementing anti-leeching functionality.)

That being said, for some reason the managed-code APIs that were introduced in later versions of IIS were never fully documented, but we have rectified that problem. With that in mind, the links listed below will take you to the appropriate article for each API.

And as always, there are a dozen or so articles in the Developing for FTP section of the official IIS documentation that will walk you through the process of creating your own custom FTP functionality for IIS.

I hope this helps!

Disk partition in Azure VMs – using ARM Templates to specify disk configurations

$
0
0

It is considered a best practice to define your Azure Resources using ARM Templates. We all know that. But what about in-VM configurations that you needed like say disk partitions, and other similar things.

Well it turns out you can specify such post VM instantiation configuration changes, which are actually not executed by ARM proper but by the OS itself once stood up. However the good thing is, you can use the vehicle/medium of an ARM template to specify these configuration changes. Once could also use DSC extensions to do this.

Following are some useful links on this topic.

As mentioned the script itself can be part of the template. Execution happens on the VM and not at the ARM layer.
Some additional references:-

  1. Custom Script Extension Schema (Windows)
  2. Custom Script Extension Schema (Linux)
  3. Template based deployment of Script extensions (Windows)
  4. Template based deployment of Script extensions (Linux)

There is also another technique where the sequence of commands you needed to be executed can be put inside an inline variable as below and then refer to this variable as the content of the configuration that needs to be done on the VM coming up and logging in the first time.

"<FirstLogonCommands>
 <SynchronousCommand>
 <Order>1</Order>
 <Description>Create diskpart input file</Description> 
 <CommandLine>
 powershell.exe -Command Write-Output "select disk 2 ' create partition primary ' format quick ' assign " | Out-File C:\diskpart.txt
 </CommandLine> 
 </SynchronousCommand>
 <SynchronousCommand>
 <Order>2</Order>
 <Description>Create formatted partition</Description> 
 <CommandLine>diskpart.exe /s C:\diskpart.txt</CommandLine> 
</SynchronousCommand>
</FirstLogonCommands>"

In the above fragment we are specifying two commands to be executed in an order. The first one creates the file, 
whose contents are used/fed to the second command - in this case the disk partition utility - diskpart.exe. These 
commands are synchronous in nature as opposed to being async, so that order of execution is preserved. 

Putting all of this together, the following is an example of using two inline variables (a) unattendAutoLogonXML and (b) unattendFirstRunXML which are then passed to the WindowsConfiguration to be executed on VM boot.

  "variables": {
"unattendAutoLogonXML":"[concat('<AutoLogon><Password><Value>',parameters('adminPassword'),'</Value></Password><Domain></Domain><Enabled>true</Enabled><LogonCount>1</LogonCount><Username>',parameters('adminUsername'),'</Username></AutoLogon>')]",

    "unattendFirstRunXML":"<FirstLogonCommands><SynchronousCommand><CommandLine>powershell.exe -Command Write-Output "select disk 0 ' select partition 1 ' extend" | Out-File C:\diskpart.txt</CommandLine><Description>Create diskpart input file</Description><Order>1</Order></SynchronousCommand><SynchronousCommand><CommandLine>diskpart.exe /s C:\diskpart.txt</CommandLine><Description>Extend partition</Description><Order>2</Order></SynchronousCommand></FirstLogonCommands>",

...
...
...  

      "osProfile": {

         "computerName": "[variables('vmName')]",

          "adminUsername": "[parameters('adminUsername')]",

          "adminPassword": "[parameters('adminPassword')]",

          "windowsConfiguration": {

             "additionalUnattendContent": [

               {

                    "passName": "oobeSystem",

                    "componentName": "Microsoft-Windows-Shell-Setup",

                    "settingName": "AutoLogon",

                    "content": "[variables('unattendAutoLogonXML')]"

                },

                {

                    "passName": "oobeSystem",

                    "componentName": "Microsoft-Windows-Shell-Setup",

                    "settingName": "FirstLogonCommands",

                    "content": "[variables('unattendFirstRunXML')]"

                }

            ]

          }

At Microsoft, every day is Global Accessibility Awareness Day

$
0
0

Man in brown jacket with glasses smiling with two people in wheelchairs in the background

Retired U.S. Army Capt. Mike Luckett works for Warfighter Engaged, a nonprofit organization that provides gaming equipment to wounded vets, and helped test the new Xbox Adaptive Controller

 

As we celebrate the seventh annual Global Accessibility Awareness Day, across Microsoft we are thinking daily about how technology can empower the 1 billion people worldwide who have disabilities. Not only is it important that we do this for our customers and our employees, it’s also an exciting area for technology and innovation to drive incredible impact.

Today we are announcing new technology and resources for people with disabilities. You can learn more by visiting the Official Microsoft Blog from Microsoft’s Chief Accessibility Officer, Jenny Lay-Flurrie. She shares several updates from across the company and discusses why it’s both exciting to think about the tremendous opportunity to empower people with technology and humbling to think about our responsibility to get it right.

We are incredibly proud to introduce the Xbox Adaptive Controller. It is a first-of-its-kind device as an affordable, easy-to-set-up, and readily available Xbox Wireless Controller designed for gamers with limited mobility. Game on!

We’re also launching a new Microsoft Accessibility website to make it easy to find, discover and experience all that we are doing. Check it out and please continue to share your ideas on the Accessibility UserVoice Forum and the Disability Answer Desk.

Global Accessibility Awareness Day is every day here at Microsoft.

Enabling accessibility in Microsoft Visio

$
0
0

Humans are innately visual by nature. We respond to and process visual data through pictorial charts or graphs better than poring over spreadsheets or reports. Tools like Microsoft Visio and Visio Online help people visualize and leverage the wealth of data they work with. People can simplify and convey complex concepts and data in a universal manner. They can create, store, share and collaborate on visuals and diagrams in real-time.

Visio has helped numerous users create diagrams, flowcharts and blueprints for their data with ease. Over the years, the platform has evolved to keep up with the demands of the digital age. It is now linked to the cloud and offers visual styles that are modern and contemporary.

However, until recently persons with visual impairments or physical disabilities were not able to use the tool to their benefit. To help more people leverage Visio’s features and create visuals despite their limited dexterity, low vision or other disabilities, our team added new accessibility features to the platform.

Here’s how we made Visio more accessible:

Enhancing Visio accessibility

Making a digital tool for data visualizations more accessible presented a unique set of challenges. Unlike a text or image editor, Visio works with unique diagrams, titles, and structures. A flowchart or Venn diagram has more layers of information than a simple spreadsheet or text document. Reading and describing these diagrams to a user with visual impairment (VI) or other disabilities required innovative new means of reading, creating, editing and sharing.

The switch from Microsoft Active Accessibility (MSAA) to Microsoft UI Automation (UIA) enabled better screen reader tools. UIA is the next evolution of our suite of assistive technology products and automated testing tools. UIA offers a number of improvements to the accessibility features of all our products, including Visio.

UIA offers a filtered view of the tree structure of the user interface. In this tree structure, the desktop is the root, the applications are immediate children, and the various UI elements are descendants of those children. This new approach along with other innovations has helped us make Visio more accessible.

Creation and consumption of accessible diagrams

Individual shapes can be identified and described based on localized control types. The Visio diagrams can be read by a Screen Reader, which reads the specific formatting details like size, position and color details for Visio shapes. A Screen Reader also helps understand the connections between the shapes by reading out the start and the end points of connectors. The style set can now be specified so that users have a sense of direction, as well as starting and ending points, while connecting different shapes within flowcharts. Recent upgrades can keep these documents accessible while being exported to a PDF format. Within an exported PDF, these new features can detect linear structures within a tree diagram and read aloud various elements.

To make the diagrams seem as natural as possible for a user, the new engine reads the relationships between various texts, elements and shapes in the diagram. The engine uses this information to create a traversal flow of descriptions for shapes and texts so that the user can easily follow the pattern and understand the flowchart or diagram. Communicating the relationship between shapes to the accessibility tools makes the accessibility features feel more natural for the user. Similarly, the formats have been upgraded to ensure the accessibility features can be changed without compromising performance.

Accessibility features in Visio

 

Visio’s accessibility features have been designed to be as effortless and natural as possible. Users can easily create new content on Visio with the help of a screen reader such as Job Access With Speech (JAWS) or Windows Narrator. With UIA, the relationships between shapes and diagrams is automatically established and ordered so that a user can easily navigate the diagram. The latest feature, Data Visualizer, helps users transform data from Excel spreadsheets into Visio diagrams. Users can add Alt-Text in the Excel table to make the output diagrams accessible.

 

With the help of Alt-Text and a defined navigation order, the user can now convert documents to a PDF format and easily share them with everyone.

Enabling a more inclusive experience for all

Making Visio accessible helps us bring this powerful tool to more users, especially students with disabilities. Using Visio with these tools is more natural and seamless than ever before. The diagrams can now be interpreted by screen readers, making this visual platform more accessible for everyone. The platform now adapts to specific user disabilities and preferences to offer a more inclusive environment to create, share and consume accessible diagrams.

By adopting the universal standard for user interfaces and adding unique features for accessibility, our team has managed to open this visual platform to a wider user base than ever before. These modifications and new features are in line with our mission to empower every person and every organization on the planet to achieve more.

OneNote is an integral part of daily teaching toolkit

$
0
0

Today's guest blog post comes from the amazing Lee Whitmarsh, a longstanding MIEExpert in the UK, who is having an amazing impact on his students through his use of OneNote in his teaching as well as having a positive influence to inspire many colleagues in his own institution. Read Lee's story to here how in his own words. 


 

My name is Lee Whitmarsh and I am the Curriculum Leader for Art, Design and Technology at Alsager School in Cheshire UK. I have been given the opportunity to be part of the wonderful MIEE programme which has allowed me to develop my use of technology in teaching with students, parents and staff. I believe creative and effective use of technology is fundamental in building a progressive educational experience for all. The experience of working as part of the MIEE team has profoundly changed the way I teach, think about and engage with education.

 

 


What are the benefits of using OneNote?

 

 

I can not speak highly enough of the benefits of having OneNote as an integral part of your daily teaching toolkit.  We have developed our use of OneNote across three pedagogic areas

 

 

  • For students with the A Level Photography course. This has cascaded, and we have shared this approach with staff across the school who now use OneNote across such diverse areas as Music, Food Technology, English and Engineering.
  • For staff with a Faculty OneNote and individual OneNote Teacher planners
  • For Curriculum Management and faculty development and reflection.

 

OneNote has given time back to staff in the classroom as the content is accessible 24/7 from any device in and out of school; enabling us to share resources, approaches, track student progress, give immediate formative feedback via digital inking, photographing student work, written teacher/peer/self feedback and video and sound recording of tutorials.

 

 

 

 

 

 


What impact has the use of OneNote had on staff and students?

I have been working with staff from across the school sharing my MIEE experiences and inspirations from other fellow MIEE.  I have asked a colleague to share his experiences using OneNote.

“OneNote is important as my personal hub. Prior to OneNote I would have folders with various documents and print-outs for each class and course taught (Graphic Products, Resistant Materials and Engineering). Now each class has a page with sub-pages for individual students, data points, SEND information and so on. Its great how I can 'print' documents to these areas with the click of a button - especially my to-read section - from any of my devices.  Microsoft's multi-platform strategy has been a game changer. 

 I also used OneNote to create a digital textbook for an A-level theory class. Here I gathered information from books and PowerPoint presentations along with diagrams, videos, hyperlinks to other sites and past papers. Everything in one place ready to help them learn or revise.” 

 

 

 

Speaking to students they have told me that OneNote allows them to develop their projects at their own pace and has enabled them to move forward more effectively in and out of lessons as they have a digital hub to support their learning.  Having OneNote across multiple devices has allowed students to have this digital hub with them 24/7 via the OneNote app.

In terms of learning outcomes our entire Photography A Level is now run digitally through OneNote and we are seeing significant improvement in the student attainment which in turn has seen an increase in the number of students taking the subject.  We have 40 A-Level students this year and without OneNote we would not have the opportunity to give differentiated formative feedback direct into the student’s notebooks and manage the course as effectively as we are currently doing.

 

 


Follow the great work of Lee Whitmarsh further

 

Daily Edventures Blog post 

“OneNote has enabled us to work with different people, different cultures, and just collaborate and share with students what’s going on outside the classroom.” – Lee Whitmarsh, UK

Lee's own blog site

https://lwhitmarshblog.wordpress.com/

OneNote Central Twitter Link – Fantastic resource and support for OneNote

https://twitter.com/OneNoteC

Lee's Twitter account


 


CHALLENGE #4 – Translate Item Descriptions (LEVEL 2)

$
0
0

Scenario

For companies that provide item descriptions in more than one language, storing these descriptions on the item card (via extended text, for example) provides a ready-to-use reference.  As new items are entered, a service would translate the item description into one or more languages and store the translation for reference or use on orders or e-commerce engines.

To complete this challenge, you will need

To complete this challenge, you will need:

Expected result

Steps

  • Create an empty app
  • Create a table extension for the item table and add fields
  • Create a page extension for the item card and add fields
  • Add a Codeunit with a function that translates a text from one language to another.
  • Add a button to the Item card, calling the translate method for all new description fields.

Hints

  • In VS Code, use Ctrl+Shift+P and type AL GO and remove the customerlist page extension
  • Use the ttableext, and the tfieldtext snippet
  • Use the tpageext and the tpagefield snippet
  • Use HttpClient to communicate with the Web Service and use Json types (JsonObject, JsonToken, JsonArray and JsonValue) to build content and extract values from the Web Service result. See https://docs.microsoft.com/en-us/azure/cognitive-services/translator/reference/v3-0-translate 
  • Use taction snippet and add code to call the translate method

Cheat Sheets

 

Happy coding

Freddy Kristiansen
Technical Evangelist

CHALLENGE #5 – Classify your Customers (LEVEL 1)

$
0
0

Scenario

Add a field to classify your customers in Inactive, Bronze, Silver, Gold customers.

To complete this challenge, you will need

  • A Dynamics 365 Business Central Sandbox Environment
  • Visual Studio Code with the AL Extension installed
    • Azure VMs will have VS Code pre-installed

Expected result

Steps

  • Create an empty app
  • Create a table extension for the customer table and add an option field
  • Create a page extension for the customer card and add the field

Hints

  • In VS Code, use Ctrl+Shift+P and type AL GO and remove the customerlist page extension
  • Use the ttableext, and the tfieldoption snippet
  • Use the tpageext and the tpagefield snippet

Cheat sheets

 

Happy coding

Freddy Kristiansen
Technical Evangelist

Time sync, synchronization on an Azure App Service

$
0
0

An Azure App Service is PaaS and synchronizes the clocks based on the hosting platform.  As per this request, the drift may be up to 2 seconds and are synced once per week.  That request was made some years ago when an App Service was running on Windows Server 2012 and IIS 8.  With the introduction of Windows Server 2016 there have been some improvements in regards to the Windows Time Service and NTP.

You can view this post, “How to check Azure App Service OS version, what version of IIS” to see which Windows Server version your App Service is running on.  You should find that the version is now Windows Service 2016 and IIS 10.

Read here as well to find out why is time important.

I have read that the clocks are synced more often than once per week now and that the total clock drift is still within the tolerance for PCI-DSS purposes.  I would expect the clocks to be synched as per the setting in UpdatePollInterval of ~64 seconds.

*There are no App Service mechanisms for syncing the the clock yourself and if you require a more precise level of synchronization then you can consider an IaaS Azure offering.

I was able to run the W32TM command and check some stats and configurations on the App Service VM.  I didn’t try them all but this one looked interesting.  It displays a strip chart of the offset between this computer and another computer.

w32tm /stripchart /computer:time.windows.com /dataonly /samples:5

This has an output showing that in Figure 1.

image

Figure 1, time drift, time synchronization Azure App Service

You can also check the configuration of the W32Time on an Azure App Service using the following command in KUDU/SCM:

REG QUERY "HKEY_LOCAL_MACHINESYSTEMCurrentControlSetServicesW32TimeConfig"

image

I have discussed other details you can find out using the REG QUERY, check these out:

When I dumped out the W32Time configuration I learned a few things:

  • The App Service VMs are apparently configured for High Accuracy as described here.
  • Managing time and the synchronization between servers is hard
  • There is a w32time.log file in the D:Windows directory, as seen in Figure 2, have a look through it.

image

Figure 2, how to troubleshoot time synchronization and Azure App Service

Here is what I could find about some internals of the configuration of the W32Time synchronization feature.

AnnounceFlags   0x0000000a  (10) Specifies whether this computer is a time server or a reliable
time server. A computer will not be classified as reliable
unless it is also classified as a time server.
ClockAdjustmentAuditLimit   0x00000320  (800)
ClockHoldoverPeriod   0x0000c350  (50000)
EventLogFlags   0x00000002  (2) Specifies the events that the time service logs.
FrequencyCorrectRate   0x00000004  (4) Affects the rate at which the time service corrects the clock.
This value, in units equal to the reciprocal of the rate at
which the clock is corrected, is multiplied by the number
of clock ticks in 64 seconds, to produce the base gain in the
phase-locked loop (PLL) of the clock correction algorithm.
HoldPeriod   0x00000005  (5) Specifies the period of time for which spike detection is
disabled in order to bring the local clock into synchronization
quickly. A spike is a time sample indicating that time is
off by a number of seconds, usually received after good
time samples have been returned consistently.
LargePhaseOffset Specifies the time offset, in seconds, by which times greater
than or equal to this value are to be considered suspicious
and possibly erroneous.
LastKnownGoodTime   0x
LocalClockDispersion   0x0000000a  (10) This setting, expressed in seconds, is applicable only when
the Network Time Protocol (NTP) server is using the time of
the local CMOS clock. The LocalClockDispersion value indicates
the maximum error in seconds that is reported by the NTP
server to clients that are requesting a time sample.
MaxAllowedPhaseOffset   0x0000012c  (300)
MaxNegPhaseCorrection   0xffffffff        (4294967295) Specifies the time, in seconds, of the largest negative time
correction that the Windows Time service is allowed to make.
If the service determines that a change larger than this is
required, then the service logs an event instead.
MaxPollInterval   0x0000000f  (15) Specifies the longest interval (in units of 2n seconds, where
n is the value of this entry) that is allowed for system
polling. While the system does not request samples less
frequently than this, a provider may refuse to produce samples
when requested to do so.
MaxPosPhaseCorrection   0xffffffff        (4294967295) Specifies the largest positive time correction, in seconds,
that the Windows Time service is allowed to make. If the
service determines that a change larger than this is
required, then the service logs an event instead.
MinPollInterval   0x0000000a  (10) Specifies the shortest interval (in units of 2n seconds,
where n is the value of this entry) that is allowed for
system polling. While the system does not request samples
more frequently than this, a provider may produce samples
whenever it wants.
PhaseCorrectRate   0x00000001  (1)
PollAdjustFactor   0x00000005  (5) Controls the decision to increase or decrease the
system-recommended poll interval. Larger values mean that a
smaller amount of error will cause the poll interval to be
decreased.
SpikeWatchPeriod   0x00000384  (900) Specifies the amount of time, in seconds, that a suspicious
offset must persist before it is accepted as correct.
TimeJumpAuditOffset   0x00007080  (28800)
UpdateInterval   0x00007530  (30000) Specifies the number of intervals, .01 seconds in length,
between phase correction adjustments.
UtilizeSslTimeData   0x00000001  (1) Determines the approximate current time from outgoing SSL
connections. This time value is used to monitor the local
system clock and correct any gross errors.

How do I create a SAL annotation for a structure with a variable-length array?

$
0
0


Some Windows structures

end with an array of size 1
.
If you try to access any members of that array beyond the first,
you may get a static analysis error.



typedef struct THINGGROUP
{
DWORD NumberOfThings;
THING Things[ANYSIZE_ARRAY];
};

void ProcessAllTheThings(_In_ const THINGGROUP* group)
{
for (DWORD index = 0; index < group->NumberOfThings; index++) {
// static analysis warning: possible index past end of array
// when NumberOfThings >= 2
ProcessOneThing(group->Things[index]);
}
}



How do you tell the Visual Studio static analysis tool that
the size of the Things array is specified by
the NumberOfThings member?



You use

the _Field_size_ annotation
.
The documentation doesn't really give an example of this case,
so here you go:



typedef struct THINGGROUP
{
DWORD NumberOfThings;
_Field_size_(NumberOfThings)
THING Things[ANYSIZE_ARRAY];
};

Azure Log Analytics: Disk Space Usage – Part 2

$
0
0


My previous post on this topic is one of the most viewed (according to our blog analytics in the last week).   So I thought it was time to share some extra queries that you many find helpful.

Please see the previous post,

Part 1: https://blogs.msdn.microsoft.com/ukhybridcloud/2017/12/08/azure-log-analytics-disk-space-usage/ 

The original query I produced was this:

// original capacity query
Perf
| where ObjectName == "LogicalDisk" and CounterName == "% Free Space"
| summarize FreeSpace = min(CounterValue) by Computer, InstanceName
| where strlen(InstanceName) ==2 and InstanceName contains ":"
| where FreeSpace < 50
| sort by FreeSpace asc
| render barchart kind=unstacked

I have now refined it, it’s now a stacked chart (so more than one driver letter is shown per computer)  I’ve also added a Let statement to set the % value (my new line #1).  This now shows in my example 9 servers that have drives below 60% usage (and one server that has two drives below that value).  These servers have a lot of spare space (currently), so you will probably be changing that value to something much smaller?

// New capacity query
let setpctvalue = 60; // enter a % value to check
Perf
| where TimeGenerated > ago(1d)
| where ObjectName == "LogicalDisk" and CounterName == "% Free Space"
| where InstanceName contains ":"
| summarize FreeSpace = min(CounterValue) by Computer, InstanceName
| where FreeSpace < setpctvalue
| render barchart kind = stacked

image

If you want you can also restrict this query and add a sort to it by adding these lines just before the final one.  I just wanted the Top 5 disks listed – this is useful if you have many servers and drives below your required threshold.

| top 5 by min_CounterValue
| sort by min_CounterValue desc nulls last

image

To add to this capability I wanted to track down some C: drive issues, and also wanted to look over more than a 1 day period.  In this example I selected 7 days, once again to aid readability I added a Let statement for this.

I also added  bin (TimeGenerated) to see the extra days of data, and this also allows a TimeChart to be used.  An extra modification was changing line #7 to look at only C:  - other drive letters are available.

// Chart C: if its under nn% over the past nn days
let setpctValue = 50; // enter a % value to check
let startDate = ago(7d); // enter how many days to look back on
Perf
| where TimeGenerated > startDate
| where ObjectName == "LogicalDisk" and CounterName == "% Free Space"
| where InstanceName contains "C:"
| summarize min(CounterValue) by Computer, InstanceName, bin (TimeGenerated, 1d)
| where min_CounterValue < setpctValue
| sort by min_CounterValue desc nulls last
| render timechart

This showed 4 servers and the data for the past 7days – and I now have this published to my Azure Dashboard.

image


That’s all for today, there will be a Part 3 where I’ll look at a Trend line and a capacity estimate to see when the drive space will be zero.

Azure Content Spotlight – What’s New with Cognitive Services

$
0
0

Welcome to another Azure Content Spotlight! These articles are used to highlight items in Azure that could be more visible to the Azure community.

This weeks content spotlight is all about Azure Cognitive Services. Seth Juarez's AI Show on Channel 9 provides regular updates on all the new AI features on the Azure platform, including Cognitive Services. See below a collection of the latest video's of that show, including video's that cover the updates and enhancements announced at the BUILD 2018 conference:

 

-Sjoukje

.NET Framework May 2018 Preview of Quality Rollup for Windows 10

$
0
0

Today, we are releasing the May 2018 Preview of Quality Rollup for Windows 10 1703 (Creators Update) and Windows 10 1607 (Anniversary Update).

Quality and Reliability

This release contains the following quality and reliability improvements.

CLR

  • Resolves an issue in deserialization when using a collection, for example, ConcurrentDictionary by ignoring casing. [524135]
  • Resolves instances of high CPU usage with background garbage collection. This can be observed with the following two functions on the stack: clr!*gc_heap::bgc_thread_function, ntoskrnl!KiPageFault. Most of the CPU time is spent in the ntoskrnl!ExpWaitForSpinLockExclusiveAndAcquire function. This change updates background garbage collection to use the CLR implementation of write watch instead of the one in Windows. [574027]

Networking

  • Fixed a problem with connection limit when using HttpClient to send requests to loopback addresses. [539851]

WPF

  • A crash can occur during shutdown of an application that hosts WPF content in a separate AppDomain. (A notable example of this is an Office application hosting a VSTO add-in that uses WPF.) [543980]
  • Addresses an issue that caused XAML Browser Applications (XBAP’s) targeting .NET 3.5 to sometimes be loaded using .NET 4.x runtime incorrectly. [555344]
  • A WPF application can crash due to a NullReferenceException if a Binding (or MultiBinding) used in a DataTrigger (or MultiDataTrigger) belonging to a Style (or Template, or ThemeStyle) reports a new value, but whose host element gets GC'd in a very narrow window of time during the reporting process. [562000]
  • A WPF application can crash due to a spurious ElementNotAvailableException. This can arise if:
    1.Change TreeView.IsEnabled
    2.Remove an item X from the collection
    3.Re-insert the same item X back into the collection
    4.Remove one of X's subitems Y from its collection
    (Step 4 can happen any time relative to steps 2 and 3, as long as it's after step 1. Steps 2-4 must occur before the asynchronous call to UpdatePeer, posted by step 1; this will happen if steps 1-4 all occur in the same button-click handler.) [555225]

Note: Additional information on these improvements is not available. The VSTS bug number provided with each improvement is a unique ID that you can give Microsoft Customer Support, include in StackOverflow comments or use in web searches.

Getting the Update

The Preview of Quality Rollup is available via Windows Update, Windows Server Update Services, and Microsoft Update Catalog.

Microsoft Update Catalog

You can get the update via the Microsoft Update Catalog.

Product Version Preview of Quality Rollup KB
Windows 10 1703 (Creators Update) Catalog
4103722
.NET Framework 4.7, 4.7.1 4103722
Windows 10 1607 (Anniversary Update) Catalog
4103720
.NET Framework 4.6.2, 4.7, 4.7.1 4103720

Previous Monthly Rollups

The last few .NET Framework Monthly updates are listed below for your convenience:


Azure Cosmos DB customer profile: Jet.com

$
0
0

Jet.com powers an innovative e-commerce engine on Azure in less than 12 months!

Authored by Anna Skobodzinski, Aravind Krishna, and Shireesh Thota from Microsoft, in conjunction with Scott Havens from Jet.com.

This article is part of a series about customers who’ve worked closely with Microsoft on Azure over the last year. We look at why they chose Azure services and take a closer look at the design of their solution.

In this installment, we profile Jet.com, a fast-growing technology company with an e-commerce platform built entirely on Microsoft Azure, including the development and delivery of infrastructure, using both .NET and open-source technologies. To learn more about Jet.com, see the Jet.com video from the Microsoft Build 2018 conference.

Headquartered in Hoboken, New Jersey, Jet.com was cofounded by entrepreneur Marc Lore, perhaps best known as the creator of the popular Diapers.com e-commerce site that he eventually sold to Amazon. Jet.com has been growing fast since it was founded in 2014 and has sold 12 million products, from jeans to diapers. On August 8, 2016, Walmart acquired Jet.com, which is now a subsidiary of Walmart.com. Today the entrepreneur and his team compete head-on with Amazon.com through an innovative online marketplace called Jet.com.

Jet.com microservices approach to e-commerce

To compete with other online retailers, Jet.com continuously innovates. For example, Jet.com has a unique pricing engine that makes price adjustments in real-time. Lower prices encourage customers to buy more items at once and purchase items that are in the same distribution center.

For innovations like this, the technology teams need to be able to make changes to individual service components without having deep interdependencies. So they developed a functionality-first approach based on an event-sourced model that relies on microservices and agile processes.

Built in less than 12 months, the Jet.com platform is composed of open source software, Microsoft technologies such as Visual F#, and Azure platform-as-a-service (PaaS) services such as Azure Cosmos DB.

"We are forced to design systems with an eye on predicting what we’ll need them to do at 100x the volume."

- Jeremy Kimball: VP Engineering, Jet.com

Figure 1. The Jet.com technology stack mixes, third-party software, open source and Microsoft technologies and services on Azure, color-coded by technology.

Flexible microservices

If you’re running a large stack, you don’t want to take down all your service instances to deploy an update. That’s one reason why the core components of the Jet.com platform—for example, order processing, inventory management, and pricing—are each composed of hundreds of microservices. A microservice does one thing and only that one thing. This simplicity makes each microservice elastic, resilient, minimal, and complete, and their composability allows engineers to distribute complexity more evenly across a large system.

If Jet.com engineers need to change a service or add a feature, they can do so without affecting other parts of the system. The microservices architecture supports their fast-paced, continuous delivery and deployment cycles. The team can also scale their platform easily and still update services independently.

Event-sourced architecture

An event sourcing pattern is the basis for communication among Jet.com microservices. The conventional design pattern for a retail and order processing system typically stores the state of orders in a database, then queries the database to discover the current state and updates with the latest state. But what if a system also needs to know how it arrived at the current state? An event sourcing pattern solves this problem by storing all changes to an application state as a sequence of events.

For example, when an item is placed into a shopping cart at Jet.com, an order-processing microservice first places an event on the event stream. Then the inventory processing system reads the event off the event stream and makes a reservation with one of the merchants in the Jet.com ecosystem. The pricing engine reads these events, calculates pricing, and places a new event on the event stream, and so on.

"When we were building Jet's next-generation event sourcing platform, CosmosDB offered the low latency, high throughput, global availability, and rich feature set that are critical to our success."

Scott Havens: Director of Software Engineering, Jet.com

Figure 2. Event sourcing model drives the Jet.com architecture.

Events can be queried, and the event log can be used to reconstruct past states. Performing writes in an append-only manner via event sourcing is fast and scalable for systems that are write-heavy, in particular when used in combination with a write-optimized database.

Built for speed: the next-generation architecture

With ambitious goals and rapid growth, the Jet.com engineers always look for ways to enhance the shopping experience for their customers and improve their marketing systems. As their customer traffic grew, so too has the demand for scalability. So the team looked for efficiency gains in the inventory system that tracks available quantities of all SKUs from all partner merchants. This system also tracks the quantities held in reserve for orders in progress. Sellable quantities are shared with the Jet.com pricing engine, with the goal of minimizing reject rates from the merchants due to lack of inventory.

To make their services faster, smarter, and more efficient, Jet kicked off Panther, the next-generation inventory processing system. Panther needed to meet several important goals:

  • Improve the customer experience by reserving inventory earlier in the order pipeline.
  • Enhance insights for the marketing and operations teams by providing more historical data and better analytics.
  • Unify inventory management responsibilities typically spread across multiple systems and teams.

Jet’s existing inventory management system was not event sourced. The open source event storage system could not meet the latency and throughput requirements of inventory processing in an event-sourced manner at that scale. The Panther team knew that the existing storage system would not scale sufficiently to handle their ever-growing user base. Implemented as an infrastructure-as-a-service (IaaS) solution, its management was handled by the team as well. They wanted a solution that was easier to manage, supported high availability, offered replication across multiple geographic locations, and above all, performed well—backed up by a solid service-level agreement (SLA).

They prototyped a Panther system based on Azure Cosmos DB, a globally-distributed, massively scalable, low-latency database service on Azure. When the prototype showed promising results, they knew they had their solution. Panther uses a globally distributed Azure Cosmos DB for the event storage (sourcing) and event processing. Events from upstream systems are ingested directly into Azure Cosmos DB. The data is partitioned by the order ID and other properties, and their collections are scaled elastically to meet the demand. A microservice listens to changes that are written to the collections and emits new events to the event bus in the order they were committed in the database for the downstream services to consume.

Figure 3. The Panther inventory system dataflow. All services are written in F#.

View a larger version of this diagram.

Implementing event storage

Jet.com chose Azure Cosmos DB for Panther because it met their critical needs:

  • To serve a massive scale of both writes and reads with a low, single-digit millisecond latency.
  • To distribute their data globally, while serving requests locally to the application in each region.
  • To elastically scale both storage and throughput on demand, independently, at any time, and worldwide.
  • To guarantee the first-class, comprehensive SLAs for availability, latency, throughput, and consistency.

The event sourcing pattern requires a high volume of writes from the database, since every action or inter-service communication is handled through a database write command. Jet.com chose Azure Cosmos DB largely because, as a write-optimized database, it could handle the massive ingestion rates they needed.

They also needed the elasticity of storage and throughput to manage the costs carefully. Like most major retailers, Jet.com anticipates big peaks during the key shopping days, when the expected rate of events can increase 10 to 20 times compared to the norm. With Azure Cosmos DB, Jet.com can fine-tune the provisioned throughput on an hourly basis and pay only for what they need for a given week, or even hour, and change it at any time worldwide.

For example, during November 2016, Jet.com provisioned a single geo-replicated Azure Cosmos DB collection for order event streams with a configured throughput between 1 to 10 million request units (RUs) per second. The collection was partitioned using order ID as the partitioning key. Their access patterns retrieved events of a certain type for a certain order ID and the time range, so their service and database scaled seamlessly as the number of customers and orders grew. During Black Friday and Cyber Monday, the most popular shopping days, they were able to run with a provisioned throughput of 1 trillion RUs in 24 hours to satisfy their customer demand.

Low-latency operations

The event store running on Azure Cosmos DB was a mission-critical component for Panther. Not only was fast throughput essential, low latencies were an absolute requirement, and microservices had to operate smoothly. Azure Cosmos DB provides comprehensive, industry-leading SLAs—not just for 99.99 percent availability within a region and 99.999 percent availability across regions, but also for guaranteed single-digit-millisecond latency, consistency, and throughput. As a part of the latency SLA, Azure Cosmos DB guarantees that the database will complete more than 99 percent of reads under 10 ms and 99 percent of writes under 15 ms. On average, the observed latency numbers are substantially lower. This guarantee, backed up by the SLA, was an important benefit for Jet.com’s operations.

With rigorous latency objectives for the Panther service APIs, Jet.com actively monitored the 99th and 99.9th percentile request times for their APIs. These times were tied to the performance of the underlying NoSQL database calls. To meet these latency goals, the Jet.com engine had been exhausting its operational cycles to manage the scaling and configuration of their pre-Panther database. With the move to Azure Cosmos DB, the operational load has significantly lightened, so the Jet engineers can spend more time on other core services.

Turnkey global distribution

For Panther, operational and scalability improvements were important, but global distribution was mandatory. Azure Cosmos DB offered a turnkey solution that was a game-changer.

Most Panther services ran in the Azure East US 2 region, but the data needed to be available in multiple locations worldwide, including the Azure West US region, where it was used for less latency-critical business processes. The read region in the West had to be able to read the data in the order it was written by the write region in the East, with no more than a 15-minute delay. Azure Cosmos DB's tunable consistency models helped the team meet the needs of these scenarios with minimal code and guaranteed correctness. For instance, relying on the bounded staleness consistency model ensures that the 15-minute business requirement is met.
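
Bounded staleness, including the maximum lag window, is configured on the Cosmos DB account itself (through the portal, CLI, or an ARM template), so the consuming code mostly just states which replica it prefers to read from. A hedged sketch of what a West US consumer of a geo-replicated account might look like in Python (the names are illustrative, not Jet.com's):

from azure.cosmos import CosmosClient

# The account is assumed to be geo-replicated (write region East US 2, read
# region West US) and configured for bounded staleness at the account level,
# so reads in the West see writes in commit order within the staleness bound.
client = CosmosClient(
    "https://example-account.documents.azure.com:443/",
    credential="<primary-key>",
    preferred_locations=["West US"],
)
container = client.get_database_client("orders").get_container_client("order-events")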

Azure Cosmos DB change feed support

Jet.com engineers wanted to ingest events related to Panther into persistent storage in Azure Cosmos DB as soon as possible, and then consume those events from many microservices to perform other activities, such as making reservations with ecosystem merchants and updating an order's fulfillment status. For correct order processing, it was crucial that these events were processed exactly once, and in the order in which they were committed in Azure Cosmos DB.

To do this, the Jet.com engineers worked closely with the Azure Cosmos DB team at Microsoft to develop change feed support. The change feed APIs provide a way to read documents from a collection in the order in which they are modified. Azure Cosmos DB provides a distributed API that can be used to process the changes available in different ranges of partition keys in parallel, so that Jet.com can use a pool of machines to distribute the processing of events without having to manage the orchestration of work across the many machines.
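
The SDKs expose this as a change feed read; a minimal single-worker sketch in Python is below. A real deployment would persist the continuation token and spread partition key ranges across a pool of workers, as described above, and the downstream publish step is only hinted at here:

from azure.cosmos import CosmosClient

client = CosmosClient("https://example-account.documents.azure.com:443/", credential="<primary-key>")
container = client.get_database_client("orders").get_container_client("order-events")

# Iterate over documents in the order they were modified, starting from the
# beginning of the feed; each item is the latest version of a changed document.
for change in container.query_items_change_feed(is_start_from_beginning=True):
    # In Panther this is where the event would be re-emitted to the event bus
    # for downstream microservices to consume.
    print(change["id"])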

Summary

Teams often take a lift-and-shift approach to the cloud using the technologies they’ve used before. To gain the scale they needed affordably, Jet.com realized that they had to use products designed for the cloud, and they were willing to try something new. The result was an innovative e-commerce engine running in production at massive scale and built in less than 12 months.

Within just a few weeks, a prototype based on Service Fabric proved that Panther could support the massive scale and the functionality Jet needed plus high availability and blazing fast performance across multiple regions. But what really made Panther possible was adding Azure Cosmos DB for the event store. Coupling an event-sourcing pattern with a microservices-based architecture gave them the flexibility they needed to keep improving Jet.com and delight their customers.

Certified Kubernetes Administrator (CKA) CNCF exam preparation resources that I found useful


Just wanted to share some of the resources that I found particularly useful in preparing for the Certified Kubernetes Administrator (CKA) exam by the Cloud Native Computing Foundation (CNCF) and The Linux Foundation.

I had the pleasure of taking the exam a few days ago. It was fun, challenging, and rewarding!

Good luck upskilling, practicing, and acing the CKA exam!


Please leave questions below or on Twitter https://twitter.com/ArsenVlad

Windows 10 Docker & GUI


Recently, whilst working with YOLO on Docker, I received the following message:

Gtk-WARNING **: cannot open display:

To resolve this, I turned to Xming X Server, an X11 display server for Microsoft Windows.  Once installed, simply add your IP address to the C:\Program Files (x86)\Xming\X0.hosts file, e.g.:

localhost
192.168.0.5

Then run Xming.  It'll sit in the System Tray and say "Xming Server:0.0"

Then, from within PowerShell or the VS Code terminal, set a DISPLAY variable to your IP address (PowerShell expands $DISPLAY when it is passed to docker run below), e.g.:

PS> set-variable -name DISPLAY -value 192.168.0.5:0.0

Then run your docker container:

PS> docker run -it --privileged -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix daltskin/darknet bash

If you don't have a Docker image to test this on, you can try it with Firefox, e.g. with this Dockerfile:

FROM ubuntu:16.04
RUN apt-get update && apt-get install -y firefox
CMD /usr/bin/firefox

Save the above as a file named Dockerfile, then run the following commands (changing the IP address, obvs):

PS> set-variable -name DISPLAY -value 192.168.0.5:0.0
PS> docker build -t firefox .
PS> docker run -it --privileged -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix firefox

Then wait for the UI magic 🙂

How Azure IoT helped me buy a new house – Part 1


Premier Developer Consultant, Steve St Jean, shares a personal story on how he used Azure IoT to figure out a solution to a problem that many of us face – high electric bills. In the series, Steve shares the process and code that he used to implement this solution.


Telemetry data is an important component of any good DevOps process. Being able to observe and capture the data from a running system opens up a lot of opportunity to improve the system. I had a simple problem at home that telemetry could help me solve. With this in mind, I over-engineered a solution to the question, "Why is my office always so hot?".

The solution involved designing a sensor rig around an Espressif ESP8266 microcontroller development board running the ESP8266 core for Arduino, a BME280 integrated environmental sensor, and a simple photocell. The ESP8266 has built-in WiFi, which I connected to my local network to send the temperature, barometric pressure, humidity, and light level to Azure IoT Hub. From there, I configured Azure Stream Analytics to process the incoming data and send it to Power BI for display and review.
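
The real device code in Steve's series runs on the ESP8266 itself (Arduino C++), but the shape of the telemetry loop is easy to see in a hedged Python sketch using the azure-iot-device package. The connection string and field names are assumptions, and the random values stand in for the BME280 and photocell readings:

import json
import random
import time

from azure.iot.device import IoTHubDeviceClient, Message

# Hypothetical device connection string copied from the IoT Hub portal.
CONNECTION_STRING = "HostName=<hub>.azure-devices.net;DeviceId=office-sensor;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)

while True:
    # On the real rig these values come from the BME280 and the photocell.
    reading = {
        "temperatureC": round(random.uniform(22.0, 30.0), 2),
        "humidityPct": round(random.uniform(30.0, 60.0), 2),
        "pressureHPa": round(random.uniform(1000.0, 1025.0), 2),
        "lightLevel": random.randint(0, 1023),
    }
    client.send_message(Message(json.dumps(reading)))
    time.sleep(60)  # one reading per minute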

After realizing that there was no impact on the temperature based on my presence or absence, that the a/c was never turning off, that we were paying the electric bill, and that this house was a rental so we had no ability to fix it, we decided to build a new, energy-efficient house in a new neighborhood. Now my office is quite comfortable and my electric bill is about half of what it was in the rental. IoT FTW (for the win)!

Read this blog post to understand Steve’s process and code used to implement this solution.

What is API Management Context Request MatchedParameters?


The API Management public documentation for the request context variable usable in policy expressions at https://docs.microsoft.com/en-us/azure/api-management/api-management-policy-expressions#ContextVariables declares a property MatchedParameters: IReadOnlyDictionary<string, string>, which is a bit ambiguous. Here is some insight learned from Vitalli Kurokhtin.

The MatchedParameters collection captures (and only captures) parameters that are present in the operation's UrlTemplate. Those are path and/or query parameters, but only the query parameters that appear in the template. So if you have a template such as:

/user/{uid}/credentials?include={include}

And make a request such as:

/user/1/credentials?include=password&hash=true

MatchedParameters will have:

uid:1
include:password

Url.Query, on the other hand, will have:

include:password
hash:true

See the operation contract's UrlTemplate parameter at https://docs.microsoft.com/en-us/rest/api/apimanagement/apioperation/get#operationcontract
