
Resource move validation failed. Cannot move Azure App Service


The scenarios in which you can and cannot move an Azure App Service are nicely documented in “App Service limitations”.  I also wrote my own article on the topic, “Moving an Azure App Service”.

It is possible to move your Azure App Service between subscriptions and resource groups.  As I have stated many times, though, you cannot physically move an App Service Plan or an App Service with a click (unless they remain in the same tenant).  Remember the logical hierarchy: Subscription –> Resource Group –> App Service Plan –> App.  The App Service Plan and App have both a physical/virtual relationship and a logical one, which is why they simply cannot be moved by a clickity-clack, don’t talk back.

One of the scenarios I was asked about is the following:

“App Service resources can only be moved from the resource group in which they were originally created. If an App Service resource is no longer in its original resource group, it must be moved back to that original resource group first, and then it can be moved across subscriptions.”

The question was how to see if the App Service was previously in a different resource group and which one it was in.

If this happens in real time, the error details explain it for you.  Here is what I got when I reproduced the error:

“Cannot move resources because some site(s) are hosted by other resource group(s) but located in resource group 'MOVEIT-MOVEIT-AGAIN'. The list of sites and corresponding hosting resource groups: 'sitename1:PARKED-DB3-001,sitename2:PARKED-DB3-001,sitename3:PARKED-DB3-001,sitename4:PARKED-DB3-001,sitename5:PARKED-DB3-001,sitename5:PARKED-DB3-001'. This may be a result of prior move operations. Move the site(s) back to respective hosting resource groups and try again. (Code: BadRequest, Target: Microsoft.Web/serverFarms)”

But since I was asked the question, here is the answer I have.  The simplest way I could see the current and previous resource group was with the Azure PowerShell cmdlet Get-AzureRmResource, as shown below; the output is shown in Figure 1.  The current resource group name is the value under the ResourceGroupName heading, and the original resource group is under the Tags heading.

Get-AzureRmResource -ResourceId "/subscriptions/<SUB-ID>/resourceGroups/<RG-NAME>/resources" |
    Select-Object Name, ResourceType, ResourceGroupName, Tags


Figure 1, how to see original Resource Group of an Azure App Service

You can also see the same information in Resource Manager, but it is not as friendly; see Figure 2.


Figure 2, how to see original Resource Group of an Azure App Service

Now you know and so do I.


Final year project using Microsoft HoloLens and Self-Attachment Therapy


Guest post by Nana Asiedu-Ampem, Master of Engineering (MEng) student, Computing degree, Imperial College London



Introduction

Psychotherapy is a form of treatment aimed at improving the mental health of people suffering from mental illnesses such as depression and anxiety disorders. These kinds of treatment usually involve a patient suffering from a mental illness discussing their feelings and emotions in a way that allows them to be better equipped to control them. This project focuses on a type of psychotherapy known as Self-Attachment Therapy (SAT) (1) with the aid of Augmented Reality (AR).

Mental illness is often linked to a person's lack of secure attachment during their childhood (e.g. uncaring and unreliable parents) which has affected their ability to form meaningful connections that can help them overcome their illness. Self-Attachment Therapy aims to help patients suffering from mental illness form a secure attachment with themselves so they become more self-reliant when it comes to controlling their emotions. The patient initially takes the role of the "adult-self" who is perceived as a primary caregiver to an imaginary child that embodies the person's "inner-child". The adult-self represents the rational part of the patient as well as their strengths. The inner-child is a representation of the patient's emotions, insecurities and vulnerabilities. It is up to the patient, who is imitating the primary caregiver for the imaginary inner-child, to comfort and show affection towards the inner-child to make them feel loved and valued. This is mainly done through the patient verbally complimenting their inner-child. Through the process of the patient comforting their inner-child, they form an important bond with them that they can use to overcome their troubles.

The project involves building an application for the Microsoft HoloLens that guides a patient through Self-Attachment Therapy by letting them view their inner-child as a 3D avatar model. The avatar is perceived as a hologram placed at a position chosen by the application in the space around them. The hologram displays several key features of a child, such as a face, hair and body. The patient can view the emotions of the avatar as well as interact with it through speech. The avatar initially shows a sad emotion and frowns, but through the interactions with the patient it can become happy and smile. This is part of the process of the patient forming a bond with their inner-child. The goal of the project is to provide a proof of concept for an application that can significantly improve the patient's engagement with SAT by using technology to make them feel more immersed in a safe environment where they can improve their mental health.

How it was implemented

The application was developed on the Unity game engine, using the C# programming language for its scripts. The app made extensive use of the Microsoft Mixed Reality Toolkit. A 3D model of a generic child was created in Blender and then imported into Unity so it could be used as the holographic representation of the patient’s inner-child. The model was created by a colleague who was working on a similar project that creates child models based on the patient’s features.


The model has a number of animations that allow it to express various emotional states, such as happy, sad and scared. It also has an animation that lets it perform a dance.

SAT comprises four stages that the patient has to perform to form a loving connection with their inner-child (see the reference for more details on the stages (1)). To help the patient feel more immersed in the therapy, the inner-child was given spatial awareness of the patient’s location and surroundings, meaning the application places the inner-child in an open space in the patient’s room with its shoes positioned on the floor. This was achieved by having the HoloLens scan the patient’s environment using the device’s many sensors and the spatial mapping library provided by the Mixed Reality Toolkit. This allowed the application to create an internal representation of the room’s floor, which is used to place the inner-child on it. In addition, 3D meshes matching various objects in the patient’s room (such as tables and chairs) were created. Unity uses ray casting to test collisions with these objects within the application, which was used to avoid placing the inner-child in positions obstructed by objects (e.g. a clear open space in the room is chosen, as opposed to the middle of a table). The inner-child also constantly turns to face the patient. These features were implemented to make the inner-child feel like a more realistic addition to the user’s environment, which makes it significantly easier for the patient to perform the therapy stages. A rough sketch of this placement logic is shown below.
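To illustrate, here is a minimal sketch of that placement approach (my own illustration, not the project’s actual code — only the Unity APIs used, such as Physics.Raycast and Physics.CheckSphere, are real; the class and field names are hypothetical):

using UnityEngine;

public class AvatarPlacer : MonoBehaviour
{
    public GameObject innerChildAvatar;   // the imported child model
    public float clearanceRadius = 0.5f;  // open space required around the avatar

    // Cast a ray down onto the spatially mapped meshes; reject obstructed spots.
    public bool TryPlace(Vector3 candidateFloorPosition)
    {
        Vector3 origin = candidateFloorPosition + Vector3.up * 2.0f;
        if (Physics.Raycast(origin, Vector3.down, out RaycastHit hit, 2.5f))
        {
            // Require clearance so the avatar is not placed inside furniture.
            if (!Physics.CheckSphere(hit.point + Vector3.up * clearanceRadius, clearanceRadius))
            {
                innerChildAvatar.transform.position = hit.point;
                return true;
            }
        }
        return false;
    }

    void Update()
    {
        // Rotate the avatar around the vertical axis so it keeps facing the user.
        Vector3 toUser = Camera.main.transform.position - innerChildAvatar.transform.position;
        toUser.y = 0f;
        if (toUser.sqrMagnitude > 0.001f)
            innerChildAvatar.transform.rotation = Quaternion.LookRotation(toUser);
    }
}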

How it will be used

Self-Attachment Therapy was developed by the project supervisor, Abbas Edalat. He created the project to research how well SAT works when combined with immersive technology, and this project will be used in his research to help improve the process of the therapy.

Therapists will use the application with the HoloLens to help explain the process of SAT to their patients. Patients will be able to read vital information regarding the therapy through holographic boards that follow their line of sight. This makes the therapy much easier to administer, as the patient already has access to all the important information through the application itself. Originally, SAT required patients to close their eyes and imagine that a younger version of themselves, embodying their inner-child, was in front of them. Some patients struggle to consistently imagine a realistic inner-child, which hinders their ability to form a meaningful connection with it, and therapists find it difficult to demonstrate how the patient's inner-child should look and how it should react during each stage of the therapy. The HoloLens alleviates these concerns for both therapist and patient by visually providing the patient with a representation of their inner-child. This makes it much easier for the patient to perform the therapy without having to constantly imagine the child they are interacting with, and the therapist no longer has to dictate the exact action the patient or inner-child has to perform at each stage.

Future projects

There is a lot of room for future extensions that would enhance the impact of the therapy. The application could use the Internet to let a patient incorporate content they enjoy into the therapy. For example, part of the therapy has the patient sing a soothing song to relax their inner-child; the Internet could be used to play an instrumental version of a song the user enjoys to help them through the process. The therapy could also benefit from a more comprehensive spatial mapping process. Currently, the room is scanned once before the therapy begins; this could be updated so the application scans the patient’s environment in real time, giving it a more detailed representation of the patient’s surroundings. This could then be used to augment the user’s room further by adding holograms that interact with real-world objects. For example, the patient could download a photo they like from the Internet and turn it into a photo frame they can place on their wall. These extensions are intended to make the experience of performing the therapy with the HoloLens more enjoyable, thus improving the patient’s motivation to carry it out.


References

1: Self-attachment: A self-administrable intervention for chronic anxiety and depression https://www.doc.ic.ac.uk/research/technicalreports/2017/DTRS17-3.pdf

Face Recognition, API code and more: the Friday Five is back!


Starting with Angular and Microsoft's Face Recognition API

Fabian Gosebrink is a professional software engineer, Microsoft MVP, and Microsoft Technology Ambassador in Switzerland. He is also a Microsoft Certified Specialist in web application development and a regular speaker at Microsoft events in Switzerland. He helps companies and projects build web applications with AngularJS, Angular2, ASP.NET, ASP.NET Core, and all the build tools around them. Fabian is very into new technologies and helps grow his community by leading the biggest German-speaking C# forum, “mycsharp.de”. Follow him on Twitter @FabianGosebrink.

Are static methods faster in execution compared to instance methods (.NET)?

Jiří Činčura is an independent developer focusing on data and business layers, language constructs, parallelism and databases — specifically Entity Framework, asynchronous and parallel programming, cloud and Azure. He’s a Microsoft MVP, and you can read his articles, guides, tips and tricks at www.tabsoverspaces.com. Follow him on Twitter @cincura_net.

Cleaner API Code with ResultFilter and ValueTuple

Kevin Dockx is a freelance solution architect, Pluralsight author & consultant, living in Antwerp (Belgium). These days he's mainly focused on RESTful architectures & security for web applications and mobile applications. That said, he still keeps an eye out for new developments concerning other products from the Microsoft .NET (Core) stack. Follow him on Twitter @KevinDockx.

Windows 10 IoT Core for Raspberry Pi 3 Model B+

Jiong Shi has been a Microsoft MVP for nine years and is interested in Windows 10 IoT Core, Windows Embedded, Azure IoT and UWP. He is an associate professor at the School of Computer and Electronic Engineering, Zhejiang Wanli University, China. He is the author of the book “Windows 10 IoT Application Developer Guide”, a blogger, and a speaker for Ignite China. Jiong is active in the local developer community, speaking at technical conferences, writing articles on Hackster, contributing to Windows IoT Core open-source projects, and serving as a mentor and Microsoft Community Contributor. Follow him on Twitter @dearsj001.

Microsoft Cognitive Toolkit: Installation On Windows 7 

Asma Khalid is a technical evangelist, technical writer and fanatic explorer. She enjoys doodling with technologies, writing stories and sharing her knowledge with the community. Her core domain is software engineering, but she is also experienced in product management, monitoring, implementation, execution and coordination. She is the first woman from Pakistan to receive the Microsoft Most Valuable Professional (MVP) recognition, and the first to receive the C# Corner online developer community MVP recognition. She has 6+ years of experience as an IT professional, freelancer and entrepreneur, and is currently working on her entrepreneurial venture, AsmaKart. Follow her on Twitter @asmak.

 

Introduction to Data Science using F# and Azure Notebooks


Guest post by Nathan Lucaussy, Microsoft Student Partner at Oxford University.


Introduction to Data Science using F# and Azure Notebooks - Part 1: Functional Programming Basics via Plotting and Genetic Algorithms

Hello! I hope you'll enjoy the following blog post - it details the particular kind of algorithm that got me so excited about studying Computer Science in the first place! I'm Nathan, a second-year student reading Computer Science and Philosophy at the University of Oxford. I'm mainly interested in Machine Learning, Algorithms and Concrete AI Safety (ensuring AI algorithms do as they're told) - but in my spare time I enjoy playing the guitar and travelling. Reach me on LinkedIn: https://www.linkedin.com/in/nathan-lucaussy-60a16816a/

In this blog post we'll get used to F#'s functional style before deploying it for some data analysis in Azure Notebooks: our main task will be modelling the temperature of London over 3 years. We'll start off by plotting a time series and cleaning the dataset. This will introduce us to

  • the functional concept of Higher-Order Functions
  • F#'s type providers
  • the XPlot charting package
  • F#'s package dependency manager.

In the second half of this post, we will look at devising a Genetic Algorithm for fitting a sinusoidal curve to temperature data. By devising this regression algorithm, we'll be doing some Data Science, but we'll also be looking at important F# features:

  • functional recursive functions
  • in-detail pattern matching
  • wholemeal programming

We will do all of this using Azure Notebooks (in fact, in this very notebook!). Azure Notebooks is a free, online platform in the Microsoft Cloud providing an interactive development environment in F# but also Python and R. Its interactivity is particularly useful for data analysis, allowing instant visualisation of results.

A/ Cleaning the data and plotting the temperatures in XPlot.Plotly

The most widely used library for data science in F# is FsLab. It provides a number of packages, of which we will use:

  • FSharp.Data - gives access to data expressed in structured file formats, in this case a .csv file
  • XPlot.Plotly - builds interactive charts for data visualisation

Azure Notebooks natively supports Paket (the dependency manager for .NET's - and by extension F#'s - package repository NuGet). Follow the steps below to load the required packages directly from the NuGet repository:

1) #load "Paket.fsx" enables Paket within the Azure Notebooks environment.

2) Paket.Dependencies.Install """ ... """ adds the dependencies from the NuGet repository

3) Paket.Package ["FsLab"] generates dependencies for the downloaded packages

4) #load "Paket.Generated.Refs.fsx" to perform the actual referencing

Finally, we use the "open" keyword to open a namespace to the Notebook environment - much like Python's import.

#load "Paket.fsx"  
Paket.Dependencies.Install """  
frameworks: net45 
source https: //nuget.org/api/v2  
nuget FSharp.Data 
nuget XPlot.Plotly 
"""  
Paket.Package["XPlot.Plotly"  
      "FSharp.Data"]
#load "XPlot.Plotly.Paket.fsx"
#load "XPlot.Plotly.fsx"
#load "Paket.Generated.Refs.fsx"  
open System 
open FSharp.Data 
open XPlot.Plotly    

A.1 Preparing and cleaning the data

We now need to load the weather data from the file Condition_Sunrise.csv. This is the data we will perform our analytics on, and it is where F# really shines: F# offers type providers, an extremely efficient way to parse data and metadata from structured sources into an F#-intelligible schema. We make use of the CSV type provider.

The following type declaration:

  • creates a schema
  • infers types automatically from the CSV columns
  • retrieves column names from the first header row.

type Weather = CsvProvider<"/home/nbuser/library/Condition_Sunrise.csv">

F# specific: We are introduced to another of F#'s functional features: let bindings. In imperative languages, variables are bound to a memory address in which a value is placed - that value may then be replaced with another. In F#, let bindings bind an identifier to a value or function - they are immutable. Related to this is the fact that, when used functionally, F# treats instructions as expressions. That there is no real notion of state in a functional program makes it much simpler to reason about program semantics.

We may now load the data from our CSV file - this is done by loading into an instance of the type given by the type provider:

let my_data = Weather.Load("/home/nbuser/library/Condition_Sunrise.csv")  

The object created is a schema which may be converted to an iterable sequence by calling Rows.

F# specific: Notice the |> infix operator (pronounced pipe forward). It passes the value on its left as an argument to the function on its right. This operator makes a huge difference to the readability of code, especially when performing data transforms on arrays.

Azure Notebooks' interactivity means we can read the first row of our CSV data. Immediately, we observe that some of the data will be of no use to us - we only need the data in the first two columns: DateTime and Temp.

let first_row = my_data.Rows |> Seq.head
first_row
Out: ("December 12, 2012 at 07:07AM", 29, "Partly Cloudy", 46, 30)

To select the required data we use an array comprehension, creating pairs of elements comprising only the date and temperature.

let data_array = [| for row in my_data.Rows -> (row.DateTime, row.Temp) |]

We can lighten the array even further. Because the first column gives the time at sunrise and the second column gives the temperature at sunrise, we can remove the time portion from each date string.

To do this, we partition the string around the keyword 'at' and take the initial portion of the split, using the Array.head function.

let removeTimeFromDateString (str: string) = str.Split([| " at " |], StringSplitOptions.None) |> Array.head

F# specific: We now have an array of tuples, each of which has a string as its first element. But this string represents a date! How do we parse it into machine-understandable time? Here F#'s .NET Framework integration proves very useful: the DateTime type provides a Parse function that correctly parses our date format. ToOADate then converts DateTime objects into a numerical date format, from which we subtract the first date to make the numbers more manageable, i.e. starting from 0.

F# specific: In functional languages, Higher-Order Functions are prevalent. These are functions that either take functions as arguments or return functions (or both). We have already met one: |> (pipe forward). Note below the use of Array.map - it applies a function to every single element of an array.

F# specific: Lambdas, or anonymous functions, are often used in functional languages. They act like regular functions except that they are unnamed - the syntax for defining a lambda in F# is, for example: fun x -> 2*x. Below, an anonymous function is used as the argument to Array.map.

let pruned_array =
    Array.map (fun (x: string, y: int) ->
        ((x |> removeTimeFromDateString |> System.DateTime.Parse).ToOADate() - 41255.0, y)) data_array
let date_values = pruned_array |> Array.map fst
let temp_values = pruned_array |> Array.map snd

A.2 Plotting temperatures as a function of time using XPlot.Plotly

Since XPlot.Plotly's namespace is open in our environment, we can now create XPlot objects. We choose to create a Scatter object (corresponding to the data organisation of a scatter plot) because we have more than 1000 data points, and a histogram, for example, would hinder clarity.

Note that the x and y series are passed in as arrays.

let trace1 = Scatter(x = (pruned_array |> Array.map fst), y = (pruned_array |> Array.map snd), name = "Temperatures")

Azure Notebooks allows for wonderful inline plotting:

trace1 |> Chart.Plot

(Inline Plotly chart of the London temperature time series.)

B/ Using the Genetic Algorithm to fit a sine curve line to a periodic phenomenon: temperature

With these basics in place, we can start the regression process. Ultimately, we aim to produce a line of best fit giving temp_values as a function of date_values.

B.1 The Genetic Algorithm

The genetic algorithm is used to solve optimisation problems by mimicking the process of evolution. Given a candidate solution - an Individual with particular characteristics (usually called Traits) - we may generate a Population of Individuals, each differing slightly from the source Individual in its Traits through randomness. Having devised a way of ranking Individuals in a Population (Fitness), we perform a crossover of the best Individuals with the rest of the Population, adding in some randomness to escape local maxima. Each generation improves on the previous one, hence approximating an optimal solution.

For a graphical illustration of the Genetic Algorithm process, the following video, where a genetic algorithm learns to walk, may be of interest: https://youtu.be/xcIBoPuNIiw

· In our case, the individuals are four-tuples of Double values, for which we create a type Individual: they correspond to the values (a,b,c,d) in the family of functions of the form a×(sin(b×x+c))+d. Such tuples capture all possible sinusoidal functions.

· We devise additional types: a Population will be a list of Individuals, and Parents a pair of Individuals.

When devising this large piece of code, which will ultimately run as a single function, we will use a design heuristic called wholemeal programming: never once will we look at the individual data contained within the data arrays or the list of individuals. Instead, we will repeatedly apply functions to the whole of the population. This kind of programming is distinctly functional.

type Individual = double * double * double * double 
type Population = Individual list 
type Parents = Individual * Individual  

Crucial to the genetic algorithm is a function that inserts randomness at various stages of the process - the following adds or removes up to 10% of a Double value:

let addTenPercRandom (random_gen: Random) (x: double): double = x * ((double (random_gen.Next(-100, 100)) / 1000.) + 1.)

We also need a higher-order function that applies a function f to every element of our 4-tuple individuals. You might recognise this as a map, and indeed it is the natural map on the tuple structure.

let tupleMap (f: Double -> Double) (w, x, y, z) = (f w, f x, f y, f z)

B.2 Building the initial population

Because Genetic Algorithms are prone to getting stuck at local maxima, it is often useful to introduce a guess for the starting individual. We build our first population's generation around this individual.

  • The function makeIndividual creates a single individual with some randomness around a guess individual

F# Specific: When writing programs in a functional style, we aim to avoid using loops (indeed, in purely functional languages like Haskell, it is very hard, near impossible, to do so). The functional alternative is using recursion. The let rec keyword instructs the compiler that this is a recursive function. We pass a parameter count which is decreased at each iteration, until we reach a base case, 0.

F# Specific: To handle the base case and recursive cases differently, we use another distinctively functional feature: pattern matching. The match keyword compares the value of size with 0 or any other integer, defaulting to the empty list in the 0 case and building the list recursively when non-zero using the :: (cons) operator, which prepends an element to a list.

let makeIndividual (random_gen: Random) (guess_tuple: Individual): Individual =
    tupleMap (addTenPercRandom random_gen) guess_tuple

let rec makePopulation (random_gen: Random) (size: int) (guess_tuple: Individual): Population =
    match size with
    | 0 -> List.empty
    | n -> (makeIndividual random_gen guess_tuple) :: makePopulation random_gen (size - 1) guess_tuple

B.3 Evaluating the fitness of an individual

The second ingredient of the Genetic Algorithm is a function that evaluates the performance of an individual at the given task - called a fitness function.

In our case, it consists of approximating the temperature values - for this we create a function that calculates the value of the sinusoid for a given date and a given individual. This is the purpose of findSinValue.

Fitness will be a least-squares measure: the sum of the squares of the differences between the simulated value and the actual temperature value, across the temperature value array.

We define the type Result as a record of the Individual and the Fitness of that given individual - so that for clarity they are paired up.

The function simulate is defined in a very functional way:

  • findSinValue for a given individual is mapped over every value in the date array
  • the anonymous binary function (fun x y -> ((x - double y)**2.)) computes the squared differences
  • the higher-order function Array.map2 (analogous to Haskell's zipWith) applies a binary function to the values of two arrays in index order; it applies the anonymous function above to elements from the array of simulated sine values and the temperature values in turn
  • finally, Array.sum sums the squares
type Result = { Fitness: Double; IndividualTested: Individual }

let findSinValue (a, b, c, d) (x_val: Double): Double = a * (sin ((b * x_val) + c)) + d

let simulate (individualTested: Individual): Result =
    let chiSquared =
        Array.map2 (fun x y -> (x - double y) ** 2.)
                   (date_values |> Array.map (findSinValue individualTested))
                   temp_values
        |> Array.sum
    { Fitness = chiSquared; IndividualTested = individualTested }

B.4 Evolving the next generation

Given a previously generated population, how do we obtain a new, improved generation?

We devise a mechanism for crossing-over two individuals' traits. For balance, each parent gives half of the traits - there are thus 6 ways to arrange the traits. The merge function gives one of these ways depending on the number passed to it as argument.

The crossOver function selects a random way to merge parents' traits by passing one of six random numbers to the merge function.

We can now cross over individuals. For a whole population, we first extract the top-ranking half of the population by:

  • sorting individuals by fitness
  • taking the top half
  • extracting the individuals from the Result record

This is done by composing the three functions List.sortBy, List.take, List.map.

From the top half we extract the best two individuals:

  • they are immediately added to the next generation so as not to lose the top-performing individuals from each generation
  • the rest of the population is crossed-over with both the top individual and the second best, using the higher-order function map.
  • this newly-formed portion is then mutated using a mutation function, in our case 10% randomness.

This process yields a new generation that is at least as good as the previous one.

let rng = Random()

let merge n (a, b, c, d) (a', b', c', d') =
    match n with
    | 0 -> (a, b, c', d')
    | 1 -> (a', b', c, d)
    | 2 -> (a', b, c', d)
    | 3 -> (a, b', c, d')
    | 4 -> (a', b, c, d')
    | 5 -> (a, b', c', d)
    | _ -> raise (System.ArgumentException("There are only six cases!"))

let crossOver (parents: Parents): Individual =
    let randomCrossingOrder = rng.Next(6)
    merge randomCrossingOrder (parents |> fst) (parents |> snd)

let generateNextPopulation (mutatePopulation: Population -> Population) (crossOver: Parents -> Individual) (results_generation: Result list): Population =
    let best_individuals =
        results_generation
        |> List.sortBy (fun result -> result.Fitness)          // sort in order of ascending squares-sum
        |> List.take ((results_generation.Length + 2) / 2)     // take the best half of the generation
        |> List.map (fun result -> result.IndividualTested)    // retrieve individuals from the Result records
    best_individuals
    |> function
       | head :: second :: tail ->
           head :: second ::
               mutatePopulation
                   ([ tail |> List.map (fun individual -> crossOver (head, individual));
                      tail |> List.map (fun individual -> crossOver (second, individual)) ]
                    |> List.concat)
       | _ -> raise (System.ArgumentException("Population not large enough to crossover!"))
 

B.5 Repeating the simulation over 100 generations

Before running the evolution 100 times, we need to set starting parameters:

· As population size, we choose 1000 individuals so that there is sufficient variety of traits, including outliers, which helps avoid local maxima.

· As a starting guess, we use basic mathematical properties to find start values by inspection: a is the amplitude of the periodic pattern, b is 2π ÷ period (since this is a yearly phenomenon, we estimate the period to be approximately 365 days), c is the phase shift and d is the vertical shift.

· The repeatSimulation function is a natural candidate for recursion, taking the new generation's population each time. We obtain the fitness of each individual using the previously defined simulate function. We find the best individual via the minimum squares-sum at each generation; for visibility, this is printed each time. Finally, when we reach the last generation, the best individual is returned.

Notice how this last function uses only previously defined functions to manipulate data without ever looking at it. This is 'wholemeal programming'.

let starting_guess = (30., 0.015, -20., 50.)
let starting_size = 1000
let starting_population = makePopulation rng starting_size starting_guess

let mutation (pop: Population): Population =
    pop |> List.map (tupleMap (addTenPercRandom rng))

let rec repeatSimulation max round_number pop =
    match round_number with
    | count when count = max -> pop |> List.head
    | count ->
        let generation_results = pop |> List.map simulate
        let best = generation_results |> List.minBy (fun result -> result.Fitness)
        printfn "Best fitness in this generation: %A" best.Fitness
        generation_results
        |> generateNextPopulation mutation crossOver
        |> repeatSimulation max (count + 1)
 
Running repeatSimulation 100 0 starting_population yields:
Best fitness in first generation: 155707.3092

Best fitness in 100th generation: 92311.57632

Conclusion: Results

Result Individual: (20.61520275,0.01711181459,−21.17986916,47.20517328)

We obtain a close solution: the sum of squares is 92048.62, which translates to a root-mean-square error of approximately 7.5. Given that the data has high variance with respect to a least-squares fit of a sinusoidal curve, this is a very good result.

To illustrate the fit, let's go back to the original graph. We can compute the values for our sinusoidal model and overlay it on the Plotly Chart:

(Chart: the fitted sinusoidal model overlaid on the temperature scatter plot.)

It is safe to say we have observed some of F#'s most impressive capabilities, notably its interoperability with the .NET Framework, its type providers (FSharp.Data - they also give access to R and Python packages) and its strong type system. These features make it particularly well suited to data analysis. Note that the genetic algorithm is a fun way to learn about functional programming because it is simple to understand; however, it is not very efficient. Watch this space for presentations of more efficient machine learning algorithms in F# using Azure Notebooks!

Try this in Azure Notebooks

Azure Notebook Version:

https://notebooks.azure.com/anon-ioqeiw/libraries/FSharpAzureDataScience/html/Intro_to_FSharp_AzureNotebooks.ipynb

Building an Azure Event Grid app. Part 1: Event Grid Topic and a .NET Core custom app event publisher.


In order to have a sample “data feed” for an application, I wanted to use Azure Event Grid to create a topic that events could be published to, and that clients could subscribe to in order to process the events.

The generic scenario is that of alarms (of whatever sort you’d imagine; car, house, IoT device) for which the events represent status updates.

This first post is about getting started; in my next posts I will cover consuming the events in an Azure Logic App and turning the custom event publishing app into a Docker image.

Creating an Event Grid Topic

The first step is to create the Event Grid Topic. As Event Grid is serverless, this is as easy as you might expect (no thinking about infrastructure, sizing or scaling). The step-by-step how-to is here, but you simply give it a name and a resource group and choose the region to run it in. Wait a few seconds and that’s it — the Topic is created:

Creating a publisher (custom app)

I wanted to create an app that would keep generating events, representing alarms and their status, and publishing these to the Event Grid Topic created above.

Publishing an event is simply performing an HTTP Post, and although I chose to implement this in .NET Core, you could choose almost any language.

The data object in an Azure Event Grid event is a JSON payload, and therefore I started by thinking about what that data would consist of:

For my purposes I wanted the alarm to include the device id, an image sent by the alarm (in fact a URL to an image in blob storage), the location of the alarm (longitude and latitude) and the status (green, amber, red).
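As a minimal sketch, that data could be modelled as a class like the following (the property names are my illustration, chosen to match the JSON schema used in Part 2 of this series, not necessarily the exact project code):

public class AlarmData
{
    public int deviceId { get; set; }
    public string image { get; set; }     // URL to an image in blob storage
    public double longitude { get; set; }
    public double latitude { get; set; }
    public string status { get; set; }    // "green", "amber" or "red"
}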

To create the app I simply used

dotnet new console

and then started editing it in Visual Studio Code

code .

A good reference for getting started with exactly this can be found here.

The key point I’d call out is that if you’re used to the .NET Framework HttpClient, the .NET Core HttpClient doesn’t have a PostAsJsonAsync method. Instead, what I ended up doing was creating a class as follows:

public class JsonContent : StringContent
{
    public JsonContent(object obj) :
        base(JsonConvert.SerializeObject(obj), Encoding.UTF8, "application/json") { }
}

Then call the PostAsync method, passing in the JSON content:

_client.PostAsync(_eventTopicEndpoint, new JsonContent(alarmEvents));

This requires that the Newtonsoft.Json package is added to the csproj file:

<PackageReference Include="Newtonsoft.Json" Version="9.0.1" />

At its core, publishing to an Event Grid Topic is then:

Set the headers including the Event Grid Topic key:

_client.DefaultRequestHeaders.Accept.Clear();

_client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

_client.DefaultRequestHeaders.Add("aeg-sas-key", _eventAegSasKey);

Create an event with a payload of an object that matches the JSON schema you want:

AlarmEvent alarmEvent = new AlarmEvent {
    topic = _eventTopicResource,
    subject = "Alarm",
    id = Guid.NewGuid().ToString(),
    eventType = "recordInserted",
    eventTime = DateTime.Now.ToString("yyyy-MM-ddTHH:mm:ss.FFFFFFK"),
    data = payload };

AlarmEvent[] alarmEvents = { alarmEvent };

Post the event:

HttpResponseMessage response = await _client.PostAsync(_eventTopicEndpoint, new JsonContent(alarmEvents));
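For reference, the AlarmEvent type used above follows the Event Grid custom event schema (topic, subject, id, eventType, eventTime, data). A sketch of its shape, assuming the AlarmData payload class from earlier (the actual project code may differ):

public class AlarmEvent
{
    public string topic { get; set; }
    public string subject { get; set; }
    public string id { get; set; }
    public string eventType { get; set; }
    public string eventTime { get; set; }
    public AlarmData data { get; set; }
}

Note that Event Grid expects events to be posted as a JSON array, which is why a one-element AlarmEvent[] is created above.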

There are other ways of doing all of this, of course, but if you want it, all my code can be found on GitHub. A reasonable amount of the code is my attempt at setting configurable boundaries for the geographical locations of the devices, but it’s not relevant here so I won’t go into it.

Before you commit to Git, make sure you have an appropriate .gitignore file to ensure you’re only staging the files you need. For .NET Core I used this one:

https://github.com/dotnet/core/blob/master/.gitignore

This reduced the number of files I needed to stage and commit from 20 to 4. There are a wide range of .gitignore files here for most languages.

I didn’t want to hardcode some of the key Event Grid information, both so that keys didn’t end up in GitHub and to make the app reusable for other Topics. Therefore I can now run my console app from the command line, providing some key arguments (a sketch of reading them follows the list below):

dotnet run <EventTopicURL> <EventResourcePath> <EventKey>

where:

  • EventTopicURL: the endpoint for the Event Grid Topic; it can be copied from the Overview blade in the Azure Portal.
  • EventResourcePath: the path to the resource, of the form /subscriptions/<Azure subscription id>/resourceGroups/<Event Grid Topic resource group name>/providers/Microsoft.EventGrid/topics/<Event Grid Topic name>.
  • EventKey: the key for the Event Grid Topic; it can be copied from the Access Keys blade in the Azure Portal.
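As a sketch, the app’s entry point might read these arguments like so (hypothetical, mirroring the names used in the snippets above rather than the exact project code):

static void Main(string[] args)
{
    string eventTopicEndpoint = args[0]; // EventTopicURL
    string eventTopicResource = args[1]; // EventResourcePath
    string eventAegSasKey = args[2];     // EventKey
    // ... construct the HttpClient, set the headers shown earlier and start publishing ...
}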

In the next post I’ll look at how to subscribe to the Topic with a Logic App to both test and consume the events.

Cheers,
Giles

Building an Azure Event Grid app. Part 1: Event Grid Topic and a .NET Core custom app event publisher.

Building an Azure Event Grid app. Part 2: Adding a Logic App subscriber to the Event Grid Topic.

Building an Azure Event Grid app. Part 3: Turning the publisher app into a Docker image.

Top Stories from the Microsoft DevOps Community – 2018.08.10


Time for my favorite part of the week: sitting back on a Friday afternoon and catching up on the hard work going on in the DevOps community. And this week didn't disappoint: there were so many great stories that it was hard to pick just a few of my favorites.

Creating a CI/CD pipeline with VSTS and Compute Engine
Our friends at Google have put together a nice tutorial showing how you can use VSTS to build a CI/CD pipeline into Compute Engine. They demonstrate with Orchard CMS — an ASP.NET MVC app — but it's widely applicable to any developer on any platform.

The Dynatrace Unbreakable Pipeline in VSTS and Azure? Bam!
When Abel Wang saw Dynatrace's Unbreakable Pipeline, he knew he had to build it with VSTS and Azure. So he did — and he did it serverless with VSTS Deployment Gates and Azure Functions.

Using Azure Shell with VSTS Git Repositories and VS Code
Do you use Azure Shell? If so, you can now work with your VSTS-hosted Git repositories right from within Azure Shell. Tarun Arora shows you how to clone a repository, commit a change and push those changes back to the server.

Serverless, DevOps, and CI/CD: Part 2
Just because you're going serverless doesn't mean that you can start skipping best practices; Azure Functions need a CI/CD pipeline, too. Jeff Hollan shows how to build a proper pipeline for serverless.

Managing DNS with DNSControl, CloudFlare, DNSimple, GitHub, VSTS, Key Vault, and Docker!
Sure, you can build and deploy your code with VSTS. But what about building and deploying your DNS!? Kieran Jacobsen checks DNS zones into a Git repository and deploys their changes with a CI/CD pipeline.

Building an Azure Event Grid app. Part 2: Adding a Logic App subscriber to the Event Grid Topic.


I want something that subscribes to the Event Grid Topic and, when a new event is published, takes action depending on the details of the event. In my scenario, I want to pay attention to, and act on, alarm events with a status of red.

There are many options here. I could write custom code, and that code could be in a container, or could be a serverless function. However, a Logic App seems a good fit as it’s:

  • Serverless, so I don’t need to think about OS, infrastructure, sizing, scale, etc.
  • Potentially a no-code option.

Creating the Logic App

Very easy: in the Portal, Create a Resource | Logic App | Create. Then give it a name, a resource group and a location:

Wait a few seconds and then go to the newly created Logic App. The first time, the Logic App Designer will open (if it doesn’t, select Edit); then select the “When an Event Grid event occurs” option:

Sign in to Azure to allow the trigger to access the relevant Azure subscription (if you want to access a separate subscription see Building an Azure Event Grid app. Part 4: Adding a Service Principal to the Event Grid Topic.)

Select the subscription and then set the Resource Type to Microsoft.EventGrid.Topics:

You should then see the Event Grid Topic that you created before:

Select the topic and save. Then select + New Step:

The event itself contains a JSON data payload, therefore the next step is to search for JSON and select the Parse JSON action:

Click in the Content field and select the Data object, which is one of the fields available from the event:

Now paste this schema (or your variation of it) into the Schema field:

{
    "properties": {
        "deviceId": {
            "type": "number"
        },
        "image": {
            "type": "string"
        },
        "latitude": {
            "type": "number"
        },
        "longitude": {
            "type": "number"
        },
        "status": {
            "type": "string"
        }
    },
    "type": "object"
}

You will now have access to the data in the event, and can therefore implement any logic that you wish.

In my case I start by adding a Condition: if the status in the event is red, the image is passed to the Describe Image action (which uses the Vision Cognitive Service) to extract a description of the image. If the description is non-threatening (for example, it states the image is of a cat), I treat this as a false positive and don’t raise the alarm (send emails, SMS, etc.).

Put together whatever workflow you want using any of the hundreds of actions available, save the app and test it by running the publishing app and checking that the Logic App is working.

Remember to stop the publishing app generating events when you don’t need it, and be aware that you can disable and enable the Logic App when you don’t want it to run.

Cheers,
Giles

Building an Azure Event Grid app. Part 1: Event Grid Topic and a .NET Core custom app event publisher.

Building an Azure Event Grid app. Part 2: Adding a Logic App subscriber to the Event Grid Topic.

Building an Azure Event Grid app. Part 3: Turning the publisher app into a Docker image.

Building an Azure Event Grid app. Part 4: Adding a Service Principal to the Event Grid Topic.

Building an Azure Event Grid app. Part 5: Add a sprinkling of DevOps with a dash of VSTS.

Building an Azure Event Grid app. Part 3: Turning the publisher app into a Docker image.


Why do I want my publisher app to be a Docker image? Because I’d potentially like to be able to run the app from different environments and/or have other people run the app from their environments. Not having to rebuild or distribute the binaries makes this easier, and I can be more certain that it will “just work”.

How do I take my code (.NET Core in my example, but again it could be pretty much anything) and containerize it? I edited my app using Visual Studio Code, and that made the process very easy.

  1. Make sure you’ve got the Docker extension installed in VS Code.
  2. Get VS Code to generate the Dockerfile for you by just answering some simple questions.
  3. Get VS Code to build the image for you.

That’s it. Very straightforward, and now I have a Docker image for my app. Run docker images to see the new image, or use the Docker explorer in VS Code for a UI equivalent.

Now I can run docker run <imagename> and pass in the same arguments as discussed in part 1.

But that’s only useful on my dev box. What I can now do is push the image to a repository so that I, or someone else, can pull down the image and run it on another environment. The obvious repository options for me are DockerHub and Azure Container Registry. In this example I’ll push to DockerHub as I want it to be easily publicly available:

  1. Create a DockerHub login if you don’t have one.
  2. Log in to Docker at the command line:
docker login
  3. Build the image with a suitable tag (the prefix being your DockerHub id):
docker build -t gdavi/alarms-iot-simulator .
  4. Push the image to the repository:
docker push gdavi/alarms-iot-simulator

You should then have your image in the repository. Mine is at https://hub.docker.com/r/gdavi/alarms-iot-simulator/

Now you can perform a docker pull and have the image on another environment quickly, and ready to run as soon as it’s downloaded.

Cheers,
Giles

Building an Azure Event Grid app. Part 1: Event Grid Topic and a .NET Core custom app event publisher.

Building an Azure Event Grid app. Part 2: Adding a Logic App subscriber to the Event Grid Topic.

Building an Azure Event Grid app. Part 3: Turning the publisher app into a Docker image.

Building an Azure Event Grid app. Part 4: Adding a Service Principal to the Event Grid Topic.

Building an Azure Event Grid app. Part 5: Add a sprinkling of DevOps with a dash of VSTS.


Building an Azure Event Grid app. Part 4: Adding a service principal to the Event Grid Topic.


For my scenario I created the Azure Event Grid Topic in my subscription but I’d like clients outside of my Azure subscription to be able to subscribe to the Topic.

The way to do that is to use a service principal. This creates an app in Azure Active Directory that you can then permit to access your entire subscription, a resource group or any individual resource. In this case I want to provide just enough access to my Event Grid Topic for subscribers.

Read and follow this excellent step-by-step how-to for creating a service principal, including checking permissions and assigning the AAD app to the resource:

https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal.

This breaks down into:

Create the AAD application.

Take note of the following (a code sketch showing how these values are typically used follows the list):

  • Application ID. This will also be referred to as the Client ID. It can be found again in the AAD app afterwards.
  • Key. This will also be referred to as the Client Secret. This can only be viewed when you save it. If you don’t make a note you’ll need to create a new key (easy, but just be aware).
  • Tenant ID. This will also be referred to as the Directory ID. It can be found again in the AAD app afterwards.
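As an aside, these three values are exactly what is needed to acquire a token programmatically. A minimal sketch using the ADAL library (Microsoft.IdentityModel.Clients.ActiveDirectory) — the Azure Resource Manager resource URI is my assumption here, and the Logic App connector below does all of this for you:

using System.Threading.Tasks;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

// tenantId, applicationId and key are the three values noted above.
static async Task<string> GetAccessTokenAsync(string tenantId, string applicationId, string key)
{
    var context = new AuthenticationContext($"https://login.microsoftonline.com/{tenantId}");
    var credential = new ClientCredential(applicationId, key);
    // Request a token for the Azure Resource Manager API on behalf of the AAD app.
    AuthenticationResult result =
        await context.AcquireTokenAsync("https://management.azure.com/", credential);
    return result.AccessToken;
}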

Assign the AAD application to a role.

It’s quite straightforward and doesn’t take long. Having done this you can then access the Topic from another Azure subscription.

For the Logic App in part 2, for example, the initial trigger action of Azure Event Grid can then use the Connect with Service Principal option:

Enter the Service Principal connection details that you made a note of above:

And then you can select the Resource Type (Microsoft.EventGrid.Topics) and you will see the relevant Topic from another subscription:

Cheers,
Giles

Building an Azure Event Grid app. Part 1: Event Grid Topic and a .NET Core custom app event publisher.

Building an Azure Event Grid app. Part 2: Adding a Logic App subscriber to the Event Grid Topic.

Building an Azure Event Grid app. Part 3: Turning the publisher app into a Docker image.

Building an Azure Event Grid app. Part 4: Adding a Service Principal to the Event Grid Topic.

Building an Azure Event Grid app. Part 5: Add a sprinkling of DevOps with a dash of VSTS.

Visual Studio Toolbox: Improved User Experience in VSTS & Azure DevOps Projects


In this episode, Dmitry is joined by Jeremy Epling to discuss and show demos of how the VSTS team is working to improve the product's web-based user interface, covering the home page, general navigation, "my work" and favorites, as well as specific feature areas such as the Build Status and Release Management (RM) editor pages.

They also demo Azure DevOps Projects, a great way to get started learning how to set up full CI/CD pipelines powered by VSTS, Application Insights and Azure infrastructure such as Web Apps or Azure Kubernetes Service (AKS), right from the Ibiza Portal.


Connect with the Microsoft Power BI Team at Power Summit Phoenix October 15-18 in Phoenix, AZ


Bringing all the local user groups together for collaboration, networking and hands-on learning opportunities

You're invited to join Microsoft at Power Summit Phoenix, held October 15-18 in Phoenix, AZ at the Phoenix Convention Center.

Hosted by the Power Platform User Groups, this inaugural event blends special access to Microsoft leadership with credible peer-to-peer knowledge exchange. Power Summit sessions are specialized to meet the needs of Business Users, Analysts, Administrators and Developers.

Access to Microsoft
Microsoft will be at Power Summit in a big way, including a keynote presentation, roadmap sessions, breakouts – plus the Microsoft Power Series, and a large expo presence.

Customized Content
The Power Summit schedule is available online! Check out the 60+ sessions today, with content to help deepen understanding of your data and increase your knowledge of the Microsoft Power Platform – Power BI, PowerApps, and Microsoft Flow. Session presenters include Microsoft Engineers and Support Technicians, MVPs, and Power Users sharing real-life scenarios.

View sessions now.

Microsoft Power Series
One major way Microsoft will contribute to Power Summit content is by facilitating the Microsoft Power Series. Several members of the Microsoft product team will host 2+ hour deep-dive sessions designed specifically for Power Summit to help you get more out of the platform.

Check out the Microsoft Power Series.

Advanced Pricing
Now is the time to register for Power Summit to maximize your savings. Enjoy $200 off with Advanced pricing through Thursday, September 6. Save an additional 10% off your registration with our exclusive coupon code: PRPMicrosoft.

Save now & register

 

Stream Analytics: batching events and trigger on “last”


For a side project, I had to write a few Azure Stream Analytics queries. Not being a SQL person — let alone a Stream Analytics person — it required some effort and substantial assistance. I hope this helps.

The goal of the first query is to collect all incoming events from an event hub over a window of 10 seconds and then forward the collection to another event hub as output.

SELECT i1.Id, Collect() AS messages
INTO output
FROM input AS i1
GROUP BY i1.Id, SessionWindow(second, 10, 15)

The second one is a bit more interesting. The purpose is to wait for a “last” event in a stream: if no further event with a given ID arrives within 15 seconds, the query triggers an output. Here it is:

SELECT i1.Id, i1.EventEnqueuedUtcTime AS t
INTO output
FROM Input AS i1 TIMESTAMP BY EventEnqueuedUtcTime
LEFT OUTER JOIN Input AS i2 TIMESTAMP BY EventEnqueuedUtcTime
ON i1.Id = i2.Id
AND DATEDIFF(second, i1, i2) BETWEEN 0 AND 15
AND i2.EventEnqueuedUtcTime > i1.EventEnqueuedUtcTime
WHERE i2.Id IS NULL

There is something counterintuitive about the last line, but it works beautifully. It is important to put the time comparison in the ON clause and not in the WHERE clause; in the latter case, you won't get any output.

Experiencing Data Access Issue in Azure and OMS portal for Log Analytics – 08/11 – Resolved

Final Update: Saturday, 11 August 2018 01:48 UTC

We've confirmed that all systems are back to normal with no customer impact as of 2018-08-10 00:56 UTC. Our logs show the incident started on 2018-08-09 19:16 UTC, and that during the impacted time range 121 Azure Audit Logs customers comprising 112 subscriptions in the South-East Australia region and 130 Azure Audit Logs customers comprising 428 subscriptions in the West Central US region experienced a loss of all logs of data type "AUDIT_LOG_REST_API".
  • Root Cause: The failure was due to a code regression in a service module that has been reverted
  • SEAU Incident Timeline: 3 Hours & 36 minutes - 2018-08-09 22:20 UTC through 2018-08-10 00:56 UTC
  • WCUS Incident Timeline: 5 Hours & 8 minutes - 2018-08-09 19:16 UTC through 2018-08-10 00:24 UTC

We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.

-Jeff Miller





Handling time zone offsets and daylight saving time in the .NET Framework


An aside: on August 1 I moved teams internally, changing roles from consultant — a job I had done for 18 years — to cloud solution architect. I was idly wondering whether I should rename this blog when, over the past few days, some startling news came in.

The idea is to introduce daylight saving time, either temporarily or permanently, ahead of the 2020 Olympics. I could not believe my ears when I heard it; frankly, I too thought it would be utterly impossible. The slides by Mr. Uehara of Ritsumeikan University mentioned above sum the issues up very well and I agree with them entirely, but I suspect many people do not actually know how daylight saving time is handled on Windows and other computer systems in the first place.

A long time ago I researched how to develop internationalized applications, and as part of that I wrote up how time zones (offset information) are handled. I think it should be useful to many of you, so I am sharing it here with some light revisions. The deck is more than ten years old, so please forgive the screenshots of, of all things, XP and Vista (!), but the underlying ideas have not changed and the content still holds up today. (If anything has changed, I would be glad to hear about it.)

 

The slide deck is rather long, so extracting the essentials from a daylight saving time perspective, there are two key points:

① On Windows, daylight saving time is handled as a time zone change (p.10)

Entering daylight saving time is treated much like moving into another country's time zone. As an image, think of it as taking an overseas trip to a country in a different time zone while remaining in Japan. (Time zone information, including daylight saving rules, is managed in the registry and maintained via Windows Update. This deck therefore does not cover it, but running an app built on .NET Core on Linux requires separate care.)

② When adding to or subtracting from DateTime values, always go via UTC (p.35, 36)

When programming with dates on the .NET Framework, most people use the DateTime type, but DateTime cannot carry an offset (the difference from UTC). Because DateTime adds and subtracts the "face value" as-is, it cannot cope with the change in offset across a daylight saving transition. For this reason, arithmetic on DateTime values must always be done via UTC. (Incidentally, the DateTimeOffset type was introduced to address exactly this problem.)
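As a minimal illustration of point ② (my own sketch, not taken from the slides):

using System;

class DstExample
{
    static void Main()
    {
        DateTime localNow = DateTime.Now;

        // Risky across a DST transition: adds 24 hours to the "face value".
        DateTime naive = localNow.AddHours(24);

        // Safe: convert to UTC, do the arithmetic there, then convert back.
        DateTime viaUtc = localNow.ToUniversalTime().AddHours(24).ToLocalTime();

        // DateTimeOffset carries the UTC offset with the value, avoiding the problem.
        DateTimeOffset withOffset = DateTimeOffset.Now.AddHours(24);

        // In a DST-observing time zone, naive and viaUtc can differ by an hour.
        Console.WriteLine($"{naive} / {viaUtc} / {withOffset}");
    }
}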

In the United States, which has four main time zones plus daylight saving time, taking time zones into account when programming is a matter of course. In Japan, however, there has long been only a single time zone, so very few systems are designed, implemented and tested with time zones in mind. Honestly, knowing the realities of development in Japan, the thought of daylight saving time being introduced under present conditions makes me shudder. (How many developers in Japan could really say, "our system will be fine even if daylight saving time is introduced"?)

# Looking around the net, some people simplify this to "just change the system clock", but a standalone clock-like device and a system that is premised on exchanging data with other systems are completely different things.
# For example, whether a time written to a database or carried in a message is JST or summer time is a huge issue: even if your own system handles summer time correctly, the moment a partner system sends a time that is off by one or two hours, you get business-level contradictions. Chilling cases like that come to mind one after another.
# The fundamental way to avoid such problems is to process everything in UTC, but I suspect hardly any systems from a generation ago handle time in UTC (and perhaps many still do not today).

From an engineering standpoint, introducing daylight saving time in time for 2020 seems far too reckless. That said, for IT engineers going forward, understanding time zones and daylight saving time and building internationalized applications (apps that also run overseas) is essential. I hope this material helps IT engineers understand time zones and daylight saving time.

SQL Server detected a DTC/KTM in-doubt transaction


A few days back, one of my clients faced an issue with their Availability Group (AG) getting into the Resolving state. On checking the logs, we found that the issue was due to a split-brain scenario, and as it was business critical, we resolved it by restarting the SQL Server services on the secondary replica.

After that, the AG's ownership information was correctly updated in Failover Cluster Manager, and the AG state issue was resolved. The AG was running as expected on the owner node, and the secondary replica was also in good shape.

As a verification step, I checked the states of the databases on both replicas and found one of the databases in the Not Synchronizing/Suspect state on the secondary replica. The error log showed the following:

2018-07-31 16:54:47.250 spid37s SQL Server detected a DTC/KTM in-doubt transaction with UOW {DD0xxxxx-BCB0-4xx4-9B52-2xxxxF88xxxA}. Please resolve it following the guideline for Troubleshooting DTC Transactions.
2018-07-31 16:54:47.250 spid37s Error: 3437, Severity: 21, State: 3.
2018-07-31 16:54:47.250 spid37s An error occurred while recovering database 'xxx'. Unable to connect to Microsoft Distributed Transaction Coordinator (MS DTC) to check the completion status of transaction (0:857660049). Fix MS DTC, and run recovery again.
2018-07-31 16:54:47.250 spid37s Error: 3414, Severity: 21, State: 2.
2018-07-31 16:54:47.250 spid37s An error occurred during recovery, preventing the database 'xxx' (9:0) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.

On checking the environment further, we found that MSDTC was indeed configured, and we confirmed with the client that we could proceed with losing the in-doubt transaction.

With their agreement, we first tried to abort the transaction from the MSDTC console, but that did not bring the database back online. We therefore used the script below to list the UoW GUIDs of the outstanding MSDTC transactions:

USE master;
GO

-- Orphaned/in-doubt distributed transactions hold locks under session ID -2;
-- request_owner_guid is the unit of work (UoW) GUID of each transaction.
SELECT DISTINCT request_owner_guid AS UoW_Guid
FROM sys.dm_tran_locks
WHERE request_session_id = -2;
GO

Then we killed the UoW (KILL 'DD0xxxxx-BCB0-4xx4-9B52-2xxxxF88xxxA') and restarted the SQL Server instance, which resolved the issue; both the database and the AG were up and running as before.

Hope this helps! Happy solving!


Building an Azure Event Grid app. Part 5: CI and pushing to DockerHub with VSTS


In this part of the series I’d like to make sure that whenever I change the code for my Event Grid publishing app a new Docker image is built and then pushed to my container repository automatically.

My flow is therefore:

  • Edit code in Visual Studio Code
  • Commit and push the changes to GitHub
  • Trigger Continuous Integration (CI) build
  • Deploy resultant new container image to DockerHub

I’m going to use Visual Studio Team Services (VSTS) to do this: I like it, it’s easy and powerful, and it will make any future Azure deployment even easier.

Create a new VSTS team project

I want a new VSTS team project within which I can add work items (user stories, bugs) and track work through various boards (Kanban, Task boards). Create the new project, selecting the most appropriate process template (I chose Scrum).

When the new project is created, by default it includes a new git repo. In this example it’s not needed as we’ll continue to keep the code in GitHub, but we’ll trigger a build in VSTS whenever a commit is pushed to the master branch in GitHub.

Create a new build pipeline

Add a new build pipeline; the first option is to set where the code repo is located. In this case it’s GitHub, so select that:

Set up the connection by authenticating to GitHub; I chose to use OAuth:

Then continue and choose the most appropriate template. I selected the Docker container template:

In the resultant build pipeline definition, in the Build an Image task, change the Container Registry Type to Container Registry (i.e. not Azure Container Registry; that's a good choice too, but for this I wanted a completely public repository):

Then create a New Docker registry service connection. Change the Repository to DockerHub and enter your credentials:

 

I also wanted to set the image name to match the image I already had in DockerHub, and therefore changed the default image name from $(Build.Repository.Name):$(Build.BuildId) to a variable I created called image.name, which I set to gdavi/alarms-iot-simulator:

I also checked the Include latest tag option so that the new version is always tagged latest.

For the Push an Image task, make sure the command is set to Push and update the image name:

Finally set the Trigger to Enable Continuous Integration:

Now you can test either by making a code change, committing, and pushing, or by saving and queuing the build:

For now I don’t need to actually deploy the image beyond the container registry but if I wanted to I could add a Release pipeline to deploy to a container orchestrator such as Service Fabric or Azure Kubernetes Service.

Cheers,
Giles

Building an Azure Event Grid app. Part 1: Event Grid Topic and a .NET Core custom app event publisher.

Building an Azure Event Grid app. Part 2: Adding a Logic App subscriber to the Event Grid Topic.

Building an Azure Event Grid app. Part 3: Turning the publisher app into a Docker image.

Building an Azure Event Grid app. Part 4: Adding a Service Principal to the Event Grid Topic.

Building an Azure Event Grid app. Part 5: CI and pushing to DockerHub with VSTS.

Exporting a database that is/was used as SQL Data Sync metadata database


When trying to export a database that is or was used as the SQL Data Sync metadata database, you can encounter errors like:

Error encountered during the service operation.
Could not extract package from specified database.
The element DataSyncEncryptionKey_1076efa36f054d35a60e717333298486 is not supported in Microsoft Azure SQL Database v12.

or

Error encountered during the service operation.
One or more unsupported elements were found in the schema used as part of a data package.
Error SQL71501: Error validating element [TaskHosting]: Schema: [TaskHosting] has an unresolved reference to object [##MS_SyncAccount##].
Error SQL71501: Error validating element [dss]: Schema: [dss] has an unresolved reference to object [##MS_SyncAccount##].

 

The supported format for Azure SQL Database export and import is the .bacpac file.

These files can be seen as a table-by-table export and are not transactionally consistent.

Whichever method you choose to generate the .bacpac file, you must take this lack of transactional consistency into account.

The way to ensure the exported file is consistent is either to export a database that has no write activity during the export, or to create a copy of the database and export from that copy.

When the database is or was used as the SQL Data Sync metadata database, we need to rely on the second method, because some security-related objects are currently not supported in export.

In order to successfully export the database, we need to:

  1. Create a database copy.
    You can check how to create a copy of the database using the portal at https://docs.microsoft.com/en-us/azure/sql-database/sql-database-copy-portal (there is also documentation for doing this using T-SQL or PowerShell).
  2. Remove the Data Sync metadata objects from the copy. You can find a script to do it at https://raw.githubusercontent.com/vitomaz-msft/DataSyncMetadataCleanup/master/Data%20Sync%20complete%20cleanup.sql (please make sure you are connected to the copy when you run it).
  3. Export the database from the copy (a scripted alternative is sketched after this list).
  4. Delete the database copy.
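
If you would rather script step 3 than use the portal or SqlPackage, here is a minimal C# sketch using the DacFx client library (the Microsoft.SqlServer.Dac NuGet package); the server, database, and credential values are placeholders:

using System;
using Microsoft.SqlServer.Dac;

class ExportCopy
{
    static void Main()
    {
        // Connect to the copy (placeholder connection string).
        var connectionString =
            "Server=tcp:myserver.database.windows.net,1433;" +
            "Initial Catalog=MyDb_Copy;User ID=myuser;Password=...;Encrypt=True;";

        var services = new DacServices(connectionString);
        services.Message += (s, e) => Console.WriteLine(e.Message);

        // Export the cleaned-up copy to a .bacpac file.
        services.ExportBacpac(@"C:\exports\MyDb_Copy.bacpac", "MyDb_Copy");
    }
}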


Regarding installation requests for development & build environments


As described in the material below, administrator-level operations cannot be performed on Microsoft-managed development & build environments deployed on Platform update 12 (PU12) or later.

Restricted Admin Access with Platform 12 Updates

For this reason, we sometimes receive service requests from customers and partners asking for work that requires administrator rights on development & build environments, such as installing software or changing various settings.
We are very sorry, but the data center team does not handle software installations or configuration changes on development & build environments.
Please understand that even if such a request is submitted as a service request, it is very unlikely that it can be fulfilled.

For FAQs about development & build environments, please refer to the following material:
(Restricted Admin Access on development VMs with Platform update 12: What you need to know)
https://community.dynamics.com/365/financeandoperations/b/newdynamicsax/archive/2018/01/06/restricted-admin-access-on-development-vms-with-platform-update-12-what-you-need-to-know

OSD Video Tutorial: Part 11 – MDT Integration


This session is part eleven of an ongoing series focusing on Operating System Deployment in Configuration Manager. The focus is an overview of the Microsoft Deployment Toolkit, comparing its use standalone and integrated with Configuration Manager. Demonstrations include the Microsoft Deployment Toolkit UDI wizard, a discussion of the elements added when the Microsoft Deployment Toolkit is integrated with Configuration Manager 2012, and more.
