
.NET Framework setup verification tool, cleanup tool and detection sample code now support .NET Framework 4.7.2


The Windows 10 April 2018 Update recently shipped, and it includes the .NET Framework 4.7.2 as an OS component. In addition, a standalone redistributable version of the .NET Framework 4.7.2 is now available for download on earlier versions of Windows. To borrow wording from this support article, the .NET Framework 4.7.2 is a highly compatible, in-place update to the .NET Framework 4, 4.5, 4.5.1, 4.5.2, 4.6, 4.6.1, 4.6.2, 4.7, and 4.7.1.

I have posted updated versions of the .NET Framework setup verification tool, the .NET Framework cleanup tool, and the sample code to detect .NET Framework install states that support detecting, verifying, and cleaning up the .NET Framework 4.7.2. You can find more information about how to download and use these tools at the following locations:

As always, if you run into any issues or have any feedback about these tools or samples, please let me know by posting a comment on one of my blog posts.
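For a sense of what the detection sample checks: detecting the .NET Framework 4.7.2 boils down to reading the Release value under the v4 Full registry key. A minimal C# sketch (461808 is the minimum Release value for the .NET Framework 4.7.2 per the official documentation; the full sample is more thorough):

using System;
using Microsoft.Win32;

class DetectNet472
{
    static void Main()
    {
        // The .NET Framework 4.x install state lives under this key.
        const string subkey = @"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full";
        using (RegistryKey key = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, RegistryView.Registry64).OpenSubKey(subkey))
        {
            object release = key?.GetValue("Release");
            // A Release value of 461808 or greater indicates the .NET Framework 4.7.2 (or later).
            if (release != null && (int)release >= 461808)
                Console.WriteLine(".NET Framework 4.7.2 or later is installed.");
            else
                Console.WriteLine(".NET Framework 4.7.2 is not installed.");
        }
    }
}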


5/24 Webinar: Time intelligence for retail and wholesale industries with Power BI by Matt Allington


In this session, Power BI MVP and noted book author Matt Allington will be joining from down under to deliver a Power BI webinar focusing on how time intelligence can be used for analysis in the retail and wholesale industries with Excel and Power BI.

When: May 24th 2018 2pm PST

Where:

About Matt Allington:

Matt Allington is a BI Professional with over 30 years’ experience in the Consumer Packaged Goods industry.  Matt has held senior roles in both the commercial side of business as well as IT, and as such Matt is uniquely placed to understand the entire Business Intelligence process – from business needs right through to the IT challenges of delivery.  Matt was until recently the BI Director for Asia Pacific at The Coca-Cola Company, where he led a program of improved BI tool usage using Excel, Power Pivot and SharePoint.  In 2014 Matt left Coca-Cola to start up his own company (Excelerator BI) and he now specialises in helping small, medium and large companies in Australia leverage Microsoft Excel, Power Pivot and SharePoint to deliver rapid business value. Visit Matt's blog here.

ACS Migration Guide


Previously Service Bus namespaces could contain Queues/Topics, Event Hubs, Relays, or Notification Hubs. You may still be affected by the ACS deprecation if you are using an older namespace. For the scope of the below article, "Service Bus namespaces" includes older, mixed-entity namespaces.

 

Access Control Service (ACS) is being retired soon (see blog post). Customers who are using ACS authorized Service Bus namespaces will need to migrate to Shared Access Signature (SAS) authorization prior to November 7, 2018 to protect themselves from unnecessary downtime.

We recommend following the steps below to start your migration to SAS authorization, which will be the supported model going forward.

1. Identify your ACS-Authorized Namespaces

Check to see if you have an ACS buddy namespace provisioned. For a Service Bus namespace typollaktestACS, there will be a typollaktestacs-sb ACS buddy namespace provisioned if you are using ACS authorization. You can run the below command to see if an ACS namespace exists (or batch many together in one script):

PS C:\Users\typollak> nslookup typollaktestacs-sb.accesscontrol.windows.net
....
Non-authoritative answer:
Name:   aad-ac-prod-sn1-001.cloudapp.net
Address: 70.37.92.127
Aliases: typollaktestacs-sb.accesscontrol.windows.net
         prod-sn1-001.accesscontrol.windows.net

If the ACS namespace resolves as shown above, continue with the steps below. If nslookup fails with an error like 'Non-existent domain', then this namespace is NOT using ACS for authorization and will not be impacted.
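If you have many namespaces to check, a small PowerShell loop can run the same DNS test in bulk. A minimal sketch; the namespace list is illustrative:

$namespaces = "typollaktestacs", "mysecondnamespace"
foreach ($ns in $namespaces) {
    # The ACS buddy namespace, if it exists, resolves at <namespace>-sb.accesscontrol.windows.net
    if (Resolve-DnsName "$ns-sb.accesscontrol.windows.net" -ErrorAction SilentlyContinue) {
        Write-Output "$ns : ACS buddy namespace found - check for ACS usage"
    } else {
        Write-Output "$ns : no ACS buddy namespace - not impacted"
    }
}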

2. Identify if You Are Using ACS in your Application

The presence of an ACS buddy namespace does not necessarily mean that ACS authorization is being used. Even if an ACS buddy namespace is present, SAS authorization can be used. The following are some of the ways to determine if your Service Bus solution is using ACS.

  • Check your solution code for the following strings (a search script sketch follows this list):
    • "SharedSecretIssuer"
    • "SharedSecretValue"
    • "owner"
    • "SharedSecretTokenProvider" class references

 

  • Use Fiddler with decryption turned on (or any other network capture tool) to check for traffic to ACS. Look for traffic to <namespace>-sb.accesscontrol.windows.net.

 

  • Typical ACS errors have a reference to an error of form 'ACSxxxx' where xxxx represents a specific error code number.
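To automate the source check above, something like the following PowerShell sketch will surface any ACS-related strings; the solution path and file extensions are illustrative:

Get-ChildItem -Path C:\src\MySolution -Recurse -Include *.cs,*.config |
    Select-String -Pattern "SharedSecretIssuer","SharedSecretValue","SharedSecretTokenProvider" |
    Select-Object Path, LineNumber, Line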

 

3. Migration Scenarios (from here)

 

Unchanged Defaults

If you are using the ACS default settings, you can just replace the ACS connection string with the SAS connection string provided in the Azure portal (https://portal.azure.com).

Change your ACS connection string:

Endpoint=sb://<namespace>.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=[snip]

To your SAS connection string:

Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=wG46rtpDCQRGRVnBsTx+eRbYQwkda0slQF1bHy1FBoA=

 

You can see your connection strings via PowerShell using the Get-AzureRmServiceBusAuthorizationRule cmdlet:

PS C:\Users\typollak> Get-AzureRmServiceBusAuthorizationRule -ResourceGroupName $ResourceGroup -Namespace "typollaktestacs"

Id : /subscriptions/27150243-299a-426c-ba0b-68dd10cbd7aa/resourceGroups/Default-ServiceBus-SouthCentralUS/providers/Microsoft.ServiceBus/namespaces/typollaktestacs/AuthorizationRules/RootManageSharedAccessKey
Type :
Name : RootManageSharedAccessKey
Location :
Tags :
Rights : {Listen, Manage, Send}

Id : /subscriptions/27150243-299a-426c-ba0b-68dd10cbd7aa/resourceGroups/Default-ServiceBus-SouthCentralUS/providers/Microsoft.ServiceBus/namespaces/typollaktestacs/AuthorizationRules/Rule2
Type :
Name : Rule2
Location :
Tags :
Rights : {Manage, Listen, Send}

Id : /subscriptions/27150243-299a-426c-ba0b-68dd10cbd7aa/resourceGroups/Default-ServiceBus-SouthCentralUS/providers/Microsoft.ServiceBus/namespaces/typollaktestacs/AuthorizationRules/Rule3
Type :
Name : Rule3
Location :
Tags :
Rights : {Manage, Listen, Send}

After choosing the key with the specific rights you want, get the key values using Get-AzureRmServiceBusKey:

PS C:\Users\typollak> Get-AzureRmServiceBusKey -ResourceGroupName $ResourceGroup -Namespace "typollaktestacs" -Name "Rule2"
PrimaryConnectionString       : Endpoint=sb://typollaktestacs.servicebus.windows.net/;SharedAccessKeyName=Rule2;SharedAccessKey=WYw3Hwu7OTSgiIrbVK2v0V+GSIXJwsGlO0E8b/FegeI=
SecondaryConnectionString     : Endpoint=sb://typollaktestacs.servicebus.windows.net/;SharedAccessKeyName=Rule2;SharedAccessKey=YCV+s+JkGUp/5UfKg5lLC/zHl7sFLmXX86AGv01cUUY=
AliasPrimaryConnectionString   :
AliasSecondaryConnectionString :
PrimaryKey                     : WYw3Hwu7OTSgiIrbVK2v0V+GSIXJwsGlO0E8b/FegeI=
SecondaryKey                   : YCV+s+JkGUp/5UfKg5lLC/zHl7sFLmXX86AGv01cUUY=
KeyName                       : Rule2

 

You should also replace all SharedSecretTokenProvider references with a SharedAccessSignatureTokenProvider object, and use the SAS policies/keys from the Azure portal instead of the ACS owner account:

From (ACS):

MessagingFactory mf = MessagingFactory.Create(runtimeUri, TokenProvider.CreateSharedSecretTokenProvider(issuerName, issuerSecret));

Change to (SAS):

MessagingFactory mf = MessagingFactory.Create(runtimeUri, TokenProvider.CreateSharedAccessSignatureTokenProvider(keyName, key));

Simple Rules

If the application uses custom service identities with simple rules, the migration is straightforward in the case where an ACS service identity was created to provide access control on a specific queue. This scenario is often the case in SaaS-style solutions where each queue is used as a bridge to a tenant site or branch office, and the service identity is created for that particular site. In this case, the respective service identity can be migrated to a Shared Access Signature rule, directly on the queue. The service identity name can become the SAS rule name and the service identity key can become the SAS rule key. The rights of the SAS rule are then configured equivalent to the respectively applicable ACS rule for the entity.

You can make this new and additional configuration of SAS in-place on any existing namespace that is federated with ACS, and the migration away from ACS is subsequently performed by using SharedAccessSignatureTokenProvider instead of SharedSecretTokenProvider. The namespace does not need to be unlinked from ACS.
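As an illustration of that step, the following C# sketch adds a queue-level SAS rule using the old service identity's name and key. It assumes the Microsoft.ServiceBus client library; the queue name, identity name, and rights are illustrative:

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

string manageConnectionString = "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<key>";
string identityName = "tenant-site-1";   // former ACS service identity name
string identityKey  = "<base64 key>";    // former ACS service identity key

NamespaceManager nm = NamespaceManager.CreateFromConnectionString(manageConnectionString);
QueueDescription queue = nm.GetQueue("tenant-queue");
// The service identity name becomes the SAS rule name, and its key becomes the SAS rule key.
queue.Authorization.Add(new SharedAccessAuthorizationRule(
    identityName, identityKey, new[] { AccessRights.Send, AccessRights.Listen }));
nm.UpdateQueue(queue);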

You can assign SAS keys with Manage, Send, or Listen privileges via the Azure portal or the Set-AzureRmServiceBusAuthorizationRule cmdlet.

 

PS C:\Users\typollak> Set-AzureRmServiceBusAuthorizationRule -ResourceGroupName "Default-ServiceBus-SouthCentralUS" -Namespace "typollaktestacs" -Name "Rule11" -Rights "Listen"

Id       : /subscriptions/27150243-299a-426c-ba0b-68dd10cbd7aa/resourceGroups/Default-ServiceBus-SouthCentralUS/providers/Microsoft.ServiceBus/namespaces/typollaktestacs/AuthorizationRules/Rule11
Type     :
Name     : Rule11
Location :
Tags     :
Rights   : {Listen}

 

Relay Specific Guidance

If you are using Relays, you might find something like the following in your config file:

<behavior name="sharedSecretClientCredentials">
  <transportClientEndpointBehavior>
      <tokenProvider>
           <sharedSecret issuerName="ISSUER_NAME" issuerSecret="ISSUER_SECRET"/>
      </tokenProvider>
  </transportClientEndpointBehavior>
</behavior>

Please change this to the following, with the appropriate key name/value from your Shared Access Signature policy in the portal.

<sharedAccessSignature keyName="KEY_NAME" key="KEY_VALUE"/>
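In context, the sharedAccessSignature element takes the place of sharedSecret inside tokenProvider, so the full behavior looks roughly like this (the behavior name is illustrative):

<behavior name="sasClientCredentials">
  <transportClientEndpointBehavior>
    <tokenProvider>
      <sharedAccessSignature keyName="KEY_NAME" key="KEY_VALUE"/>
    </tokenProvider>
  </transportClientEndpointBehavior>
</behavior>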

 

Please open a Service Bus support ticket here for urgent needs and immediate support. Additional questions can be sent to ACS-SB@microsoft.com.

 

FAQ

 

How do I map my ACS access rules to SAS?

  • The general circumstances and solutions for this are outlined here.
  • One caveat is that at any level (namespace/entity), there is a limit of 12 SharedAccessAuthorization rules. If you need more, we recommend using SAS tokens (a sketch of the token format follows this list). We have a sample on how to create a Security Token Service (STS) to issue SAS tokens to trusted applications. The sample is meant to serve only as a guideline and can easily be modified for your scenarios.
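For orientation, the SAS token such an STS issues follows the standard Service Bus format: an HMAC-SHA256 signature over the URL-encoded resource URI and an expiry timestamp. A minimal C# sketch (method and parameter names are illustrative; see the official sample for a complete service):

using System;
using System.Globalization;
using System.Net;
using System.Security.Cryptography;
using System.Text;

static string CreateSasToken(string resourceUri, string keyName, string key, TimeSpan ttl)
{
    // Expiry is expressed as seconds since the Unix epoch.
    long expiry = (long)(DateTime.UtcNow - new DateTime(1970, 1, 1)).TotalSeconds + (long)ttl.TotalSeconds;
    string encodedUri = WebUtility.UrlEncode(resourceUri);
    string stringToSign = encodedUri + "\n" + expiry;
    using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key)))
    {
        string signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
        return string.Format(CultureInfo.InvariantCulture,
            "SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}",
            encodedUri, WebUtility.UrlEncode(signature), expiry, keyName);
    }
}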

 

I don't use C#; what are my options?

Win2D 1.23.0 – Windows SDK version validation


Win2D version 1.23.0 is now available on NuGet and GitHub.

This release is functionally identical to 1.22.0, but includes a better error message if you try to build using too old a version of the Windows SDK.

Digital Transformation Conference – June 7th, Repton School


We are pleased to share details of the Digital Transformation Conference hosted by Microsoft Showcase School, Repton School on Thursday 7th June 2018.
Register here


Developing and implementing a digital strategy

Being a Microsoft Showcase School is about more than just the use of technology. It is about thought leadership, pioneering practices, and helping others in the community to develop. We are incredibly fortunate to work with so many wonderful schools and colleges who proactively engage the wider community through various events, opening their doors to welcome other teachers and school leaders to share best practices and strategies.

June's event at Repton School will focus on exploring the issues in developing and implementing a digital strategy at a school and its impact on teaching and learning, as well as what this means in broader terms for a whole school and its stakeholders.

Lee Alderman (Director of ICT) will talk about 'The Repton Journey', exploring the themes and topics of the day through the stories of students, teachers and staff. A separate session will examine the use of Microsoft Surface in the classroom, and Microsoft Education UK will be delivering a keynote presentation, as well as demoing some of the latest product innovations for the classroom. Finally, Microsoft partner Insight will be concluding the half-day event with a session on 'Introducing a Microsoft Surface Scheme'.

The conference is aimed at both primary and secondary schools in the maintained and independent sectors and is free to attend. It will be of interest to governors, headmasters and other senior leaders, and heads of ICT/digital strategy.

To register, please click here or for further details, please email Mr Lee Alderman at digitalconferences@repton.org.uk

 

 


Agenda (subject to change):

 

9.30                  Registration and coffee

10.00                Welcome: Tim Owen, (Deputy Head (Academic)), Repton School

10.10                The Repton journey: developing and implementing a digital strategy: Lee Alderman, (Director of ICT), Repton School

10.40                Microsoft Surfaces in the Classroom: Peter Siepmann (Head of Academic Music) and James Wilton (Housemaster of New House), Repton School

11.10                Coffee

11.30                Keynote address: Steve Beswick: (Microsoft Education UK)

12.00 noon       Demonstration of the latest Microsoft products for the classroom

12.30                Introducing a Microsoft Surface Scheme: Joel Nanton (Insight UK)

1.00                  Depart


Make lifelong connections in the Microsoft Educator Community


Are you a teacher striving to find new ways to engage your students?

If you’d like to connect with a global, professional learning community of other teachers just like yourself, who are constantly pushing the boundaries of what a classroom looks and feels like, then simply follow the three simple steps below:

1. Join the Microsoft Educator Community - https://education.microsoft.com/


2. Earn 1000 points and become a Microsoft Innovative Educator - http://aka.ms/mie 


3. Continue your journey by becoming an MIE Expert - http://aka.ms/MIEExpert


Exploring Azure App Service – Web Apps and SQL Azure


There is a good chance that your web app uses a database. In my previous post introducing Azure App Service, I showed some of the benefits of hosting apps in Azure App Service, and how easy it is to get a basic site running in a few clicks. In this post I’ll show how to set up a SQL Azure database along with an App Service Web App from Visual Studio, and apply Entity Framework automatically as part of publish.

Let’s get going

To get started, you’ll first need:

  • Visual Studio 2017 with the ASP.NET and web development workload installed (download now)
  • An Azure account:
  • Any ASP.NET or ASP.NET Core app that uses a SQL database. For the purposes of this post, I’ll create a new ASP.NET Core app with Individual Authentication:
    • On the “New ASP.NET Core Web Application” dialog, click the “Change Authentication” button.
    • Select the “Individual User Accounts” radio button and click “OK”.
    • Click OK to create the project.

I can now run my project locally (F5) and create user accounts which will be stored in a SQL Server Express Local DB on my machine.
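For reference, the template wires that up through a connection string named “DefaultConnection” in appsettings.json, roughly like the sketch below; the database name is generated per project, so yours will differ:

{
  "ConnectionStrings": {
    "DefaultConnection": "Server=(localdb)\\mssqllocaldb;Database=aspnet-WebApplication1-<guid>;Trusted_Connection=True;MultipleActiveResultSets=true"
  }
}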

Publishing to App Service with a Database

Let’s publish our application to Azure. To do this, I’ll right-click my project in Solution Explorer and choose “Publish”.


This brings up the Visual Studio publish target dialog, which will default to the Azure App Service pane with the “Create new” radio button selected. To continue click “Publish”.

This brings up the “Create App Service” dialog (see the “Key App Service Concepts” section of my previous post for an explanation of the fields). To create a SQL Database for our app to use, click the “Create a SQL Database” link in the top right section of the dialog.


This will bring up the “Configure SQL Database” dialog.

  • Note: If you are using a Visual Studio Enterprise subscription, many regions will not let you create a SQL Azure database, so I recommend choosing “East US” or “West US 2” depending on where you are located (we are adding logic in the Visual Studio 2017 15.8 update to remove those regions in that case, but for now you’ll need to choose an appropriate region). To do this, click the “New…” button next to the “Hosting Plan” dropdown and pick the appropriate region (“East US” or “West US 2”).
  • Since I don’t have an existing SQL Server, the first thing I need to do is create a server to host the database, so I’ll click the “New…” button next to the “SQL Server” dropdown.
  • Choose a location for the database.
  • Provide an administrator user name and password for the server
  • Click “OK”
  • Make sure the connection string name field matches the name of the connection string your application uses to access the database (if using a new project, it is “DefaultConnection” which will be prepopulated for you).
  • Click OK
  • Then click the “Create” button on the “Create App Service” dialog

It should take ~2-3 minutes to create all of the resources in Azure, then your application will publish and a browser will open to your home page.

Configuring EF Migrations

At this point there is a database for your app to use in the cloud, but EF migrations have not been applied, so any functionality that relies on the database (e.g. registering for a user account) will result in an error.

To apply EF migrations to the database:

  • Click the “Configure…” button on the publish summary page
  • Navigate to the “Settings” tab
  • When it finishes discovering data contexts, expand the “Entity Framework Migrations” section, and check the “Apply this migration on publish” checkbox for all of the contexts it finds
  • Click “Save”
  • Click Publish again. In the output window you should see “Generating Entity framework SQL Scripts” and then “Generating Entity framework SQL Scripts completed successfully”
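Alternatively, if you’d rather apply the migrations yourself from the command line, the EF Core CLI tooling can do it. A sketch, assuming the tooling is installed and your connection string points at the Azure database; run it from the project directory:

dotnet ef database update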

That’s it: your web app and SQL Azure database are both configured and running in the cloud.

Conclusion

Hopefully, this post showed you how easy it is to try App Service and SQL Azure. We believe that for most people, App Service is the easiest place to get started with cloud development, even if you need to move to other services in the future for further capabilities (compare hosting options). As always, let us know if you run into any issues, or have any questions below or via Twitter.

Registrations open for the May 2018 Quarterly API Call


When:

  • May 24, 2018, 3:00 p.m. British Time | 4:00 p.m. Central European Time
  • May 24, 2018, 11:00 a.m. U.S. Pacific Time

Registration link: Click here for the EMEA session or here for the US/CA session.

 

Please join our PM team to learn more about the exciting new API features which are coming soon! We will explore features currently available as well as others that will be piloting and released in the near future.

 

New features and coming soon:

  • Microsoft Audience Network
  • API v12 release
  • Corporate authentication
  • API status dashboard

 

Updates on previously discussed features:

  • Bing Shopping Campaigns roadmap
  • MSA migration
  • API v11 sunset
  • SDK updates

 

This webcast will provide you with a clear understanding of upcoming features and the value they can bring to your search advertising campaigns. Along with a view of the Bing Ads API roadmap, we’ll also provide an estimated timeline as to when features will become available, to provide you with the building blocks needed to develop the solution that works best for you.


Publishing existing maven artifacts to VS Package Management


On a recent customer engagement, we hit a situation where the customer had some artifacts that were available locally (obtained through a script), but these were not available in a Maven repo. When we tried to build the solution via a build server, it failed.

We set up a Maven-based repository using VSTS Package Management as documented here, but the documentation was not clear about how to upload an existing POM/JAR artifact.

After some research, and help from colleagues, https://maven.apache.org/guides/mini/guide-3rd-party-jars-remote.html provided the hints we needed.

For POC purposes I tried to deploy JTidy to VSTS Package Management – and this command did it.


mvn deploy:deploy-file -DpomFile=C:\jtidy\jtidy-4aug2000r7-dev.pom -Dfile=C:\jtidy\jtidy-4aug2000r7-dev.jar -DrepositoryId=vstsinstance-visualstudio.com-mavenfeed -Durl=https://vstsinstance.pkgs.visualstudio.com/_packaging/MavenFeed/maven/v1


You obviously need to change the values (e.g. vstsinstance), and need to ensure you follow the initial instructions to set up the Maven feed, as well as the proper settings.xml file.
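For reference, the server entry in settings.xml looks roughly like the following sketch; the id must match the -DrepositoryId value above, and the password is a VSTS personal access token (PAT):

<settings>
  <servers>
    <server>
      <id>vstsinstance-visualstudio.com-mavenfeed</id>
      <username>vstsinstance</username>
      <password>[PERSONAL_ACCESS_TOKEN]</password>
    </server>
  </servers>
</settings>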

Hope this helps,

Ahmed

Top stories from the VSTS community – 2018.05.18


Here are top stories we found in our streams this week related to DevOps, VSTS, TFS and other interesting topics, listed in no specific order:

TOP STORIES

VIDEOS

TIP: If you want to get your VSTS news in audio form, then be sure to subscribe to RadioTFS.

Here are some ways to connect with us:

  • Add a comment below
  • Use the #VSTS hashtag if you have articles you would like to see included

How to seamlessly migrate MySQL and PostgreSQL apps to Azure Database for MySQL and PostgreSQL with minimum downtime


At the recent Microsoft Build 2018 conference, we announced that one of the key themes in the Data Modernization pillar is to showcase how easy it is to migrate existing apps to the cloud. Migrating your existing infrastructure, platform, services, databases, and applications to the cloud is much easier and more seamless today with the tools and guidance that Microsoft provides to help you migrate and modernize your applications.

To this end, you should definitely log in to the Microsoft Build Live site and take a look at the videos associated with the Playlist: Migrate existing apps to the cloud, which contains a great demo and sessions that discuss migrating applications and databases to the cloud.

One of the tools that Microsoft provides to our customers for migrating different database engines (including SQL Server, Oracle, MySQL, and PostgreSQL) to the cloud is Azure Database Migration Service (DMS).  One cool feature that we introduced for MySQL and PostgreSQL migrations is the continuous sync capability, which limits the amount of downtime incurred by the application. DMS performs an initial load of your on-premises database to Azure Database for MySQL or Azure Database for PostgreSQL, and afterward continuously syncs any new transactions to Azure while the application remains running.

When the data catches up on the target Azure side, you stop the application for a brief moment (the minimum downtime), wait for the last batch of data (generated between the time you stop the application and the moment it is effectively unavailable to take new traffic) to catch up on the target, and then simply update your connection string to point to Azure. There you have it: your application is now live on Azure!

We delivered two great demos on this capability at Build 2018.

Corey Sanders, CVP of Azure Compute, delivered a session, App Modernization with Microsoft Azure, with a demo (from 12:58 to 17:30) showing how fast and easy it is to migrate a PostgreSQL application to Azure Database for PostgreSQL.

In addition, I delivered a session, Easily migrate MySQL/PostgreSQL apps to Azure managed service, with a demo showing how to migrate MySQL apps to Azure Database for MySQL.

DMS migration of MySQL and PostgreSQL sources is currently in preview. If you would like to try out the service to migrate your MySQL or PostgreSQL workloads, please sign up at https://aka.ms/dms-preview. We would love to have your feedback to help us further improve the service.

Thanks in advance!

Shau Phang
Senior Program Manager
Microsoft Database Migration Team

DevOps and Appification


In the following post, Premier Developer Consultant Ron Vincent explains how adopting DevOps principles can help organizations be more productive and aid in maintaining many apps.


These days the typical enterprise organization has dozens and even hundreds of apps that it creates and maintains. In this article we’ll discuss how we can apply DevOps principles and practices to such a large portfolio of apps.

The typical enterprise organization has a number of apps to aid employees, customers, suppliers, etc. There are apps for executives, middle management, and everyone else to do their everyday jobs.

You can read more of Ron’s post here.

How ACR Build Id’s are generated


As you use ACR Build for OS and framework patching, native container builds, or validating a docker build without having the docker client installed, you may be wondering what the format is for those alphanumeric build ids, such as aa12.

The short answer: it's based on the region the build was executed in, plus a base 32 alphanumeric sequence, providing independent execution with multi-region fault tolerance.

The format is: [2 digit, base 32 region code][base 32 sequence]

For some background on why we chose this solution...

Accepting failure as reality

As we build ACR and its associated services, we accept the premise that "failure is reality". One of the base foundations of container orchestrators is to accept failure as reality at the infrastructure level. It's not that we don't strive to make ACR and all our container services reliable. But attempting to make a service n-nines reliable is a costly investment. Rather than spend 90% of our effort getting the last n% of reliability, we invest in recovering from and accepting failure, and providing more capabilities. The end result is you achieve n nines, but you do so by accepting that any number of elements may individually fail, and we avoid single points of failure.

ACR Build, Independent Execution, Multi-Region Reliability

ACR supports geo-replicated registries, which means a single registry can span multiple regions. ACR supports multi-master changes, so any push, delete, or metadata update to any region is eventually consistent. Meaning, there's no guarantee of the ordering of each region's replicated arrival. However, all replicated regions will eventually have all the images and metadata. There's no single master controller. This means any registry can go down, and the rest will continue to operate. As a practice, we have no single points of failure, other than the big rock we all reside on. We have planetary replication, to replicate to Mars and other destinations, on our backlog, so the sky is not the limit.

Each replicated registry should be able to build, on its own, and become eventually consistent.

Balancing Needs for a Unique ID

With a little background, we can see why we didn't just use a Count Dracula approach of 1, 2, 3... We then had to decide: how can we generate a unique id?

We came up with a few guiding principles.

  1. It must be unique, as we want to support Best practices for tagging and versioning docker images
  2. It must be short enough to remember as you tell someone across the room, or type into another device while looking at another screen
  3. It must be easy for a human to read, and not get confused between 1 and l, 0 and o, or 2 and z
  4. It would be nice if there was some sense of sequential ordering, although it wasn't critical to use every digit
  5. It didn't have to be globally unique. It could be unique to a specific registry, meaning myregistry.azurecr.io and yourregistry.azurecr.io could have the same build id. Making it globally unique would conflict with principles 2-4
  6. We could seed each region with its own pre-qualifier, so each geo-replicated region of a registry could be sequential

We considered prefacing the region-specific unique number with the region id, e.g. eastus-1234. Although the first private preview used this, it seemed long and could be confusing when you consider that ACR Build can load balance builds across geo-replicated regions. By using generic characters, there's no suggestion that the preface characters really mean a specific region.

Where we landed:

  • To keep it short and human readable, we went with a base 32 alphanumeric sequence (a small encoding sketch follows this list). All digits will be lower case, including the numbers (what's the point of doing this if we can't laugh at the details): 0123456789abcdefghjkmnpqrstuvwxy
  • Each region will get a sequential two digit preface. Rather than do an elaborate zip code model, we'll take a simplistic agile approach: every region we add will simply get the next sequential two digit seed.
    An insider secret: you'll know the order in which we rolled out build regions by their preface characters.
Region            Seed
East US           aa
West Europe       ab
The next region   ac
  • Each region will track its next id.
  • If a build fails, the build id is associated with the failed build, and the next sequential number is used for the next build.
  • Quick builds (az acr build -t .) will also get sequential build ids.
  • Build ids are assigned as the build is queued. A build queued at 1:00 pm that takes 2 minutes will have a lower id than a build queued at 1:01 pm that takes only 30 seconds.
  • The preface characters are not tied to the closest registry from where the command is executed. As customers adopt base image updates, we expect huge bursts of automated builds as popular OS and run times update. As we scale ACR Build, we will stage the events, globally, just as Windows Update does today. If a region is at capacity, we may utilize any geo-replicated regions of a registry to load balance builds. If a registry is replicated between East US and West Europe, a build initiated on the east coast, may be bounced to West Europe based on capacity. ACR Geo-replication will kick in, and the Build ID will start with ab, the seed from West Europe.
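To make the encoding concrete, here is a small C# sketch of how a sequential number could map onto that 32-character alphabet behind a region seed. The seed and sequence values are illustrative; this is not the production implementation:

using System;
using System.Text;

class BuildIdSketch
{
    // The 32-character alphabet from above: no i, l, o, or z.
    const string Alphabet = "0123456789abcdefghjkmnpqrstuvwxy";

    static string Encode(long sequence)
    {
        // Standard positional encoding, most significant digit first.
        var sb = new StringBuilder();
        do
        {
            sb.Insert(0, Alphabet[(int)(sequence % 32)]);
            sequence /= 32;
        } while (sequence > 0);
        return sb.ToString();
    }

    static void Main()
    {
        // "aa" is the East US seed; append the base 32 sequence number.
        Console.WriteLine("aa" + Encode(42)); // prints aa1a
    }
}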

We're excited to participate in building reliable systems, based on the reality that failure is a thing. We also strive to build systems that are easy to maintain, as all too often failures are triggered by the complexity of simple things: quick-6-six-unit-conversion-disasters

Steve

Debugging Beyond Visual Studio – WinDbg


In this post, Sr. App Dev Managers Al Mata, Candice Lai, and Syed Mehdi give a walkthrough of WinDbg.


You’re likely a developer and have used a code editor to debug and analyze your application failures. Few developers know or understand the “old school” way of troubleshooting to uncover additional details; enter the WinDbg debugger.

WinDbg is a general-purpose debugger for Windows operating system applications and code. It helps developers find and resolve errors in their applications, memory, system, and drivers, to name a few. This article introduces you to the WinDbg debugging concept and tool.



Getting started with WinDbg:

1. Download the Debugging Tools for Windows from the Microsoft website

We recommend you install WinDbg Preview, as it offers more modern visuals, faster windows, and a full-fledged scripting experience, built with the extensible debugger data model front and center.


2. When clicking Download from the Microsoft Store, a prompt will appear, select “Get”


3. Windows will start the download and installation process. A prompt will confirm installation status.


4. Select “Pin to Start,” then close the window by clicking “X” at the top right.


5. Set the Windows Symbol Server path in File > Settings > Symbol path (see example below)

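A common symbol path value, assuming you want the public Microsoft symbol server with a local cache at C:\symbols, is:

srv*C:\symbols*https://msdl.microsoft.com/download/symbols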

6. Go to your Start menu, select the WinDbg Preview to launch the application

7. The WinDbg initial view



8. What is the difference between User Mode-Debugging and Kernel-Mode Debugging?

In User mode debugging, the code normally delegates to the system APIs to access hardware or memory. You typically are debugging a single executable, which is separated from other executables by the OS. A typical scenario is isolating memory or application hang issues in Win32 desktop applications. In User mode, the debugger runs on the same system as the code being debugged.

In Kernel mode debugging, the code normally has unrestricted access to the hardware. A typical scenario is driver code developed for hardware devices. When debugging in Kernel mode you typically use two different systems: one system runs the code that is being debugged, and another runs the debugger, usually connected with a cable. Click here for additional information on Kernel mode debugging.


9. Advantages of WinDbg:

  • Extensive numbers of commands and extensions.
  • A useful tool to help understand OS and software running on the system being debugged.
  • Lightweight and can be used in production, as it has no dependencies and only requires an executable (.exe) to run.
  • A useful tool to help isolate User or Kernel mode code that's difficult to troubleshoot on Windows.


10. Common User mode debugging commands:

.hh (Open WinDbg’s help)


vertarget (Get the version of the target computer)


.sympath (Display or set symbol search path)


version (Dump version info of debugger and loaded extension DLLs)


!ext.help (General extensions)


!analyze -v (Display information about the current exception or bug check; verbose)



11. Common Kernel mode debugging commands:

!analyze


!error (plus error code, e.g. “!error c0000005”)



12. Useful links:

Debugging Using WinDbg Preview:

https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/debugging-using-windbg-preview

Getting Started with WinDbg Microsoft Docs:

https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/getting-started-with-windbg

Common WinDbg Commands:

http://windbg.info/doc/1-common-cmds.html

Elementary User-Mode Debugging:

https://microsoft.sharepoint.com/teams/bidpwiki/Pages1/Elementary%20User-Mode%20Debug.aspx


Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

Activate Free Azure Cloud Credits – Visual Studio Dev Essentials


In today’s article, we talk about Visual Studio (VS). VS is not only a tool (well, it was once a tool); now it’s a complete suite of things. As the suite goes, there is Visual Studio Dev Essentials. These are the essentials of what a dev needs to get started: the cloud, the tools, and the training on how to use them. In other terms, Azure credits, useful tools, and trainings. It is all available for free.

Free? Really?

Yeah. Really* 😉 (* = If you need unrestricted access, more powerful tools, and every training Microsoft has to offer, then you’ll need a more powerful subscription, namely an MSDN subscription or VS that comes with MSDN.)

What am I signing up for?

Well, $25 (USD) of Azure credits allowing you to subscribe to Azure services (check out my article here), tools (like Parallels for Mac, Windows tools, Power BI visualization tools), Pluralsight training, and Xamarin University training (the select mobile portion). If you’re not looking for ninja Level 400 trainings, the MVA training that comes with VS Dev Essentials will already be sufficient.

How do I activate the cloud?

Step 1: Go to https://www.visualstudio.com/dev-essentials/ and join the program.

Step 2: Sign in to your Microsoft account (if you don’t have one, create one here – you can use Gmail or your own domain as well)

Step 3: Wait for the subscription to finish provisioning and accept the terms to begin. After you’re done, you’ll be able to see the dashboard.

Step 4: Click Activate Azure.

Step 5: Fill in the form.

Step 6: A couple of tips as you fill in the form.

1. The “About you” part

Make sure you put the correct country; otherwise you will have problems verifying your bank card. Work phone can be your mobile number. You must include the country code (e.g. +65, +60, +44).

2. The “Identification by phone” part

The verification code sent to your phone may take a bit of time. If after 10 minutes you are still stuck at this step, use your friend’s mobile number. This number is just used to receive a code to activate the subscription.

3. The “Payment information” part

Best is to use a credit card; some debit cards work too. Your card must be tied to an active bank account. Microsoft WILL NOT charge you anything at the end of the day; however, a 1 USD charge is made to make sure the card is a legit one. The refund will take place in one to a few days depending on the bank. Microsoft will not charge you if you exceed $25; all services will stop, and service will resume the next month. What happens if you want your services to stay up and running? Enable Pay-As-You-Go upon credit depletion.

4. Agreement

Agree with the privacy statement. If you don’t want to receive mails from Microsoft, don’t check the second box.

Step 7: Click Purchase and wait for your account to be provisioned. When it’s done, click on ‘Get Started with your Azure Subscription’.

Step 8: Done! Now you’re at portal.azure.com – your candy store where you can provision Microsoft cloud services. Check out my article on what you can do 😊

 

Final notes:

To confirm the subscription is correctly provisioned, check your Billing. It should say ‘Developer Benefits’.

Welcome to the family 😊 xx


Activate Microsoft Free $200 Credit – Your gateway to Azure cloud


How do I activate the cloud?

Step 1: Go to https://azure.microsoft.com/en-us/offers/ms-azr-0044p/.

Step 2: Sign in to your Microsoft account (if you don’t have one, create one here – you can use Gmail or your own domain as well)

Step 3: Fill in the form as shown. Make sure you put the correct country; otherwise you will have problems verifying your bank card. Work phone can be your mobile number. You must include the country code (e.g. +65, +60, +44).

Step 4: Verify with a phone number. You can have Microsoft either call or text you. The verification code sent to your phone may take a bit of time. If after 10 minutes you are still stuck at this step, use another mobile number, maybe your friend’s. This number is just used to receive a code to activate the subscription.

Step 5: Verify yourself as a real person with a bank card. Best is to use a credit card; some debit cards work too. Your card must be tied to an active bank account. Microsoft WILL NOT charge you anything at the end of the day; however, a 1 USD charge is made to make sure the card is a legit one. The refund will take place in one to a few days depending on the bank. Microsoft will not charge you beyond your trial $200; all services will simply stop. What happens if you want your services to stay up and running? Enable Pay-As-You-Go upon credit depletion.

*Please note that if you enable Pay-As-You-Go from the start, you will be charged until you state that you no longer want the services. Services will continue and will NOT be taken down.

 

Step 6: Agreement. Click “I agree” whenever you are ready.

Step 7: Click Purchase and wait for your account to be provisioned. When it’s done, click on ‘Get Started with your Azure Subscription’.

Step 8: Done! Now you’re at portal.azure.com – your candy store where you can provision Microsoft cloud services. Check out my article on what you can do 😊

https://blogs.msdn.microsoft.com/cbtham/2016/12/21/what-is-microsoft-azure/ 

Final notes:

To confirm the subscription is correctly provisioned, check your Billing. It should say ‘Developer Benefits’.

Welcome to the family 😊 xx

Assigning Work Items to a Group in Visual Studio Team Services


In this post, Sr. Application Development Managers Mark Meadows and Everett Yang demonstrate how to extend VSTS Project Templates to allow Group assignments to Work Items.


ISSUE

Recently a customer asked whether they can assign work items to a group of users, instead of a single user. By default, Visual Studio Team Services (VSTS) allows a Work Item to be assigned to an individual user, not a group of users. In most cases, we would break Work Items down so they can be assigned to a single user. For this customer, however, they wanted the option to assign certain User Stories to a group and build queries and notifications based on group assignment.


WORKAROUND

As there is no direct support for assigning a Work Item to a group of users in VSTS, this is the workaround that the customer eventually adopted. We essentially created a field to which a group can be assigned.

We first created a new process using process inheritance, for example Agile -> blogTestAgile.


Once you have the inherited process, you can add a new field that allows group identities to be assigned. In the customer’s case, we added a new field for User Story with “Allow assigning to groups” enabled.


The new process can be applied to an existing project that used the same parent process. For example, blogTestAgile could be applied to a project using the existing Agile process. Once the new process is applied, you have an additional field that can be queried based on group assignment.


In this customer’s scenario, they were able to create new Notification Subscriptions for assignment to a group as well as perform queries based on group assignment.
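For reference, such a query can also be expressed in WIQL. A minimal sketch, where the custom field's reference name (Custom.AssignedGroup) and the project/group names are illustrative stand-ins for whatever you configured:

SELECT [System.Id], [System.Title]
FROM WorkItems
WHERE [Custom.AssignedGroup] = '[MyProject]\Database Team'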



Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

Windows Server Performance Tuning Guidelines



 

OS Documents

Windows Server 2016: Performance Tuning Guidelines for Windows Server 2016
https://docs.microsoft.com/en-us/windows-server/administration/performance-tuning/

Windows Server 2012 R2: Performance Tuning Guidelines for Windows Server 2012 R2
https://www.microsoft.com/en-us/download/details.aspx?id=51960
https://msdn.microsoft.com/en-us/library/windows/hardware/dn529133

Windows Server 2012: Performance Tuning Guidelines for Windows Server 2012
http://download.microsoft.com/download/0/0/B/00BE76AF-D340-4759-8ECD-C80BC53B6231/performance-tuning-guidelines-windows-server-2012.docx

Windows Server 2008 R2: Performance Tuning Guidelines for Windows Server 2008 R2
http://download.microsoft.com/download/6/B/2/6B2EBD3A-302E-4553-AC00-9885BBF31E21/Perf-tun-srv-R2.docx

Windows Server 2008: Performance Tuning Guidelines for Windows Server 2008
http://download.microsoft.com/download/9/c/5/9c5b2167-8017-4bae-9fde-d599bac8184a/Perf-tun-srv.docx

 

Performance Tuning Guidelines for previous versions of Windows Server
https://msdn.microsoft.com/en-us/library/windows/hardware/dn529134

 

SQL Server on Linux: Quick Performance Monitoring


I have been asked several times about how to get a Performance Monitor-like view on Linux. There are lots of Linux tools available (top, iotop, Grafana, and SQL Sentry just scratch the surface of available options) to monitor a Linux system. Allow me to share one such example of how to capture and monitor a system.

Performance Co-Pilot

pmchart can be used much like you use Performance Monitor. You can select metrics to capture, record, and replay them, both locally and remotely. This only requires the pcp and pcp-gui packages. The web service provides access from Vector and other utilities: http://pcp.io/docs/lab.pmchart.html. The sample scripts in this post have common install package instructions.

Example: pmchart -c Memory -c CPU -c Disk -c Overview -c Paging -t 5 -h MyMachine

Screenshots from CentOS and Ubuntu VMs running on my MacBook:


pmrep - convert to CSV or other formats: http://pcp.io/man/man1/pmrep.1.html
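For example, the following sketch (metric names as used later in this post) records twelve 5-second samples of CPU and memory metrics to a CSV file:

pmrep -o csv -F metrics.csv -t 5 -s 12 kernel.all.cpu.user kernel.all.cpu.sys mem.util.free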

Vector (Remote Visualization via Browser)

Getting Started: Using Performance Co-Pilot and Vector for Browser Based Metric Visualizations: https://rhelblog.redhat.com/2015/12/18/getting-started-using-performance-co-pilot-and-vector-for-browser-based-metric-visualizations/

Vector

The following installs the web server and Vector (a static HTML app) if you want just local monitoring.

Ubuntu

apt-get install nginx
cd /var/www/html
wget https://dl.bintray.com/netflixoss/downloads/1.1.0/vector.tar.gz
tar xvzf vector.tar.gz

Connect to http://localhost:80

Centos/RHEL

yum install pcp-webapi pcp-webjs pcp-gui
chkconfig pmwebd on; service pmwebd start
firefox http://localhost:44323/ &

Reference: http://pcp.io/download.html

Is PCP Running?

service pcp status
service pmcd status
service pmlogger status
service pmproxy status
service pmwebd status
netstat -ltp | grep "/p"

tcp 0 0 *:4330 *:* LISTEN 32018/pmlogger
tcp 0 0 *:44321 *:* LISTEN 28269/pmcd
tcp 0 0 *:44322 *:* LISTEN 32199/pmproxy
tcp 0 0 *:44323 *:* LISTEN 24370/pmwebd
tcp6 0 0 [::]:4330 [::]:* LISTEN 32018/pmlogger
tcp6 0 0 [::]:44323 [::]:* LISTEN 24370/pmwebd

Sample Bash Capture Script and GUI Viewing

#======================================================
# Capture performance counters in a log (archive) using pcp
#  $1 = Minutes to capture
#  $2 = HostName
#  $3 = Replay archived data with pmchart - 0 or 1
#  $4 = Watch live data with pmchart - 0 or 1
#
#     Example   ./pcpcapture.sh 10 MyMachine 1 1
#
# Ubuntu Install
# ==============
#    sudo su
#    apt-get update
#    apt-get -y update
#    curl --silent 'https://bintray.com/user/downloadSubjectPublicKey?username=pcp' | sudo apt-key add -
#    echo "deb https://dl.bintray.com/pcp/xenial xenial main" | sudo tee -a /etc/apt/sources.list
#    apt-get update
#    apt-get install pcp pcp-webapi
#    service pcp start
#    service pmcd start
#    service pmlogger start
#    service pmproxy start
#    service pmwebd start
# netstat -ltp | grep "/p"
#
# Centos Install
# ==============
#    yum install pcp pcp-gui
#    service pmcd start
#    service pmlogger start
# service pmie start
#
#======================================================
now=$(date +"%m_%d_%Y_%H_%M_%S")
tmpdir=/tmp/pcpcapture/$now
folio=$tmpdir/$now.folio
view=$tmpdir/$now.view
config=$tmpdir/$now.config
rm -rf /tmp/pcpcapture
mkdir -p $tmpdir

#======================================================
# Create Config file
#
echo '#pmlogger Version 1' > $config
echo >> $config
echo 'log mandatory on default {' >> $config
echo '    kernel.all.cpu.user' >> $config
echo '    kernel.all.cpu.sys' >> $config
echo '    kernel.all.cpu.nice' >> $config
echo '    kernel.all.cpu.intr' >> $config
echo '    kernel.all.cpu.wait.total' >> $config
echo '    kernel.all.cpu.steal' >> $config
echo '    kernel.all.cpu.idle' >> $config
echo '}' >> $config
echo 'log mandatory on once {' >> $config
echo '    hinv.ncpu' >> $config
echo '}' >> $config
echo 'log mandatory on default {' >> $config
echo '    kernel.all.load [ "1 minute" ]' >> $config
echo '}' >> $config
echo 'log mandatory on default {' >> $config
echo '    disk.all.read' >> $config
echo '    disk.all.write' >> $config
echo '}' >> $config
echo 'log mandatory on default {' >> $config
echo '    network.interface.in.bytes [ "eth2" ]' >> $config
echo '    network.interface.in.bytes [ "eth1" ]' >> $config
echo '    network.interface.in.bytes [ "docker0" ]' >> $config
echo '    network.interface.in.bytes [ "eth0" ]' >> $config
echo '    network.interface.out.bytes [ "eth2" ]' >> $config
echo '    network.interface.out.bytes [ "eth1" ]' >> $config
echo '    network.interface.out.bytes [ "docker0" ]' >> $config
echo '    network.interface.out.bytes [ "eth0" ]' >> $config
echo '}' >> $config
echo 'log mandatory on default {' >> $config
echo '    mem.util.cached' >> $config
echo '    mem.util.bufmem' >> $config
echo '    mem.util.other' >> $config
echo '    mem.util.free' >> $config
echo '}' >> $config
echo 'log mandatory on default {' >> $config
echo '    mem.util.cached' >> $config
echo '    mem.util.bufmem' >> $config
echo '    mem.util.other' >> $config
echo '    mem.util.free' >> $config
echo '}' >> $config
echo 'log mandatory on default {' >> $config
echo '    kernel.all.cpu.user' >> $config
echo '    kernel.all.cpu.sys' >> $config
echo '    kernel.all.cpu.nice' >> $config
echo '    kernel.all.cpu.intr' >> $config
echo '    kernel.all.cpu.wait.total' >> $config
echo '    kernel.all.cpu.steal' >> $config
echo '    kernel.all.cpu.idle' >> $config
echo '}' >> $config
echo 'log mandatory on default {' >> $config
echo '    disk.all.read' >> $config
echo '    disk.all.write' >> $config
echo '}' >> $config
echo 'log mandatory on default {' >> $config
echo '    swap.pagesout' >> $config
echo '    swap.pagesin' >> $config
echo '}' >> $config

#======================================================
# Create Folio file
#
echo PCPFolio > $folio
echo Created: $now >> $folio
echo Creator: pmchart $view >> $folio
echo Version: 1 >> $folio
echo Archive:    $2    $tmpdir/$now >> $folio

#======================================================
# Create View file
#
echo '#kmchart' > $view
echo version 1 >> $view
echo global width 1300 >> $view
echo global height 600 >> $view
echo global points 60 >> $view
echo global xpos 95 >> $view
echo global ypos 45 >> $view
echo >> $view
echo 'chart title "CPU Utilization [%h]" style utilization' >> $view
echo '    plot legend "User" color #2d2de2 metric kernel.all.cpu.user' >> $view
echo '    plot legend "Sys" color #e71717 metric kernel.all.cpu.sys' >> $view
echo '    plot legend "Nice" color #c2f3c2 metric kernel.all.cpu.nice' >> $view
echo '    plot legend "Intr" color #cdcd00 metric kernel.all.cpu.intr' >> $view
echo '    plot legend "Wait" color #00cdcd metric kernel.all.cpu.wait.total' >> $view
echo '    plot legend "Steal" color #fba2f5 metric kernel.all.cpu.steal' >> $view
echo '    plot legend "Idle" color #16d816 metric kernel.all.cpu.idle' >> $view
echo 'chart title "Average Load [%h]" style plot antialiasing off' >> $view
echo '    plot legend "1 min" color #ffff00 metric kernel.all.load instance "1 minute"' >> $view
echo '    plot legend "# cpus" color #0000ff metric hinv.ncpu' >> $view
echo 'chart title "IOPS over all Disks [%h]" style stacking' >> $view
echo '    plot legend "Reads" color #ffff00 metric disk.all.read' >> $view
echo '    plot legend "Writes" color #ee82ee metric disk.all.write' >> $view
echo 'chart title "Network Interface Bytes [%h]" style stacking' >> $view
echo '    plot legend "in %i" color #ffff00 metric network.interface.in.bytes instance "eth2"' >> $view
echo '    plot legend "in %i" color #0000ff metric network.interface.in.bytes instance "eth1"' >> $view
echo '    plot legend "in %i" color #ff0000 metric network.interface.in.bytes instance "docker0"' >> $view
echo '    plot legend "in %i" color #008000 metric network.interface.in.bytes instance "eth0"' >> $view
echo '    plot legend "out %i" color #ee82ee metric network.interface.out.bytes instance "eth2"' >> $view
echo '    plot legend "out %i" color #aa5500 metric network.interface.out.bytes instance "eth1"' >> $view
echo '    plot legend "out %i" color #666666 metric network.interface.out.bytes instance "docker0"' >> $view
echo '    plot legend "out %i" color #aaff00 metric network.interface.out.bytes instance "eth0"' >> $view
echo 'chart title "Real Memory Usage [%h]" style stacking' >> $view
echo '    plot color #9cffab metric mem.util.cached' >> $view
echo '    plot color #fe68ad metric mem.util.bufmem' >> $view
echo '    plot color #ffae2c metric mem.util.other' >> $view
echo '    plot color #00ff00 metric mem.util.free' >> $view
echo 'chart title "Real Memory Usage [%h]" style stacking' >> $view
echo '    plot color #9cffab metric mem.util.cached' >> $view
echo '    plot color #fe68ad metric mem.util.bufmem' >> $view
echo '    plot color #ffae2c metric mem.util.other' >> $view
echo '    plot color #00ff00 metric mem.util.free' >> $view
echo 'chart title "CPU Utilization [%h]" style utilization' >> $view
echo '    plot legend "User" color #2d2de2 metric kernel.all.cpu.user' >> $view
echo '    plot legend "Kernel" color #e71717 metric kernel.all.cpu.sys' >> $view
echo '    plot legend "Nice" color #c2f3c2 metric kernel.all.cpu.nice' >> $view
echo '    plot legend "Intr" color #cdcd00 metric kernel.all.cpu.intr' >> $view
echo '    plot legend "Wait" color #00cdcd metric kernel.all.cpu.wait.total' >> $view
echo '    plot legend "Steal" color #fba2f5 metric kernel.all.cpu.steal' >> $view
echo '    plot legend "Idle" color #16d816 metric kernel.all.cpu.idle' >> $view
echo 'chart title "IOPS over all Disks [%h]" style stacking' >> $view
echo '    plot legend "Reads" color #ffff00 metric disk.all.read' >> $view
echo '    plot legend "Writes" color #ee82ee metric disk.all.write' >> $view
echo 'chart title "VM Activity - Page Migrations [%h]" style plot' >> $view
echo '    plot legend "Out" color #ffff00 metric swap.pagesout' >> $view
echo '    plot legend "In" color #0000ff metric swap.pagesin' >> $view
 
#======================================================
# Watch live data
#
if [ $4 == 1 ]; then
  echo Launching live data graphs
  pmchart -h $2 -c Overview -c CPU -c Memory -c Disk -c Paging &
fi

#======================================================
# Start the logging on the designated host
#
echo "====================================================="
echo "Logging performance values on host $2 for for $1 minutes"
echo "====================================================="
echo "---------- To check logging status use: pmlc -h $2"
pmlogger -u -T $1mins -r -c $config -h $2 -l $tmpdir/pcplogger.log -t 5.0 $tmpdir/$now

#======================================================
# Replay captured data
#
if [ $3 == 1 ]; then
  echo Launching data replay
  pmafm $folio check
  pmafm $folio list
  pmdumplog -all -d -i -m $tmpdir/$now.0
  pmafm $folio replay &
fi

Sample Bash Capture Script To CSV and TXT

#======================================================
tmpdir=/home/user/pcpcapture
stopFile=$tmpdir/Collect.Stop

if [ "${1}" = "STARTDETACHED" ]; then
    echo "Launching pcpcapture DETACHED with nohup"

    launchPath=`pwd`
    rm -f $launchPath/nohup.out
    rm -f $launchPath/pcpcapture.log
    # Redirect output first, then background the process
    nohup $launchPath/pcpcapture.sh START > $launchPath/pcpcapture.log 2>&1 &

    # Make sure nohup has launched DETACHED
    #
    echo "Launched capture"
    sleep 6
    exit 0
fi

#======================================================
# WaitForExit(name)
#
function WaitForExit()
{
    name=$1

    pid=`ps aux | grep ${name} | grep -v grep | awk '{print $2}'`
    while [ "$pid" != "" ];
    do
        kill -SIGUSR1 $pid
        sleep 1
        pid=`ps aux | grep ${name} | grep -v grep | awk '{print $2}'`
    done
}

#======================================================
# STOP or START
#
if [ "${1}" = "STOP" ]; then
    touch $stopFile
    WaitForExit pmrep
    WaitForExit pmstat
else
    touch $stopFile
    WaitForExit pmrep
    WaitForExit pmstat

    # Remove any previous files
    #
    rm -f $stopFile
    rm -f $tmpdir/.Metrics*
    rm -f $tmpdir/.Stats*

    mkdir -p $tmpdir

    #======================================================
    # Output all counters for import into performance warehouse
    #
    # https://access.redhat.com/articles/2372811
    #
    runtime="-s 20 -t 15s"
    host=`hostname`

    while [ ! -f $stopFile ];
    do
        now=$(date +"%m_%d_%Y_%H_%M_%S")
        echo "$now Starting performance capture"

        counters=`pmprobe | awk 'BEGIN{FS=".";} {print $1}' | sort | uniq | awk '{print}' ORS=' '`
        counters+=" :pidstat :vmstat-w :proc-cpu-ext :proc-io-ext :proc-mem-ext :proc-info-ext"

        echo "Launching: pmrep -x $runtime -k -p --dynamic-header -F $tmpdir/.MetricsCapture.$now.$host.csv -o csv ${counters}"
        nohup pmrep -x $runtime -k -p --dynamic-header -F $tmpdir/.MetricsCapture.$now.$host.csv -o csv ${counters} > $tmpdir/.MetricsCapture.$now.$host.log 2>&1 &

        #======================================================
        #
        # http://man7.org/linux/man-pages/man1/pmstat.1.html
        #
        echo "Launching: pmstat -x $runtime -x > $tmpdir/.StatsCapture.$now.$host.txt"
        pmstat -x $runtime -x > $tmpdir/.StatsCapture.$now.$host.txt

        WaitForExit pmrep

        echo "Renaming files"
        mv $tmpdir/.MetricsCapture.$now.$host.csv $tmpdir/MetricsCapture.$now.$host.csv
        mv $tmpdir/.StatsCapture.$now.$host.txt $tmpdir/StatsCapture.$now.$host.txt
    done

fi

Documentation References

http://www.unix.com/man-page/centos/1/pmlogger/
https://www.systutorials.com/docs/linux/man/1-PCPIntro/
http://menehune.opt.wfu.edu/Kokua/Irix_6.5.21_doc_cd/usr/share/Insight/library/SGI_bookshelves/SGI_Admin/books/PCP_IRIX/sgi_html/ch06.html
https://hostpresto.com/community/tutorials/how-to-monitor-your-server-performance-with-pcp-and-vector-on-ubuntu-14-04/
https://groups.google.com/forum/#!topic/vector-users/aFZn7BzxMrU
https://medium.com/netflix-techblog/introducing-vector-netflixs-on-host-performance-monitoring-tool-c0d3058c3f6f
http://vectoross.io/

Bob Dorr - Principal Software Engineer SQL Server

Getting Started with DevOps and Continuous Delivery of Value


In this post, Application Development Manager Wesam Darwish highlights some important information to help customers learn and adopt DevOps.


DevOps: a buzzword, or an exciting transformation journey? As organizations strive to deliver value to their end users faster, improve quality and availability, optimize costs, and deliver innovation with digital-era velocity, they face numerous challenges. These challenges prevent flexible and agile creation of business value.

With multiple industry definitions of DevOps, the definition we adopt at Microsoft is the following: DevOps is the union of people, process, and products to enable continuous delivery of value to our end users. DevOps emphasizes the importance of communication and collaboration between various roles within the organization, including software developers, IT professionals, quality assurance specialists, security teams, as well as the business stakeholders.

At Microsoft, our Premier Support for Developers organization is skilled at supporting teams through successful DevOps adoption. Recently, we delivered a few exciting day-long DevOps sessions across Canada as part of Microsoft’s Modern IT Roadshow to give our Premier Support customers a flavor of what continuous delivery of value looks like, help their teams learn about the various pillars and principles of DevOps, and show how high-performing IT organizations focus on people, processes and tools to enable a successful DevOps journey.

There is no shortage of resources on DevOps, but we share with our customers key resources to help them get started.



Our DevOps Journey

Take a look behind the scenes as teams in the Microsoft Cloud + Enterprise engineering division discuss how and why they implemented DevOps practices and methodologies to ship more quickly and deliver more reliable services that customers enjoy.





What is DevOps?

A great description of the key DevOps concepts, from the basics to monitoring in production.




DevOps and Microsoft

Access various information on the DevOps tools you get with Azure. Use our built-in tools and bring your favorites. Access QuickStart tutorials to start your DevOps project in a few clicks.




DevOps at Microsoft on YouTube

DevOps at Microsoft: the Microsoft YouTube channel for videos related to the various DevOps products including those such as Visual Studio Team Services (VSTS), App Center, Application Insights and others.




DevOps Self-Assessment

Take a self-assessment questionnaire to gain more insights into your organization’s current DevOps maturity.




Microsoft Visual Studio DevOps Hands-On-Labs

A set of self-paced labs based on Visual Studio Team Foundation Server and Visual Studio Team Services. These labs help you evaluate your next DevOps toolchain, and help you learn how to implement modern DevOps practices with Visual Studio, Team Services and Azure.




Visual Studio Team Services Demo Generator

Helps you create projects on your Visual Studio Team Services account with preset sample content which includes source code, work items, iterations, service endpoints, build and release definitions based on a template you choose. Use it with the Microsoft Visual Studio DevOps Hands-On-Labs!



Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.