
Azure PowerShell DSC Extension v1.10 released


NOTE: You can find more information on the DSC Extension in our release history.

Today we released a minor update to the Azure DSC Extension: version 1.10.1.0.

This update addresses a couple of issues that were producing false error messages on some ARM deployments.

Please feel free to give us your feedback as comments to this post, or using Connect: https://connect.microsoft.com/PowerShell/Feedback.


[Sample Of May. 16] How to Detect the Web Browser Close Event in ASP.NET

May. 16 Sample : https://code.msdn.microsoft.com//How-to-Detect-the-Web-737ef524 As we know, HTTP is a stateless protocol, so the browser doesn't keep a connection to the server open. Users can close the browser with Alt-F4, the close (X) button, or by right-clicking the browser and choosing close, but none of these methods can tell the server that the browser has been closed. The sample demonstrates how to detect the browser close event. It includes two parts...(read more)

Identify memory leak for a process hosting ASMX/WCF service


Problem statement

A number of ASMX web services are hosted in an IIS application pool (.NET 4.0, 64-bit) on Windows Server 2008 R2 SP1. The private bytes usage for the application pool is observed to climb from 3 GB to 4 GB in around 1 hour.

Why are there so many allocations in such a short span of time?

 

How to troubleshoot

Since the application is hosted on IIS, the process of interest is w3wp.exe. We treat this as normal .NET process troubleshooting for a case where memory consumption is high.

The action plan is to enable Perfmon counters for the following (a small sampler sketch follows the list):

  • .NET CLR Memory/(*) 
  • Process/(*)
  • .Net CLR Loading/(*)
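If you want to watch the two key counters programmatically in addition to Perfmon, here is a minimal C# sketch (not from the original case; the instance name "w3wp" is an assumption, and on a server with multiple application pools the instances are named w3wp, w3wp#1, and so on):

using System;
using System.Diagnostics;
using System.Threading;

class CounterSampler
{
    static void Main()
    {
        var privateBytes = new PerformanceCounter("Process", "Private Bytes", "w3wp");
        var heapBytes = new PerformanceCounter(".NET CLR Memory", "# Bytes in all Heaps", "w3wp");

        while (true)
        {
            // A steadily growing gap between these two values points at a native leak.
            Console.WriteLine("Private Bytes: {0:N0}   # Bytes in all Heaps: {1:N0}",
                privateBytes.NextValue(), heapBytes.NextValue());
            Thread.Sleep(5000); // sample every 5 seconds
        }
    }
}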

How to identify if there is any memory leak? (Review)

  • Remove all counters
  • Select .NET CLR Memory/ # Bytes in all Heaps, Process/ Private Bytes counters for the process

The graph appears like below:


 

Observation (from the sampling)

  • Perfmon data has been collected for around 2 hours.
  • We see a pattern where private bytes for w3wp keeps growing at a steady rate, while the # Bytes in all Heaps graph stays almost flat.
  • This is an indication of a leak in native modules.

 

Note

Private Bytes:  "The current size, in bytes, of memory that this process has allocated that cannot be shared with other processes."

#Bytes in all heaps: "This counter is the sum of four other counters; Gen 0 Heap Size; Gen 1 Heap Size; Gen 2 Heap Size and the Large Object Heap Size. This counter indicates the current memory allocated in bytes on the GC Heaps."

 

 

How to troubleshoot a native memory leak?

We need memory dumps taken with DebugDiag's LeakTrack.dll injected to troubleshoot further.

Steps

  • Restart the w3wp process
  • Make a small number of requests to the application
  • Open the DebugDiag tool (you can get the latest from http://www.microsoft.com/en-us/download/details.aspx?id=42933)
  • Go to the Processes tab
  • Sort the "Process Name" column
  • Find the w3wp (the application pool name is shown to the right, to identify which w3wp process to select)
  • Select the w3wp > Right click > Select "Monitor For leaks"
  • Keep testing your application with a high number of requests
  • Monitor the private bytes in DebugDiag
  • When the size grows to 2.2 GB, right click and take a "Full user Memorydump"
  • Take a second memory dump at 2.6 GB
  • Take a third memory dump at 3.0 GB
  • Once all dumps are collected, right click the process and select "Stop monitoring"

Review

  • Go to DebugDiag analysis
  • Select MemoryAnalysis only
  • Add Data Files > select the first dump file > Start Analysis

 

  • It will generate reports for the dump file.
  • We have to follow the same process for the other 2 memory dumps as well.

 

 

Dump 1

=============================

Number of outstanding allocations = 13,155 allocations (outstanding allocations are allocations that have been made but not yet freed)

Total outstanding handle count = 469 handles

Total size of allocations = 1.68 GBytes

Tracking duration = 01:39:09

 

GdiPlus

Module Name  GdiPlus

Allocation Count  1370 allocation(s)

Allocation Size  1.17 GBytes

 

Top function

GdiPlus!GpMemoryBitmap::AllocBitmapData+c2  with 155 allocations takes 1.16GB of memory space

 

Dump 2

============================

Number of outstanding allocations = 13,150 allocations

Total outstanding handle count = 541 handles

Total size of allocations = 2.14 GBytes

Tracking duration = 02:01:34

 

GdiPlus

Module Name  GdiPlus

Allocation Count  1783 allocation(s)

Allocation Size  1.63 GBytes

 

Top function

GdiPlus!GpMemoryBitmap::AllocBitmapData+c2  with 220 allocations takes 1.62GB of memory space 

 

Dump 3

==============================

Number of outstanding allocations = 12,862 allocations

Total outstanding handle count = 493 handles

Total size of allocations = 2.7 GBytes

Tracking duration = 02:26:05

 

GdiPlus

Module Name  GdiPlus

Allocation Count  2283 allocation(s)

Allocation Size  2.19 GBytes

 

Top function

GdiPlus!GpMemoryBitmap::AllocBitmapData+c2  with 290 allocations takes 2.17GB of memory space

 

In the dump 3 case, the size of the memory dump is 3.16 GB, of which 2.19 GB is GdiPlus module allocations made in around two and a half hours.

Now it is safe to say the overall leak is because of the GdiPlus module, but why?

 

Does the web service utilize any GdiPlus APIs for its web methods?

 

Let us identify.

 

How to?

We need to look at the call stacks in the captured memory dumps.

An example:

0:057> kL

 # Child-SP         RetAddr           Call Site

00 00000000`0b99a320 000007fe`fbbfcb6b GdiPlus!GpRecolorObject::ColorAdjust+0x28d

01 00000000`0b99a360 000007fe`fbbfca82 GdiPlus!GpRecolorOp::Run+0x2b

02 00000000`0b99a390 000007fe`fbbf885b GdiPlus!GpBitmapOps::PushPixelData+0x182

03 00000000`0b99a400 000007fe`fbbf7ac8 GdiPlus!GpMemoryBitmap::PushIntoSink+0x267

04 00000000`0b99a530 000007fe`fbafa21b GdiPlus!GpMemoryBitmap::InitImageBitmap+0x290

05 00000000`0b99a600 000007fe`fbafa006 GdiPlus!CopyOnWriteBitmap::PipeLockBitsFromMemory+0xd3

06 00000000`0b99a690 000007fe`fbaff0fa GdiPlus!CopyOnWriteBitmap::PipeLockBits+0x5ea

07 00000000`0b99a810 000007fe`fbb28c23 GdiPlus!GpBitmap::PipeLockBits+0x6e

08 00000000`0b99a840 000007fe`fbb23f75 GdiPlus!GpGraphics::DrvDrawImage+0x287b

09 00000000`0b99b0b0 000007fe`fbb2360b GdiPlus!GpGraphics::DrawImage+0x86d

0a 00000000`0b99b260 000007fe`fbae2a82 GdiPlus!GpGraphics::DrawImage+0xb7

0b 00000000`0b99b2f0 000007fe`fbae2c48 GdiPlus!GdipDrawImageRectRect+0x362

0c 00000000`0b99b3f0 000007fe`f99117c7 GdiPlus!GdipDrawImageRectRectI+0xfc

0d 00000000`0b99b490 000007fe`dd41d439 clr!DoNDirectCall__PatchGetThreadCall+0x7b

0e 00000000`0b99b580 000007fe`dd416da4 System_Drawing_ni!DomainNeutralILStubClass.IL_STUB_PInvoke(System.Runtime.InteropServices.HandleRef, System.Runtime.InteropServices.HandleRef, Int32, Int32, Int32, Int32, Int32,
Int32, Int32, Int32, Int32, System.Runtime.InteropServices.HandleRef, DrawImageAbort, System.Runtime.InteropServices.HandleRef)+0x179

0f 00000000`0b99b6f0 000007fe`dd4e95f9 System_Drawing_ni!System.Drawing.Graphics.DrawImage(System.Drawing.Image, System.Drawing.Rectangle, Int32, Int32, Int32, Int32, System.Drawing.GraphicsUnit, System.Drawing.Imaging.ImageAttributes, DrawImageAbort, IntPtr)+0x264

10 00000000`0b99b850 000007fe`dd418ce1 System_Drawing_ni!System.Drawing.Graphics.DrawImage(System.Drawing.Image, System.Drawing.Rectangle, Int32, Int32, Int32, Int32, System.Drawing.GraphicsUnit, System.Drawing.Imaging.ImageAttributes, DrawImageAbort)+0x99

12 00000000`0b99b960 000007ff`010148fe MyDrawing!MyDrawing.GdiPlusCanvas.BitmapToMonochrome(System.Drawing.Bitmap)+0x2da

13 00000000`0b99ba50 000007ff`01006b5d MyDrawing!MyDrawing.GdiPlusCanvas.GetImageBytes(System.Drawing.Imaging.ImageFormat, Boolean)+0x6e

14 00000000`0b99bad0 000007ff`01006329 MyDrawing!MyDrawing.BaseForm._GetOutput(Drawing.CanvasBase, FormOutputFormat, Int32, Rotation, OutputMode, PdfSettings, Boolean, Boolean)+0xcd

16 00000000`0b99c410 000007ff`00d90392 MyService_Core!MyService.Controller.Create(MyService.Request, MyService.Core.RequestResponse, Account, Boolean, SoftwareID)+0x7da

18 00000000`0b99c6a0 000007ff`00d7c986 MyService!MyService.GetImagesInternal(System.Object, MyService.Core.Request ByRef)+0x38f

19 00000000`0b99c760 000007ff`00d7c7a7 MyService!MyService.WebServiceBase.ExecuteWebMethod[[System.__Canon, mscorlib]](System.Func`1<System.__Canon>, MyService.WebMethodParams)+0x96

1a 00000000`0b99c800 000007fe`f994c9e4 MyService!MyService.GetCommonImages(System.Object)+0xa7

..

4a 00000000`0b99ee50 000007fe`f9adc736 clr!ThreadpoolMgr::WorkerThreadStart+0x3b

4b 00000000`0b99eef0 00000000`773359ed clr!Thread::intermediateThreadProc+0x7d

4c 00000000`0b99feb0 00000000`7756c541 kernel32!BaseThreadInitThunk+0xd

4d 00000000`0b99fee0 00000000`00000000 ntdll!RtlUserThreadStart+0x1d

 

Observation (from callstack above)

1. We see that the ASMX service's web methods call System.Drawing APIs (the System.Drawing namespace provides access to GDI+ basic graphics functionality).

2. Graphics APIs like System.Drawing are not supported for use within ASP.NET service components. The following link discusses this and the alternatives:

       https://msdn.microsoft.com/en-us/library/system.drawing(v=vs.110).aspx

3. The primary cause of this memory leak is usage of System.Drawing GDI+ APIs in web services.
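The post does not include the service's code, but as a general illustration of why System.Drawing tends to leak native memory in long-running services: GDI+ wrappers such as Bitmap and Graphics hold native allocations that are only released deterministically by Dispose. A minimal, hypothetical sketch of the disposal pattern:

using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

static class ImageHelper
{
    // Hypothetical helper: every GDI+ object is wrapped in 'using' so its
    // native allocation is freed deterministically instead of accumulating
    // until a (possibly distant) finalizer run.
    public static byte[] ToPng(byte[] source)
    {
        using (var input = new MemoryStream(source))
        using (var original = new Bitmap(input))
        using (var copy = new Bitmap(original.Width, original.Height))
        using (var g = Graphics.FromImage(copy))
        using (var output = new MemoryStream())
        {
            g.DrawImage(original, new Rectangle(0, 0, copy.Width, copy.Height));
            copy.Save(output, ImageFormat.Png);
            return output.ToArray();
        }
    }
}

Even with careful disposal, keep in mind the documented caveat above: System.Drawing is not supported inside ASP.NET services, so moving the imaging work out of the web process is the real fix.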

 

General tips

1. If virtual bytes jump but private bytes stay flat

     => A virtual bytes leak, i.e. some component is reserving memory but not using it

     => Use DebugDiag to track it down

2. If private bytes jump but # Bytes in all Heaps stays flat

     => A native or loader heap leak

     => Use DebugDiag to track it down and/or check whether the number of loaded assemblies increases (a counter under .NET CLR Loading)

3. If # Bytes in all Heaps and private bytes follow each other

     => Investigate the .NET GC heap

 

 

Hope this helps!

 

 

Unable to read handle information


 

The easiest way to generate a dump file is using Task Manager. All you have to do is find the correct process, right click on it, and choose the "Create dump file" menu item. When you start working on a dump file, sooner or later you might want to check the handles. In that case you may receive the following response:

0:000> !handle

ERROR: !handle: extension exception 0x80004002.
    "Unable to read handle information"

 

Not all dump files have the handle information of a process. There is a way to see whether the handle data is included or not in the dump file.

0:000> .dumpdebug

----- User Mini Dump Analysis

 

MINIDUMP_HEADER:

Version         A793 (6380)

NumberOfStreams 11

Flags           1826

                0002 MiniDumpWithFullMemory

                0004 MiniDumpWithHandleData

                0020 MiniDumpWithUnloadedModules

                0800 MiniDumpWithFullMemoryInfo

                1000 MiniDumpWithThreadInfo

 

Voila! The dump file should have the handle information. For details of different dump types you can check the following references.

 

MINIDUMP_TYPE enumeration

https://msdn.microsoft.com/en-us/library/windows/desktop/ms680519(v=vs.85).aspx

 

MiniDumpWithHandleData

Include high-level information about the operating system handles that are active when the minidump is made.

 

So far so good. However, the reason for not having the handle information turns out to be an issue with Task Manager. In the current releases of the Windows 10 RC and Windows Server RC editions you can see that Task Manager includes the handle data in the dump files. Until Windows 10 is RTMed and widespread, generate dump files the usual ways (a debugger or a dedicated dump tool) if you are after the handle information.
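If you need dumps with handle data in the meantime, one option is to write the dump yourself. A hedged C# sketch calling dbghelp!MiniDumpWriteDump with the same flags that appear in the .dumpdebug output above (error handling omitted; run it at the same bitness as the target process):

using System;
using System.Diagnostics;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

class DumpWriter
{
    // Flag values match the .dumpdebug output shown earlier.
    [Flags]
    enum MiniDumpType : uint
    {
        WithFullMemory      = 0x0002,
        WithHandleData      = 0x0004,
        WithUnloadedModules = 0x0020,
        WithFullMemoryInfo  = 0x0800,
        WithThreadInfo      = 0x1000,
    }

    [DllImport("dbghelp.dll", SetLastError = true)]
    static extern bool MiniDumpWriteDump(IntPtr hProcess, uint processId,
        SafeFileHandle hFile, MiniDumpType dumpType,
        IntPtr exceptionParam, IntPtr userStreamParam, IntPtr callbackParam);

    // Usage: DumpWriter <pid> <output.dmp>
    static void Main(string[] args)
    {
        var process = Process.GetProcessById(int.Parse(args[0]));
        using (var file = File.Create(args[1]))
        {
            MiniDumpWriteDump(process.Handle, (uint)process.Id, file.SafeFileHandle,
                MiniDumpType.WithFullMemory | MiniDumpType.WithHandleData |
                MiniDumpType.WithUnloadedModules | MiniDumpType.WithFullMemoryInfo |
                MiniDumpType.WithThreadInfo,
                IntPtr.Zero, IntPtr.Zero, IntPtr.Zero);
        }
    }
}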

 

HTH

-faik

Office, Skype for Business (formerly Lync), and Exchange Protocol Documentation Updates


A few weeks ago, we made some updates to the Open Specifications technical documents for Office, Skype for Business (formerly Lync), SharePoint Products and Technologies, and Exchange protocols. For your convenience, we've listed the documents that had changes below.

Highlights for this release:

  • This release contains updates made in relation to the release of Skype for Business and Skype for Business Server. In the majority of cases, this just involved updates to the list of applicable products in each document's Product Behavior appendix; please refer to a document's Change Tracking section for specific details on what was changed for this release.
  • One new Exchange Web Services document, [MS-OXWSITEMID], was added.
  • Two new Skype for Business protocol documents, [MS-CVWREST] and [MS-ECREST], were added.

For those not familiar with the way our Revision Summary/Change Tracking system works: A summary of what was changed in each document can be found in the Revision Summary table that appears at the beginning of each document (before the Introduction). If the revisions to a document included changes that affect the meaning, language, or formatting of the technical content, those changes are detailed in the Change Tracking section that appears just before the Index in each document.

The following documents were updated for this release. In the list below, titles in bold indicate that a document is new. 

Exchange - Release Notes
[MS-OXPROTO]: Exchange Server Protocols System Overview
[MS-OXWSITEMID]: Web Service Item ID Algorithm

Skype for Business (formerly Lync) - Release Notes
[MS-ABS]: Address Book File Structure
[MS-AVEDGEA]: Audio Video Edge Authentication Protocol
[MS-CONFAS]: Centralized Conference Control Protocol: Application Sharing Extensions
[MS-CONFAV]: Centralized Conference Control Protocol: Audio-Video Extensions
[MS-CONFBAS]: Centralized Conference Control Protocol: Basic Architecture and Signaling
[MS-CONFIM]: Centralized Conference Control Protocol: Instant Messaging Extensions
[MS-CONFPRO]: Centralized Conference Control Protocol: Provisioning
[MS-CONMGMT]: Connection Management Protocol
[MS-CVWREST]: Skype for Business Call Via Work REST Protocol
[MS-DLX]: Distribution List Expansion Protocol
[MS-DTMF]: RTP Payload for DTMF Digits, Telephony Tones, and Telephony Signals Extensions
[MS-E911WS]: Web Service for E911 Support Protocol
[MS-ECREST]: Skype for Business Event Channel REST Protocol
[MS-EUMR]: Routing to Exchange Unified Messaging Extensions
[MS-EUMSDP]: Exchange Unified Messaging Session Description Protocol Extension
[MS-H264PF]: RTP Payload Format for H.264 Video Streams Extensions
[MS-ICE]: Interactive Connectivity Establishment (ICE) Extensions
[MS-ICE2]: Interactive Connectivity Establishment (ICE) Extensions 2.0
[MS-ICE2BWM]: Interactive Connectivity Establishment (ICE) 2.0 Bandwidth Management Extensions
[MS-OCAUTHWS]: OC Authentication Web Service Protocol
[MS-OCDISCWS]: Lync Autodiscover Web Service Protocol
[MS-OCER]: Client Error Reporting Protocol
[MS-OCEXUM]: Call Control for Exchange Unified Messaging Protocol Extensions
[MS-OCGCWEB]: Persistent Chat Web Protocol
[MS-OCPSTN]: Session Initiation Protocol (SIP) for PSTN Calls Extensions
[MS-OCSMP]: Microsoft Online Conference Scheduling and Management Protocol
[MS-OCSPROT]: Lync and Lync Server Protocols Overview
[MS-PRES]: Presence Protocol
[MS-PSOM]: PSOM Shared Object Messaging Protocol
[MS-QoE]: Quality of Experience Monitoring Server Protocol
[MS-RGSWS]: Response Group Service Web Service Protocol
[MS-RTASPF]: RTP for Application Sharing Payload Format Extensions
[MS-RTP]: Real-time Transport Protocol (RTP) Extensions
[MS-RTPRADEX]: RTP Payload for Redundant Audio Data Extensions
[MS-RTVPF]: RTP Payload Format for RT Video Streams Extensions
[MS-SDPEXT]: Session Description Protocol (SDP) Version 2.0 Extensions
[MS-SIPAE]: Session Initiation Protocol (SIP) Authentication Extensions
[MS-SIPAPP]: Session Initiation Protocol (SIP) Application Protocol
[MS-SIPCOMP]: Session Initiation Protocol (SIP) Compression Protocol
[MS-SIPRE]: Session Initiation Protocol (SIP) Routing Extensions
[MS-SIPREGE]: Session Initiation Protocol (SIP) Registration Extensions
[MS-SRTP]: Secure Real-time Transport Protocol (SRTP) Extensions
[MS-SSRTP]: Scale Secure Real-time Transport Protocol (SSRTP) Extensions
[MS-TURN]: Traversal Using Relay NAT (TURN) Extensions
[MS-TURNBWM]: Traversal using Relay NAT (TURN) Bandwidth Management Extensions
[MS-XCCOSIP]: Extensible Chat Control Over Session Initiation Protocol (SIP)
[MS-XMLMC]: XML Schema for Media Control Extensions

Office - Release Notes
[MS-STWEB]: Microsoft OneDrive Save to Web SOAP Web Service

Never Waste A Good Crisis


The title of this post is a bit of advice I first heard many years ago, while working on an Enterprise Architecture review of a troubled software development effort.  Never waste a good crisis.

Of course, no crisis is good for the person going through it.  Be compassionate.  And I’m not talking about a personal crisis like the death of a loved one.  I’m talking about a crisis in business, like when a company changes strategy leaving customers out in the cold, or when a new technology simply fails to deliver any value, leaving the champion with less buy-in from his business stakeholders.

These are the little crises of business.  It often starts with someone taking a risk that doesn’t produce a hoped-for return.  If that someone is a senior leader, and they are smart, they have already collected their bonus or promotion and moved on, so they won’t get the blow-back from their own failure.  But just as often, the person who took the risk is still around to get hit with “blame and shame.”

Unhealthy as it is in a corporate environment, blame and shame is common.  When something goes wrong, someone takes the fall.

But for an influencer like an Enterprise Architect, a crisis can be a good thing.  Why?  Because we are change agents.  And people won’t change unless they are forced to change.  John Kotter, in his book “Leading Change” suggests that one of the greatest obstacles to change is complacency.  Change just isn’t urgent enough.  He’s completely right, and a crisis is often what is needed to break through complacency.

So a good change agent has a dozen different changes all queued up, ready to go.  Well thought out, well planned, well designed changes.  Some little, like getting your boss to agree to buy you a new Surface Pro 3, and some big, like a hacker waking up your leadership to the notion of data security.

To take advantage of a crisis, you have to be ready.  Have your arrows sharpened and sitting in your quiver, ready to go.  During a crisis, you may get exactly one shot to propose an idea, and it may not be the moment you expect.  There won’t be a “right” time.  Just the opportune time.  So be prepared.

And when the crisis comes, strike.

On that note, I’m leaving Microsoft. 

I’ve had the great pleasure of being part of the Microsoft family for eleven years now.  As many of my friends know, I was a dot-com entrepreneur back in the 90’s and had a great run at two start-ups in a row.  It was exciting but risky.  My children were very small and responsibilities to my family meant that I needed to curtail the risk for a while.  So I sought a “safe port in a storm” by joining Microsoft.  It served me well.  During the doldrum years and all the way into the Great Recession, I rode with Microsoft, pouring my energy into becoming the best Enterprise Architect I could be.

And for the past few years, I’ve been fortunate to be part of Microsoft Consulting, while the company experimented with providing Enterprise Architecture as a consulting program.  The ESP program has been through many lives in the past few years, and it is still “figuring itself out”, especially with the new “Devices and Services” world Microsoft has chosen for itself.  I’ve met some of the smartest, most amazing architects, project leaders, and yes, even sales professionals while working inside Microsoft Consulting, and I’ve learned a great deal.

But it’s time.  The economy is back.  Enterprise Architecture is on the rise, and I see opportunities to provide Enterprise Architecture services that are outside of Microsoft’s strategic focus.

So I’m moving on to create my own Enterprise Architecture practice as a complement to Microsoft Consulting.  I am applying to become a Microsoft Partner, and will work happily with Microsoft customers, but I’ll no longer be limited to working solely in the Microsoft model.  I’ll be looking for other architects willing to take this journey with me.

Moreover, as many of you know, Enterprise Architecture is of tremendous value in companies that don’t have strong IT strategy and planning DNA.  These can be very large companies that are not IT focused, like transportation companies or retailers, or midsized companies that have never really gotten hold of the concept of strategic planning.  It can even include start-up firms that need to spend wisely and move quickly.  These players are an excellent market for a Vanguard EA, and I’m going for it with an established business and technical architecture process.

So if you wish to continue to follow me, reach out and connect with me on LinkedIn.  http://linkedin.com/in/nickmalik

I will continue blogging on a new platform as soon as I get things set up.  If I’m able, I’ll bring across the EA-specific articles from this blog to that site as well. 

It’s been a good run, but I’m awake from my own complacency, and I’m not going to waste a good crisis.

Explorer (explorer.exe) stops responding on Windows 8, Windows Server 2012, and later


The Windows Update KB3033889, released on March 9, 2015, is confirmed to cause Explorer to stop responding on Windows 8.0, Windows 8.1, Windows Server 2012, and Windows Server 2012 R2 systems that have a Chinese, Japanese, or Korean input method editor (IME) installed. Users will feel the entire desktop freeze every short while because Explorer has entered an Application Hang state. Windows users can check Event Viewer for an error record similar to the figure below: Application Hang, Event 1002.

[Screenshot: Event Viewer showing Application Hang, Event 1002]

There are many causes for an Explorer Application Hang, but if you are a Windows 8.0, Windows 8.1, Windows Server 2012, or Windows Server 2012 R2 user with a Chinese IME and the problem only started from mid-March 2015 onwards, it is very likely caused by this update. You can download and install the hotfix from https://support.microsoft.com/zh-tw/kb/3048778 to resolve this annoying problem.

GameDev Adventures: Tower defense Part 8, Creep Waves IV


Today I am sharing with you another video from my Twitch channel http://www.twitch.tv/hielo777/ with my series GameDev Adventures.

This video is the eighth part in a series that will help you understand the different aspects of a classic Tower Defense game and how to implement them in Construct 2. Here you can find the links to the previous parts: 

Create your own Tower Defense I                                      Create your own Tower Defense II

Create your own Tower Defense III                                      Create your own Tower Defense IV 

GameDev Adventures: Tower defense Part 5, Creep Waves

GameDev Adventures: Tower defense Part 6, Creep Waves II

GameDev Adventures: Tower defense Part 7, Creep Waves III

 All the videos can also be found in my YouTube Channel http://bit.ly/hielotube even after they have been deleted from Twitch. 

Download the Source File for this video Tutorial!!!!

 

I continue the conversation on how to implement a creep wave system in your own game. At least two more videos are in the works, adding more flexibility to the creep wave system to make your game amazing.

Hosted by the Scirra Arcade 

All comments are greatly appreciated.


JSON Support in SQL Server 2016


JSON support in SQL Server is one of the most highly ranked requests, with more than 1000 votes on the Microsoft Connect site. We have announced that SQL Server 2016 will have native JSON support. In this post I will give a brief overview of the JSON features that are planned for SQL Server 2016.

...(read more)

Ultimate Developer Workstation 2015 – Part 3 Performance and tuning


 This is the third post of a 4-part series:

 

  1. Ultimate Developer Workstation 2015 –  Part 1 Planning

  2. Ultimate Developer Workstation 2015 –  Part 2 Building

  3. Ultimate Developer Workstation 2015 –  Part 3 Performance and tuning

  4. Ultimate Developer Workstation 2015 –  Part 4 Software and tools

 

In the last two posts I was planning and building my new workstation. Now it is time for some hands-on experimentation. As you may recall, last time I finished with a successful BIOS POST. Next, Windows 10 Enterprise TP was installed on the Phoenix Blade in an impressive time of less than 3 minutes. Now it is time to run some benchmarks. For quite some time now I have used PassMark's PerformanceTest to evaluate my workstations. The latest version is V8, which I downloaded and installed. Below is my first result without any modifications or configuration tuning. Strictly an OOTB experience.

Not bad, but not great either. For comparison, my old 2012 workstation achieved a PassMark rating of 3449 on PerformanceTest V8. My goal was to at least double the performance of my old workstation. I wasn't there yet. First, I started pushing the CPU. I was able to get this CPU running at 4.7GHz, but it would crash on the CPU load test. I found out that I can run this CPU at 4.6GHz and pass the CPU load tests. Next, I tweaked my video cards. The Asus Strix video card comes overclocked from the factory and runs at 1253MHz. With the help of the Asus wizard I was able to increase the speed to 1400MHz. The G.Skill memory chips I am using are already heavily overclocked and were left alone. After spending the afternoon tweaking my new workstation, I was ready to run a new PerformanceTest. Below is my best score achieved with this new workstation.

 

Quite a bit of a jump, from a PassMark rating of 5327 to 7486. This landed my new workstation in the middle of the Top 100 builds on the PassMark site (as of the time of writing this post). Not bad! Significant performance gains were achieved over the OOTB configuration and the system remains rock solid. This system is also amazingly quiet: I can hear my HP EliteBook laptop sitting on the corner of my desk, but I can't hear my workstation sitting below my desk. I am very happy about the lower noise level, as my old workstation was relatively loud.

For my daily use, I created a new overclocking profile where I backed off the top a little and am running the workstation at 4.4GHz. I stored my top performing profile as a custom profile, which can be enabled with a click of a button in the Asus Overclocking Command Center when I feel I need every ounce of speed.

So did I reach my goal of doubling the performance of my old workstation? Yes. In my daily use configuration I achieved a 110% performance gain over my previous workstation. But more important than synthetic tests, this workstation is visibly faster while performing daily tasks. These tasks include working with 3-5 SharePoint VMs running locally on my workstation, using VS2015 and VS2013, running MSSQL 2014, compiling code, and doing other tasks like Skype for Business meetings and creating documents. Running all these tasks at once does not faze this workstation at all.

Old vs New:

Here are the important system numbers as I write this blog with a virtually silent workstation.

  

Lastly, I would like to discuss the storage performance of this workstation. I reused some SSDs I had in my old workstation in addition to the Phoenix Blade. Let's see how they stack up:

Drive                  Read MB/s   Write MB/s
Corsair 250GB SSD            492          429
Samsung 850 Pro 1TB          530          498
Phoenix Blade 480 GB        1700         1760

 

Corsair:

 

Samsung:

 

Phoenix Blade:

 

The Phoenix Blade has pretty incredible performance. To translate it to my daily tasks: I am able to create a new W2K12R2 (with GUI) VM from nothing to the login screen in less than 2 minutes. The Samsung and Corsair take 4+ minutes.

I am very happy with how this workstation build turned out. The performance and noise levels (or lack thereof) are pretty amazing, and I am sure it will make me one happy developer for the next 3 years, at least as far as the hardware is concerned. :)

In the next post I will share with you all of the software, tools and 'secret' ingredients I install on my workstations. 

 

 

 

Oh, not again! SQL Server Service is not starting…


The other day I came across a very common but intriguing scenario of SQL Server service NOT starting. Initially when I came across this issue, I felt that this would not take much of my time but interestingly it did not turn out as I thought!

This was actually the SQL Server resource which was not coming ONLINE. As usual the DBA said NOTHING changed, it just happened!!

One of the first things we normally do is to start the service from the SQL Server Configuration Manager/Services console so that we can isolate the issue. In this case we found that the service was failing to start with the below errors:-

---------------------
TITLE: Services
---------------------
Windows could not start the SQL Server (MSSQLSERVER) service on Local Computer.
Error 1053: The service did not respond to the start or control request in a timely fashion.
Button: OK
--------------------

A generic error; you don’t expect much from the GUI anyway!! Next I went for the SQL Server Errorlogs and the Application/System Event Viewer logs. We did not find an Errorlog created in the SQL Server folder, nor did any errors get logged in the Application/System logs.

Commonly, in such scenarios, the start-up parameters are goofed up, but in this case they were not.

So the next logical step was to start the service in the command prompt in console mode:-

C:\Windows\system32>"C:\Program Files\Microsoft SQL Server\MSSQL10_50.R2\MSSQL\Binn\sqlservr.exe" -sR2 -c

_

It did not show any message, indicating that the service was not even initialising. It appeared to be in some sort of hung state. Strange!!

Then I pulled out the next tool from my toolkit: PROCESS MONITOR, a.k.a. PROCMON (I hope most of you get why I put that in CAPS; for those of you who did not, read on).

So I collected a PROCMON trace while running the service in the command prompt. I filtered it on sqlservr.exe and here is what I found.

 

I see that SQL Server accesses the ERRORLOG file but then continues to perform delete operations on files like ERRORLOG.321234533, ERRORLOG.321234532, ERRORLOG.321234531... Why would SQL do this, and why these weird file names??

 

After scratching my head for some time and looking at the PROCMON output for some more time, I figured it out. I see that before SQL attempts this file deletion, it accesses the below registry key:

Event Class:        Registry

Operation:         RegQueryValue

Result:  SUCCESS

Path:     HKLM\SOFTWARE\MICROSOFT\Microsoft SQL Server\MSSQL10_50.R2\MSSQLServer\NumErrorLogs

Type:     REG_DWORD

Length: 4

Data:     321234534

 

But now, what is NumErrorLogs and when would it be set??

Remember, there is an option in SQL Server where you can customise the number of Errorlogs to be retained; by default it's 6.

 

 

 

To confirm my understanding, I set this to 99 and collected a PROCMON on my test machine.

 

I see similar symptoms, with SQL Server attempting to delete the Errorlogs.

In short, what is happening is that SQL Server is internally attempting to delete/rename the Errorlog files and arrange them so that only 99 or fewer Errorlog files are retained!!! Since the registry key is set to such a high value, SQL Server walks through the Errorlogs, decrementing the number every time, and hence SQL Server appears to be hung!!!

But if you notice, the maximum value which can be set here is 99. So how does the NumErrorLogs registry key have such a high value??

No points for cracking that. The possible options for that happening:-

  1. Manual intervention of changing this registry key
  2. Running something like below in SQL Server

USE [master]
GO
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'NumErrorLogs', REG_DWORD, 321234533
GO

 

So finally, we go ahead and change the registry key back to the default value and BINGO SQL Server starts up!! Issue resolved!!
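If you want to script the check and the fix, here is a minimal C# sketch. The registry path is the instance-specific one from the PROCMON output above, so adjust it for your instance, and run elevated. The assumption here is that deleting the value restores the default retention of 6:

using System;
using Microsoft.Win32;

class FixNumErrorLogs
{
    // Instance-specific path taken from the PROCMON output above.
    const string KeyPath =
        @"SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10_50.R2\MSSQLServer";

    static void Main()
    {
        using (var key = Registry.LocalMachine.OpenSubKey(KeyPath, writable: true))
        {
            Console.WriteLine("Current NumErrorLogs: {0}",
                key.GetValue("NumErrorLogs", "<not set>"));

            // Removing the value should restore the default retention of 6 logs.
            key.DeleteValue("NumErrorLogs", throwOnMissingValue: false);
        }
    }
}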

Before signing off, credits to Mark Russinovich for the tool PROCESS MONITOR without which I feel this issue could NEVER have been resolved!!

Disclaimer: The information in this weblog is provided "AS IS" with no warranties, and confers no rights. This weblog does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my opinion. Inappropriate comments will be deleted at the author's discretion. All code samples are provided "AS IS" without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.

 

 

Files not getting uploaded using Web service


The other day, a web application developer contacted me and said that he was not able to upload files to the web server. He was using a web service to have the files uploaded to the application server and he was getting the below error –

The HTTP request was forbidden with client authentication scheme 'Anonymous'.

He had done his homework before coming to me and said that he was able to upload files of smaller size but when he would try larger files (around 40-50 MB) he was getting the above error. This meant that at least the upload module was working fine.

I asked the developer to reproduce the issue and I looked at the IIS logs generated for the website.

Here’s what I found –

GET /Rejected-By-UrlScan ~/TestService/TestProtocolService.svc 88 - ::1 - - 404 0 2 100

GET /Rejected-By-UrlScan ~/TestService/TestProtocolService.svc 88 - ::1 - - 404 0 2 1

Oh, looks like there is a URLSCAN module installed on IIS which is rejecting the requests.

 

For those who are not aware of UrlScan – it is basically a security extension that restricts the types of HTTP requests that IIS will process. By blocking specific HTTP requests, UrlScan helps prevent potentially harmful requests from reaching applications on the server. It was used in older versions of IIS; newer versions (IIS 7 and above) have a built-in feature called Request Filtering.

 

But why are these requests getting rejected, these seem to be legitimate requests.

I looked into the URLSCAN logs (Default Location - C:\Windows\System32\inetsrv\urlscan\logs) to understand the reason -

POST /TestService/TestProtocolService.svc Rejected Content+length+too+long Content-Length: 89437188 30000000

 

Here you go – the content length is too long. The log even records the length of the content that the client attempted to upload. Pretty neat logging!

 

Jumped directly into the URLSCAN configuration file (Default Location - C:\Windows\System32\inetsrv\urlscan\UrlScan.ini) and found the below comment in the configuration file –

;   - MaxAllowedContentLength specifies the maximum allowed

;     numeric value of the Content-Length request header.  For

;     example, setting this to 1000 would cause any request

;     with a content length that exceeds 1000 to be rejected.

;     The default is 30000000.

The default maximum allowed content length was 30 MB (30000000 bytes). We altered this configuration and increased the parameter to a higher value to fix the problem.
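For reference, the change looks roughly like this in UrlScan.ini. In UrlScan 3.x the setting lives under the [RequestLimits] section (verify against your installed version); 104857600 bytes is 100 MB, comfortably above the 40-50 MB uploads:

[RequestLimits]
; Raise the cap from the 30000000-byte default to 100 MB
MaxAllowedContentLength=104857600

If you are on IIS 7+ with Request Filtering instead, the equivalent setting is maxAllowedContentLength under system.webServer/security/requestFiltering/requestLimits.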

 

Disclaimer: The information in this weblog is provided "AS IS" with no warranties, and confers no rights. This weblog does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my opinion. Inappropriate comments will be deleted at the author's discretion. All code samples are provided "AS IS" without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.

Video Recordings Of Modern Business technical events - Safeguard Your Business


In this video series you will learn to use the latest Microsoft technologies to deliver solutions that help small and midsize organizations protect their business data—whether on-site, in the cloud, or on mobile devices—and minimize the disruptions caused by unexpected events. This series covers Microsoft Azure Backup, Windows 8.1 security and management, improving security with Windows Server 2012 R2, and availability and recoverability solutions with Windows Server 2012 R2 and SQL Server.

Setup Azure Backup

In this module you will setup and use Azure Backup to implement Azure Recovery Services for on-premises servers and VMs.

Using Group Policy to Secure Windows 8.1 

In this module you manage AppLocker, Windows Defender, and Windows Update settings for Windows 8.1 clients using Group Policy.

Creating a Security Policy with Security Configuration Wizard

This exercise walks you through running the Security Configuration Wizard on Windows Server 2012 and applying the policy to other servers.

Hyper-V Replica

In this module you will learn how to implement Hyper-V Replica Extended Replication on Windows Server 2012 R2.

Configuring SQL Server 2014 AlwaysOn

AlwaysOn Availability Groups, introduced in SQL Server 2012, provide high availability for your application databases. Availability Groups allow you to fail over a group of databases together and allow configuring multiple instances as replicas to which you can fail over, thereby increasing redundancy and availability. An availability group involves a set of SQL Server instances, known as availability replicas. Each availability replica possesses a local copy of each of the databases in the availability group. Only one of these replicas acts as the primary replica at any point in time and maintains the primary copy of each database. The primary replica makes these databases, known as primary databases, available to users for read/write access. For each primary database, one or more availability replicas, known as secondary replicas, maintain a copy of the database; the database on a secondary replica is referred to as a secondary database.

Podcast episode 5 – Email over IPv6



Description: The world is moving to IPv6, and so is email. However, email specialists are not thrilled about the move because of the potential for abuse. If it’s hard enough to stop spam in IPv4 with its limited set of IP addresses, how do we hope to stop it in IPv6 with its virtually unlimited set of IP addresses?

Fortunately, we have a plan. IPv6 will require sender authentication whereas it is optional in IPv4. It is only through sender authentication that we have any hope of containing the spam problem over IPv6.

Length: 26 minutes

Listen in iTunes: https://itunes.apple.com/us/podcast/terry-zink-security-talk/id964400682#

Direct download link: The Terry Zink Security Talk podcast episode 5 - Email over IPv6

Original blog posts: 1. Support for anonymous email over IPv6

2. A plan for email over IPv6 on Slideshare

3. Office 365 will slightly modify its behavior for email sent over IPv6

Or, you can listen to it below by streaming it in your browser as I’ve now integrated it with Podbean! It took me forever to figure this out, but here we are.

Enjoy.

Visual State Manager and Adaptive Triggers for Universal Windows App


 

I'm excited to write this blog post, as this is one of the much-awaited topics I have wanted to blog about. Yes, Universal Windows Apps! One app targeting one billion devices: that's just incredible reach for developers and enterprises!

With the Universal App platform we can now target all devices, essentially running the same app on IoT devices, phones, tablets, laptops, big desktops, and huge displays. No matter what device is used to view it, it's the same app package! With the introduction of the Windows 10 core and the Universal Windows Platform (UWP), one app package can run across all platforms.

[Figure: one Windows 10 app package running across all device families]

Alright, let's try to understand the idea behind designing this kind of application from a developer's perspective, on the XAML and C# side of things. HTML and web developers have done this in the past with CSS, designing a web application to scale to different resolutions or form factors and giving the site a different look and feel based on media queries.

Okay first let’s see what we get with Windows 10 App templates.

[Screenshot: Windows 10 Universal app templates in Visual Studio 2015]

You need to download the Windows 10 SDK and tools to get this option when you install VS 2015.

1. Select the Blank App (Windows Universal) template and you should see something like below. This is nothing new if you have developed WPF, Windows Store, or Windows Phone apps before, so I won't bore you with a Hello World app.

[Screenshot: new Blank App (Windows Universal) project]

2. Open the app in Blend to design a page targeting different form factors like phone, tablet, and desktop. To do that, right-click on the project and select the "Design in Blend" option. This should open the app in Blend.

[Screenshot: the "Design in Blend" context-menu option]

3. You should see something like below. Expression Blend seemed scary initially when I started WPF/Silverlight development, and I always thought it was a tool for designers only. But given the way Blend has evolved over the years, I must say it's awesome.

[Screenshot: MainPage.xaml open in Blend]

4. To experience its awesomeness, let's design an app with a few basic elements on the screen and render the UI on different devices. First click on MainPage.xaml and click on the drop-down shown by the 1st arrow; you get to see your page on different devices at design time. Next, to check how your page looks in each orientation, click on the Portrait or Landscape button shown by the 2nd and 3rd arrows.

[Screenshot: Blend device preview drop-down and orientation buttons]

5. A base Grid is added for you within the page; now let's divide the Grid into 2 portions in a 1:5 ratio using star sizing, as shown below (see the XAML sketch after the screenshot).

[Screenshot: the Grid split into two rows in a 1:5 ratio]
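In XAML, the 1:5 split looks roughly like this (a sketch; the row contents are added in the next step):

<Grid>
    <Grid.RowDefinitions>
        <RowDefinition Height="1*" />
        <RowDefinition Height="5*" />
    </Grid.RowDefinitions>
    <!-- Row 0 will hold the TextBlock, row 1 the Image control -->
</Grid>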

6. Create an Images folder and add 3 images: Laptop.jpg, Tablet.jpg, and Mobile.jpg. Add a couple of controls, a TextBlock and an Image control, inside the grid that we added in the previous step, set a few properties as highlighted below, and position the controls on the UI as shown.

[Screenshot: TextBlock and Image controls positioned in the grid]

7. Now it's time to test how it looks on a phone. Click on the emulator option as shown below to see how the UI looks on the 4-inch phone emulator.

[Screenshot: selecting the 4-inch phone emulator]

8. You should see the TextBlock and the image as shown in the screenshot below.

[Screenshot: the app running in the phone emulator]

9. As of now, the Image control's source is set to Mobile.jpg, so the Nokia phone image is shown on the mobile emulator. But if we run the application on a laptop or any other tablet emulator, it's going to show the same image. That isn't the behavior we want: we want the app to automatically adjust the UI based on the device it runs on, like "media queries in CSS".

[Screenshot: the same Mobile.jpg image shown regardless of device]

10. Now it's time to implement the cool stuff. Go to the States tab in Blend and click on the "Add state group" button.

[Screenshot: States tab with the "Add state group" button]

11. Now click on the "Add state" button, add 3 new states, and rename them as shown below.

[Screenshot: three visual states added and renamed]

12. Switch to the XAML view of the page and add AdaptiveTriggers to the visual states that we created for phone, tablet, and desktop. Set the MinWindowWidth property on the AdaptiveTrigger of each visual state, VisualStatePhone, VisualStateTablet, and VisualStateDesktop, as shown below.

[Screenshot: AdaptiveTriggers with MinWindowWidth set for each visual state]

13. Now, when our Universal app is viewed at a screen width of 0-600, the phone visual state is triggered; for a width of 600-800, the tablet visual state is triggered; and for anything beyond 800, the desktop visual state is triggered. Next it's time to set the properties that change the look and feel of the different views. We do that with the help of VisualState.Setters, as shown below: we set the Image control's Source property to the desired value, so when the screen is in the 600-800 range we change the image source from Mobile.jpg to Tablet.jpg, and similarly for the desktop and mobile visual states.

[Screenshot: VisualState.Setters changing the image source per state]
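Reconstructed from the steps above, the page's visual states look roughly like this (a sketch; the element name DeviceImage and the image paths are assumptions):

<VisualStateManager.VisualStateGroups>
    <VisualStateGroup>
        <VisualState x:Name="VisualStatePhone">
            <VisualState.StateTriggers>
                <AdaptiveTrigger MinWindowWidth="0" />
            </VisualState.StateTriggers>
            <VisualState.Setters>
                <Setter Target="DeviceImage.Source" Value="Images/Mobile.jpg" />
            </VisualState.Setters>
        </VisualState>
        <VisualState x:Name="VisualStateTablet">
            <VisualState.StateTriggers>
                <AdaptiveTrigger MinWindowWidth="600" />
            </VisualState.StateTriggers>
            <VisualState.Setters>
                <Setter Target="DeviceImage.Source" Value="Images/Tablet.jpg" />
            </VisualState.Setters>
        </VisualState>
        <VisualState x:Name="VisualStateDesktop">
            <VisualState.StateTriggers>
                <AdaptiveTrigger MinWindowWidth="800" />
            </VisualState.StateTriggers>
            <VisualState.Setters>
                <Setter Target="DeviceImage.Source" Value="Images/Laptop.jpg" />
            </VisualState.Setters>
        </VisualState>
    </VisualStateGroup>
</VisualStateManager.VisualStateGroups>

The visual state whose AdaptiveTrigger has the highest satisfied MinWindowWidth wins, which is what gives the 0-600/600-800/800+ behavior described above.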

14. And that's it, we are ready to test our application at different device widths. Select Local Machine and click on the play button to rock and roll.

[Screenshot: the app running on the local machine]

App view on desktop with width greater than 800.

15. Reduce the width by dragging the right edge of the app window and you should see something like below. The app adjusts its UI to change the look and feel; in our case we change the image from Laptop.jpg to Mobile.jpg when the screen width shrinks.

[Screenshot: narrow window showing the phone layout]

App view on phone with width in the range 0-600.

16. Similarly, if you now slightly increase the width into the 600-800 range, you should again see our Universal app adjusting its UI elements for tablets.

[Screenshot: medium-width window showing the tablet layout]

App view on tablet with width greater than 600 and less than 800.

So, similar to media queries in CSS, in XAML we have the Visual State Manager, which (together with adaptive triggers) is very effective for designing UIs that automatically adapt to the device they run on.

Waiting to see some great apps on Universal Windows Platform (UWP) from you.

Thanks for reading!


Trace.WriteLine may call OutputDebugString


Recently, I was asked to check a couple of dump files. They were generated while a customer's BizTalk server process was responding slowly. I listed the call stacks of all the threads: more than 100 threads were waiting to enter a monitor. An example call stack of such a waiting thread is as follows:

 

System.Threading.Monitor.Enter(System.Object)

System.Diagnostics.TraceInternal.WriteLine(System.String)                          

XYZ.BiztalkESB.Utilities.GenericHelper.TraceWrite(System.String, System.String, System.String)

XYZ.BiztalkESB.Utilities.CacheBase.TryGetValue

XYZ.BiztalkESB.Utilities.InterchangeParameterCache.GetParamValue

XYZ.BiztalkESB.Utilities.ParameterHelper.GetInterchangeParameter

XYZ.BiztalkESB.Utilities.LoggerHelper.Log

XYZ.BiztalkESB.Utilities.LoggerHelper.LogHeader

XYZ.SomeIntegration.Biztalk.Resolvers.InterchangeDefinitionResolverOnline.Resolve

XYZ.BiztalkESB.Resolvers.XYZSomeResolveProvider.ResolveItineraryName

XYZ.BiztalkESB.Resolvers.XYZSomeResolveProvider.Resolve

Microsoft.Practices.ESB.Resolver.ResolverMgr.Resolve

Microsoft.Practices.ESB.Itinerary.PipelineComponents.ItinerarySelector.Execute

Microsoft.BizTalk.Internal.ComponentWrapper.Execute

Microsoft.BizTalk.PipelineOM.SimpleStage.Execute

Microsoft.BizTalk.PipelineOM.Pipeline.Execute

Microsoft.BizTalk.PipelineOM.ReceivePipeline.GetNext

 

And, the thread that has entered the monitor has the following callstack. This thread is waiting while executing OutputDebugString.

 

ntdll!ZwWaitForSingleObject+0xa

KERNELBASE!WaitForSingleObjectEx+0x92

KERNELBASE!WaitForSingleObject+0xd

KERNELBASE!OutputDebugStringA+0x130

KERNELBASE!OutputDebugStringW+0x58

System_ni!DomainNeutralILStubClass.IL_STUB_PInvoke(System.String)+0x82

System_ni!System.Diagnostics.DefaultTraceListener.Write(System.String, Boolean)+0x41

System_ni!System.Diagnostics.DefaultTraceListener.WriteLine(System.String, Boolean)+0x3f

System_ni!System.Diagnostics.TraceInternal.WriteLine(System.String)+0xfb

XYZ_BiztalkESB_Utilities!XYZ.BiztalkESB.Utilities.GenericHelper.TraceWrite(System.String, System.String, System.String)+0x59

XYZ_BiztalkESB_Utilities!XYZ.BiztalkESB.Utilities.CacheBase.TryGetValue

XYZ_BiztalkESB_Utilities!XYZ.BiztalkESB.Utilities.InterchangeParameterCache.GetParamValue

XYZ_BiztalkESB_Utilities!XYZ.BiztalkESB.Utilities.ParameterHelper.GetInterchangeParameter

XYZ_BiztalkESB_Utilities!XYZ.BiztalkESB.Utilities.LoggerHelper.Log

XYZ_BiztalkESB_Utilities!XYZ.BiztalkESB.Utilities.LoggerHelper.LogHeader

XYZ_SomeIntegration_Biztalk_Resolvers!XYZ.XYZSomeIntegration.Biztalk.Resolvers.InterchangeDefinitionResolverOnline.Resolve

XYZ_BiztalkESB_Resolvers!XYZ.BiztalkESB.Resolvers.XYZSomeResolveProvider.ResolveItineraryName

XYZ_BiztalkESB_Resolvers!XYZ.BiztalkESB.Resolvers.XYZSomeResolveProvider.Resolve

Microsoft_Practices_ESB_Resolver!Microsoft.Practices.ESB.Resolver.ResolverMgr.Resolve

 

It is time to use http://referencesource.microsoft.com. Let’s first understand why a monitor is used. In the source of TraceInternal.WriteLine we can clearly see the synchronisation part:

 

public static void WriteLine(string message) {
    if (UseGlobalLock) {
        lock (critSec) {
            foreach (TraceListener listener in Listeners) {
                listener.WriteLine(message);
                if (AutoFlush) listener.Flush();
            }
        }
    }

[The rest is truncated]

The lock is defined as:

internal static readonly object critSec = new object();

 

 

Also, we understand that once the monitor is acquired, each listener's WriteLine method is called. From the call stack of the thread that entered the monitor, we see that the DefaultTraceListener is used, and an internal version is called:

 

void internalWrite(string message) {
    if (Debugger.IsLogging()) {
        Debugger.Log(0, null, message);
    }
    else {
        if (message == null)
            SafeNativeMethods.OutputDebugString(String.Empty);
        else
            SafeNativeMethods.OutputDebugString(message);
    }
}

 

If there is no debugger attached to the process (we knew that there was no debugger attached to the BizTalk server), then OutputDebugString is called. This explains the OutputDebugString in the call stacks of the waiting threads. Next, we need to understand what OutputDebugString is waiting on. If we examine the parameters of the WaitForSingleObject call, we see that it waits on an event called DBWIN_BUFFER_READY (e.g. \Sessions\<SessionNo>\BaseNamedObjects\DBWIN_BUFFER_READY) for 10 seconds. Now the problem has changed a bit: why is a string sent to OutputDebugString processed for so long? We know that no debuggers are attached to the BizTalk process. Then what happens to the debug string?

We need to find out how OutputDebugString works. If you search the internet for DBWIN_BUFFER_READY you will find many examples. And if you still have the Visual Studio 6 SDK, which contains the DBMon sample (https://msdn.microsoft.com/en-us/library/aa242171(v=vs.60).aspx), you'll see how that string is consumed.

The string is written to a memory mapped file named "DBWIN_BUFFER". Once the string is written, an event named "DBWIN_DATA_READY" is signalled. The party waiting on that event consumes the memory mapped file's content. That party is another application, which does not have to be a debugger at all; e.g. the DBMon sample application is perfectly capable of consuming the debug string. Once the memory mapped file's content is consumed, the event called "DBWIN_BUFFER_READY" is signalled, and the application that called OutputDebugString can continue. Meanwhile, if the consumer application cannot process the content within 10 seconds, OutputDebugString gives up waiting and returns.
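To make the protocol concrete, here is a minimal C# sketch of a consumer playing the DebugView/DBMon role, based on the DBWIN_BUFFER handshake described above (security attributes and error handling are omitted):

using System;
using System.IO.MemoryMappedFiles;
using System.Text;
using System.Threading;

class DbWinListener
{
    static void Main()
    {
        // 4 KB shared section: a 4-byte sender PID followed by the debug string.
        using (var section = MemoryMappedFile.CreateOrOpen("DBWIN_BUFFER", 4096))
        using (var view = section.CreateViewAccessor())
        using (var bufferReady = new EventWaitHandle(false, EventResetMode.AutoReset, "DBWIN_BUFFER_READY"))
        using (var dataReady = new EventWaitHandle(false, EventResetMode.AutoReset, "DBWIN_DATA_READY"))
        {
            while (true)
            {
                bufferReady.Set();              // tell producers the buffer is free
                if (!dataReady.WaitOne(10000))  // wait for an OutputDebugString call
                    continue;

                int pid = view.ReadInt32(0);
                var bytes = new byte[4092];
                view.ReadArray(4, bytes, 0, bytes.Length);
                int end = Array.IndexOf(bytes, (byte)0);
                string message = Encoding.Default.GetString(bytes, 0, end < 0 ? bytes.Length : end);
                Console.WriteLine("[{0}] {1}", pid, message);
            }
        }
    }
}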

If that is the case, and a fair lock is used, and there are 100 threads waiting on the Trace.WriteLine function's monitor, then the last thread would wait for 10x100 seconds, roughly 16 minutes. The experience will of course be a slow system, just like my customer felt. Let's move on: which application was acting as the consumer of OutputDebugString in our case? The answer is DebugView. So why did DebugView process the strings so slowly? The answer is "History Depth". This option was not set in DebugView, and when it is not set, all of the previous debug strings are kept. This causes the list view to operate considerably more slowly in an environment that is chatty in terms of OutputDebugString calls. Hence, if you want to use DebugView, you had better set the History Depth.

Have a great day!

faik hakan bilgen

How to find which user deleted the user database in SQL Server

  In one of the recent scenarios we noticed that a user database was deleted, and the customer wanted to know which user had dropped the database. We knew that multiple users had full access to that database. In this post I'll be sharing the steps to find the details of the user who dropped the database. Method 1: Connect to the SQL Server instance using Management Studio, right-click on the instance and select "Reports" > "Standard Reports" > "Schema Changes History". We get a report of schema changes for all databases...(read more)

Partner with the Microsoft Partner's Partner


 

I've spoken often of the incredible passion and innovation in Microsoft's Partner ecosystem - never is this more evident than when two Partners combine to achieve a common goal, delighting customers and achieving outcomes that were otherwise not possible. With this in mind, I'm excited to announce the launch of the New Horizons Australia (NHA) Microsoft Partner Hub, and with it, the emergence of NHA as the Microsoft Partner's Partner.

NHA developed the Microsoft Partner Hub with one goal: to supercharge the channel by empowering Partners to meet their full potential. Designed to complement the MPN portal, the NHA Microsoft Partner Hub amplifies the key elements of the MPN in a platform that makes it easy for Partners to identify and activate learning plans – backed by the expertise of a Gold Certified Microsoft Learning Partner.

 

The NHA Microsoft Partner Hub

[Screenshot: the NHA Microsoft Partner Hub]

 

Whether you're a new Partner looking to build a Microsoft practice, an existing Partner looking to (re)engage with the Microsoft Partner Network, or an established Partner looking to streamline your Competency attainment – the NHA Microsoft Partner Hub has what you need to succeed:

  • MPN Made Easy
    • Make your enrolment choice a cinch with easy to understand program guides and videos
  • Exclusive Partner Offers
  • Office 365 Partner Onboarding
  • Curated Competency Attainment Experience
  • Dedicated Support
    • Live chat to assist with your enquiries (available during business hours)

 


 

Whilst already a fantastic offering, what excites me most about the NHA Microsoft Partner Hub is that this is just the beginning. With an aggressive development schedule, even more exclusive content on the way and NHA's commitment to channel development, there's never been a better time to partner with the Microsoft Partner's Partner.

Out of order events


Some users have reported seeing a high number of out-of-order events in their queries, and wonder what can be done to reduce it. In this blog post, I will explain how events get out of order, and how to reduce out-of-order events by changing the query and tweaking query configurations.

First of all, because ASA applies temporal transformations when processing the incoming events (e.g. windowed aggregates and temporal joins), we need to sort the incoming events in timestamp order. The user chooses which timestamp to use with the "timestamp by" clause in the query (e.g. select * from input timestamp by time, where time is a field in the event payload), as in the sketch below.
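For example, a windowed aggregate ordered by the payload timestamp rather than the enqueue time might look like this (a sketch; the field names time and deviceId are assumptions about the event payload):

SELECT deviceId, COUNT(*) AS eventCount
FROM input TIMESTAMP BY time
GROUP BY deviceId, TumblingWindow(second, 10)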

When "timestamp by" is not present, we use Event Hub's event enqueue time by default. Because Event Hub guarantees monotonicity of the timestamp on each partition of the Event Hub, and we merge events from all partitions by timestamp order, there will be no out of order events.

When it's important for you to use the sender's timestamp, so a timestamp from the event payload is chosen using "timestamp by", disorder can be introduced from several sources:

  1. Producers of the events have clock skews. This is common when producers are from different machines, so they have different clocks.
  2. Network delay from the producers sending the events to Event Hub.
  3. Clock skews between Event Hub partitions. This is also a factor because we first sort events from all Event Hub partitions by event enqueue time, and then examine the disorder.

On the configuration tab, you will find the following defaults for the out-of-order policy.

Using 0 seconds as the out-of-order tolerance window means you assert all events are in order all the time, which, given the 3 sources of disorder above, is unlikely to be true. To allow ASA to correct the disorder, you can specify a non-zero out-of-order tolerance window size. ASA will buffer events up to that window and reorder them using the user-chosen timestamp before applying the temporal transformation. You can start with a 3 second window first, and tune the value to reduce the number of events that get time-adjusted. Because of the buffering, the side effect is that the output is delayed by the same amount of time. As a result, you will need to tune the value to balance the number of out-of-order events against the added latency.

Event processing latency


Streaming event processing usually implies low end-to-end latency. While ASA is designed to process incoming events with low latency, there are several factors, from both the query semantics and the underlying infrastructure, that can introduce latency. In this blog post, I will explain where the additional latency comes from, and how to reduce it.

At the center of ASA's query semantics is temporal transformation logic, including windowed aggregates and temporal joins. In order to perform the temporal transformation correctly and efficiently, we sort all incoming events in timestamp order. When Event Hub is used as input, we also need to merge the events from all Event Hub partitions in timestamp order. The merger essentially round-robins through the Event Hub partitions, reading from the partition with the lowest timestamp seen so far, and performs a merge sort. So far so good.

However, when events do not arrive at all Event Hub partitions for some reason (e.g. a low event rate or uneven distribution), the merge sort algorithm has to wait for events from the empty partition, potentially indefinitely, which halts the whole event processing pipeline. To combat the problem, we do two things.

  1. For queries that don't use "timestamp by," we confirm with the Event Hub partition that it's indeed out of events to read (as opposed to a network delay). If so, we move the clock for the partition forward without waiting for new events.
  2. For queries that use "timestamp by," in addition to #1, we use the late arrival tolerance window specified on the query's configuration tab to figure out how to move the user's clock forward. For example, if the local clock is at 2pm and the late arrival tolerance window is 10 seconds, we will move the user's clock to 1:59:50pm.

As you can see, both mechanisms can introduce delay in the processing. For #1, it's the time spent checking each and every Event Hub partition for emptiness, and for #2, it's the tolerance window specified by the user.

My experiment shows that with a select * query, if I send 1 event per second to Event Hub, the end-to-end delay is around 6 seconds.

So, how can we reduce the latency?

Remember, at the center of the delay is the merger. While the ASA team continues to improve its design to reduce latency, if you can avoid the merger in the query, you have effectively removed that source of delay. To achieve that, you need to make your query a partitioned query by adding the "partition by" clause. For example: select * from input partition by partitionid.

The partitionid here is the partition id from Event Hub. This effectively tells ASA not to merge events from Event Hub partitions during processing. Just note that you need to configure your Event Hub output to use partitionid as the partition key as well; otherwise, the merger is applied again. The drawback of doing this is that the output events are no longer guaranteed to be sorted by timestamp, because they are processed in parallel without being merged together.
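Putting it together, a partitioned pass-through query looks like this (a sketch; output is an assumed Event Hub output configured with partitionid as its partition key):

SELECT *
INTO output
FROM input PARTITION BY PartitionId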

My experiment shows that with this query change, if I send 1 event per second to Event Hub, the end-to-end delay is reduced to 2 seconds.
