
Secondary Replica may go into suspended state if you alter the readability property of the Availability Group and page compression is enabled


Please apply one of the fixes mentioned at the bottom of this blog page, or employ one of the suggested workarounds, if you are using AlwaysOn Availability Groups and have enabled page compression on any table that is part of a database in the availability group. In such a setting, if you alter the readability property of the secondary replica (for example, from Read-intent or Yes to No), the secondary replica may go into a suspended state. Additionally, you will see the following messages in the error log of the secondary replica:

2015-01-06 10:40:12.10 spid72s     **Dump thread - spid = 0, EC = 0x0000004978DD4B90
2015-01-06 10:40:12.10 spid72s     ***Stack Dump being sent to C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\LOG\SQLDump0001.txt
2015-01-06 10:40:12.10 spid72s     * *******************************************************************************
2015-01-06 10:40:12.10 spid72s     *
2015-01-06 10:40:12.10 spid72s     * BEGIN STACK DUMP:
2015-01-06 10:40:12.10 spid72s     *   01/06/15 10:40:12 spid 72
2015-01-06 10:40:12.10 spid72s     *
2015-01-06 10:40:12.10 spid72s     * Location:  page.cpp:3898
2015-01-06 10:40:12.10 spid72s     * Expression:  !pageFull
2015-01-06 10:40:12.10 spid72s     * SPID:   72
2015-01-06 10:40:12.10 spid72s     * Process ID:  2876
2015-01-06 10:40:12.10 spid72s     *
2015-01-06 10:40:12.10 spid72s     *
2015-01-06 10:40:12.10 spid72s     *
2015-01-06 10:40:12.10 spid72s     * MODULE                          BASE      END       SIZE
2015-01-06 10:40:12.10 spid72s     * sqlservr                       00007FF6899A0000  00007FF6899F9FFF  0005a000
2015-01-06 10:40:12.10 spid72s     * ntdll                          00007FFDC9AE0000  00007FFDC9C89FFF  001aa000
...

2015-01-06 10:40:16.33 spid72s     Stack Signature for the dump is 0x0000000014098401
2015-01-06 10:40:17.19 spid72s     External dump process return code 0x20000001.
External dump process returned no errors.

2015-01-06 10:40:17.19 spid72s     Error: 17066, Severity: 16, State: 1.
2015-01-06 10:40:17.19 spid72s     SQL Server Assertion: File: <page.cpp>, line=3898 Failed Assertion = '!pageFull'. This error may be timing-related. If the error persists after rerunning the statement, use DBCC CHECKDB to check the database for structural integrity, or restart the server to ensure in-memory data structures are not corrupted.
2015-01-06 10:40:17.20 spid72s     Error: 3624, Severity: 20, State: 1.
2015-01-06 10:40:17.20 spid72s     A system assertion check has failed. Check the SQL Server error log for details. Typically, an assertion failure is caused by a software bug or data corruption. To check for database corruption, consider running DBCC CHECKDB. If you agreed to send dumps to Microsoft during setup, a mini dump will be sent to Microsoft. An update might be available from Microsoft in the latest Service Pack or in a QFE from Technical Support.
2015-01-06 10:40:35.40 spid72s     AlwaysOn Availability Groups data movement for database 'MyDatabase' has been suspended for the following reason: "system" (Source ID 2; Source string: 'SUSPEND_FROM_REDO'). To resume data movement on the database, you will need to resume the database manually. For information about how to resume an availability database, see SQL Server Books Online.
2015-01-06 10:40:35.40 spid72s     Error: 3313, Severity: 21, State: 2.
2015-01-06 10:40:35.40 spid72s     During redoing of a logged operation in database 'MyDatabase', an error occurred at log record ID (1786:4978584:74). Typically, the specific failure is previously logged as an error in the Windows Event Log service. Restore the database from a full backup, or repair the database.
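
Once the assertion has fired, the suspended secondary will not resume on its own. As the message above says, data movement must be resumed manually; a minimal example, run on the secondary replica (the database name is taken from the log above):

ALTER DATABASE [MyDatabase] SET HADR RESUME;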

WORKAROUND:

You can use one of the following workarounds to prevent this issue until you can apply the fix.

  1. Do not change the readability property of the availability group.
  2. Turn off page compression (see the sketch after this list).
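
For workaround 2, page compression has to be removed from every compressed table and index in the availability databases. A hedged sketch for a single hypothetical table, dbo.MyTable:

-- Remove page compression from the heap or clustered index
ALTER TABLE dbo.MyTable REBUILD WITH (DATA_COMPRESSION = NONE);
-- Remove page compression from all nonclustered indexes
ALTER INDEX ALL ON dbo.MyTable REBUILD WITH (DATA_COMPRESSION = NONE);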

FIX:

For SQL Server 2012, apply the fix as per KB Article:

3054530    FIX: Corruption occurs on pages of secondary replica when you change the secondary replica to unreadable

http://support.microsoft.com/kb/3054530/EN-US

For SQL Server 2014, apply the following fix:

Microsoft® SQL Server® 2014 SP1


How to obtain the list of visible columns in a shell view


I was recently looking into how to obtain the list of columns displayed in the shell view for a namespace extension that uses the default shell view (DefView). DefView provides the IColumnManager interface to manage the columns that can be displayed in the view window. The following sample code demonstrates listing the visible columns for each shell view hosted in a shell browser window contained in the ShellWindows collection:
 

#include<windows.h>
#include<ShlObj.h>
#include<Shlwapi.h>
#include<propsys.h>
#include<stdio.h>

#pragma comment(lib, "propsys.lib")
#pragma comment(lib, "shlwapi.lib")

HRESULT PrintShellViewColumns(IShellView *);

int wmain(int argc, wchar_t *argv[])
{
//
// For each item in the ShellWindows collection that has an active
// IShellView, print the list of visible columns in the view window
HRESULT hr = CoInitialize(NULL);
if (SUCCEEDED(hr))
{
IShellWindows *pShellWindows;
hr = CoCreateInstance(CLSID_ShellWindows, NULL, CLSCTX_LOCAL_SERVER,
IID_PPV_ARGS(&pShellWindows));
if (SUCCEEDED(hr))
{
long cWindows = 0;
hr = pShellWindows->get_Count(&cWindows);

for (long iWindow = 0;
SUCCEEDED(hr) && iWindow < cWindows;
iWindow++)
{
VARIANT vtIndex;
vtIndex.vt = VT_I4;
vtIndex.lVal = iWindow;

IDispatch *pDispatch;
hr = pShellWindows->Item(vtIndex, &pDispatch);
if (S_OK == hr)
{
IShellBrowser *pShellBrowser;
hr = IUnknown_QueryService(pDispatch,
SID_STopLevelBrowser,
IID_PPV_ARGS(&pShellBrowser));
if (SUCCEEDED(hr))
{
IShellView *pShellView;
hr = pShellBrowser->QueryActiveShellView(&pShellView);
if (SUCCEEDED(hr))
{
PrintShellViewColumns(pShellView);
}
pShellBrowser->Release();
}
pDispatch->Release();
}
}
pShellWindows->Release();
}
CoUninitialize();
}
return (int)hr;
}


void PrintHeader(IShellView *pShellView)
{
//
// Print a header that includes the display name of the folder displayed
// in the view window
IFolderView *pFolderView;
HRESULT hr = pShellView->QueryInterface(IID_PPV_ARGS(&pFolderView));
if (SUCCEEDED(hr))
{
IShellItemArray *pShellItemArray;
hr = pFolderView->GetFolder(IID_PPV_ARGS(&pShellItemArray));
if (SUCCEEDED(hr))
{
IShellItem *pShellItem;
hr = pShellItemArray->GetItemAt(0, &pShellItem);
if (SUCCEEDED(hr))
{
LPWSTR pszDisplayName;
hr = pShellItem->GetDisplayName(SIGDN_NORMALDISPLAY,
&pszDisplayName);
if (SUCCEEDED(hr))
{
wprintf(L"IShellView: %p Folder: %s\r\n",
pShellView, pszDisplayName);
CoTaskMemFree(pszDisplayName);
}
pShellItem->Release();
}
pShellItemArray->Release();
}
pFolderView->Release();
}

if (FAILED(hr))
{
//
// Print a generic header on an error
wprintf(L"IShellView: %p\r\n", pShellView);
}
wprintf(L"----------------------------------------\r\n");
}

void PrintFooter(IShellView *pShellView)
{
UNREFERENCED_PARAMETER(pShellView);
wprintf(L"\r\n");
}


void PrintPropertyKey(PROPERTYKEY &propkey)
{
LPWSTR pszCanonicalName = NULL;
HRESULT hr = PSGetNameFromPropertyKey(propkey, &pszCanonicalName);
if (SUCCEEDED(hr))
{
wprintf(L" %s\r\n", pszCanonicalName);
CoTaskMemFree(pszCanonicalName);
}
else
{
WCHAR szGuid[50];
ZeroMemory(szGuid, sizeof(szGuid));
hr = StringFromGUID2(propkey.fmtid, szGuid, ARRAYSIZE(szGuid));
if (SUCCEEDED(hr))
{
wprintf(L" %s,%d\r\n", szGuid, propkey.pid);
}
else
{
wprintf(L" Unknown. Error=%X\r\n", hr);
}
}
}


HRESULT PrintShellViewColumns(IShellView *pShellView)
{
IColumnManager *pColumnManager;
HRESULT hr = pShellView->QueryInterface(IID_PPV_ARGS(&pColumnManager));
if (SUCCEEDED(hr))
{
PrintHeader(pShellView);

UINT nColumns = 0;
hr = pColumnManager->GetColumnCount(CM_ENUM_VISIBLE, &nColumns);
if (SUCCEEDED(hr) && nColumns)
{
PROPERTYKEY *columns = new PROPERTYKEY[nColumns];
hr = pColumnManager->GetColumns(CM_ENUM_VISIBLE,
columns,
nColumns);
if (SUCCEEDED(hr))
{
for (UINT index = 0; index < nColumns; index++)
{
PrintPropertyKey(columns[index]);
}
}
delete[] columns;
}
pColumnManager->Release();

PrintFooter(pShellView);
}
return hr;
}

Announcing Azure Preview Portal improvements (April update)

This post is a translation of Azure Preview Portal Improvements (April update), published on April 28. We have good news for Azure users: today we announced a set of improvements to the Azure preview portal. As with the previous update, these improvements were driven by your valuable feedback. We are always grateful for your cooperation, and we hope you will keep the suggestions coming. This article walks through the improvements in detail, but if you are in a hurry, here is a quick summary: improved resource Browse experience; access to all resources from a single list (a new [All resources] view); filtering resources by subscription; filtering resources by name; customizable columns in resource lists; pinning resource lists to the Startboard for quick and easy access; easier management of the recently browsed list ...(read more)

Recap of the latest key Windows 10 announcements


Over the last few weeks there have been some exciting announcements from the Windows team on the path to the Windows 10 launch, so we want to highlight some of these for you and ensure you have access to all the publicly available resources on the Partner Marketing Centre.

Last week we introduced the Windows 10 editions, designed to address the specific needs of everyone from consumers to small businesses to the largest enterprises.

  • Windows 10 Home for consumers and BYOD scenarios, available under the free upgrade
  • Windows 10 Pro for small and lower mid-size businesses, available under the free upgrade
  • Windows 10 Enterprise for mid-size and large enterprises, available under VL
  • Windows 10 Education, designed to meet the needs of schools – teachers, students, staff, and administrators – available under VL
  • Windows 10 Mobile for consumers, small, mid-size and large enterprises and academic institutions, available under OEM
  • Windows 10 Mobile Enterprise for mid-size and large enterprises with IoT scenarios, available under OEM (IoT), VL

For more details, read the blog post and view the video of the Windows 10 for Licensing webinar, available now on demand on MPN: http://aka.ms/Windows_10_Learning_Series. You'll need your MPN login to access it.

Windows 10 Free Upgrade Offer

There's been a lot of talk about Windows 10 being a free upgrade. For many customers, that will be true for the first year. So let's confirm the details:

 

  • Microsoft will offer a free upgrade to Windows 10 for qualified Windows 7, Windows 8 and Windows Phone 8.1 devices in the first year. After the first year, upgrades will be paid via boxed product and VL Upgrades.
  • Windows 8/8.1 and Windows 7 Home Basic and Home Premium devices upgrade to Windows 10 Home
  • Windows 8/8.1 Pro and Windows 7 Professional and Ultimate devices upgrade to Windows 10 Pro
  • If upgraded within the first 12 months following launch, the device will receive ongoing Windows 10 updates for free for the life of that device
  • Excludes Windows Enterprise and RT devices
  • The free Windows 10 upgrade is delivered through Windows Update; domain-joined machines can manually get the update via Windows Update. The upgrade cannot be deployed through WSUS.

Windows Update for Business

At Ignite, we announced the free Windows Update for Business service, available for all Windows Pro and Windows Enterprise devices, designed to help organisations keep their Windows devices always up to date with the latest security and features. In case you missed it, check out the blog, Announcing Windows Update for Business for what the service will provide.

//BUILD News

Announcements from the developer event included the new Microsoft Edge web browser, Continuum for phones, and how Windows 10 will empower all developers to bring their code to one billion Windows 10 devices, including code from iOS and Android. Four new SDKs will enable developers to start with an existing code base and distribute their app through the Windows Store. The four code bases are: web sites, .NET and Win32, Android, and iOS. Watch the full keynote on demand here. And register now to join us at Build Tour Sydney on Monday June 1st to get face-to-face time with Microsoft technical experts.

And remember, if you’d like to help shape the future of Windows and contribute to Windows 10, and haven't already joined, please join the Windows Insider Program, where you can download the latest technical previews of Windows 10 and provide us valuable feedback.

 

JavaScript programming with the Skype Web SDK


In this post, I introduce the Skype Web SDK, released as a preview at //build 2015. It makes it easy to program against Skype for Business using REST APIs.

...(read more)

IT Engineer Future Lab: de:code sessions the student reporters are watching - Suzuki edition

The three current university students (Mr. Takahashi, Mr. Yamamoto, and Mr. Suzuki) who make up the student reporter team formed for the @IT feature "IT Engineer Future Lab" will also be at de:code 2015. Here we introduce the de:code sessions they are keeping an eye on. They also appear in de:code Countdown, so please check that out as well. ----------------------------------------------------------------------------------------------------------- I am Keita Suzuki, a fourth-year student in the Faculty of Environment and Information Studies at Keio University, serving as a student reporter at de:code 2015. My development experience: I started programming in junior high school, when my love of games and PCs led me to wonder what kind of programs were running inside the PC. These days I mainly use Python for analysis in my research lab. Outside of school, I do contract web development, take part in hackathons, and teach a course (Android...) at Life is Tech! ...(read more)

Experiencing Data Latency for Many Data Types - 5/19 - Investigating


Initial Update: Tuesday, 5/19/2015 03:55 UTC

We are aware of issues within Application Insights and are actively investigating. Some customers may experience Data Latency.

The following data types are affected: Availability, Customer Event, Dependency, Exception, Page Load, Page View, Performance Counter.

Additionally, some customers may also not be able to query for telemetry data. DevOps is engaged and actively working on fixing the issue.

Work Around: none
Next Update: Before 5/19 06:00:00 UTC

We are working hard to resolve this issue and apologize for any inconvenience.

-Application Insights Service Delivery Team

Skype for Business Default Client Experience Matrix


The following table, found on TechNet (https://technet.microsoft.com/en-us/library/dn954919.aspx), lists the default client behavior for Lync 2013 clients after deploying the April 2015 client update.  Also, for those unable to push updates via in-band provisioning, or those who simply want to experiment with the UI on their local machine, here's a quick PowerShell script for changing the client UI: https://gallery.technet.microsoft.com/Quick-Skype-for-BusinessLyn-617ec072

 

| Server version | EnableSkypeUI setting | Client experience |
| --- | --- | --- |
| Skype for Business Server 2015 | Default | Skype for Business |
| Skype for Business Server 2015 | True | Skype for Business |
| Skype for Business Server 2015 | False | User asked to switch to Lync mode (user can switch to Skype for Business later if you change the UI setting to $true) |
| Lync Server 2010 or Lync Server 2013 (with correct patches) | Default | User asked to switch to Lync mode (user can switch to Skype for Business later if you change the UI setting to $true) |
| Lync Server 2010 or Lync Server 2013 (with correct patches) | True | Skype for Business |
| Lync Server 2010 or Lync Server 2013 (with correct patches) | False | User asked to switch to Lync mode (user can switch to Skype for Business later if you change the UI setting to $true) |
| Lync Server 2010 or Lync Server 2013 (without patches) | Default | User asked to switch to Lync client experience (user cannot switch to Skype for Business later) |
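
For reference, the EnableSkypeUI setting in the table is driven by client policy on the server side. A minimal sketch of forcing the Skype for Business UI through the global client policy, run from the management shell on a Skype for Business Server 2015 or correctly patched Lync Server:

Set-CsClientPolicy -Identity Global -EnableSkypeUI $true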


Cloud Champions II – Toe to Toe with the Competition – view on demand and access resources


Three product experts shared their top tips to compete successfully with Microsoft cloud services in today’s Cloud Champions webinar. You can access the recording and upcoming sessions on the Cloud Champions calendar.

Below are some additional key resources following on from today's session:

We look forward to seeing you at our next webinar on 9th June, when we'll hear from partners who have built out IP and IP services.

Solution to PowerShell script for setting the "ManagedBy" property of a group object in Active Directory


Irfan Ahmed, Senior Support Escalation Engineer, brings this amazing blog to us. Read on.

Requirement

Recently I was working on a requirement where an IT administrator wanted to set the "ManagedBy" property of a group object in AD, as shown in the screenshot below, using a PowerShell script. We want to do this by running the script under domain user credentials. We made sure that the domain user has full permission on the group.

 

Screenshot of a group object properties in Active Directory. Our objective is to set the checkbox highlighted.

 

 

A sample PowerShell script can be found here:

http://blogs.technet.com/b/blur-lines_-powershell_-author_shirleym/archive/2013/10/07/manager-can-update-membership-list.aspx

 

Environment

You are running Windows Server with Active Directory Web Services (ADWS) or older Active Directory Domain Controllers with the Active Directory Management Gateway Service (ADMGS) installed. 

 

Problem Description

The above-mentioned script works fine for a Domain Admin, but for an ordinary domain user it fails with "Access Denied" or "This security ID may not be assigned as the owner of this object". The error is thrown on the Set-Acl call in the script. Note that we had given the domain user full permission on the group object and still got this error. Another interesting point is that we are able to set "Manager can update membership list" with the domain user using the Active Directory Users and Computers console.

 

 

Solution

Currently the solution is to use VBScript to set the permission instead of a PowerShell script.

Below is a sample VBScript to add the manager and set the "Manager can update membership list" permission on the group object in AD:

 

Const ADS_ACETYPE_ACCESS_ALLOWED_OBJECT = &H5
Const ADS_RIGHT_DS_WRITE_PROP = &H20
Const ADS_ACEFLAG_INHERIT_ACE = &H00002
Const ADS_ACEFLAG_DONT_INHERIT_ACE = &H0

Const ADS_FLAG_OBJECT_TYPE_PRESENT = &H01
Const ADS_OBJECT_WRITE_MEMBERS = "{BF9679C0-0DE6-11D0-A285-00AA003049E2}"
'===========================================================================
On Error Resume Next
intEnabled =1
strDomainNT4 = "<DomainName>"

'DN of the Security Group object on which we need to give permission
    Set objGroup = GetObject("LDAP://CN=Test_SG,OU=All_SG,DC=br549,DC=nttest,DC=microsoft,DC=com")

'DN of the user to whom we need to give permission.
    objGroup.Put "managedBy", "CN=Test,CN=Users,DC=br549,DC=nttest,DC=microsoft,DC=com"
' The line below commits the managedBy change
    objGroup.SetInfo

' The below code is to set Manager can update membership list
    strManagedBy = objGroup.managedBy 'objGroup.Get("managedBy") 'get managed by

        Set objSecurityDescriptor = objGroup.Get("ntSecurityDescriptor")
        Set objDACL = objSecurityDescriptor.DiscretionaryACL
        Set objUser = GetObject("LDAP://" & objGroup.Get("managedBy"))

           ' Enable "Manager can update member list" check box
                    Set objACE = CreateObject("AccessControlEntry")
                    objACE.Trustee = strDomainNT4 & "\" & objUser.Get("sAMAccountName")

                    wscript.echo objACE.Trustee & " Can now manage users." 
                    objACE.AccessMask = ADS_RIGHT_DS_WRITE_PROP
                    objACE.AceFlags = ADS_ACEFLAG_DONT_INHERIT_ACE
                    objACE.AceType = ADS_ACETYPE_ACCESS_ALLOWED_OBJECT
                    objACE.Flags = ADS_FLAG_OBJECT_TYPE_PRESENT
                    objACE.objectType = ADS_OBJECT_WRITE_MEMBERS
                    objDACL.AddAce(objACE)


objSecurityDescriptor.DiscretionaryACL = objDACL
objGroup.Put "ntSecurityDescriptor", Array(objSecurityDescriptor)
objGroup.SetInfo
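
To try the script, save it to a .vbs file (the file name below is just an example) and run it with the console-based script host so the echo output is visible:

cscript //nologo SetManagedBy.vbs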

 

Written and Reviewed by: Irfan Ahmed, Senior Support Escalation Engineer, EMEA/INDIA Support Team

Presentations from the talks in Saint Petersburg and Novosibirsk have been published


From mid-March through the end of April, the Microsoft trailer traveled across the largest cities of Russia, Belarus, and Kazakhstan, and 13 conferences were held.

You can now download the slides from our speakers' talks at the Saint Petersburg and Novosibirsk conferences. Links to the talks are available below:

Installing Reporting Services extensions on SQL Server 2014


Hello,

When installing Reporting Services extensions on SQL Server 2014 you may run into this error:

"Could not load file or assembly 'Microsoft.SqlServer.BatchParser, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. An operation is not legal in the current state. (Exception from HRESULT: 0x80131509)".

To resolve this issue, install the following components from the Microsoft SQL Server 2008 R2 Feature Pack:

  • Microsoft® System CLR Types for SQL Server® 2008 R2
  • Microsoft® SQL Server 2008 R2 Shared Management Objects
  • Microsoft® SQL Server 2008 R2 ADOMD.NET
  • Microsoft® SQL Server 2008 R2 Analysis Management Objects

Go to the Microsoft® SQL Server® 2008 R2 Feature Pack download page: http://www.microsoft.com/en-us/download/details.aspx?id=16978

Then, expand "Install Instructions" to see the download links.

I hope this is helpful!

Bertrand

It rather involved being on the other side of this airtight hatchway: Code injection via QueueUserAPC


A security vulnerability report arrived that took the following form:

The QueueUserAPC function can be used to effect an elevation of privilege, as follows:

  1. Identify a process you wish to attack.
  2. Obtain access to a thread with THREAD_SET_CONTEXT access.
  3. Make some educated guesses as to what DLLs are loaded in that process. Start with kernel32.dll, since you're going to need it in step 6.
  4. From the attacking process, scan the memory of those DLLs looking for a backslash, followed by something that can pass for a path and file name. Such strings are relatively abundant because there are a lot of registry paths hard-coded into those binaries. Suppose you found the string \Windows NT\CurrentVersion\AppCompatFlags. Even though ASLR randomizes DLL placement, the placement is consistent among all processes, so an address calculated in one process is highly likely to be valid in all processes.
  5. Create a DLL called C:\Windows NT\CurrentVersion\AppCompatFlags.dll. Put your payload in this DLL.
  6. From the attacking thread, call QueueUserAPC with the address of LoadLibraryW as the function pointer, the victim thread as the thread handle, and a pointer to the fixed string identified in step 4 as the last parameter.
  7. The next time the victim thread processes APCs, it will pass \Windows NT\CurrentVersion\AppCompatFlags to the LoadLibraryW function, which will load the payload DLL, thereby effecting code injection and consequent elevation of privilege.

Note that this attack fails if the victim thread never waits alertably, which is true of most threads.
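
For concreteness, step 6 of the report boils down to a single call. A minimal C fragment, where hVictimThread and pszStringInDll are hypothetical names for the handle obtained in step 2 and the string address found in step 4:

// hVictimThread: opened with THREAD_SET_CONTEXT access (step 2)
// pszStringInDll: address of the "\Windows NT\CurrentVersion\AppCompatFlags"
//                 string found inside a loaded DLL (step 4)
QueueUserAPC((PAPCFUNC)LoadLibraryW, hVictimThread, (ULONG_PTR)pszStringInDll);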

If you have been paying attention, the alarm bells probably went off at step 2. If you have THREAD_SET_CONTEXT access to a thread, then you pwn that thread. There's no need to use QueueUserAPC to make the thread do your bidding. You already have enough to make the thread dance to your music. In other words, you are already on the other side of the airtight hatchway.

Here's how: Look for a code sequence that goes

    push someregister
    call LoadLibraryW

Use the SetThreadContext function to set the pushed register equal to the address of the string you found in step 4, and set the instruction pointer to the code fragment. The thread will then resume execution at the specified instruction pointer: It pushes the address of the string, and then it calls LoadLibraryW. Bingo, your DLL loads, and you didn't even have to wait for the thread to wait alertably.

On non-x86 platforms, this is even easier: Since all other platforms use register-based calling conventions, you merely have to load the address of the string into the "first parameter" register (rcx for x64) and set the instruction pointer to the beginning of LoadLibraryW.

By default, THREAD_SET_CONTEXT access is granted only to the user, and never to lower integrity levels. In other words, a low IL process cannot get THREAD_SET_CONTEXT access to a medium or high integrity thread, and a medium IL process cannot get access to a high integrity thread. This means that, by default, you can only get THREAD_SET_CONTEXT access to threads that have equivalent permissions to what you already have, so there is no elevation.

Evolving ASP.NET Apps–Updating Complex Grids


The Evolving ASP.NET Apps series + bonus content is being published as a book. Buy it here.

In the last post, we showed how easy it is to update some of the simpler grids. In this section, we will take a look at a more complicated example: the bugs grid on the main page.

With the bugs grid, we need to be a little more careful because we could conceivably have tens of thousands of bugs stored in the system. In this case, we will probably want to use AJAX-sourced data instead of DOM-sourced like we did in the simple admin grids. If I were to build this system from scratch, I would create a Web API endpoint that served JSON data. The client would make HTTP requests to the server, passing search, sorting and paging parameters. The server would pass those parameters to SQL Server, and all the filtering, sorting and paging would be done in SQL. This is hands down the most efficient way to do this. It minimizes the amount of data passed between the SQL server, the web server and the client. Requests and responses would be small, which would help ensure better throughput for the web server.

Current Approach

Let's review how the bugs.aspx page works today. Here is a screenshot of the current grid:

BugsGridOriginal

The original Bugs grid

Users can interact with this grid in a number of ways. They can select different queries from the Query dropdown. Selecting a different query executes a completely different SQL query and renders a completely different grid with different columns. Clicking on a column header will sort the result set by that column. Some of the columns have dropdowns that allow for filtering. Selecting a value from one of these dropdowns will filter the grid to only rows that match the selected value for that column. If the query returns a large number of bugs, then only 25 bugs will be displayed at a time and links to show the next and previous 25 bugs will be displayed at the bottom of the grid. This is all functionality that we would expect from a data grid in a business application. We are going to want to keep this base level of functionality.

Behind the scenes, this bugs page works by submitting a form back to the bugs.aspx page whenever the user completes one of these actions mentioned above. The form contains hidden fields to indicate which query to execute, which column to sort by and the selected filters. For example, sorting by a particular column will result in a full refresh of the page. This post-back approach is common for older ASP.NET applications but it leads to a less than optimal user experience. Ideally, only the grid itself would update.

<input type="hidden" name="new_page" id="new_page" runat="server" value="0" />
<input type="hidden" name="actn" id="actn" runat="server" value="" />
<input type="hidden" name="filter" id="filter" runat="server" value="" />
<input type="hidden" name="sort" id="sort" runat="server" value="-1" />
<input type="hidden" name="prev_sort" id="prev_sort" runat="server" value="-1" />
<input type="hidden" name="prev_dir" id="prev_dir" runat="server" value="ASC" />
<input type="hidden" name="tags" id="tags" value="" />

All the filtering, sorting and paging is done on the web server, not the database server. This is probably acceptable in many scenarios, but it is far from ideal. For example, imagine the grid displays bugs in pages of 25 at a time. What would happen if we had 100,000 bugs in BugTracker and we were trying to display the last 25 bugs in the list of 100,000?

First, the web server would send a query to the database and load all 100,000 rows into a DataSet in .NET. Immediately we can see some wasted resources here, because this consumes a lot of unnecessary bandwidth between the database and the web server. It will also cause a big spike in memory usage on the web server. Next, the server will create a DataView from the DataSet and apply sorting and filtering. This is much less efficient than asking the database to perform these operations. Databases are VERY good at sorting and filtering. DataViews can do an okay job, but they just aren't optimized to the same extent as a database server is. Finally, let's look at the C# code that handles paging.

int rows_this_page = 0;
int j = 0;
foreach (DataRowView drv in dv)
{
    // skip over rows prior to this page
    if (j < bugsPerPage * this_page)
    {
        j++;
        continue;
    }
    // do not show rows beyond this page
    rows_this_page++;
    if (rows_this_page > bugsPerPage)
    {
        break;
    }
    // Render the table row
}

As you can see, rendering the last page in the table involves iterating through every single row in the table. This is not a responsible use of the web server's CPU. Upon further review of the code, I can see that rendering the grid involves iterating over each row again for each column that contains a dropdown filter. That means that displaying ANY page of the grid involves iterating over every row several times. To make things even harder on the server, it appears that the full unfiltered datatable is being stored in session state: (DataTable)HttpContext.Current.Session["bugs_unfiltered"]. By storing the datatable in session state, the memory usage won't just spike with each request, it will remain high for every active user session.

The approach used to render this grid might result in acceptable performance for a small number of total bugs and a small number of users, but I don't think it will handle load well. Unlike the admin pages, the bugs page needs to be as efficient as possible. It is the main page of the application and we should expect that every user of BugTracker might be using it at the same time.

Given the current implementation, we can safely assume that the BugTracker deployments do not typically contain thousands of active bugs or thousands of active users. Regardless, I would like to aim for better performance with the bugs grid. The current implementation is also difficult to understand. Let's see if we can both improve the performance and maintainability of this code.

Filtering, Sorting and Paging in SQL

One of the biggest challenges here is the fact that this page can execute arbitrary queries that return any number of columns. This seems to be an important extensibility point in BugTracker so I would like to keep this feature.

If we want to handle filtering, sorting and paging on the database server, then we will need to extend the BugTracker queries feature.

The base installation of BugTracker contains about 10 different queries that can be executed on the bugs.aspx page. These can be added to by administrators, but they all look a little something like this:

select isnull(pr_background_color,'#ffffff') as bg_color, bg_id [id], isnull(bu_flag,0) [$FLAG],
bg_short_desc [desc], isnull(pj_name,'') [project], isnull(og_name,'') [organization], isnull(ct_name,'') [category], rpt.us_username [reported by],
bg_reported_date [reported on], isnull(pr_name,'') [priority], isnull(asg.us_username,'') [assigned to], isnull(st_name,'') [status], isnull(lu.us_username,'') [last updated by], bg_last_updated_date [last updated on]
from bugs
left outer join bug_user on bu_bug = bg_id and bu_user = @ME
left outer join users rpt on rpt.us_id = bg_reported_user
left outer join users asg on asg.us_id = bg_assigned_to_user
left outer join users lu on lu.us_id = bg_last_updated_user
left outer join projects on pj_id = bg_project
left outer join orgs on og_id = bg_org
left outer join categories on ct_id = bg_category
left outer join priorities on pr_id = bg_priority
left outer join statuses on st_id = bg_status

In order to support sorting, filtering and paging in SQL along with dynamic queries like this, we will need to make some compromises in terms of the query.

We can achieve this by dynamically writing a query that wraps the original query and applying sorting, filtering and paging using some SQL parameters. An abbreviated example of the query above would look something like this:

SELECT * FROM (
    select bg_id [id], isnull(pj_name,'') [project], isnull(og_name,'') [organization]
    from bugs
    left outer join projects on pj_id = bg_project
    left outer join orgs on og_id = bg_org
) t
WHERE (@organization_filter IS NULL OR t.[organization] = @organization_filter)
AND (@project_filter IS NULL OR t.[project] = @project_filter)
order by [id] ASC
OFFSET @offset ROWS FETCH NEXT @page_size ROWS ONLY;

There are some important things to point out about this query. First, I am using the OFFSET and FETCH syntax, which was introduced in SQL Server 2012. If we needed to support older versions of SQL Server, the same result could be accomplished using the ROW_NUMBER approach, as shown below. Also, the query is not optimal. For example, it would be far more efficient to apply the WHERE clause to the inner query against the id of the organization table rather than applying it against the organization name on the outer query. This is a trade-off I am willing to make to allow for the existing extensibility, and we will do some tests to ensure the performance of the new approach is still acceptable.
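
For completeness, here is a hedged sketch of what the pre-2012 equivalent might look like using ROW_NUMBER; the parameter names match the example above, and the inner query placeholder stands in for the wrapped BugTracker query:

SELECT * FROM (
    SELECT t.*, ROW_NUMBER() OVER (ORDER BY [id] ASC) AS rn
    FROM ( /* original wrapped query */ ) t
) paged
WHERE paged.rn > @offset AND paged.rn <= @offset + @page_size;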

Wrapping the existing queries

To wrap the existing queries, I created a new class called BugQueryExecutor. This class takes in an instance of the existing queries and dynamically builds the wrapping query with the filtering, sorting and paging applied. Calling ExecuteQuery will return a DataTable with only the rows for the specified page. It will also return counts of the total number of rows before and after filtering was applied. This will be useful so we know when to show the Next / Prev buttons and to give the user some indication of how many bugs were returned by the query vs. the number of bugs actually stored in BugTracker.

public class BugQueryExecutor
{
    private readonly Query _query;
    private readonly string[] _columnNames;
    private const int MaxLength = 5000;

    public BugQueryExecutor(Query query)
    {
        _query = query;
        _columnNames = query.ColumnNames;
    }

    public BugQueryResult ExecuteQuery(IIdentity identity, int start, int length, string orderBy,
        string sortDirection, BugQueryFilter[] filters = null)
    {
        return ExecuteQuery(identity, start, length, orderBy, sortDirection, false, filters);
    }

    public BugQueryResult ExecuteQuery(IIdentity identity, int start, int length, string orderBy,
        string sortDirection, bool idOnly, BugQueryFilter[] filters = null)
    {
        if (!string.IsNullOrEmpty(orderBy) && !_columnNames.Contains(orderBy))
        {
            throw new ArgumentException("Invalid order by column specified: {0}", orderBy);
        }

        string columnsToSelect = idOnly ? "id" : "*";
        var initialSql = string.Format("SELECT t.{0} FROM ({1}) t", columnsToSelect, GetInnerSql(identity));
        SQLString sqlString = new SQLString(initialSql);

        var initialCountSql = string.Format("SELECT COUNT(*) FROM ({0}) t", GetInnerSql(identity));
        SQLString countSqlString = new SQLString(initialCountSql);
        SQLString countUnfilteredSqlString = new SQLString(initialCountSql);

        int userId = identity.GetUserId();
        sqlString.AddParameterWithValue("@ME", userId);
        countSqlString.AddParameterWithValue("@ME", userId);
        countUnfilteredSqlString.AddParameterWithValue("@ME", userId);

        if (filters != null && filters.Any())
        {
            ApplyWhereClause(sqlString, filters);
            ApplyWhereClause(countSqlString, filters);
        }

        sqlString.Append(" ORDER BY ");
        sqlString.Append(BuildDynamicOrderByClause());

        sqlString.AddParameterWithValue("ORDER_BY", orderBy ?? _columnNames.First());
        sqlString.AddParameterWithValue("SORT_DIRECTION", sortDirection);

        sqlString.Append(" OFFSET @offset ROWS FETCH NEXT @page_size ROWS ONLY");
        sqlString.AddParameterWithValue("page_size", length > 0 ? length : MaxLength);
        sqlString.AddParameterWithValue("offset", start);

        return new BugQueryResult
        {
            CountUnfiltered = Convert.ToInt32(DbUtil.execute_scalar(countUnfilteredSqlString)),
            CountFiltered = Convert.ToInt32(DbUtil.execute_scalar(countSqlString)),
            Data = DbUtil.get_dataset(sqlString).Tables[0]
        };
    }

    private string BuildDynamicOrderByClause()
    {
        return string.Join(", ",
            _columnNames.Select(column => string.Format(
                @" CASE WHEN @ORDER_BY = '{0}' AND @SORT_DIRECTION = 'DESC' THEN [{0}] END DESC, CASE WHEN @ORDER_BY = '{0}' AND @SORT_DIRECTION = 'ASC' THEN [{0}] END ASC",
                column)).ToArray());
    }

    private void ApplyWhereClause(SQLString sqlString, BugQueryFilter[] filters)
    {
        sqlString.Append(" WHERE ");
        List<string> conditions = new List<string>();
        foreach (var filter in filters)
        {
            if (!_columnNames.Contains(filter.Column))
            {
                throw new ArgumentException("Invalid filter column: {0}", filter.Column);
            }
            string parameterName = filter.Column;
            conditions.Add(string.Format("[{0}] = @{1}", filter.Column, parameterName));
            sqlString.AddParameterWithValue(parameterName, filter.Value);
        }
        sqlString.Append(string.Join(" AND ", conditions));
    }

    private string GetInnerSql(IIdentity identity)
    {
        SQLString innerSql = new SQLString(_query.SQL);
        return Util.alter_sql_per_project_permissions(innerSql, identity).ToString();
    }
}

In addition to filtering, sorting and paging, this class also applies any user specific project permissions.

View the commit - Created Bug Query Executor

Some of the existing queries needed to change slightly to support this. Specifically, the BugQueryExecutor needs to know the list of columns returned by each query. I added an update script to update all the known queries. Any other custom queries will need to be updated by the system administrator.

View the commit - Updated known queries

Adding Web API

Now that we have added improved bug querying support, we need to expose a way for the application to call and execute the queries without requiring a full page refresh. Since we are planning to use jQuery DataTables, the easiest option will be to expose an HTTP endpoint that returns the data in JSON format. Since this type of scenario is exactly what Web API was designed for, I see this as a perfect opportunity for us to introduce Web API.

First, create a folder named Controllers. By convention, this is where we add Web API controllers. In Web API, controllers are simple classes that handle HTTP requests. Right click on the new Controllers folder and select Add Controller. Select Web API 2 Controller - Empty from the list of options and name the new controller BugQueryController. Visual Studio will add the references to Web API and provide instructions on how to complete the Web API configuration. Follow those instructions by adding GlobalConfiguration.Configure(WebApiConfig.Register); to the Application Start method of global.asax.cs.

That's it! We can use Web API in our project.
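
For readers who have not used Web API before, the scaffolded WebApiConfig class that the Register call refers to typically looks something like this; the route template below is the Visual Studio default, not anything specific to BugTracker:

using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Enable attribute routing and register the default convention-based route
        config.MapHttpAttributeRoutes();
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });
    }
}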

View the commit - Added Web API

From the new BugQueryController, we can add a method that calls the new BugQueryExecutor and returns the results in a format that will be convenient for jQuery DataTables. We also add the Authorize attribute to the BugQueryController. This ensures that only logged-in users can access the data from this HTTP endpoint.

[Authorize]
public class BugQueryController : ApiController
{
    public IHttpActionResult Get(int queryId, string sortBy, string sortOrder, int start, int length,
        [FromUri] BugQueryFilter[] filters)
    {
        Query query;
        using (Context context = new Context())
        {
            query = context.Queries.Find(queryId);
        }
        if (query != null)
        {
            BugQueryExecutor queryExecutor = new BugQueryExecutor(query);
            var result = queryExecutor.ExecuteQuery(User.Identity, start, length, sortBy, sortOrder, filters);
            return Ok(new
            {
                recordsTotal = result.CountUnfiltered,
                recordsFiltered = result.CountFiltered,
                data = result.Data
            });
        }
        return NotFound();
    }
}

The parameters on the get method correspond to the hidden fields that were used in the original bugs.aspx form.

By default, Web API is configured to use JSON.NET to serialize the result as JSON. Magically, JSON.NET knows how to serialize a DataTable so we didn't need to do any extra work here.

Replacing the Bugs Table

Now that we have all the backend pieces we need, we can finally replace the bugs table on bugs.aspx with a new jQuery DataTable.

Luckily, jQuery DataTables is extremely flexible, making it relatively easy to configure it to call our Web API endpoint to get data.

Rather than placing the bugs grid implementation directly in the bugs.aspx page, I added a new user control called BugList.ascx. By extracting the implementation to a user control, we have the option to potentially re-use it elsewhere in the application.

The markup for this user control simply generates the shell for the table based on the columns for the selected query. This includes a header with two rows: one for the column names and a second row for any filter dropdowns.

<table id="bug-table" class="table table-striped table-bordered">
    <thead>
        <!-- Main header -->
        <tr>
            <% foreach (string columnName in GetVisibleColumns()) { %>
                <th><%= GetColumnDisplayName(columnName) %></th>
            <% } %>
        </tr>
        <!-- Filter row -->
        <tr class="filter-row">
            <% foreach (string columnName in GetVisibleColumns()) { %>
                <th>
                    <% if (IsFilterableColumn(columnName)) { %>
                        <select class="table-filter" data-column-name="<%= columnName %>">
                            <% foreach (var option in GetFilterValues(columnName)) { %>
                                <option value="<%= option.Value %>"><%= option.Text %></option>
                            <% } %>
                        </select>
                    <% } %>
                </th>
            <% } %>
        </tr>
    </thead>
</table>

Based on the existing logic, only certain columns are visible and a subset of those columns are filterable. I extracted this logic from the old static utility methods and placed it in the code-behind for the user control.

Finally, I added the following JavaScript to the BugList control to actually render the grid:

$(function() {
    // Special column rendering functions not shown for brevity.
    var getCurrentFilters = function() {
        var filters = [];
        $("select.table-filter").each(function() {
            var currentFilter = $(this);
            var selectedValue = currentFilter.val();
            if (selectedValue) {
                var selectedColumnName = currentFilter.attr('data-column-name');
                filters.push({ Column: selectedColumnName, Value: selectedValue });
            }
        });
        return filters;
    };

    var bugsTable = $("#bug-table").dataTable({
        serverSide: true,
        processing: true,
        paging: true,
        orderCellsTop: true,
        orderMulti: false,
        searching: false,
        ajax: function(data, callback) {
            var sortColumnName = data.columns[data.order[0].column].data;
            var urlParameters = {
                queryId: queryId,
                sortBy: sortColumnName,
                sortOrder: data.order[0].dir,
                start: data.start,
                length: data.length,
                idOnly: false,
                filters: getCurrentFilters()
            };
            BugList.setQueryParams(urlParameters);
            var queryUrl = "api/BugQuery?" + $.param(urlParameters);

            $.get(queryUrl).done(function(d) {
                callback(d);
            }).fail(function() {
                callback();
            });
        }
    }).api();

    // Force a refresh after any of the filters are changed
    $("select.table-filter").on("change", function() {
        bugsTable.draw();
    });
});


By specifying an ajax function in the dataTable configuration, we are able to tell jQuery DataTables exactly how to call our new Web API method. There is also some custom column rendering logic that was migrated over from the old static utility methods; it is not shown above for brevity. While some of this JavaScript might look a little overwhelming, it allowed us to delete a lot of complex code from the bugs.aspx page. It will also eventually allow us to delete a large amount of very hard to understand code from bug_list.js and bug_list.cs.

Here is what the new grid looks like:

BugsGridNew
New Bugs Grid

The columns are a little narrow to fit them all on the page, but the overall look and feel is improved. The biggest improvement for the user is that any sort, filter or paging action no longer triggers a full page refresh. The page generally *feels* snappier.

View the commit - Updated Bugs page to use jQuery DataTables

Grid Related Functionality

At this point, I thought I was done. Unfortunately, there are a handful of features that relied on the old data grid implementation and are now broken with the new approach.

Specifically, a few features were relying on the following code from bugs.aspx:

Session["bugs"] = dv;
Session["SelectedBugQuery"] = qu_id_string;

if (ds != null)
{
    Session["bugs_unfiltered"] = ds.Tables[0];
}
else
{
    Session["bugs_unfiltered"] = null;
}

After executing a query, both the filtered and unfiltered results were stored in session state. This was used by the print list, print details and export to Excel features to get the list of bugs without re-querying the database. It was also used by the edit bug page to display links to the previous and next bugs in the query results.

While this may be convenient for the developer, it will be hard on the web server's memory. Let's start by fixing these features so they work with the new implementation, then we can compare memory and CPU usage for before and after.

Printing & Exporting to Excel

When the user selects the print option from the bugs page, they are taken to a simple page that lists all the bugs from the current query in an HTML table that is formatted for printing. This page does not have the BugTracker header or footer. Previously, this page would get the list of bugs from session state:

dv = (DataView)Session["bugs"];

The markup for the print page simply iterates over all the bugs in this DataView and outputs them to an HTML table. The DataView that was stored in session state contained all the bugs, not only the bugs for the current page.

In the new version, the server does not keep track of the query results in session state. Even the client does not have a list of all the bugs. The client only has a reference to the bugs for the current page. The only way for us to get a list of all the bugs is to have the client pass all the current bug query parameters (queryId, sort column/direction and selected filters) to the print page. The print page can then use the BugQueryExecutor to get a list of all the bugs. Once we have the results from the BugQueryExecutor, the markup for the print page can remain unchanged.

First, let's have the client keep track of the currently selected query parameters by storing it in sessionStorage. Session storage is a local key-value store that is available in all browsers since IE8. It is a convenient way for the client to remember information between page requests without storing large amounts of data in cookies.

var BugList = (function() {
    var setQueryParams = function(queryParams) {
        sessionStorage.setItem("BugQuery", JSON.stringify(queryParams));
    };
    var getQueryParams = function() {
        return JSON.parse(sessionStorage.getItem("BugQuery"));
    };
    return {
        setQueryParams: setQueryParams,
        getQueryParams: getQueryParams
    };
}());

Now in the BugsList.ascx control, we can set the query parameters whenever the data grid is refreshed:

ajax: function(data, callback) {
    var sortColumnName = data.columns[data.order[0].column].data;
    var urlParameters = {
        queryId: queryId,
        sortBy: sortColumnName,
        sortOrder: data.order[0].dir,
        start: data.start,
        length: data.length,
        filters: getCurrentFilters()
    };
    BugList.setQueryParams(urlParameters);
    var queryUrl = "api/BugQuery?" + $.param(urlParameters);

    $.get(queryUrl).done(function(d) {
        callback(d);
        BugList.saveCurrentBugList(d);
    }).fail(function() {
        callback();
    });
}

When the user clicks the Print button, we can get the current query parameters and pass those as parameters to the print_bugs.aspx page.

$(function() {
    var printBugs = function(baseUrl) {
        var queryParams = BugList.getQueryParams();
        if (queryParams != null && queryParams.queryId) {
            queryParams.start = 0;
            queryParams.length = -1; // Get all the bugs instead of just 1 page
            window.open(baseUrl + $.param(queryParams), "_blank");
        }
    };

    $("#printbuglist").click(function() {
        printBugs("print_bugs.aspx?");
        return false;
    });
});

In the code behind for print_bugs.aspx, we use the parameters to execute a query and get the list of bugs to print.

int queryId = Convert.ToInt32(Request["queryId"]);
int start = Convert.ToInt32(Request["start"]);
int length = Convert.ToInt32(Request["length"]);
string sortBy = Request["sortBy"];
string sortOrder = Request["sortOrder"];
BugQueryFilter[] filters = BuildFilter(Request.Params);

Query query;
using (Context context = new Context())
{
    query = context.Queries.Find(queryId);
}

BugQueryExecutor executor = new BugQueryExecutor(query);

BugQueryResult result = executor.ExecuteQuery(User.Identity, start, length, sortBy, sortOrder, filters);

dv = new DataView(result.Data);
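
The BuildFilter helper is not shown in the post. A hedged sketch of what it might look like, assuming the filters arrive as filters[0][Column] / filters[0][Value] pairs, which is the form jQuery's $.param produces for an array of objects, and that BugQueryFilter exposes settable Column and Value properties:

using System.Collections.Generic;
using System.Collections.Specialized;

private BugQueryFilter[] BuildFilter(NameValueCollection parameters)
{
    var filters = new List<BugQueryFilter>();
    for (int i = 0; ; i++)
    {
        // Stop at the first missing index
        string column = parameters[string.Format("filters[{0}][Column]", i)];
        if (column == null) break;
        filters.Add(new BugQueryFilter
        {
            Column = column,
            Value = parameters[string.Format("filters[{0}][Value]", i)]
        });
    }
    return filters.ToArray();
}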

The design was exactly the same for the Export to Excel and Print Details features. Repeating the same fix solved those problems:

View the commit - Print and Export to Excel without session state

Navigating through individual results

Clicking a bug in the bug table takes you to the Edit Bug page. This page shows the details for the selected bug and also shows links to the previous and next bugs in the query the user was viewing on the bugs page.

PrevNextBug

Navigating through individual bugs

This provides a very convenient way for the user to navigate through the list of bugs they were viewing on the main page. Unfortunately this is now broken because the implementation relied on the bugs being stored in session state.

We can move some of this logic over to the client side by storing a list of the bug ids from the current query in session storage, similar to the way we stored the current query parameters:

var saveCurrentBugList = function(queryResults) {
    var currentBugList = {
        recordsFiltered: queryResults.recordsFiltered,
        bugIds: []
    };
    if (queryResults.data) {
        var bugList = queryResults.data;
        for (var i = 0; i < bugList.length; i++) {
            currentBugList.bugIds.push(bugList[i].id);
        }
    }

    sessionStorage.setItem("BugList", JSON.stringify(currentBugList));
};
var getCurrentBugList = function() {
    return JSON.parse(sessionStorage.getItem("BugList"));
};

On the edit bug page, we can look for the current bug in the list of bug ids. If the bug is found in the list, then we can display the Previous / Next bug links in the same way as before. All we did was move that logic from the server to the client. There is some additional complexity around getting the next or previous list of bug ids when there is more than one page of bugs. This can be handled in JavaScript by calling the BugQuery Web API endpoint when necessary.
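
A hypothetical sketch of that client-side lookup; currentBugId would be rendered into the edit page by the server, and showPrevLink / showNextLink are stand-ins for whatever updates the markup:

var bugList = BugList.getCurrentBugList();
var index = bugList ? bugList.bugIds.indexOf(currentBugId) : -1;
if (index > 0) {
    showPrevLink(bugList.bugIds[index - 1]);
}
if (index >= 0 && index < bugList.bugIds.length - 1) {
    showNextLink(bugList.bugIds[index + 1]);
}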

View the commit - Prev/Next bug links without session state

Conclusion

It was a lot of work but we now have a more modern and flexible approach to rendering a complex data grid. In the next post, we will do some detailed performance and load testing to ensure that our new approach is in fact faster than the previous approach.

WorkFlow Manager Configuration fails with (400) Bad Request


Adding a node to your Workflow Manager farm (the Add-WFHost step) may fail with the following error seen in the configuration log:

[Progress] [10.03.2015 11:43:01]: Workflow Manager configuration starting.
[Verbose] [10.03.2015 11:43:04]: Configuring Workflow Manager runtime settings.
[Progress] [10.03.2015 11:43:04]: Configuring Workflow Manager runtime settings.
[Error] [10.03.2015 11:43:05]: System.Management.Automation.CmdletInvocationException: The remote server returned an error: (400) Bad Request. --->
System.ArgumentException: The remote server returned an error: (400) Bad Request. ---> System.Net.WebException: The remote server returned an error: (400) Bad Request.
   at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
   at Microsoft.ServiceBus.Messaging.ServiceBusResourceOperations.GetAsyncResult`1.EndGetResponse(GetAsyncResult`1 thisPtr, IAsyncResult ar)
   at Microsoft.ServiceBus.Messaging.IteratorAsyncResult`1.StepCallback(IAsyncResult result)
   --- End of inner exception stack trace --- 

Server stack trace:

Exception rethrown at [0]: 
   at Microsoft.ServiceBus.Common.AsyncResult.End[TAsyncResult](IAsyncResult result)
   at Microsoft.ServiceBus.NamespaceManager.OnEndTopicExists(IAsyncResult result)
   at Microsoft.ServiceBus.NamespaceManager.TopicExists(String path)
   at Microsoft.Workflow.Deployment.Commands.WorkflowServiceConfigHelper.SetWFRuntimeSettings(String resourceDBConnectionString, String config)
   at Microsoft.Workflow.Deployment.Commands.AddWFHost.CallWFRuntimeSettings(Service wfserviceInfo, String unencryptedResourceConnectionString)

   --- End of inner exception stack trace ---
   at System.Management.Automation.PowerShell.EndInvoke(IAsyncResult asyncResult) 
   at Microsoft.Workflow.Deployment.ConfigWizard.CommandletHelper.InvokePowershell(Command command, Action`3 updateProgress)
   at Microsoft.Workflow.Deployment.ConfigWizard.ProgressPageViewModel.AddWFNode(FarmCreationModel model, Boolean isFirstCommand) 

If the earlier configuration steps have succeeded (namely configuring the Service Bus farm that is needed by WFM), you can check whether you can connect to your Service Bus namespace using the Service Bus Explorer tool: http://blogs.msdn.com/b/paolos/archive/2014/05/21/service-bus-explorer-2-3-and-2-1-improved-version-now-available.aspx

To get the connection string for the Service Bus server, you can run Get-SBClientConfiguration from the Service Bus PowerShell console.

If you get the same error using Service Bus Explorer - "The remote server returned an error: (400) Bad Request" - please read on. (Otherwise, your issue may be different from the one described here.) Check the AD group membership of the service account provided during Workflow Manager configuration.

There is a known issue when this service account is a member of too many AD groups. This causes the Windows token that is generated for authentication by Service Bus to be too large. The workaround is either to reduce the AD group membership for this service account or to try the configuration with a new service account.
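
A quick way to gauge whether the account is affected is to count its group memberships. A hedged sketch using the ActiveDirectory module; the account name is a placeholder, and note this counts only direct memberships, not nested groups:

(Get-ADUser -Identity svc_workflow -Properties memberOf).memberOf.Count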

 

Written By
Arindam Paul Roy

Reviewed By
Jainath V R

Microsoft India GTSC

 


FileSystemWatcher Follies


 

System.IO.FileSystemWatcher is a handy class that can be used to monitor directories for some types of changes with very little programming effort for the developer who uses it.  For some situations, it’s incredibly useful.  However, it often gets used in program designs with poor assumptions about how things are going to play out in a real world enterprise scenario, and that can lead to problems.   I’ll be going over several examples below and using them to explain limitations of the FileSystemWatcher class, but before that, let’s discuss what the FileSystemWatcher actually does.

FileSystemWatcher: ReadDirectoryChangesW wrapped for .NET

System.IO.FileSystemWatcher is basically a wrapper class for the native API ReadDirectoryChangesW.  There are some differences, mainly added parsing capabilities by FileSystemWatcher after it receives results from ReadDirectoryChangesW that it can use to decide whether or not to report the changes to its subscribers based on its settable properties.  It’s the similarities that are going to matter in most cases though.  This is the function prototype for ReadDirectoryChangesW from WinBase.h in the Windows 8.1 SDK:

WINBASEAPI
BOOL
WINAPI
ReadDirectoryChangesW(
    _In_ HANDLE hDirectory,
    _Out_writes_bytes_to_(nBufferLength, *lpBytesReturned) LPVOID lpBuffer,
    _In_ DWORD nBufferLength,
    _In_ BOOL bWatchSubtree,
    _In_ DWORD dwNotifyFilter,
    _Out_opt_ LPDWORD lpBytesReturned,
    _Inout_opt_ LPOVERLAPPED lpOverlapped,
    _In_opt_ LPOVERLAPPED_COMPLETION_ROUTINE lpCompletionRoutine
);

There is a restriction here that one probably wouldn’t notice when looking only at FileSystemWatcher: only directories can be monitored, and while monitoring of subdirectories is optional, the entire directory is going to be monitored at minimum.  So if FileSystemWatcher is set to watch a directory with a lot of activity but is filtered down to a single file, it is going to consume a lot of resources processing changes that never make it back to its subscribers.
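
To make that concrete, here is a minimal C# sketch (the path and file name are hypothetical).  Note that the Filter below is applied by the .NET layer after ReadDirectoryChangesW reports a change; the underlying call still monitors the entire directory.

using System;
using System.IO;

class SingleFileWatchSample
{
    static void Main()
    {
        // Hypothetical path; the directory must exist before the watcher starts.
        var watcher = new FileSystemWatcher(@"C:\Busy\Directory")
        {
            Filter = "target.log",                 // filtering happens client-side, after the native call reports the change
            NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.FileName,
            IncludeSubdirectories = false
        };

        watcher.Changed += (s, e) => Console.WriteLine($"Changed: {e.FullPath}");
        watcher.Created += (s, e) => Console.WriteLine($"Created: {e.FullPath}");
        watcher.EnableRaisingEvents = true;

        Console.WriteLine("Watching; press Enter to stop.");
        Console.ReadLine();
    }
}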

There’s also some guidance in the documentation that is often relevant to subscribers of FileSystemWatcher.  When the buffer overflows, ReadDirectoryChangesW reports ERROR_NOTIFY_ENUM_DIR, and FileSystemWatcher surfaces that through its OnError event as an InternalBufferOverflowException.  When that happens, the documentation states that “you should compute the changes by enumerating the directory or subtree”.  That effectively means that if an application using FileSystemWatcher does not want to miss changes that happen when a buffer overflow occurs, it needs to keep track of the state of things as they change, and whenever an InternalBufferOverflowException is reported, manually enumerate the directory and compare it against the last known state to compute the changes.
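
As a rough sketch of that approach, assuming the application only cares about files directly in the watched directory and tracks them by last-write time (a production version would also handle Created, Deleted, and Renamed events, and synchronize access to the snapshot):

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

class OverflowAwareWatcher
{
    private readonly string _path;
    private readonly FileSystemWatcher _watcher;
    private Dictionary<string, DateTime> _snapshot;   // last known state: full path -> last write time (UTC)

    public OverflowAwareWatcher(string path)
    {
        _path = path;
        _snapshot = TakeSnapshot();

        _watcher = new FileSystemWatcher(path);
        _watcher.Changed += (s, e) => _snapshot[e.FullPath] = File.GetLastWriteTimeUtc(e.FullPath);
        _watcher.Error += OnError;
        _watcher.EnableRaisingEvents = true;
    }

    private Dictionary<string, DateTime> TakeSnapshot() =>
        Directory.EnumerateFiles(_path).ToDictionary(f => f, f => File.GetLastWriteTimeUtc(f));

    private void OnError(object sender, ErrorEventArgs e)
    {
        if (e.GetException() is InternalBufferOverflowException)
        {
            // Events were lost, so re-enumerate and diff against the last known state.
            var current = TakeSnapshot();
            var added   = current.Keys.Except(_snapshot.Keys).ToList();
            var removed = _snapshot.Keys.Except(current.Keys).ToList();
            var changed = current.Where(kv => _snapshot.TryGetValue(kv.Key, out var old) && old != kv.Value)
                                 .Select(kv => kv.Key).ToList();

            Console.WriteLine($"Recovered after overflow: {added.Count} added, {removed.Count} removed, {changed.Count} changed");
            _snapshot = current;
        }
    }
}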

With that basic overview complete, let’s move on to our examples.

Example 1: Attempting to detect when a file has been created and its contents finished writing to disk

This often happens when someone creates a design where data needs to be moved from one server to another, and when the data arrives at the destination, the destination needs to react to it.  In an effort to “simplify” things and avoid networking code, some developers use the file system to pass data to another machine, either through a file sharing technology or another local service like FTP or SFTP.  For this example, let’s say that Server A has an application that is creating files and then using a script to FTP them to Server B.  Server B then has a service that is watching the directory where the FTP service is storing the files.  When data arrives in the directory in the form of files, Server B attempts to open them and act on the data.

Obviously, there are a lot of holes in the strategy above, so we won’t cover all of them, but here are some highlights:

  1. There is no FileSystemWatcher event for when a file is closed.
  2. Monitoring for size changes and/or the last-write time is going to be affected by the disk cache, if present.
  3. Suppose that the developers of the application realize the above two things and decide to send a second file over to indicate that the first file has finished transferring through FTP.  This also will not work as there is no guarantee that the second file doesn’t get written before the first.
  4. Repeatedly trying to open a file with no sharing after its creation is reported may eventually tell you when it has been closed, but it uses a lot of resources and can become problematic if many files start arriving at once or if the watched directory is accessed over the network.  If polling is unavoidable, at least back off between attempts (see the sketch after this list).
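
For cases where polling is the only option, here is a sketch of a probe that backs off between attempts rather than spinning; it assumes the writer holds the file open without sharing until it is finished:

using System;
using System.IO;
using System.Threading;

static class FileReadyProbe
{
    // Returns true once the file can be opened exclusively, i.e. no other
    // process still has it open for writing. Polls with a delay to avoid
    // hammering the file server; gives up after maxAttempts.
    public static bool WaitUntilClosed(string path, int maxAttempts = 30, int delayMs = 1000)
    {
        for (int i = 0; i < maxAttempts; i++)
        {
            try
            {
                using (new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.None))
                {
                    return true;   // exclusive open succeeded; the writer has closed the file
                }
            }
            catch (IOException)
            {
                // Still locked by the writer (or a transient sharing violation); wait and retry.
                Thread.Sleep(delayMs);
            }
        }
        return false;
    }
}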

Example 2: Attempting to watch for changes on a high traffic network share

This happens when the directory being watched has many files changing in it and/or subdirectories are being watched too and those have many changes happening.  The buffer size for network paths is capped due to limits in the underlying network technologies used by file sharing protocols.  If many changes are happening, the buffer will often overflow.  In order to get the changes, the directory or directories will have to be enumerated.  The more files that there are in a directory, the longer it takes to enumerate, so this can quickly add up to a lot of resources for the server actually hosting the files and delays for the application trying to access the information. 

Watching a single directory without subdirectories and minimizing the number of files in that directory will improve performance and reduce the number of potential buffer overflows. 
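
A short sketch of scoping a watcher down along those lines (the UNC path is hypothetical; the 64 KB figure reflects the commonly cited buffer cap for network paths):

using System;
using System.IO;

class ScopedWatcherSample
{
    static void Main()
    {
        var watcher = new FileSystemWatcher(@"\\fileserver\share\incoming")   // hypothetical UNC path
        {
            IncludeSubdirectories = false,           // watch one directory only
            NotifyFilter = NotifyFilters.FileName,   // subscribe only to the change types you need
            InternalBufferSize = 64 * 1024           // request the largest buffer; network paths cap out around here
        };

        watcher.Created += (s, e) => Console.WriteLine($"Created: {e.Name}");
        watcher.EnableRaisingEvents = true;

        Console.ReadLine();   // keep the process alive while watching
    }
}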

Example 3: Assuming that all directories behave the same way

Oftentimes a developer will design an application that uses FileSystemWatcher to monitor a path that is modifiable through some sort of configuration.  If that developer assumes that the location will be a local hard disk on a physical machine, many unexpected problems could arise in the application.  For instance, let’s go over some of the things that could happen if the user enters a path that ultimately ends up on a network file share:

  1. The remote file server may not be a Windows server and may not properly support ReadDirectoryChangesW.  Many other operating systems support at least one file sharing protocol that carries ReadDirectoryChangesW semantics, but not all of them do, and that protocol may not be in use in the environment for some reason.
  2. There could be latency in the connection that causes performance problems in the application, especially if the application performs synchronous I/O operations on GUI-interacting threads.  This could be actual network latency, or it could simply be the additional work it takes to perform the same functionality against a remote system.
  3. The remote file server may be shut down uncleanly and fail to send a packet back to the client machine informing it that the handle is now invalid.  If that happens, FileSystemWatcher won’t report an error or any changes from that point on.  It won’t know there’s a problem, and there’s no built-in timeout mechanism in the class; implementing your own watchdog is about the best one can do to cover this situation (see the sketch after this list).
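
One way to approximate such a timeout, sketched below under stated assumptions: a watchdog timer periodically probes the watched path and recycles the watcher when the probe fails.  This is a mitigation the class does not provide, not a built-in feature.

using System;
using System.IO;
using System.Threading;

class WatchdogWatcher : IDisposable
{
    private readonly string _path;
    private readonly Timer _watchdog;
    private FileSystemWatcher _watcher;
    private bool _recreatePending;

    public WatchdogWatcher(string path)
    {
        _path = path;
        _watcher = CreateWatcher();
        // The dead-handle case reports nothing on its own, so probe on a timer.
        _watchdog = new Timer(_ => Probe(), null, TimeSpan.FromSeconds(60), TimeSpan.FromSeconds(60));
    }

    private FileSystemWatcher CreateWatcher()
    {
        var w = new FileSystemWatcher(_path);
        w.Changed += (s, e) => Console.WriteLine($"Changed: {e.FullPath}");
        w.EnableRaisingEvents = true;
        return w;
    }

    private void Probe()
    {
        // Directory.Exists returns false both when the directory is gone and when
        // the server is unreachable, which is the condition we care about here.
        bool reachable = Directory.Exists(_path);
        if (!reachable && !_recreatePending)
        {
            _watcher.Dispose();       // the existing handle may be silently dead
            _recreatePending = true;
        }
        else if (reachable && _recreatePending)
        {
            _watcher = CreateWatcher();
            _recreatePending = false;
        }
    }

    public void Dispose()
    {
        _watchdog.Dispose();
        _watcher.Dispose();
    }
}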

Wrapping up

In conclusion, FileSystemWatcher is a handy class, but its use needs careful thought, especially in enterprise scenarios, in order to avoid unexpected behavior within an application and to remain a good member of the computing ecosystem while achieving the desired outcomes.

 

Follow us on Twitter, www.twitter.com/WindowsSDK.

Microsoft issues RFP for Surface Hub

Migrating “My Map” from Google to Bing

Many users have asked about migrating their data to Bing Maps from Google Maps, after recent changes to the Google “My Map” feature. In this blog post, we show available solutions in Bing Maps.

Getting your data out of Google “My Places”

To get started, you can find your “My Map” data on Google here. Note that Google is currently giving users until June 1st, 2015 to download their data. On the left side panel you will see a Maps tab; select it. This will load a list of all the maps you have created.

Screenshot: List of user created “My Map” layers.

Select the map layer you want to work with. This will take you to another page where you can export the data as a KML file. When the page has loaded, you will see a panel that contains a list of the shapes in your map. Select the options button (three vertical dots) beside the Share button. From the options menu, select the Download Classic My Map data or Export to KML option. This will download the data as a KML file.

Screenshot: Exporting a My Map layer as a KML file.

Save the KML file somewhere on your computer where you can easily find it.

Import data to “My Places” on Bing Maps

Bing Maps has had a “My Places” feature, which is very similar to Google’s “My Maps” feature, since it first came online in 2005. The “My Places” feature makes it easy to import data from common spatial file formats such as KML, GeoRSS, and GPX. To get started go to http://bing.com/maps and sign in using your Microsoft account. (If you don’t have a Microsoft account, you can link any existing email address as your Microsoft account.) Once signed in, select the “My Places” button in the left side panel, which will open an editor panel over the map and load any data collections you have created previously.

Screenshot: My Places feature on Bing Maps

Next, press the Import button. This will open up a dialog where you can browse for a file to import and then either merge that data with an existing data collection or create a new one.

Screenshot: Import dialog panel

Once you have imported your data you will see the title of the collection listed in the editor panel. Select your collection to see it loaded on the map. You can add or edit the data using the tools available on the editor panel or by selecting any of the shapes.

Screenshot: My Map data imported into My Places on Bing Maps

One caveat is that the My Places functionality limits the size of collections to 200 shapes. If you try importing a collection that is bigger than this, it will only import the first 200 shapes.

Excel Power Maps

If the data you are working with is primarily custom regions that are color-coded based on some metric, then take a look at the new custom regions feature in Excel Power Maps. Power Maps is a powerful add-in for Excel that lets you visualize a lot of data on a map, create timelines and videos, and much more. One of the newest features lets you load custom regions from KML and ESRI Shapefiles and link them to your data to create thematic maps.

Build a custom Bing Maps application

There are many ways you can take KML data and integrate it with Bing Maps to create a custom application. If you want to create an app for the Web, Windows 8, Windows Phone, or WPF, take a look at the Microsoft Maps Spatial Toolbox, which has a lot of tools and code samples for overlaying KML and KMZ files on top of the different Bing Maps APIs available for these platforms. These apps will load all the data in your KML files. The only real limit is the amount of memory and processing power your computer has; the more data you overlay, the slower the apps will become.

You can also easily upload KML files to the Bing Spatial Data Services and have your data exposed as a spatial REST service. This allows you to perform complex queries against your data, such as find nearby, find along a route, or find all data that intersects a custom shape. You can then easily query your data from this service and overlay it on top of Bing Maps. Documentation for the Bing Spatial Data Services can be found here. You can also find some simple code samples for querying data sources stored in the Bing Spatial Data Services in the Bing Maps Interactive SDK.
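
As a rough sketch of what a “find nearby” query can look like, assuming the documented REST query-by-area pattern; the access ID, data source name, entity type, coordinates, and key below are all hypothetical placeholders:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class SdsQuerySample
{
    static async Task Main()
    {
        // All hypothetical placeholders: substitute your own data source details and Bing Maps key.
        const string accessId   = "YourAccessId";
        const string dataSource = "MyStores";
        const string entityType = "Store";
        const string key        = "YourBingMapsKey";

        // Find entities within 10 km of a point: nearby(latitude, longitude, distanceInKm).
        string url = $"https://spatial.virtualearth.net/REST/v1/data/{accessId}/{dataSource}/{entityType}" +
                     $"?spatialFilter=nearby(47.64,-122.13,10)&$format=json&key={key}";

        using var http = new HttpClient();
        string json = await http.GetStringAsync(url);
        Console.WriteLine(json);
    }
}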

With the Bing Spatial Data Services, a single data source can contain up to 600,000 shapes. Needless to say, if you tried loading that many shapes on a map the performance would be poor, and this is where the power of the spatial queries comes in. You can easily create an application that loads only the data in the current map view, as shown in this blog post.

Additional Resources

We hope that you have found this information useful. You can find documentation for all of the Bing Maps development APIs here. I also recommend checking out the Bing Maps Interactive SDK. If you have any technical questions, let us know on the Bing Maps forums and we will be more than happy to help.

- Ricky Brundritt, Bing Maps Senior Program Manager

Just Say It, a solution to your EMR data entry dilemma

For as long as I can remember (and that’s a very long time), clinicians have been complaining about the burden of entering data into the EMR. For most of us who started practice when medical records were primarily on paper, there were basically two modalities for documenting our work. One was the pen. It worked pretty well for SOAP notes and brief encounters. I even recall documenting full physical exams on paper. The problem with writing things down on paper was that my hand would get tired, and...(read more)

Microsoft Advanced Threat Analytics Preview

Following the acquisition of Aorato in November 2014, Microsoft Advanced Threat Analytics (ATA) is the first version of the product published by Microsoft, released as a so-called Technical Preview.

You can download this release from the evaluation download center at:
http://www.microsoft.com/en-us/evalcenter/evaluate-microsoft-advanced-threat-analytics

If you want to discover this very promising new product, I invite you to watch the session presented in English at Microsoft Ignite:
https://channel9.msdn.com/Events/Ignite/2015/BRK3870
