
AX – How to include Dimension Values in the Budget Control Statistics inquiry


INTRODUCTION

Budget control is a validation that checks whether enough funds are available before a purchase is made. If there is not enough budget for a given purchase, Microsoft Dynamics AX returns a message indicating the lack of funds for the affected ledger account and its financial dimensions.

Microsoft Dynamics AX offers several inquiry tools for tracking the budget. One of them is the 'Budget control statistics' inquiry [Image 1 – Budget control statistics], which shows: available budget funds, total revised budget, total actual expenditures, budget reservations for encumbrances, and budget reservations for pre-encumbrances.

 

Image 1 – Budget control statistics

 

Note that the dimension values displayed for the Dimension values option [Image 1 – Budget control statistics] match the criteria specified under Budget control configuration – Define budget control rules, since this is where the financial dimension combinations for budget control are defined.

 

DEMONSTRATION

The following exercise demonstrates the behavior described above:

 

Note that in the Ledger account criteria defined under Budget control configuration – Define budget control rules (path: Budgeting > Setup > Budget control) [Image 2 – Budget control configuration], a range of accounts from 601200 to 601400 has been specified.

Image 2 – Budget control configuration

 

Consequently, the dimension values shown for the Budget control statistics inquiry (path: Budgeting > Inquiries and reports > Budget control) [Image 3 – Dimension values in Budget control statistics] start from ledger account 601200 onward.

 

Image 3 – Dimension values in Budget control statistics

 

Now, if the range of ledger accounts considered by the budget control rules is widened [Image 4 – Budget control configuration] to start from ledger account 500140,

 

Image 4 – Budget control configuration

 

then the dimension values available for Budget control statistics will start from ledger account 500140 [Image 5 – Budget control statistics].

 

Image 5 – Budget control statistics

 

References:

Budget control: Overview and configuration
https://ax.help.dynamics.com/en/wiki/budget-control-overview-and-configuration/

Budget control statistics by period page
https://ax.help.dynamics.com/en/wiki/budget-control-statistics-by-period-page-field-descriptions/

 

 



reading the ETW events in PowerShell


When testing or otherwise controlling a service, you often need to read its log, which gets written in the form of ETW events. The basic cmdlet Get-WinEvent does this, but with it you can't just read the events continuously; instead you have to keep polling and stitching the new events onto the previous ones. I want to show the code that does this polling.

The basic use that starts this reading in a job whose output can be sent into expect is like this:

    $col_job = Start-Job -Name $LogJobName -ScriptBlock {
        param($module)
        Import-Module $module
        # -Nprev guards against the service starting earlier than the poller
        Read-WinEvents -LogName Microsoft-Windows-BootEvent-Collector/Admin -Nprev 1000 | % {
            "$($_.TimeCreated.ToString('yyyy-MM-dd HH:mm:ss.fffffff')) $($_.Message)"
        }
    } -Args @("myscriptdir\TextTools.psm1")

Starting the job is a bit convoluted because the interpreter in the job doesn't inherit anything at all from the current interpreter; all it gets is literally its arguments. So to use a function from a module, that module has to be imported explicitly by the code running in the job.

The reading of events is pretty easy – just give it the ETW log name. If you don’t care about the events that might be in the log from before, that’s it. If you do care about the previous events (such as if you are just starting the service and want to see all the events it had sent since the start), the parameter -Nprev says that you want to see up to this number of the previously logged events. This is more reliable than trying to start the log reading job first and then the service.
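For example, here is a minimal sketch of calling the poller directly in the current session (assuming the module that defines Read-WinEvents has already been imported; the log name and formatting are the same as in the job above):

    # A hypothetical direct call, outside of a job: return up to the last 100
    # previously logged events, then keep polling every half second.
    Read-WinEvents -LogName Microsoft-Windows-BootEvent-Collector/Admin -Nprev 100 -Period 0.5 |
        ForEach-Object { "$($_.TimeCreated.ToString('yyyy-MM-dd HH:mm:ss.fffffff')) $($_.Message)" }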

Of course, if you’ve been repeatedly stopping and starting the service, the log would also contain the events from the previous runs. That’s why the limit N is useful, and also you can clean the event buffer in ETW with

wevtutil qe Microsoft-Windows-BootEvent-Collector/Admin

The default formatting of the event objects to strings is not very useful, so this example does its own formatting.

After you’re done reading the events, you can just kill the job. The proper sequence for it together with expect would be:

Stop-Job $col_job
Skip-ExpectJob -Timeout $tout -Buf $col_buf -WaitStop
Remove-Job $col_job

And here is the implementation:

function Get-WinEventSafe
{
<#
.SYNOPSIS
Wrapper over Get-WinEvent that doesn't throw if no events are available.

Using -ea SilentlyContinue is still a good idea because PowerShell chokes
on the strings containing the '%'.
#>
    try {
        Get-WinEvent @args
    } catch {
        if ($_.FullyQualifiedErrorId -ne "NoMatchingEventsFound,Microsoft.PowerShell.Commands.GetWinEventCommand") {
            throw
        }
    }
}

function Get-WinEventsAfter
{
<#
.SYNOPSIS
Do one poll of an online ETW log, returning the events received after
the last previous event.
#>
    [CmdletBinding()]
    param(
        ## Name of the log to read the events from.
        [parameter(Mandatory=$true)]
        [string] $LogName,
        ## The last previous event, get the events after it.
        [System.Diagnostics.Eventing.Reader.EventLogRecord] $Prev,
        ## The initial scoop size for reading the events, if that scoop doesn't
        ## reach the previous event, the scoop will be grown twice on each
        ## attempt. If there is no previous event, all the available events will be returned.
        [uint32] $Scoop = 128
    )

    if ($Prev -eq $null) {
        # No previous record, just return everything
        Get-WinEventSafe -LogName $LogName -Oldest -ea SilentlyContinue
        return
    }

    $ptime = $Prev.TimeCreated

    for (;; $Scoop *= 2) {
        # The events come out in the reverse order
        $ev = @(Get-WinEventSafe -LogName $LogName -MaxEvents $Scoop -ea SilentlyContinue)
        if ($ev.Count -eq 0) {
            return # no events, nothing to do
        }
        $last = $ev.Count - 1
        if ($ev.Count -ne $Scoop -or $ev[$last].TimeCreated -lt $Prev.TimeCreated) {
            # the scoop goes past the previous event, find the boundary in it
            for (; ; --$last) {
                if ($last -lt 0) {
                    return # no updates, return nothing
                }

                $etime = $ev[$last].TimeCreated
                if ($etime -lt $ptime) {
                    continue
                }
                if ($etime -gt $ptime) {
                    break
                }
                if ($ev[$last].Message -eq $Prev.Message) {
                    --$last # skip the copy of the same event
                    if ($last -lt 0) {
                        return # no updates, return nothing
                    }
                    break
                }
            }
            $ev = $ev[0..$last]
            [array]::Reverse($ev) # in-place
            $ev
            return
        }
        # otherwise need to scoop more
    }
}

function Read-WinEvents
{
<#
.SYNOPSIS
Poll an online ETW log forever, until killed.
#>
    [CmdletBinding()]
    param(
        ## Name of the log to read the events from.
        [parameter(Mandatory=$true)]
        [string] $LogName,
        ## The poll period, in seconds, floating-point.
        [double] $Period = 1.,
        ## The initial scoop size for Get-WinEventsAfter.
        [uint32] $Scoop = 128,
        ## Number of previous records to return at the start.
        [uint32] $Nprev = 0
    )

    $prev = $null
    [int32] $msec = $Period * 1000

    $isVerbose = ($VerbosePreference -ne "SilentlyContinue")

    # read the initial records
    if ($Nprev -gt 0) {
        $ev = @(Get-WinEventSafe -LogName $LogName -MaxEvents $Nprev -ea SilentlyContinue)
        [array]::Reverse($ev) # in-place
        if ($isVerbose) {
            & {
                "Got the previous events:"
                $ev | fl | Out-String
            } | Write-Verbose
        }
        $ev
        $prev = $ev[-1]
        $ev = @()
    } else {
        $ev = @(Get-WinEventSafe -LogName $LogName -MaxEvents 1 -ea SilentlyContinue)
        & {
            "Got the previous event:"
            $ev | fl | Out-String
        } | Write-Verbose
        $prev = $ev[0]
    }

    for (;;) {
        Start-Sleep -Milliseconds $msec
        $ev = @(Get-WinEventsAfter -LogName $LogName -Prev $prev -Scoop $Scoop)
        & {
            "Got more events:"
            $ev | fl | Out-String
        } | Write-Verbose
        if ($ev) {
            $ev
            $prev = $ev[-1]
            $ev = @()
        }
    }
}

See Also: all the text tools

reporting the nested errors in PowerShell


A pretty typical pattern for PowerShell goes like this:

...allocate resource...
try {
  ... process resource ...
} finally {
  ...deallocate resource...
}

It makes sure that the resource gets properly deallocated even if the processing fails. However there is a problem in this pattern: if the finally block gets called on exception and the resource deallocation experiences an error for some reason and throws an exception, that exception will replace the first one. You’d see what failed in the deallocation but not what failed with the processing in the first place.
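For example, in this minimal sketch the exception thrown by the cleanup masks the one from the processing step:

    try {
        try {
            throw "processing failed"    # the error we actually care about
        } finally {
            throw "deallocation failed"  # this one replaces the first
        }
    } catch {
        $_.Exception.Message             # prints only: deallocation failed
    }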

I want to share a few solutions for this problem that I've come up with. The problem has two prongs: one is reporting the nested errors; the other is collecting all the encountered errors, which can then be built into a nested error.

As far as reporting the nested errors goes, the basic .NET exception has a provision for it, but it's not so easy to use in practice because the PowerShell error objects are wrappers around the .NET exceptions and carry extra information: the PowerShell stack trace. The nesting shouldn't lose this stack trace.

So I wrote a function that does this, New-EvNest (you can think of the prefix "Ev" as meaning "error value", although historically it was born for other reasons). The implementation of carrying the stack trace turned out to be pretty convoluted, but the use is easy:

$combinedError = New-EvNest -Error $_ -Nested $InnerError

In some cases the outer error would be just a high-level text description, so there is a special form for that:

$combinedError = New-EvNest -Text "Failed to process the resource"  -Nested $InnerError

You can then re-throw the combined error:

throw $combinedError

I’ve also made a convenience function for re-throwing with an added description:

New-Rethrow -Text "Failed to process the resource"  -Nested $InnerError

And here is the implementation:

function New-EvNest
{
<#
.SYNOPSIS
Create a new error that wraps the existing one (but don't throw it).
#>
    [CmdletBinding(DefaultParameterSetName="Text")]
    param(
        ## Text of the wrapper message.
        [parameter(ParameterSetName="Text", Mandatory=$true, Position = 0)]
        [string] $Text,
        ## Alternatively, if combining two errors, the "outer"
        ## error. The text and the error location from it will be
        ## prepended to the combined information.
        [parameter(ParameterSetName="Object", Mandatory=$true)]
        [System.Management.Automation.ErrorRecord] $Error,
        ## The nested System.Management.Automation.ErrorRecord that
        ## was caught and needs re-throwing with an additional wrapper.
        [parameter(Mandatory=$true, Position = 1)]
        [System.Management.Automation.ErrorRecord] $Nested
    )

    if ($Error) {
        $Text = $Error.FullyQualifiedErrorId
        if ($Error.TargetObject -is [hashtable] -and $Error.TargetObject.stack) {
            $headpos = $Error.TargetObject.posinfo + "`r`n"
        } else {
            $headpos = $Error.InvocationInfo.PositionMessage + "`r`n"
        }
    }

    # The new exception will wrap the old one.
    $exc = New-Object System.Management.Automation.RuntimeException @($Text, $Nested.Exception)

    # The script stack is not in the Exception (the nested part), so it needs to be carried through
    # the ErrorRecord with a hack. The innermost stack is carried through the whole
    # chain because it's the deepest one.
    # The carrying happens by encoding the original stack as the TargetObject.
    if ($Nested.TargetObject -is [hashtable] -and $Nested.TargetObject.stack) {
        if ($headpos) {
            $wrapstack = @{
                stack = $Nested.TargetObject.stack;
                posinfo = $headpos + $Nested.TargetObject.posinfo;
            }
        } else {
            $wrapstack = $Nested.TargetObject
        }
    } elseif($Nested.ScriptStackTrace) {
        $wrapstack = @{
            stack = $Nested.ScriptStackTrace;
            posinfo = $headpos + $Nested.InvocationInfo.PositionMessage;
        }
    } else {
        if ($headpos) {
            $wrapstack = $Error.TargetObject
        } else {
            $wrapstack = $null
        }
    }

    # The new error record will wrap the exception and carry over the stack trace
    # from the old one, which unfortunately can't be just wrapped.
    return (New-Object System.Management.Automation.ErrorRecord @($exc,
        "$Text`r`n$($Nested.FullyQualifiedErrorId)", # not sure if this is the best idea, the arbitrary text goes against the
        # principles described in http://msdn.microsoft.com/en-us/library/ms714465%28v=vs.85%29.aspx
        # but this is the same as done by the {throw $Text},
        # and it allows to get the errors printed more nicely even with the default handler
        "OperationStopped", # would be nice to have a separate category for wraps but for now
        # just do the same as {throw $Text}
        $wrapstack
    ))
}

function New-Rethrow
{
<#
.SYNOPSIS
Create a new error that wraps the existing one and throw it.
#>
    param(
        ## Text of the wrapper message.
        [parameter(Mandatory=$true)]
        [string] $Text,
        ## The nested System.Management.Automation.ErrorRecord that
        ## was caught and needs re-throwing with an additional wrapper.
        [parameter(Mandatory=$true)]
        [System.Management.Automation.ErrorRecord] $Nested
    )
    throw (New-EvNest $Text $Nested)
}
Set-Alias rethrow New-Rethrow

The information about the PowerShell call stack is carried through the whole nesting sequence from the innermost object to the outermost object.
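For example, given a $combinedError produced by New-EvNest as above, the carried information can be inspected like this (a minimal sketch based on the implementation above; it assumes the nested error carried a script stack trace, so TargetObject is the hashtable built by New-EvNest):

    $combinedError.Exception.Message        # wrapper text, with the nested exception inside
    $combinedError.TargetObject.posinfo     # accumulated position messages, outermost first
    $combinedError.TargetObject.stack       # the innermost script stack trace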

Now we come to the second prong, catching the errors. The simple approach would be to do:

...allocate resource...
try {
  ... process resource ...
} finally {
  try {
    ...deallocate resource...
  } catch {
    throw (New-EvNest -Error $_ -Nested $prevException)
  }
}

except that in finally we don’t know if there was a nested exception or not. So the code grows to:

...allocate resource...
$prevException = $null
try {
  ... process resource ...
} catch {
  $prevException = $_
} finally {
  try {
    ...deallocate resource...
  } catch {
    if ($prevException) {
      throw (New-EvNest -Error $_ -Nested $prevException)
    } else {
      throw $_
    }
  }
}

You can see that this quickly becomes unmanageable, especially if you have multiple nested resources. So my next approach was to write one more helper function, Rethrow-ErrorList, and use it in a pattern like this:

    $errors = @()
    # nest try/finally as much as needed, as long as each try goes
    # with this kind of catch; the outermost "finally" block must
    # be wrapped in a plain try/catch
    try {
        try {
            ...
        } catch {
            $errors = $errors + @($_)
        } finally {
            ...
        }
    } catch {
        $errors = $errors + @($_)
    }
    Rethrow-ErrorList $errors

Rethrow-ErrorList throws if the list of errors is not empty, combining them all into one error. This pattern also nests easily: the nested instances keep using the same $errors, and all the exceptions get neatly collected in it along the way. Here is the implementation:

function Publish-ErrorList
{
<#
.SYNOPSIS
If the list of errors collected in the try-finally sequence is not empty,
report it in the verbose channel, build a combined error out of them,
and throw it. If the list is empty, it does nothing.

An alternative way to handle the errors is Undo-OnError.

The typical usage pattern is:

    $errors = @()
    # nest try/finally as much as needed, as long as each try goes
    # with this kind of catch; the outermost "finally" block must
    # be wrapped in a plain try/catch
    try {
        try {
            ...
        } catch {
            $errors = $errors + @($_)
        } finally {
            ...
        }
    } catch {
        $errors = $errors + @($_)
    }
    Rethrow-ErrorList $errors

#>
    [CmdletBinding()]
    param(
        ## An array of error objects to test and rethrow.
        [array] $Errors
    )
    if ($Errors) {
        $vp = $VerbosePreference
        $VerbosePreference = "Continue"
        Write-Verbose "Caught the errors:"
        $Errors | fl | Out-String | Write-Verbose
        $VerbosePreference = $vp

        if ($Errors.Count -gt 1) {
            $rethrow = $Errors[0]
            for ($i = 1; $i -lt $Errors.Count; $i++) {
                $rethrow = New-EvNest -Error ($Errors[$i]) -Nested $rethrow
            }
        } else {
            $rethrow = $Errors[0]
        }
        throw $rethrow
    }
}
Set-Alias Rethrow-ErrorList Publish-ErrorList

After that I’ve tried one more approach. It’s possible to pass the script blocks as parameters to a function, so a function can pretend to be a bit like a statement:

Undo-OnError -Do {
  ...allocate resource...
} -Try {
  ... process resource ...
} -Undo {
  ...deallocate resource...
}

It looked cute in theory, but in practice it hit a snag: script blocks in PowerShell are not closures. If variables get assigned inside a script block, they're invisible outside it, and that's a major pain here because you'd usually place the allocated resource into a variable and then read that variable during processing and deallocation. With the script blocks being separate here, the variables assigned during allocation would get lost. It's possible to work around this issue by making a surrogate scope in a hash table:

$scope = @{}
Undo-OnError -Do {
  $scope.resource = ...allocate resource ...
} -Try {
  ... process resource from $scope.resource ...
} -Undo {
  ...deallocate resource from $scope.resource ...
}

So it kind of works, but unless you do a lot of nesting, I'm not sure it's a whole lot better than the pattern with Rethrow-ErrorList. If this were made into a PowerShell statement with proper scope management, it could work a lot better. Or, even better, the try/finally statement could be extended to re-throw a nested exception if both the try and finally parts throw, and the throw statement could be extended to create a nested exception if its argument is an array. This would give all the benefits without any changes to the language.

Here is the implementation:

function Undo-OnError
{
<#
.SYNOPSIS
A wrapper of try blocks. Do some action, then execute some code that
uses it, and then undo this action. The undoing is executed even if an
error gets thrown. It's essentially a "finally" block, only with the
nicer nested reporting of errors.

An alternative way to handle the errors is Rethrow-ErrorList.
#>
    param(
        ## The initial action to do.
        [scriptblock] $Do,
        ## The code that uses the action from -Do. Essentially, the "try" block.
        [scriptblock] $Try,
        ## The undoing of the initial action (like the "finally" block).
        [scriptblock] $Undo,
        ## Flag: call the action -Undo even if the action -Do itself throws
        ## an exception (useful if the action in -Do is not really atomic and
        ## can leave things in an inconsistent state that requires cleaning).
        [switch] $UndoSelf
    )

    try {
        &$Do
    } catch {
        if ($UndoSelf) {
            $nested = $_
            try {
                &$Undo
            } catch {
                throw (New-EvNest -Error $_ $nested)
            }
        }
        throw
    }
    try {
        &$Try
    } catch {
        $nested = $_
        try {
            &$Undo
        } catch {
            throw (New-EvNest -Error $_ $nested)
        }
        throw
    }
    &$Undo
}

 

Split update operation can result in higher number of replicated commands


In the recent past, we worked on an issue where a number of updates on a replicated article (part of transactional replication) were delivered to the subscriber as a series of DELETEs and INSERTs as opposed to UPDATEs. In this post, I will explain the scenarios under which this situation can occur and what the options are to workaround the situation.

Let us consider a scenario where you are replicating a large table that has a primary key and also a unique non-clustered index. The table and index definitions are provided below.

CREATE TABLE [dbo].[TestReplic](
       [id] [int] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
       [c1] [int] NOT NULL,
       [c2] [int] NOT NULL,
       [c3] [int] NOT NULL,
 CONSTRAINT [PK_TestReplic] PRIMARY KEY CLUSTERED
(
       [id] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

CREATE UNIQUE NONCLUSTERED INDEX [UX_TestReplic_c1_c2] ON [dbo].[TestReplic]
(
       [c1] ASC,
       [c2] ASC
)
INCLUDE ([c3]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

 

When you execute an update on the table that modifies one of the included columns of the unique non-clustered index, SQL Server can do one of the following:

a. Perform the update as-is, maintaining the clustered index and the unique non-clustered index in a single operation

b. Perform the update as a split operation, splitting each UPDATE into a DELETE and an INSERT. In this scenario, you will find a Split operator in the UPDATE plan for the table on the publisher.

SQL Server evaluates a number of heuristics to decide whether to split an UPDATE into a DELETE and INSERT pair; the decision is based on a costing function that determines which of the above options would perform better.
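One way to check whether this is happening on your publisher is to look for the Split operator in the cached plans that touch the published table. A minimal sketch (an assumed approach, not from the original post; the table name matches the example above):

    SELECT cp.usecounts, qp.query_plan
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
    WHERE CAST(qp.query_plan AS NVARCHAR(MAX)) LIKE N'%PhysicalOp="Split"%'
      AND CAST(qp.query_plan AS NVARCHAR(MAX)) LIKE N'%TestReplic%';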

When you compare the query plan with a single UPDATE against the plan with the split DELETE and INSERT statements, you will find that:

a. In the single update plan, one clustered index update maintains both the clustered index and the unique non-clustered index

b. In the split plan, a Split operator appears and the indexes are maintained separately: first a clustered index update for the clustered index, followed by another index update for the unique non-clustered index

The screenshot below illustrates the difference.

[Screenshot: the single-update plan compared with the per-index (split) plan]

If you are tracing the replication statements, you will find that the first case only shows the execution of "sp_MSupd_dbo<objectname>" to perform the updates, while the second case shows "sp_MSdel_dbo<objectname>" and "sp_MSins_dbo<objectname>" for the split update scenario.

Tracing the number of commands delivered in the distribution database for a single update, examining the query plans, or tracking the stored procedure executions will all give you a hint that a split update is in play; for example, you can count the commands with the query below.
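A minimal sketch (names assumed, not from the original post) that counts the replicated commands per transaction in the distribution database; a split UPDATE shows up as one DELETE plus one INSERT command per affected row instead of a single UPDATE command per row:

    USE distribution;
    SELECT t.xact_seqno, COUNT(*) AS command_count
    FROM dbo.MSrepl_commands AS c
    JOIN dbo.MSrepl_transactions AS t ON c.xact_seqno = t.xact_seqno
    GROUP BY t.xact_seqno
    ORDER BY command_count DESC;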

The next question is how do you prevent this from happening. There are a few options that you can exercise:

a. Enable trace flag 2338 at the query level, which prevents the query optimizer from picking a per-index (split) plan. Examples of such update queries are shown below.

UPDATE t
SET t.c3 = 1
FROM dbo.TestReplic t
WHERE t.c3 = 10000
OPTION (QUERYTRACEON 2338)

UPDATE TOP(300) dbo.TestReplic
SET c3 += 1
WHERE c3 = 0
OPTION (QUERYTRACEON 2338)

Note– Executing a query with the QUERYTRACEON option requires membership in the sysadmin fixed server role. Please read this article for details.

b. The per-index plans are generated for index updates when a large number of rows is affected, so another option is to perform the update in smaller batches (see the sketch after this list).

c. Alternatively, you could create a stored procedure that performs the UPDATE and then add the stored procedure as a replicated object to your publication. Please read this article for more information on how to publish stored procedure execution.
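For option (b), here is a minimal sketch of batching the update so that fewer rows are touched per statement, using the example table and predicate from above:

    DECLARE @rows INT = 1;
    WHILE @rows > 0
    BEGIN
        UPDATE TOP (500) dbo.TestReplic
        SET c3 += 1
        WHERE c3 = 0;          -- only rows not yet updated
        SET @rows = @@ROWCOUNT;
    END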

Wish all of you a Happy New Year in advance from the Tiger team!

Internet Explorer 11 hosting a Drag & Drop ActiveX control advances from onDragEnter to OnDrop instead of onDragEnter -> onDragOver on Windows 10 x86 and x64 iexplore processes.

The issue as stated in the title is reproducible on fast dragging. See details below.
This happens only when the ActiveX control is hosted in IE11. The issue does not occur when the same ActiveX control is hosted in a WinForms application.
To repro the issue, here is what you need to do:
Please refer to the attached sample ActiveX control named: myactivexcontrol
Build & register MyActiveXControl. I have also attached myactivexcontrol_ocx which you can directly register & use.
Place any text file named ReadMe.txt in the C:\Temp folder. Of course you can change the location in the ActiveX code.
Create a sample HTML file named TestPage.html and open it in IE 11 to test the issue.
Here are the contents of the HTML file:
<HTML>
<HEAD>
<TITLE>Drag & Drop Test</TITLE>
</HEAD>
<BODY>
<CENTER>
<!--
This is the key to the example.  The OBJECT
tag is a new tag used to download ActiveX
components.  Once the ActiveX component is available,
you can set its properties by using the PARAM tag.
-->
<OBJECT
CLASSID="clsid:90EDC5CE-75EE-47F2-AB0E-7E7444FD9257"
ID="MYACTIVEXCONTROL.MyActiveXControlCtrl.1">
</OBJECT>
<ondragover="window.event.returnValue=false;">
</CENTER>
</BODY>
</HTML>

The ActiveX control is the one shown below as an ellipse (which I draw in CMyActiveXControlCtrl::OnDraw()). I associate a simple text file with the ActiveX window (see CMyActiveXControlCtrl::OnLButtonDown()). You need to drag that text file from the ActiveX control to, say, the Desktop.

snip1

On fast dragging (even if the left mouse button is down) we see only this log:

[3508] grfKeyState is: 0
[3508] DRAGDROP_S_DROP <= The file drops to the same IE window.
grfKeyState never changed to 1, which is the issue. This does not happen on Windows 7 (32-bit and 64-bit).

grfKeyState changes to 1 only when you click on the ActiveX control and wait for the small arrow (shown below) to appear. This is not required on Windows 7.
snip2
Is there a workaround that exists?
Yes. This is how I fixed it, and it has worked from Windows 7 through Windows 10.
Please refer to the attached sample ActiveX control named: MyActiveXControl.ZIP
File Name: DropSource.h
Function Name: HRESULT CDropSource::QueryContinueDrag(BOOL fEscapePressed, DWORD grfKeyState);
WORKAROUND 1
============
Instead of relying on grfKeyState, I checked the mouse button state using the GetKeyState() API and return DRAGDROP_S_DROP when the mouse button is released. I have tested it and it works fine.

// Modified QueryContinueDrag() code
HRESULT CDropSource::QueryContinueDrag(BOOL fEscapePressed, DWORD grfKeyState)
{
    DWORD my_grfKeyState = 0;
    // If the high-order bit is 1, the key is down; otherwise, it is up.
    if ((GetKeyState(VK_LBUTTON) & 0x80) != 0)
    {
        my_grfKeyState = 1;
    }

    TCHAR buffer[100];
    swprintf_s(buffer, 100, L"grfKeyState is: %d", grfKeyState);
    ::OutputDebugString(buffer);
    swprintf_s(buffer, 100, L"my_grfKeyState is: %d", my_grfKeyState);
    ::OutputDebugString(buffer);

    if (fEscapePressed)
    {
        ::OutputDebugString(L"DRAGDROP_S_CANCEL");
        return DRAGDROP_S_CANCEL;
    }

    if (!(my_grfKeyState & (MK_LBUTTON | MK_RBUTTON)))
    {
        ::OutputDebugString(L"DRAGDROP_S_DROP");
        return DRAGDROP_S_DROP;
    }

    ::OutputDebugString(L"S_OK");
    return S_OK;
}

WORKAROUND 2
============
I made some changes in CDropSource::QueryContinueDrag() (see below) to test PeekMessage(). PeekMessage seems to work fine, so the only anomaly we see when the ActiveX control is hosted in IE11 is that on the first call grfKeyState never changes from 0 to 1:
if ((msg.message >= WM_MOUSEFIRST) && (msg.message <= WM_MOUSELAST)) never becomes TRUE.
HRESULT CDropSource::QueryContinueDrag(BOOL fEscapePressed, DWORD grfKeyState)
{
    TCHAR buffer[100];
    MSG msg;
    DWORD my_grfKeyState = 0;

    swprintf_s(buffer, 100, L"my_grfKeyState before PeekMessage is: %d", my_grfKeyState);
    ::OutputDebugString(buffer);

    //auto HaveAnyMouseMessages = [&]() -> BOOL
    //{
    //    return PeekMessage(&msg, 0, WM_MOUSEFIRST, WM_MOUSELAST, PM_REMOVE);
    //};

    // Busy wait until a mouse or escape message is in the queue
    while (!PeekMessage(&msg, 0, WM_MOUSEFIRST, WM_MOUSELAST, PM_REMOVE))
    {
        // Note: all keyboard messages except escape are tossed. This is
        // fairly reasonable since the user has to be holding the left
        // mouse button down at this point. They can't really be doing
        // too much data input one handed.
        if ((PeekMessage(&msg, 0, WM_KEYDOWN, WM_KEYDOWN, PM_REMOVE)
            || PeekMessage(&msg, 0, WM_SYSKEYDOWN, WM_SYSKEYDOWN, PM_REMOVE))
            && msg.wParam == VK_ESCAPE)
        {
            fEscapePressed = TRUE;
            break;
        }
    }

    if (!fEscapePressed)
    {
        if ((msg.message >= WM_MOUSEFIRST) && (msg.message <= WM_MOUSELAST))
        {
            my_grfKeyState = GetControlKeysStateOfParam(msg.wParam);
            swprintf_s(buffer, 100, L"my_grfKeyState after PeekMessage is: %d", my_grfKeyState);
            ::OutputDebugString(buffer);
        }
    }

    DWORD my_grfKeyState1 = 0;
    if ((GetKeyState(VK_LBUTTON) & 0x80) != 0)
    {
        my_grfKeyState1 = 1;
    }
    //// DWORD my_grfKeyState = GetAsyncKeyState(VK_LBUTTON); //GetKeyState(VK_LBUTTON);

    swprintf_s(buffer, 100, L"my_grfKeyState1 from GetKeyState() is: %d", my_grfKeyState1);
    ::OutputDebugString(buffer);
    //swprintf_s(buffer, 100, L"my_grfKeyState is: %d", my_grfKeyState);
    //::OutputDebugString(buffer);

    if (fEscapePressed)
    {
        ::OutputDebugString(L"DRAGDROP_S_CANCEL");
        return DRAGDROP_S_CANCEL;
    }

    // if (!(my_grfKeyState & (MK_LBUTTON | MK_RBUTTON)))
    // if (my_grfKeyState == 0 && grfKeyState == 0)
    if (!(my_grfKeyState & (MK_LBUTTON | MK_RBUTTON)))
    {
        ::OutputDebugString(L"DRAGDROP_S_DROP");
        return DRAGDROP_S_DROP;
    }

    ::OutputDebugString(L"S_OK");
    return S_OK;
}
Output:
[8452] OnLButtonDown
[8452] GetUIObjectOfFile succeeded
[8452] DoDragDrop
[8452] my_grfKeyState before PeekMessage is: 0
[8452] my_grfKeyState after PeekMessage is: 1
[8452] my_grfKeyState1 from GetKeyState() is: 1
[8452] S_OK
[8452] my_grfKeyState before PeekMessage is: 0
[8452] my_grfKeyState1 from GetKeyState() is: 1
[8452] DRAGDROP_S_DROP
However, we do have an official fix for this issue from Microsoft:
KB 3179574 (link: https://support.microsoft.com/en-us/kb/3179574) fixes the issue on Windows 8.1.
Test Results
=========
Operating System: Windows 8.1 x64 (Version 6.3, Build: 9600)
ole32.dll version: 6.3.9600.18256
Issue exists with this version.
Downloaded https://support.microsoft.com/en-us/kb/3179574
Installed this KB. Restarted the machine.
ole32.dll version: 6.3.9600.18403
Issue not reproducible.
For Windows 10 and above, apply KB 3201845 (link: https://support.microsoft.com/en-us/kb/3201845), which comes via Windows Update. In the KB you will find a statement saying "Addressed issue with OLE drag and drop that prevents users from downloading a SharePoint document library as a file".

Capturing Full User Mode Dumps & PerfView traces for troubleshooting high CPU & hang issues.


Please note: below are the steps for capturing traces, not for analyzing them. It is essential to capture the right traces before analyzing them to find the root cause, especially for high CPU or process hang issues.

In general, a dump is a snapshot of a process's virtual memory at a single point in time. A single user mode dump is not the appropriate way to analyze a hang or a high CPU scenario; we need multiple dumps captured across the time span, or in the vicinity, of the hang.
Capturing PerfView traces at the time of the hang also makes sense.

DEBUGDIAG for Dump Capture

Debug Diagnostic tool download link: https://www.microsoft.com/en-us/download/details.aspx?id=49924

Install the MSI (download the 64-bit MSI if your OS is 64-bit, else the 32-bit MSI).

From the Start menu, search for DebugDiag 2 Collection and run it.

Cancel the “Select Rule Type” dialog.

Go to the Processes tab as shown in the screenshot below. During the slowness, hang, or high CPU, select your process (for a Web application it would be w3wp.exe), right-click it and click "Create Full Userdump". Repeat this at uniform intervals over the entire duration of the hang. For example, if the process hangs now, start capturing dumps and capture, say, 5 dumps at 30-second or 60-second intervals. These dumps give a discrete picture of the process's virtual memory at 5 different points in time, and thus a better picture of what its threads were doing across that 150-second or 300-second time frame. Default location of the dump files: C:\Program Files\DebugDiag\Logs\Misc

 snip3

Alternatively, you can automate the above process by going to the same Processes tab (shown in the screenshot above), right-clicking the process for which you would like to capture dumps, and selecting "Create Userdump Series…". Select or adjust the options as shown in the screenshot below. It is good to capture full userdumps.

snip6

Default location of the dump files: C:\Program Files\DebugDiag\Logs\Misc

After capturing the dumps, ZIP the Misc folder and upload it to the case workspace (if you are working with Microsoft support) so it can be sent to the engineer working with you.

PERFVIEW

PerfView download location: https://www.microsoft.com/en-in/download/details.aspx?id=28567

Run PerfView.exe and follow the steps below during the high CPU, hang, or process slowness:

At the time of the issue (when you see the slowness):

1. Click the Collect menu and select the Collect option.

2. Check the Zip, Merge, and Thread Time check boxes as shown in the screenshot below.

snip4

3. If IIS is involved, expand the Advanced Options section, select the IIS checkbox as shown in the screenshot below, and click the "Start Collection" button to capture traces.

snip5

4. To stop collecting the traces (collect them for a few minutes), select "Stop Collection" in the same PerfView dialog and wait for the log capture to merge (you can see the progress in the PerfView window status bar, flickering towards the right). Once the merge is complete you will see files with names ending in *.etl.zip in the same folder from which you ran PerfView. Upload them to the case workspace (if you are working with Microsoft support) so they can be sent to the engineer working with you.

 

How does the OS decide when to use an extra CPU?


I got this question from a customer who wanted to know: how does the OS decide when to use an extra CPU to process COM+ requests?

To answer this in one line: the OS has no way to decide that a thread is a COM+ thread as opposed to a thread from any other process, for example notepad.exe.

Please note: all threads of user mode processes execute at PASSIVE_LEVEL; this is the level at which user-mode code runs. In fact, https://blogs.msdn.microsoft.com/doronh/2010/02/02/what-is-irql/ says: if you look at the specific definition of "thread" in NT, it pretty much only covers code that runs in the context of a specific process, at PASSIVE_LEVEL or APC_LEVEL.

The OS makes no separate distinction for a COM+ thread.

If you read through "Operating System Concepts" and "Operating System Concepts Essentials" by Silberschatz and Galvin, they say:

Almost all processes alternate between two states in a continuing cycle:

• A CPU burst of performing calculations, and

• An I/O burst, waiting for data transfer in or out of the system.

CPU bursts vary from process to process, and from program to program, but an extensive study shows frequency patterns similar to the one shown in the diagram below:

<diagram snipped from the same OS book referred earlier>

 snip7

From Task Manager you can set the processor affinity of a process (see below), but the default is all processors. Choosing a specific processor (a subset) does not make computations faster.

 snip8

You can even change the priority of a process from Task Manager, but doing so can make the system highly unstable. As said before, the OS makes no separate distinction for a COM+ thread; user-mode code executes at PASSIVE_LEVEL.

snip9

IMHO: the Task Manager (shown in the screenshot above) sets the priority class of the process as discussed in https://msdn.microsoft.com/en-us/library/windows/desktop/ms685100(v=vs.85).aspx. Use HIGH_PRIORITY_CLASS with care: if a thread runs at the highest priority level for extended periods, other threads in the system will not get processor time, and if several threads are set to high priority at the same time, the threads lose their effectiveness.
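As a minimal sketch (not from the original post), this is how a process would raise its own priority class with the Win32 API being discussed:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    // Raise the priority class of the current process; use sparingly,
    // and almost never use REALTIME_PRIORITY_CLASS (see below).
    if (!SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS))
        printf("SetPriorityClass failed: %lu\n", GetLastError());
    else
        printf("Priority class raised to HIGH_PRIORITY_CLASS\n");
    return 0;
}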

As the MSDN link says: You should almost never use REALTIME_PRIORITY_CLASS, because this interrupts system threads that manage mouse input, keyboard input, and background disk flushing. This class can be appropriate for applications that “talk” directly to hardware or that perform brief tasks that should have limited interruptions.

[Sample Of Dec. 30] How to filter data in view model in Win 10 UWP apps


Sample : https://code.msdn.microsoft.com/How-to-filter-data-in-view-4d83dd03

This sample demonstrates how to filter data in view model in Win 10 UWP apps.


You can find more code samples that demonstrate the most typical programming scenarios by using Microsoft All-In-One Code Framework Sample Browser or Sample Browser Visual Studio extension. They give you the flexibility to search samples, download samples on demand, manage the downloaded samples in a centralized place, and automatically be notified about sample updates. If it is the first time that you hear about Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage http://1code.codeplex.com/.


Collecting diagnostics for WCF (hosted in IIS) & Web Service performance related issues


Say, for example, you are troubleshooting a high CPU, a slow response, or a hang issue. For diagnostics, collect the following from the server side:

  1. IIS Logs (Location: %SystemDrive%\inetpub\logs\LogFiles)
  2. FREB traces (see steps below)
  3. PerfView traces (see steps below)
  4. Dumps of the IIS worker process (w3wp.exe) hosting your WCF or Web service, captured during the time of slowness. (see steps below on how to capture dumps)
  5. WCF & System.Net tracing (if the client is not a Web page but an application (Web, desktop or a service), you should collect these traces from the client as well.)

FREB

To configure FREB traces, go to IIS Manager and select your Web Site (the one hosting your WCF services). In the right-hand pane under Actions there is a Configure section; click Failed Request Tracing…

The trace should be enabled. See screen shot below:

snip10

From the center pane, click at Failed Request Tracing…  as shown below:

snip11

Click Add… as shown below and follow the dialog box.

snip12

If you want to track by time, set it by checking Time taken and click Next. See screen shot below.

snip13

Alternatively (check any one, not both) you can track by Status code(s), with values 200-999 & click Next to continue. See screen shot below.

snip14

Click Finish as shown in the dialog below. Please note: an IIS reset is not required.

snip15

For PerfView & Debug Diagnostic steps see: https://blogs.msdn.microsoft.com/dsnotes/2016/12/30/capturing-full-user-mode-dumps-perfview-traces-for-troubleshooting-high-cpu-hang-issues/

WCF & System.Net tracing

Here is an example of a web.config file with WCF & System.Net tracing enabled. You need to integrate the system.diagnostics section below into the configuration section of your own web.config.

You can use it directly in your web.config. In initializeData (see below), set the correct path.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
 <system.diagnostics>
    <sources>
       <source name="System.Net" switchValue="Verbose">
            <listeners>
                <add name="SystemNetTrace"/>
           </listeners>
       </source>
      <source name="System.ServiceModel" switchValue="Verbose, ActivityTracing" propagateActivity="true">
        <listeners>
          <add name="wcftrace" />
       </listeners>
      </source>
      <source name="System.ServiceModel.MessageLogging" switchValue="Verbose, ActivityTracing">
        <listeners>
          <add name="wcfmessages" />
        </listeners>
      </source>
      <source name="System.Runtime.Serialization" switchValue="Verbose">
        <listeners>
          <add name="wcfmessages" />
        </listeners>
      </source>
   </sources>
   <sharedListeners>
      <add name="SystemNetTrace" type="System.Diagnostics.TextWriterTraceListener" traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, Callstack" initializeData="C:\Traces\System_Net.txt" />
      <add name="wcftrace" type="System.Diagnostics.XmlWriterTraceListener" traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, Callstack" initializeData="C:\Traces\WCFTrace.svclog" />
      <add name="wcfmessages" type="System.Diagnostics.XmlWriterTraceListener" traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, Callstack" initializeData="C:\Traces\WCFMessages.svclog" />
   </sharedListeners>
   <trace autoflush="true" />
  </system.diagnostics>
 </configuration>
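Note: for the System.ServiceModel.MessageLogging source to actually produce message logs, message logging must also be enabled under system.serviceModel. A minimal sketch of that extra section (an assumption, not part of the original config; adjust the attributes to your needs):

<system.serviceModel>
  <diagnostics>
    <!-- hypothetical values: log full messages at both service and transport level -->
    <messageLogging logEntireMessage="true"
                    logMessagesAtServiceLevel="true"
                    logMessagesAtTransportLevel="true"
                    maxMessagesToLog="3000" />
  </diagnostics>
</system.serviceModel>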

If you are working with Microsoft support, send the following for review, or review it yourself to track down the origin of the slowness.

  1. IIS Logs – track calls that are taking time; isolate the WCF or Web Service calls taking time.
  2. FREB traces – analyze them to see where the calls are stuck, for example in the IIS integrated pipeline or somewhere else.
  3. WCF & System.Net traces – track errors, exceptions, and the duration of the calls via the correlation ID.
  4. PerfView traces – you can track thread times, ASP.NET events, etc.
  5. Debug Diagnostic dumps – track thread call stacks, CPU usage, memory usage, etc.

Week in .NET (주간닷넷) – December 6, 2016


We look forward to your active participation. If you have found (or written) an article, source code, or library that is too good to keep to yourself, let us know through Gist or the 주간닷넷 page. If you share news from .NET user groups, we will pass it along to many people through 주간닷넷.

Community news of the week

Taeyo.NET is serializing a Korean translation of the ASP.NET Core documentation from http://docs.asp.net.

On .NET news

Last week on On .NET, Xavier Decoster and Maarten Balliauw talked about MyGet.

This week's On .NET episodes are interviews with MVPs recorded at the MVP Summit:

  • AsyncEx: Stephen Cleary explains AsyncEx, his helper library for async/await.
  • IoT, sensors, and Azure: Luis Valencia explains sensor monitoring and signals with Azure IoT.

Package of the week – FlexViewer by ComponentOne

There are quite a few reporting tools that support the .NET development environment. Among them, ComponentOne builds and maintains a wide range of components, including reporting. FlexViewer works in WinForms, UWP, and MVC, and supports a variety of output formats such as PDF, HTML, and Office. Check out the video on their homepage showing a report being built with FlexViewer in four minutes.

FlexViewer

Game of the week – I Expect You To Die

I Expect You To Die is a VR puzzle game. As an elite secret agent, the player must complete missions in a series of dangerous situations, and each mission requires both cleverness and quick reflexes to solve its puzzle. You can play the whole game comfortably seated, with just your arms outstretched. The puzzles can be solved in many different ways, and repeated failures will teach you a lot about how to finish a mission.

I Expect You To Die was developed by Schell Games using C# and Unity, and is currently available for Oculus Rift and PlayStation VR.

.NET news

ASP.NET news

F# news

Xamarin news

Azure news

Games news

주간닷넷 (Week in .NET) is a weekly translation of The week in .NET published on the .NET Blog, produced with the help of Kisu Song, CTO at OpenSG.

Kisu Song, CTO, OpenSG
Kisu is currently the technical director at OpenSG, a development consulting company, working on projects across a range of industries. Before joining, he taught .NET developer courses as an instructor at the Samsung Multicampus training center and elsewhere, and since 2005 he has spoken at developer conferences such as TechED Korea, DevDays, and MSDN Seminar. These days he spends most of his working hours in Visual Studio, and believes he can be a "Happy Developer" by writing about one book a year and giving a couple of lectures a month.

 

Fix identities after migrating through SQL Replica

One of the most common procedures for migrating to Azure SQL Database is to configure replication from your previous environment to your brand new SQL DB: https://msdn.microsoft.com/en-US/library/mt589530.aspx
This is a very nice migration process, as it allows the original database to remain available in production until moments before the SQL Database goes live.
However, if your database has tables with identity columns, you should keep the following in mind: tables that are created and filled by the replication agents, without any manual action on them, are considered empty by the identity mechanism.
The reason they are considered empty is that the identity has the property "NOT FOR REPLICATION", as it should. This means that DML operations made by the replication agent on the table are not tracked; therefore, as far as identity is concerned, the table is empty.
What would happen if you configure replication to migrate, then stop replication and redirect your application to your Azure SQL DB? The first row inserted into these tables will use the identity behavior for empty tables: it starts from the identity seed (usually 1). This is definitely not what we want.
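You can check what a table's identity mechanism currently believes, or set it explicitly, with DBCC CHECKIDENT; a minimal sketch (the table name is hypothetical):

DBCC CHECKIDENT ('dbo.MyReplicatedTable', NORESEED);       -- only report the current identity value
DBCC CHECKIDENT ('dbo.MyReplicatedTable', RESEED, 12345);  -- set it explicitly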
Luckily, there is an easy way to avoid this problem. Before redirecting your application to your Azure SQL DB, you can reseed your tables. As an example, you could execute the script below.
DECLARE @tablename VARCHAR(50) -- table name
DECLARE @columname VARCHAR(50) -- column name
DECLARE @schemaname VARCHAR(50) -- schema name
DECLARE @maxid INT -- current value
DECLARE @newseed INT -- new seed
DECLARE @newseed_string VARCHAR(50)
DECLARE @sqlcmd NVARCHAR(200) -- cmd

CREATE TABLE #Maxid(value int)

DECLARE identity_cursor CURSOR FOR
SELECT OBJECT_NAME(ic.object_id), ic.name, s.name
FROM sys.identity_columns ic
JOIN sys.objects o ON ic.object_id = o.object_id
JOIN sys.schemas s ON o.schema_id = s.schema_id
WHERE o.type = 'U'

OPEN identity_cursor
FETCH NEXT FROM identity_cursor INTO @tablename, @columname, @schemaname
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sqlcmd = 'INSERT INTO #Maxid SELECT TOP 1 ' + @columname + ' from ' + @schemaname + '.' + @tablename + ' order by ' + @columname + ' desc'
    exec sp_executesql @sqlcmd
    SELECT TOP 1 @maxid = value FROM #Maxid
    SET @newseed = @maxid + 1
    TRUNCATE TABLE #Maxid
    SET @newseed_string = @newseed
    SET @sqlcmd = 'DBCC CHECKIDENT (''' + @schemaname + '.' + @tablename + ''', RESEED, ' + @newseed_string + ')'
    exec sp_executesql @sqlcmd
    FETCH NEXT FROM identity_cursor INTO @tablename, @columname, @schemaname
END
DROP TABLE #Maxid
CLOSE identity_cursor
DEALLOCATE identity_cursor
Happy Azure-ing!

Happy New Year Friday Five


Video: Top 5 countdown of tips for New Microsoft Dynamics 365 Administrators

Gus Gonzalez is a 5-time Microsoft MVP, and leads the vision, growth, and strategic direction at Elev8 Solutions. He has 15 years of consulting experience in the IT Industry, in which he’s designed and implemented Microsoft Solutions. He has worked in the Microsoft Dynamics 365/CRM industry since 2006. He began his CRM Career as a System Administrator and over time, moved up to Global Technical lead, Functional Consultant, and Solution Architect. A CRMUG All Star, Granite Award Winner, world-class trainer and readiness expert, Gus has a passion for creating solutions that organizations can count on, and users love working with. Follow him on Twitter @GusGonzalez2.

 

Azure Resource Manager and JSON templates to deploy RDS in Azure IaaS

Freek Berson is an Infrastructure specialist at Wortell, a system integrator company based in the Netherlands. Here he focuses on End User Computing and related technologies, mostly on the Microsoft platform. He is also a managing consultant at rdsgurus.com. He maintains his personal blog at themicrosoftplatform.net where he writes articles related to Remote Desktop Services, Azure and other Microsoft technologies. An MVP since 2011, Freek is also an active moderator on TechNet Forum and contributor to Microsoft TechNet Wiki. He speaks at conferences including BriForum, E2EVC and ExpertsLive. Join his RDS Group on Linked-In here. Follow him on Twitter @fberson.

 


Exploring Microsoft Azure DocumentDB

Herve Roggero is a Microsoft Azure MVP and the founder of Enzo Unified. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve also runs the Azure Florida Association. Follow him on Twitter @hroggero.

 

 

 

 

App Service Authentication with Azure AD

Oscar Garcia is a Software Solutions Architect who resides in Sunny South Florida. He is a Microsoft MVP and certified solutions developer with many years of experience building solutions using .Net framework and multiple open source frameworks. Currently, he specializes in building cloud solutions using technologies like ASP.NET, NodeJS, AngularJS, and other JavaScript frameworks. You can follow Oscar on Twitter via @ozkary or by visiting his blog at ozkary.com.

 

 


Upgrade Active Directory Server 2016 from Server 2012 R2

Jay-R Barrios is a Filipino IT Senior Consultant based in Singapore. He helps his clients design and deploy their Microsoft Active Directory and System Center infrastructure. In 2005, he and other IT professionals from msforums ph founded the Philippine Windows Users Group (PHIWUG). He served two terms as its President, in 2008 and 2009, and currently manages the System Center Philippines user group. When not geeking around, Jay-R likes to travel, surf and regularly play soccer with his friends (on the field or in video games). Follow him on Twitter @jayrbarrios.

Lesson Learned #11: Connect from Azure SQL DB using an external table where the source of the data is a SQL Datawarehouse


One of our customers was trying to connect from Azure SQL DB using an external table where the source of the data is an Azure SQL Data Warehouse database.

  • The first question was whether this is supported. I received confirmation from the Azure product team that it is not supported yet and that they are working on it.
  • The second question was why, after configuring the external table, the customer faces the error message 'Setting Language to N'us_english' is not supported.' when running a SELECT query. I tried to reproduce the issue and was able to find out why.
  • I created a table in my SQL DW database.

CREATE TABLE [Order]( [SourceOrderArticleId] [int] NULL, [SourceOrderId] [int] NULL,  [BrandId] [tinyint] NULL) WITH (  DISTRIBUTION = ROUND_ROBIN,    CLUSTERED COLUMNSTORE INDEX)

  • Connected to my Azure SQL DB, I executed the following steps:

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'xxxxxxxxxx';

CREATE DATABASE SCOPED CREDENTIAL AppCredDW WITH IDENTITY = 'UserDW', SECRET = 'PasswordDW';

CREATE EXTERNAL DATA SOURCE RemoteReferenceDataDW WITH (
    TYPE = RDBMS,
    LOCATION = 'serverdw.database.windows.net',
    DATABASE_NAME = 'dwsource',
    CREDENTIAL = AppCredDW);

CREATE EXTERNAL TABLE [dbo].[Order]( [SourceOrderArticleId] [int] NULL, [SourceOrderId] [int] NULL, [BrandId] [tinyint] NULL) WITH ( DATA_SOURCE = RemoteReferenceDataDW);

    • Every time I executed the query select * from [dbo].[Order], I got the same error as our customer; even after trying to change the settings in the session context, I got the same problem.

 

    • By enabling SQL Auditing on the SQL Data Warehouse database, I found the reason our customer is getting the error message 'Setting Language to N'us_english' is not supported.'

 

    • Every time Azure SQL DB (through the Elastic Database component) connects to Azure SQL Data Warehouse, this component changes the context of the connection by running the following T-SQL statements:

DECLARE @productVersion VARCHAR(20)

SELECT @productVersion = CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR(20))

IF CONVERT(INT, LEFT(@productVersion, CHARINDEX('.', @productVersion) - 1)) >= 12

    EXEC sp_executesql N'SET CONTEXT_INFO 0xDEC7E180F56D3946A2F5081A9D2DAB3600004F8F6CF3AC0205674E2CB44811FA5D45B64057F43BDF17E8'

SET ANSI_NULLS ON;
SET ANSI_WARNINGS ON;
SET ANSI_PADDING ON;
SET ARITHABORT ON;
SET CONCAT_NULL_YIELDS_NULL ON;
SET NUMERIC_ROUNDABORT ON;
SET DATEFIRST 7;
SET DATEFORMAT mdy;
SET LANGUAGE N'us_english';

SELECT [T1_1].[SourceOrderArticleId] AS [SourceOrderArticleId],
       [T1_1].[SourceOrderId] AS [SourceOrderId],
       [T1_1].[BrandId] AS [BrandId]
FROM   [dbo].[Order] AS T1_1

 

    • The statement SET LANGUAGE N'us_english' is not supported in SQL Data Warehouse as-is, but if you change it to SET LANGUAGE us_english, it works. (SET LANGUAGE N'us_english' is supported in Azure SQL DB.)

 

    • Most probably this has other implications, but if the Elastic Database component used SET LANGUAGE us_english instead of SET LANGUAGE N'us_english', we might be able to use a SQL Data Warehouse as an external table from Azure SQL DB.
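To summarize the difference, running these two statements directly against the SQL Data Warehouse database shows the behavior described above:

SET LANGUAGE N'us_english';   -- fails: Setting Language to N'us_english' is not supported.
SET LANGUAGE us_english;      -- works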

The imported project "C:\Program Files (x86)\MSBuild\Microsoft\WindowsXaml\v14.0\8.1\Microsoft.Windows.UI.Xaml.CSharp.targets" was not found


While trying to create any C# shared project or Windows Phone project in the Visual Studio 2015 IDE, you may receive an error message like the one highlighted below:

1

2

This is a known issue and it will be fixed in a future update. In order to resolve the issue, please use one of the two workarounds below:

Workaround 1: 

Please modify the CodeSharing targets. To do so, download the attached target file and replace the file "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v14.0\CodeSharing\Microsoft.CodeSharing.CSharp.targets" with it.

Alternatively, you can repair the target file manually: open the file C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v14.0\CodeSharing\Microsoft.CodeSharing.CSharp.targets (or, for Visual Basic, Microsoft.CodeSharing.VisualBasic.targets).

Around line 8, you should see two entries:

<Import
Project="$(MSBuildExtensionsPath32)\Microsoft\WindowsXaml\v$(VisualStudioVersion)\Microsoft.Windows.UI.Xaml.CSharp.targets" Condition="Exists('$(MSBuildExtensionsPath32)\Microsoft\WindowsXaml\v$(VisualStudioVersion)\Microsoft.Windows.UI.Xaml.CSharp.targets')"/>

<Import
Project="$(MSBuildBinPath)\Microsoft.CSharp.Targets" Condition="!Exists('$(MSBuildExtensionsPath32)\Microsoft\WindowsXaml\v$(VisualStudioVersion)\Microsoft.Windows.UI.Xaml.CSharp.targets')"
/>

Replace these entries with the following:

<Import
Project="$(MSBuildExtensionsPath32)\Microsoft\WindowsXaml\v$(VisualStudioVersion)\Microsoft.Windows.UI.Xaml.CSharp.targets"
Condition="false"/>

<Import
Project="$(MSBuildBinPath)\Microsoft.CSharp.Targets" Condition="true"
/>

Workaround 2:

1. Open the VS 2015 IDE
2. Click File -> New -> Project
3. Choose the only project template under Windows 8 (see the screenshot below)
This will launch Visual Studio setup, where you can install the templates that are missing.

3

4

5

Alternatively, you can install the feature below by changing the installed Visual Studio 2015 from "Control Panel\Programs\Programs and Features":

6

P.S. For Windows 7, only workaround 1 is applicable. The issue can also occur with Visual Basic shared projects; in that case the file to modify is the VB one (Microsoft.CodeSharing.VisualBasic.targets).

Lesson Learned #12: What types of temporary tables can I use in Azure SQL Datawarehouse?


In SQL Datawarehouse we can only create temporary tables with #, which means the temporary table is available only during the session that created it. Other forms, for example ## (global temporary tables) or tempdb..name, will return a syntax error like the ones below. URL: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-tables-temporary

CREATE TABLE tempdb..Temporal([OrderArticleId] [int] NULL,[OrderId] [int] NULL) WITH (DISTRIBUTION = ROUND_ROBIN,CLUSTERED COLUMNSTORE INDEX)


CREATE TABLE ##Temporal ([OrderArticleId] [int] NULL,[OrderId] [int] NULL) WITH (DISTRIBUTION = ROUND_ROBIN,CLUSTERED COLUMNSTORE INDEX)


 

Below is an example of how to create a stored procedure that populates data into this temporary table. Please follow this script as an example for your own code, and remember to create the temporary table in the same session.

 

STEP 1: Create the temporary table.

CREATE TABLE #Temporal ([OrderArticleId] [int] NULL,[OrderId] [int] NULL) WITH (DISTRIBUTION = ROUND_ROBIN,CLUSTERED COLUMNSTORE INDEX)

 

STEP 2: Create the stored procedure; you can use an explicit transaction.

CREATE PROCEDURE LoadTempTable
AS
BEGIN
    DECLARE @Value INT = 0;
    SET NOCOUNT ON;

    BEGIN TRANSACTION

    WHILE @Value < 100
    BEGIN
        SET @Value = @Value + 1;
        INSERT INTO #Temporal ([OrderArticleId], [OrderId]) VALUES (@Value, @Value);
    END

    COMMIT TRANSACTION
END

 

STEP 3: Execute the stored procedure and read the results.

EXEC LoadTempTable

SELECT * FROM #Temporal

DROP TABLE #Temporal  -- optional: when the session closes, the table is deleted automatically even without this command
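
As a quick way to see the session scoping for yourself, here is a hedged PowerShell sketch; it assumes the SqlServer module (Invoke-Sqlcmd) and uses placeholder connection values. Each Invoke-Sqlcmd call opens its own session, so the second call cannot see the table created by the first:

$dw = @{
    ServerInstance = 'yourserver.database.windows.net'
    Database       = 'yourdatawarehouse'
    Username       = 'youruser'
    Password       = 'yourpassword'
}

# Same session: the temporary table exists and can be queried.
Invoke-Sqlcmd @dw -Query @"
CREATE TABLE #Temporal ([OrderArticleId] [int] NULL, [OrderId] [int] NULL)
WITH (DISTRIBUTION = ROUND_ROBIN, CLUSTERED COLUMNSTORE INDEX);
SELECT COUNT(*) AS RowsVisibleInSameSession FROM #Temporal;
"@

# New session: the table is gone, so this call fails with 'Invalid object name'.
Invoke-Sqlcmd @dw -Query "SELECT COUNT(*) FROM #Temporal;"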


Microsoft wishes you a Happy New Year!


Thank you for being with us in 2016!

2016 was full of interesting projects and news. We announced the democratization of artificial intelligence, introduced the Windows 10 Creators Update, announced a preview version of SQL Server for Linux, and made Xamarin available for free in all editions of Visual Studio.

Next year we are preparing equally exciting technology announcements and launches for you, and we wish that in 2017 technology helps make your projects even more successful and brings the victories you have been waiting for!

And this New Year we have prepared a unique gift for you: seven amazing stories about how technology is changing the space around us, making life more convenient, richer, and brighter.

Lesson Learned #13: SQL Server instance in use does not support column encryption


Today, I worked on a support case where our customer was using the following components:

When they connect from their local application to their Azure SQL Database, everything works fine. After their code is deployed to Azure App Service, they get the error: com.microsoft.sqlserver.jdbc.SQLServerException: SQL Server instance in use does not support column encryption.

We found that the issue is explained at https://github.com/Microsoft/mssql-jdbc/issues/65 and we suggested the following workaround: https://github.com/Microsoft/mssql-jdbc/pull/76

We hope to have a final fix soon. Our Product Team reported that they have already tested the fix, which you can find in the dev branch: https://github.com/Microsoft/mssql-jdbc/tree/dev. You can use this dev branch code in TEST / STAGE environments.

Truncate a SharePoint database log file


Since SharePoint is heavily dependent on SQL Server to store not only content but configuration information about the environment, there is a lot of emphasis placed on the design, configuration, scalability and health of SQL Server.

One area that we see a lot of questions on is:

  1. What should the default recovery model be for SharePoint databases?
  2. How can I truncate the log file to recover disk space?

When it comes to the default recovery model for SharePoint databases, the answer is... it depends (I know)! Because SharePoint uses quite a few databases to scale all of the content and service applications, each one has its own recovery model recommendations. Be sure these recommendations line up with your backup plan to prevent any unwanted data loss. The sketch below shows one way to review the current recovery model of every database.
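
As a starting point, here is a small, hedged PowerShell sketch to list each database and its current recovery model; it assumes the SqlServer module (Invoke-Sqlcmd) and a placeholder instance name:

# List every database and its current recovery model on the SharePoint SQL instance.
Invoke-Sqlcmd -ServerInstance 'YourSharePointSqlServer' -Query @"
SELECT name, recovery_model_desc
FROM sys.databases
ORDER BY name;
"@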

In case you have a runaway log file that needs to be truncated, here is some TSQL that can be executed on the SharePoint SQL Server.

USE [database]

-- Set to SIMPLE mode
ALTER DATABASE [database] SET RECOVERY SIMPLE;

-- Shrink the database log file
-- The name of the log file should be the same as the name on disk. If you're not sure, run this command to find out.
SELECT name, physical_name AS current_file_location FROM sys.master_files

DBCC SHRINKFILE ('database_log', 1);

-- Set back to FULL (optional depending on backup method used)
ALTER DATABASE [database] SET RECOVERY FULL;
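
After running the script, you can verify the result with a quick, hedged sketch like this one (same SqlServer module assumption and placeholder instance name as above); DBCC SQLPERF(LOGSPACE) reports the size and used percentage of every log file:

# Report log file size and space used for all databases.
Invoke-Sqlcmd -ServerInstance 'YourSharePointSqlServer' -Query "DBCC SQLPERF(LOGSPACE);"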


TFS and Jenkins Integration


As soon as new code is submitted to TFS, TFS can notify Jenkins to perform a continuous integration build or test. This is effective for unit testing, as we always want to trigger unit tests to check whether there is any code regression. This blog will cover:

  • How to create the project in TFS.
  • How to submit code change from Visual Studio Code.
  • How to trigger the continuous integration from TFS to Jenkins.

Create Project from TFS

In TFS 2015 Update 3, you are able to click the "New team project" button to create a new project as shown below. Let's choose Git as the source control.

Setup Visual Studio Code

Get the Git repository address of the project just created and clone the project to the development machine as shown below. It is also necessary to run git config to set the user name and email; a minimal sketch of those commands follows.
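A hedged sketch of those commands, run from a PowerShell prompt (the repository URL and identity values are placeholders; adjust them to your TFS project):

# Clone the Git repository that TFS created (placeholder URL).
git clone http://your-tfs-server:8080/tfs/DefaultCollection/_git/YourProject
cd YourProject

# Configure the identity used for your commits in this repository.
git config user.name  "Your Name"
git config user.email "you@example.com"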

Open the local Git repository in Visual Studio Code; you can make any code change and commit it now. Visual Studio Code will notice the code change automatically. Clicking the "Commit All" button will commit the change to the local Git repository.

Clicking the "Push" menu will then push the code change to TFS.

Configure Jenkins

Create a freestyle project in Jenkins as shown below.

In the Configuration page of the newly created project, select Git as the repository under Source Code Management. Both the Git repository address and a TFS credential are needed, as shown below.

Configure TFS

Open the team project administration page in the TFS web portal; there is a tab named "Service Hooks". It supports multiple services, not only continuous integration. Add a new service.

In the Service page, select Jenkins.

In the Trigger page, select "Code pushed", which means TFS will notify Jenkins as soon as there is a new code change. You can also select "Build Complete" if TFS performs a scheduled build through a build definition.

In the Action page, provide the name of the project that was just created in Jenkins. Both an API token and a password are supported; the token is recommended.

Before finishing the configuration, it is suggested to click the "Test" button in the Action page to make sure everything is working fine. If you get a 403 error as shown below, uncheck "Prevent Cross Site Request Forgery exploits" in the Jenkins "Configure Global Security" page and try again. If you want to check the Jenkins side independently of TFS, see the sketch below.
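A minimal, hedged PowerShell sketch for that independent check; the Jenkins URL, job name, user, and API token are placeholders, and it assumes the job allows remote build triggering with basic authentication:

# Placeholders -- replace with your Jenkins URL, job name, user, and API token.
$jenkins = 'http://your-jenkins-server:8080'
$job     = 'MyFreestyleProject'
$pair    = 'builduser:your-api-token'
$headers = @{ Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($pair)) }

# Queue a build of the job; a 201 response means Jenkins accepted the request.
Invoke-WebRequest -Uri "$jenkins/job/$job/build" -Method Post -Headers $headers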

HOW TO SET MY DEFAULT SEARCH PROVIDER VIA GPO?


In this blog, we share how you can use Group Policy Preferences / Registry to change your Default Search provider used in Internet Explorer 11.

What we will cover in this document:

  • SearchScope Registry and Default SearchScope location
  • Using GPP Registry Wizard
  • User Preferences Registry location
  • Renaming the GPO
  • Warning

REQUIREMENTS: To be familiar with Group Policy Console and Group Policy Preferences / Registry. To have your Clients configured with at least 2 Search Providers.

Make sure you have the Latest Windows Roll-up updates to address any known issues.

SEARCHSCOPE REGISTRY LOCATION

By default, the SearchScopes registry key contains the default search provider information. This is the location in the registry that will help you identify which GUID is being used to define the default search provider.

Here is the location:

  • HKEY_CURRENT_USER\SOFTWARE\Microsoft\Internet Explorer\SearchScopes

SearchScopes registry

If more than one search provider is defined by the user, you will first find a DefaultScope string value whose REG_SZ data is the GUID identifying the default search provider.

Search Provider

  • So, if you look at the {6aXXXX} value, it shows that it is the Google GUID.
  • As you can see, under the SearchScopes key we have two providers: Google and Bing. In this scenario, we will be configuring Bing as the default search provider. The sketch after this list shows how to read these values with PowerShell.
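
Here is a small, hedged PowerShell sketch (run it as the affected user) that reads the same information: the current DefaultScope GUID and the display name stored under each provider's key:

$scopes = 'HKCU:\SOFTWARE\Microsoft\Internet Explorer\SearchScopes'

# GUID of the provider currently set as default.
(Get-ItemProperty -Path $scopes).DefaultScope

# Display name stored under each provider GUID, so you can tell Bing from Google.
Get-ChildItem -Path $scopes | ForEach-Object {
    $values = Get-ItemProperty -Path $_.PSPath
    [pscustomobject]@{ Guid = $_.PSChildName; DisplayName = $values.DisplayName }
}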

USING GROUP POLICY PREFERENCES REGISTRY

In this example, we have two providers: Google and Bing.

Here are the steps I took to configure Bing as the default provider.

PART I – STAGING MY HOST MACHINE

  • First, I configure the local host machine that I will be setting the GPO from with the settings that will be configured on the clients using GPP Registry. This is the easiest way to configure this GPO and it also helps reduce mistakes. So, simply open IE Manage Add-ons / Search Providers and add Google to the list; it will take you to the IE gallery site: (https://www.microsoft.com/en-us/iegallery)
  • Second, set the Google provider as the default provider from the Manage Add-ons window.
    • This is what it looks like:

Manage add-ons Search Providers

The client machines, where we want to change the setting to Bing (for example), may look like this:

Manage add-ons Search Providers

PART II – GROUP POLICY

Now that we have the IE settings on the host machine, we can configure our GPP Registry.

  • From GPMC.MSC navigate to your GPO / Preferences / Windows Settings / Registry
  • Right Click on Registry / New and Select Registry Wizard

GPP Registry Wizard

  • From the Registry Browser Window, select Local Computer and click on Next >

GPP Registry Wizard - Registry Browser

  • From the Registry Browser, navigate to: HKEY_CURRENT_USER\SOFTWARE\Microsoft\Internet Explorer\SearchScopes

  • From this key, make sure you select the DefaultScope name

Registry Browser

  • Next, check both subkeys containing the GUIDs for the search providers (Bing and Google) and every value under each key except any path to user profiles! Also, remember to scroll down to select the other items!

Example:

Registry Browser  - path and configuration

In the Screen below, we can see the FavIconPath goes to a profile directory. DO NOT SELECT THIS OPTION!!

Registry Browser  - path and configuration

  • Click on finish to complete this GPO configuration.

PART III – ELIMINATING THE WARNING

  • Next, let's add the User Preferences key. We will use this to help eliminate a warning the user may get when we enforce the DefaultScope setting. This warning is by design and is meant to alert users of a program trying to modify their settings. If you do not care about this warning, you can skip this step.

Also, note that this warning may not show for a brand new user.

THE WARNING- EXAMPLE!

An unknown program would like to change your default search provider to ‘Google’ (www.google.com)

SCREENSHOT:

An unknown program would like to change your default search provider to 'Google' (www.google.com)

  • Start a new Registry Wizard and navigate to: HKEY_CURRENT_USER\SOFTWARE\Microsoft\Internet Explorer\User Preferences

NOTE: All you need to check is the top User Preferences key. There is no need to select the sub names in the bottom pane! We will be deleting this key with the GPO, so there is no real use in checking those.

Registry Browser - User Preferences

  • Click on Finish
  • Now we have all the settings we need to get the default provider configured on the clients. We need to perform some housekeeping to help others understand what we are doing, and a small adjustment to the User Preferences setting to make sure we eliminate the warning.
  • Configure this new GPO entry to delete the User Preferences key. This can be done from the properties of the User Preferences policy: double-click on the User Preferences object in the right side pane, change the Action to Delete, and save it. (A sketch of the equivalent manual registry change follows the screenshot below.)

Set the Action to Delete
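
For reference, here is a hedged PowerShell sketch of what the finished GPO effectively does on a client, expressed as direct registry changes; the GUID shown is only an example, so use the GUID of the provider you identified earlier:

$scopes = 'HKCU:\SOFTWARE\Microsoft\Internet Explorer\SearchScopes'

# Point DefaultScope at the GUID of the provider you want (example GUID shown).
Set-ItemProperty -Path $scopes -Name DefaultScope -Value '{0633EE93-D776-472f-A0FF-E1416B8B2E3A}'

# Delete the User Preferences key so IE does not warn about the changed provider.
Remove-Item -Path 'HKCU:\SOFTWARE\Microsoft\Internet Explorer\User Preferences' -Recurse -ErrorAction SilentlyContinue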

PART IV – CLEANING UP THE GPO

We will now, label the GPO settings and make small adjustments that any admin will appreciate when all done.

As you may have noticed, when using the wizard you end up with a full registry tree view down to the path of the settings, which is not very intuitive. We can, however, modify the GPO and make it look a lot cleaner without affecting anything.

First, expand the GPO keys:

full registry tree view

  • Grab the SearchScopes folder and drag and drop it onto the Registry object:
  • Do the same for the User Preferences folder: drag it and drop it onto the Registry object.
  • Now, delete the empty tree objects, from the Registry Wizard Values folder down to Internet Explorer. Here is a screenshot of what you want to delete and what you want to keep: Red goes and Blue stays.

full registry tree view  - What to keep and what to delete

 

Here is what it looks after the clean-up:

Clean up results

Let's rename the GUIDs to represent the search providers. Just click on a GUID and, in the right side pane, you can figure out which GUID is for Bing and which is for Google.

It will end up looking like this:

Renamed GUID to represent search scope

PART V – TESTING THE GPO

In this screenshot, we can see the warning as the GPO was applied without the User Preferences GPP (I had disabled this GPO to better illustrate how this works).

IE loading after SearchScope GPO and Warning

  • I enabled the User Preferences GPO, which I have configured to delete the registry key "HKEY_CURRENT_USER\SOFTWARE\Microsoft\Internet Explorer\User Preferences", and ran the GPUPDATE /FORCE command to reapply the GPO.
  • Relaunch iexplore and there are no warnings. I checked my Manage Add-ons Search Provider configuration and Bing shows as my default.

Manage Add-ons configuration on client after GPO

 

With these steps, you should successfully set your preferred search provider in your managed environment. We suggest that you run the latest IE cumulative updates and Windows Rollups to ensure you are fully patched and free of any known issues.

 

This blog has been provided to you by the IE Support team!

 

 

 

 
