
getting a stack trace in PowerShell


One of the most annoying “features” of PowerShell is that when a script crashes, it prints no stack trace, so finding the cause of the error is quite difficult. The exception object System.Management.Automation.ErrorRecord actually has a ScriptStackTrace property that contains the trace; it just doesn’t get printed on error. You can wrap your code in your own try/catch and print the trace (as sketched below), or you can define a different default formatting for this class and get the stack trace printed by default.
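A minimal sketch of the try/catch route (Do-Something is just a placeholder for your own code):

try {
    Do-Something   # placeholder for your script's real work
} catch {
    # $_ is the ErrorRecord; print the message and the script stack trace, then re-throw
    Write-Host $_.Exception.Message
    Write-Host $_.ScriptStackTrace
    throw
}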

Here is how to change the default formatting. First I’ll describe how it was done, and then show the whole file.

If you want to start from scratch, open $PSHOME\PowerShellCore.format.ps1xml and copy the definition of the formatting for the type System.Management.Automation.ErrorRecord into your own separate file Format.ps1xml. After the last <ExpressionBinding> entry, add your own:

                            <ExpressionBinding>
                                <ScriptBlock>
                                    $_.ScriptStackTrace
                                </ScriptBlock>
                            </ExpressionBinding>

That’s basically it. Well, plus a minor fix: the default implementation doesn’t always include the LF at the end of the message, and if it doesn’t, the stack trace ends up stuck directly to the end of the last line. To fix it, add the “`n” in the previous clause:

                                        elseif (! $_.ErrorDetails -or ! $_.ErrorDetails.Message) {
                                            $_.Exception.Message + $posmsg + "`n"  # SB-changed
                                        } else {
                                            $_.ErrorDetails.Message + $posmsg + "`n" # SB-changed
                                        }

After you have your Format.ps1xml ready, import it from your script:

$spath = Split-Path -parent $PSCommandPath
Update-FormatData -PrependPath "$spath\Format.ps1xml"

Once imported, it will affect the whole PowerShell session.  Personally I also import it in ~\Documents\WindowsPowerShell\profile.ps1, so that at least on my machine I get the messages with the stack trace from all the normal running of PowerShell.

A weird thing is that if I do

Get-FormatData -TypeName System.Management.Automation.ErrorRecord

I get nothing. But it works. I guess some special magic is associated with this class.

And now, for convenience, the whole format file. The comment in $PSHOME\PowerShellCore.format.ps1xml says that it’s sample code, so it’s got to be fine to use as another sample:

<?xml version="1.0" encoding="utf-8" ?>
<!-- *******************************************************************


These sample files contain formatting information used by the Windows
PowerShell engine. Do not edit or change the contents of this file
directly. Please see the Windows PowerShell documentation or type
Get-Help Update-FormatData for more information.

Copyright (c) Microsoft Corporation.  All rights reserved.
 
THIS SAMPLE CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY
OF ANY KIND,WHETHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR
PURPOSE. IF THIS CODE AND INFORMATION IS MODIFIED, THE ENTIRE RISK OF USE
OR RESULTS IN CONNECTION WITH THE USE OF THIS CODE AND INFORMATION
REMAINS WITH THE USER.

 
******************************************************************** -->
 
<Configuration>
 
  <ViewDefinitions>
        <View>
            <Name>ErrorInstance</Name>
            <OutOfBand />
            <ViewSelectedBy>
                <TypeName>System.Management.Automation.ErrorRecord</TypeName>
            </ViewSelectedBy>
            <CustomControl>
                <CustomEntries>
                    <CustomEntry>
                       <CustomItem>
                            <ExpressionBinding>
                                <ScriptBlock>
                                    if ($_.FullyQualifiedErrorId -ne "NativeCommandErrorMessage" -and $ErrorView -ne "CategoryView")
                                    {
                                        $myinv = $_.InvocationInfo
                                        if ($myinv -and $myinv.MyCommand)
                                        {
                                            switch -regex ( $myinv.MyCommand.CommandType )
                                            {
                                                ([System.Management.Automation.CommandTypes]::ExternalScript)
                                                {
                                                    if ($myinv.MyCommand.Path)
                                                    {
                                                        $myinv.MyCommand.Path + " : "
                                                    }
                                                    break
                                                }
                                                ([System.Management.Automation.CommandTypes]::Script)
                                                {
                                                    if ($myinv.MyCommand.ScriptBlock)
                                                    {
                                                        $myinv.MyCommand.ScriptBlock.ToString() + " : "
                                                    }
                                                    break
                                                }
                                                default
                                                {
                                                    if ($myinv.InvocationName -match '^[&amp;.]?$')
                                                    {
                                                        if ($myinv.MyCommand.Name)
                                                        {
                                                            $myinv.MyCommand.Name + " : "
                                                        }
                                                    }
                                                    else
                                                    {
                                                        $myinv.InvocationName + " : "
                                                    }
                                                    break
                                                }
                                            }
                                        }
                                        elseif ($myinv -and $myinv.InvocationName)
                                        {
                                            $myinv.InvocationName + " : "
                                        }
                                    }
                                </ScriptBlock>
                            </ExpressionBinding>
                            <ExpressionBinding>
                                <ScriptBlock>
                                   if ($_.FullyQualifiedErrorId -eq "NativeCommandErrorMessage") {
                                        $_.Exception.Message  
                                   }
                                   else
                                   {
                                        $myinv = $_.InvocationInfo
                                        if ($myinv -and ($myinv.MyCommand -or ($_.CategoryInfo.Category -ne 'ParserError'))) {
                                            $posmsg = $myinv.PositionMessage
                                        } else {
                                            $posmsg = ""
                                        }
                                       
                                        if ($posmsg -ne "")
                                        {
                                            $posmsg = "`n" + $posmsg
                                        }
           
                                        if ( &amp; { Set-StrictMode -Version 1; $_.PSMessageDetails } ) {
                                            $posmsg = " : " +  $_.PSMessageDetails + $posmsg
                                        }

                                        $indent = 4
                                        $width = $host.UI.RawUI.BufferSize.Width - $indent - 2

                                        $errorCategoryMsg = &amp; { Set-StrictMode -Version 1; $_.ErrorCategory_Message }
                                        if ($errorCategoryMsg -ne $null)
                                        {
                                            $indentString = "+ CategoryInfo          : " + $_.ErrorCategory_Message
                                        }
                                        else
                                        {
                                            $indentString = "+ CategoryInfo          : " + $_.CategoryInfo
                                        }
                                        $posmsg += "`n"
                                        foreach($line in @($indentString -split "(.{$width})")) { if($line) { $posmsg += (" " * $indent + $line) } }

                                        $indentString = "+ FullyQualifiedErrorId : " + $_.FullyQualifiedErrorId
                                        $posmsg += "`n"
                                        foreach($line in @($indentString -split "(.{$width})")) { if($line) { $posmsg += (" " * $indent + $line) } }

                                        $originInfo = &amp; { Set-StrictMode -Version 1; $_.OriginInfo }
                                        if (($originInfo -ne $null) -and ($originInfo.PSComputerName -ne $null))
                                        {
                                            $indentString = "+ PSComputerName        : " + $originInfo.PSComputerName
                                            $posmsg += "`n"
                                            foreach($line in @($indentString -split "(.{$width})")) { if($line) { $posmsg += (" " * $indent + $line) } }
                                        }

                                        if ($ErrorView -eq "CategoryView") {
                                            $_.CategoryInfo.GetMessage()
                                        }
                                        elseif (! $_.ErrorDetails -or ! $_.ErrorDetails.Message) {
                                            $_.Exception.Message + $posmsg + "`n"  # SB-changed
                                        } else {
                                            $_.ErrorDetails.Message + $posmsg + "`n" # SB-changed
                                        }
                                   }
                                </ScriptBlock>
                            </ExpressionBinding>
                            <ExpressionBinding>
                                <ScriptBlock>
                                    $_.ScriptStackTrace
                                </ScriptBlock>
                            </ExpressionBinding>
                        </CustomItem>
                    </CustomEntry>
                </CustomEntries>
            </CustomControl>
        </View>
    </ViewDefinitions>
</Configuration>

 

 


Create, update and delete remarketing lists with the API!


Following our release earlier this year for managing remarketing list associations at the ad group level using the API, we’re now offering the ability to create and modify remarketing lists. With this update, you can use the Bing Ads API to create new remarketing lists and update or delete existing lists.

Campaign Management API

The following new service operations are included in this update:

A maximum of 100 lists can be processed per call by the operations listed above.

Additionally, a new Rule element is available in the RemarketingList object. It allows you to specify one of four types of rules, which govern how audiences may be determined: CustomEventsRule, PageVisitorsRule, PageVisitorsWhoDidNotVisitAnotherPageRule, and PageVisitorsWhoVisitedAnotherPageRule.

Bulk API

Support for uploads has been added to the Remarketing List record type. Additionally, a new Template field has been added to the record type to allow specification of the rule for audience determination. More details can be found in our December release notes.

SDK support for the Bulk API changes is not available at this time.

un-messing Unicode in PowerShell


PowerShell has a bit of a problem with accepting the output of native commands that print Unicode into its pipelines. PowerShell tries to be smart in determining whether the command prints Unicode or ASCII, so if the output happens to be nicely formatted and contains the proper Unicode byte order mark (0xFF 0xFE), then it gets accepted OK. But if it doesn’t, PowerShell mangles the output by taking it as ASCII and internally converting it to Unicode. Even redirecting the output to a file doesn’t help, because the redirection is implemented in PowerShell by pipelining and then saving to the file from PowerShell, so everything gets just as mangled.

One workaround that works is to start the command through an explicit cmd.exe and do the redirection in cmd.exe:

cmd /c "mycommand >output.txt 2>error.txt"

Then you can read the file with Get-Content -Encoding Unicode. Unfortunately, there is no encoding override for the pipelines and/or in Encode-Command.
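As a quick sketch of that workaround (the command and file names are arbitrary):

cmd /c "mycommand >output.txt 2>error.txt"
$output = Get-Content output.txt -Encoding Unicode
$errors = Get-Content error.txt -Encoding Unicode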

If you really need a pipeline, another workaround is again to start cmd.exe, now with two commands: the first one would print the Unicode byte order mark, and the second would be your command. But there is no easy way to produce that mark in cmd itself; you’ll have to write the first command yourself.

Well, yet another workaround is to de-mangle the mangled output. Here is the function that does it:

function ConvertFrom-Unicode
{
<#
.SYNOPSIS
Convert a misformatted string produced by reading the Unicode UTF-16LE text
as ASCII to the proper Unicode string.

It's slow, so whenever possible, it's better to read the text directly as
Unicode. One case where it's impossible is piping the input from an
exe into Powershell.

WARNING:
The conversion is correct only if the original text contained only ASCII
(even though expanded to Unicode). The Unicode characters with codes 10
or 13 in the lower byte throw off Powershell's line-splitting, and it's
impossible to reassemble the original characters back together.
#>
    param(
        ## The input string.
        [Parameter(ValueFromPipeline = $true)]
        [string] $String,
        ## Auto-detect whether the input string is misformatted, and
        ## do the conversion only if it is, otherwise return the string as-is.
        [switch] $AutoDetect
    )

    process {
        $len = $String.Length

        if ($len -eq 0) {
            return $String # nothing to do, and would confuse the computation
        }

        $i = 0
        if ([int32]$String[0] -eq 0xFF -and [int32]$String[1] -eq 0xFE) {
            $i = 2 # skip the encoding detection code
        } else {
            if ([int32]$String[0] -eq 0) {
                # Weird case when the high byte of Unicode CR or LF gets split off and
                # prepended to the next line. Skip that byte.
                $i = 1
                if ($len -eq 1) {
                    return # This string was created by breaking up CR-LF, return nothing
                }
            } elseif ($Autodetect) {
                if ($len -lt 2 -or [int32]$String[1] -ne 0) {
                    return $String # this looks like ASCII
                }
            }
        }

        $out = New-Object System.Text.StringBuilder
        for (; $i -lt $len; $i+=2) {
            $null = $out.Append([char](([int32]$String[$i]) -bor (([int32]$String[$i+1]) -shl 8)))
        }
        $out.ToString()
    }
}

Export-ModuleMember -Function ConvertFrom-Unicode

Here is an example of use:

$data = (Receive-Job $Buf.job | ConvertFrom-Unicode)

See Also: all the text tools

localization both ways


The localization of messages on Windows is done through the MUI files. I.e. aside from mycmd.exe or mylib.dll you get the strings file mycmd.exe.mui or mylib.dll.mui, to be placed next to it in a subdirectory named per the language, like “en-us”, and the system will let you open and get the strings according to the user’s current language (such as in this example).

But first you’ve got to define the strings. There are two ways to do it:

  • The older way, in a message file with the extension .mc.
  • The newer way, in an XML manifest with the extension .man.

The .mc files are much more convenient. The .man files are much more verbose and painful, requiring more manual maintenance. But the manifest files are also more flexible, defining not only the strings but also the ETW messages (which might also use some of these strings). So if you want to send the manifested ETW messages (as of Windows 10 there are also un-manifested ETW messages, or more exactly, self-manifesting messages), you’ve got to use the manifest file to define them.

But there is only one string section per binary. You can’t have multiple separate message files and manifest files, compile them separately, and then merge them. You can compile them and put them in, but only the first section will be used. Which pretty much means that you can’t use the localized strings or ETW messages in a static library: when you link the static library into a binary, you won’t be able to include its strings. If you want localization or ETW, you’ve got to turn each of your libraries into a DLL. Or have some workaround to merge the strings from all the static libraries you use into one file before compiling it.

However, there is one special exception that is not too widely known: the message compiler mc.exe can accept exactly one .mc file and exactly one .man file, and combine them into a single compiled strings section. So you can define the ETW messages and their strings in a .man file in the more painful way, and the rest of the strings in the .mc file in the less painful way, and it will still work. Just make sure that you have no overlaps in the message IDs. I’m not sure why they can’t read multiple files of each type and put them all together. But at least you won’t have to convert the .mc files to .man.
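As a rough sketch of such a build step (the file names are made up, -h and -r just redirect the generated headers and .rc files, and the name of the generated .rc is an assumption based on the .mc file name):

mc.exe -h .\generated -r .\generated MyEtwEvents.man MyStrings.mc
rc.exe .\generated\MyStrings.rc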

Update for SQL Server Integration Services Feature Pack for Azure with support to Azure Data Lake Store and Azure SQL Data Warehouse


Hi All,

We are pleased to announce that an updated version of the SQL Server Integration Services Feature Pack for Azure is now available for download. This release mainly has the following improvements:

  1. Support for Azure Data Lake Store
  2. Support for Azure SQL Data Warehouse

Here are the download links for the supported versions:

SSIS 2012: https://www.microsoft.com/en-us/download/details.aspx?id=47367

SSIS 2014: https://www.microsoft.com/en-us/download/details.aspx?id=47366

SSIS 2016: https://www.microsoft.com/en-us/download/details.aspx?id=49492

Azure Data Lake Store Components

1. In order to support Azure Data Lake Store (ADLS), SSIS adds the two components below:

  • Azure Data Lake Store Source:
    • Users can use the ADLS Source component to read data from ADLS.
    • Supports the Text and Avro file formats.
  • Azure Data Lake Store Destination:
    • Users can use the ADLS Destination component to write data into ADLS.
    • Supports the Text, Avro, and ORC file formats.
    • In order to use the ORC format, the user needs to install the JRE.

2. ADLS components support two authentication options:

  • Azure AD User Identity
    • If the Azure Data Lake Store AAD user or the AAD tenant administrator hasn’t previously consented to letting “SQL Server Integration Service (Azure Data Lake)” access their Azure Data Lake Store data, then either the AAD user or the AAD tenant administrator needs to consent to the SSIS application accessing the Azure Data Lake Store data. For more information about this consent experience, see Integrating applications with Azure Active Directory.
    • Multi-factor authentication and Microsoft accounts are NOT supported. Consider using the “Azure AD Service Identity” option if your user account needs multi-factor authentication or is a Microsoft account.
  • Azure AD Service Identity

3. The ADLS Source editor dialog is shown below:

[screenshot: Azure Data Lake Store Source editor]

For more information about how to use Azure Data Lake Store components, see Azure Data Lake Store Components.

Azure SQL Data Warehouse

There are multiple approaches to load local data to Azure SQL Data Warehouse (Azure SQL DW) in SSIS. The blog post Azure SQL Data Warehouse Loading Patterns and Strategies gives a fine description and comparison of different approaches. A key point made in the post is that the recommended and most efficient approach that fully exploits the massively parallel processing power of Azure SQL DW is by using PolyBase. That is, first load data to Azure Blob Storage, and then to Azure SQL DW from there using PolyBase. The second step is done by executing a T-SQL sequence on Azure SQL DW.

While conceptually straightforward, it was not an easy job to implement this approach in SSIS before this release. You had to use an Azure Blob Upload Task, followed by an Execute SQL Task, and possibly yet another task to clean up the temporary files uploaded to Azure Blob Storage. You also had to put together the complicated T-SQL sequence yourself.

To address this issue, this new release introduces a new control flow task Azure SQL DW Upload Task to provide a one-stop solution to Azure SQL DW data uploading. It automates the complicated process with an integrated, easy-to-manage interface.

On the General page, you configure basic properties about source data, Azure Blob Storage, and Azure SQL DW. Either a new table name or an existing one is specified for the TableName property, making a create or insert scenario.

[screenshot: Azure SQL DW Upload Task, General page]

The Mappings page appears differently for create and insert scenarios. In a create scenario, configure which source columns are mapped and their corresponding names in the to-be-created destination table. In an insert scenario, configure the mapping relationships between source and destination columns.

On the Columns page, configure data type properties for each source column.

The T-SQL page shows the T-SQL sequence for loading data from Azure Blob Storage to Azure SQL DW using PolyBase. It will be automatically generated from configurations made on the other pages. Still, nothing is preventing you from manually editing the T-SQL to meet your particular needs by clicking the Edit button.

[screenshot: Azure SQL DW Upload Task, T-SQL page]

For more information about how to use Azure SQL DW Upload Task, see Azure SQL DW Upload Task.

Cannot find a valid offline scope with the name ‘YourTable’ in table ‘[scope_info]’.


Credit to Brian Storie for the contribution provided.

Symptom

Microsoft.Synchronization.Data.DbSyncException: Cannot find a valid scope with the name ‘YourTable’ in table ‘[scope_info]’. Ensure that this scope exists and that it has a corresponding valid configuration in the configuration table ‘[scope_config]’.

Cause

You have created a new Offline scope, added it to the POS Offline profiles form and run the 1095 job without deprovisioning it properly in your Channel database. The Channel database turned out to be in an invalid state.

Solution

  1. Back up the Channel database.
  2. Open a command prompt with elevated privileges and run the following command line: C:\Program Files (x86)\Microsoft Dynamics AX\60\Retail Database Utility>RetailDbUtilityCmd.exe DeprovisionChannelDB /StoreServername:<YourStoreServerName> /StoreDatabaseName:<YourStoreDatabaseName>
  3. Check if all the _tracking tables are successfully deleted in your Channel database.
  4. Open and run “Retail Channel Configuration Utility” as administrator.
  5. Select “Create offline database”.  Specify your Channel database and offline database information, click “Apply” to recreate the offline database again.
  6. Channel database and offline database should be provisioned successfully.

 

Work with Financial dimensions [AX2012, X++]


NOTE: All code in this document is based on CU12.

Disclaimer:

All the programming examples in this article are for illustration purposes only. Microsoft disclaims all warranties and conditions with regard to use of the programming example for other purposes. Microsoft shall not, at any time, be liable for any special, direct, indirect or consequential damages, whether in an action of contract, negligence or other action arising out of or in connection with the use or performance of the programming example. Nothing herein should be construed as constituting any kind of warranty.

Starting with AX 2012, we have a more flexible financial dimension feature. We can do more configuration and customization on it to meet our requirements. In this article, we will discuss the following topics:

  1. How to add Financial dimension controls on a form.
  2. How to add a Financial dimensions filter in a query.
  3. How Financial dimensions are stored.
  4. How to create and modify a financial dimension.
  5. Tips for Financial dimensions.

The Financial dimensions we are talking about here include the Default dimension and the Ledger dimension (the account and dimension segment combination used in ledger journals).

Default dimension:

Ledger dimension:

  1. How to add Financial dimensions’ control on a form.
  • Default dimension

    We have a table called FinancialDimensionDemo and it has a field call DefaultDimension which extends from EDT DimensionDefault.

    We created a form called FinancialDimensionDemo and want add default dimension control for field DefaultDimension.

  1. Add a tab page or group to show the Default dimensions, and set its AutoDeclaration property to Yes.

  2. Define an object of class DimensionDefaultingController in the form's classDeclaration method.

    public class FormRun extends ObjectRun
    {
        DimensionDefaultingController dimensionDefaultingController;
    }

  3. Override the form's init method and add the following code to initialize the DimensionDefaultingController object.

    public void init()
    {
        super();

        dimensionDefaultingController = DimensionDefaultingController::constructInTabWithValues(false, true, true, 0, this, TabFinancialDimensions, "@SYS138491");
        dimensionDefaultingController.parmAttributeValueSetDataSource(FinancialDimensionDemo_DS, fieldStr(FinancialDimensionDemo, DefaultDimension));
        // Specifies whether validation should be enforced for dimension values marked as not allowing manual entry. The default value is false.
        dimensionDefaultingController.parmValidateBlockedForManualEntry(true);
    }

  4. Open the form.

  • Ledger Dimension

    We have a table called FinancialDimensionDemo with two fields: LedgerDimension, which extends the EDT DimensionDynamicAccount, and AccountType, which extends the EDT LedgerJournalACType.

    The table also has the following relation. If you create the field by dragging the EDT, the relation is created automatically.

    We created a form called FinancialDimensionDemo and want to add a ledger dimension control for the field LedgerDimension.

  1. Drag the AccountType and LedgerDimension fields from the form data sources onto the grid.

    Set the AutoDeclaration property of FinancialDimensionDemo_LedgerDimension to Yes.

  2. Define an object of class DimensionDynamicAccountController in the form's classDeclaration method.

    DimensionDynamicAccountController dimensionDynamicAccountController;

  3. Override the init method and add the following code to initialize the DimensionDynamicAccountController object.

    dimensionDynamicAccountController = DimensionDynamicAccountController::construct(FinancialDimensionDemo_DS, fieldStr(FinancialDimensionDemo, LedgerDimension), fieldStr(FinancialDimensionDemo, AccountType));
    dimensionDynamicAccountController.parmDimensionAccountStorageUsage(DimensionAccountStorageUsage::Transactional);
    dimensionDynamicAccountController.parmPostingType(LedgerPostingType::LedgerJournal);
    // Specifies whether validation should be enforced for dimension values marked as not allowing manual entry. The default value is false.
    dimensionDynamicAccountController.parmValidateBlockedForManualEntry(true);

  4. Override the following method of the data source field LedgerDimension.

    public Common resolveReference(FormReferenceControl _formReferenceControl)
    {
        Common common = dimensionDynamicAccountController.resolveReference();
        return common;
    }


  5. Override the following methods of the SegmentedEntry control FinancialDimensionDemo_LedgerDimension.

    public boolean validate()
    {
        boolean isValid;

        isValid = super();
        isValid = dimensionDynamicAccountController.validate() && isValid;
        return isValid;
    }

    public void segmentValueChanged(SegmentValueChangedEventArgs _e)
    {
        super(_e);
        dimensionDynamicAccountController.segmentValueChanged(_e);
    }

    public void loadSegments()
    {
        super();
        dimensionDynamicAccountController.parmControl(this);
        dimensionDynamicAccountController.loadSegments();
    }

    public void loadAutoCompleteData(LoadAutoCompleteDataEventArgs _e)
    {
        super(_e);
        dimensionDynamicAccountController.loadAutoCompleteData(_e);
    }

    public void jumpRef()
    {
        dimensionDynamicAccountController.jumpRef();
    }

  6. Open form

  1. How to add Financial dimensions’ filter in a query
  • Default Dimension

    Use following methods to filer default dimension

    SysQuery::addDimensionAttributeRange()

SysQuery::addDimensionAttributeFilter()

    static void searchDefaultDimension(Args _args)
    {
        Query q;
        QueryRun qr;
        QueryBuildDataSource qbds;
        FinancialDimensionDemo financialDimensionDemo;

        //q = new Query(queryStr(FinancialDimensionDemoQ));
        q = new Query();
        qbds = q.addDataSource(tableNum(FinancialDimensionDemo));
        //SysQuery::clearDimensionRangesFromQuery(q); // Clear existing filter values.
        SysQuery::addDimensionAttributeRange(q,
            qbds.name(), //'FinancialDimensionDemo'
            'DefaultDimension',
            DimensionComponent::DimensionAttribute,
            queryRange('001', '004'),
            'Cashflow');
        //SysQuery::addDimensionAttributeFilter(q,
        //    qbds.name(), //'FinancialDimensionDemo'
        //    'DefaultDimension',
        //    DimensionComponent::DimensionAttribute,
        //    '001',
        //    'Cashflow');
        qr = new QueryRun(q);

        while (qr.next())
        {
            financialDimensionDemo = qr.get(tableNum(FinancialDimensionDemo));
            info(financialDimensionDemo.JournalId);
        }
    }

  • Ledger Dimension

    Use the following methods to filter by ledger dimension:

    SysQuery::addDimensionAttributeRange()

    SysQuery::addDimensionAttributeFilter()

    static void searchLedgerDimensionByDimAttribute(Args _args)
    {
        Query q;
        QueryRun qr;
        QueryBuildDataSource qbds;
        FinancialDimensionDemo financialDimensionDemo;

        //q = new Query(queryStr(FinancialDimensionDemoQ));
        q = new Query();
        qbds = q.addDataSource(tableNum(FinancialDimensionDemo));
        SysQuery::addDimensionAttributeRange(q,
            qbds.name(), //queryBuildDataSource.name()
            'LedgerDimension', //fieldStr(GeneralJournalAccountEntry, LedgerDimension)
            DimensionComponent::DimensionAttribute,
            queryRange('100000', '110150'),
            'MainAccount'); //DimensionAttribute::find(DimensionAttribute::getMainAccountDimensionAttribute()).Name
        SysQuery::addDimensionAttributeRange(q,
            qbds.name(), //queryBuildDataSource.name()
            'LedgerDimension', //fieldStr(GeneralJournalAccountEntry, LedgerDimension)
            DimensionComponent::DimensionAttribute,
            '004',
            'Cashflow'); //DimensionAttribute::find(dimAttrId).Name
        qr = new QueryRun(q);

        while (qr.next())
        {
            financialDimensionDemo = qr.get(tableNum(FinancialDimensionDemo));
            info(financialDimensionDemo.JournalId);
        }
    }

    static void searchWithLedgerDimension(Args _args)
    {
        Query q;
        QueryRun qr;
        QueryBuildDataSource qbds;
        FinancialDimensionDemo financialDimensionDemo;

        //q = new Query(queryStr(FinancialDimensionDemoQ));
        q = new Query();
        qbds = q.addDataSource(tableNum(FinancialDimensionDemo));
        SysQuery::addDimensionAttributeRange(q,
            qbds.name(),
            'LedgerDimension',
            DimensionComponent::LedgerDimensionDisplayValue,
            '*-*-025-*'); //, '??'
        qr = new QueryRun(q);

        while (qr.next())
        {
            financialDimensionDemo = qr.get(tableNum(FinancialDimensionDemo));
            info(financialDimensionDemo.JournalId);
        }
    }

  3. How Financial dimensions are stored

[Tables]

DimensionAttribute: This table stores the financial dimensions.

DimensionAttributeValue: This table stores the values of the financial dimensions. It has an EntityInstance field, which is the relation to the value's backing table.

FinancialTagCategory: This table stores the records of custom financial dimensions.

DimensionFinancialTag: This table stores the values of custom financial dimensions.

[Views]

DimAttribute*: These views show you the dimension value and its backing record.

  • Default dimension

    [Tables]

    DimensionAttributeValueSet: this table stores the records of default dimensions.

    DimensionAttributeValueSetItem: this table stores the attribute values of a default dimension.

    [Views]

    DefaultDimensionView

    DimensionAttributeValueSetItemView

    static void getSpecifiedDefaultDimension(Args _args)
    {
        DimensionAttributeValueSetItem dimensionAttributeValueSetItem;
        DimensionAttributeValue dimensionAttributeValue;
        DimensionAttribute dimensionAttribute;

        select firstOnly dimensionAttributeValueSetItem
            where dimensionAttributeValueSetItem.DimensionAttributeValueSet == 52565471266
        join dimensionAttributeValue
            where dimensionAttributeValue.RecId == dimensionAttributeValueSetItem.DimensionAttributeValue
        join dimensionAttribute
            where dimensionAttribute.RecId == dimensionAttributeValue.DimensionAttribute
               && dimensionAttribute.Name == 'Department';

        print dimensionAttributeValueSetItem.DisplayValue;
        pause;
    }

  • Ledger dimension

    [Tables]

    DimensionAttributeValueCombination: stores the combinations of ledger dimensions.

    DimensionAttributeValueGroup: stores the dimension groups.

    DimensionAttributeValueGroupCombination: stores the relation between DimensionAttributeValueGroup and DimensionAttributeValueCombination.

    DimensionAttributeLevelValue: stores the dimension values of a ledger dimension.

    [Views]

    DimensionAttributeLevelValueAllView

    DimensionAttributeLevelValueView

    static void getSpecifiedLedgerDimension(Args _args)
    {
        DimensionAttributeValueGroupCombination dimensionAttributeValueGroupCombination;
        DimensionAttributeLevelValue dimensionAttributeLevelValue;
        DimensionAttributeValue dimensionAttributeValue;
        BankAccountTable bankAccountTable;

        while select dimensionAttributeValueGroupCombination
            where dimensionAttributeValueGroupCombination.DimensionAttributeValueCombination == 52565574540 //ledgerDimension
        join dimensionAttributeLevelValue order by Ordinal asc
            where dimensionAttributeLevelValue.DimensionAttributeValueGroup == dimensionAttributeValueGroupCombination.DimensionAttributeValueGroup
        join dimensionAttributeValue
            where dimensionAttributeValue.RecId == dimensionAttributeLevelValue.DimensionAttributeValue
        //Specified dimension
        //join dimensionAttribute
        //    where dimensionAttribute.Name == 'CostCenter'
        //       && dimensionAttribute.RecId == dimensionAttributeValue.DimensionAttribute
        {
            //Backing entity table
            //if (dimensionAttributeValue.getEntityInstance().TableId == tableNum(DimAttributeBankAccountTable))
            //{
            //    bankAccountTable = BankAccountTable::find(dimensionAttributeValue.getName()); //dimensionAttributeLevelValue.DisplayValue
            //    break;
            //}
            info(dimensionAttributeLevelValue.DisplayValue);
        }
    }

  4. How to create or modify a financial dimension.
  • Default dimension

    [Create]

    static void CreateDefaultDimension(Args _args)
    {
        Struct struct = new Struct();
        container defaultDimensionCon;
        DimensionDefault dimensionDefault;
        ;
        dimensionDefault = AxdDimensionUtil::getDimensionAttributeValueSetId([2, 'BusinessUnit', '001',
                                                                                 'CostCenter', '007']);
        dimensionDefault = AxdDimensionUtil::getDimensionAttributeValueSetId([3, 'BusinessUnit', '001',
                                                                                 'CostCenter', '007',
                                                                                 'Department', '022']);
        struct.add('BusinessUnit', '001');
        struct.add('CostCenter', '007');
        defaultDimensionCon += struct.fields();
        defaultDimensionCon += struct.fieldName(1);
        defaultDimensionCon += struct.valueIndex(1);
        defaultDimensionCon += struct.fieldName(2);
        defaultDimensionCon += struct.valueIndex(2);
        dimensionDefault = AxdDimensionUtil::getDimensionAttributeValueSetId(defaultDimensionCon);
    }

[Modify]

static void ReplaceDimensionAttributeValue(Args _args)
{
    DimensionDefault dimSource, dimTarget, dimReplaced;

    dimTarget = 52565471266;
    dimSource = AxdDimensionUtil::getDimensionAttributeValueSetId([1, 'Department', '022']);
    dimReplaced = DimensionDefaultingService::serviceReplaceAttributeValue(dimTarget,
        dimSource,
        DimensionAttribute::findByName('Department').RecId);
}

The system will create a new default dimension that differs from the source default dimension only in the replaced attribute.

  • Ledger dimension

    [Create]

    static void CreateLedgerDimension(Args _args)
    {
        //print DimensionStorage::accountNum2LedgerDimension('1101', LedgerJournalACType::Cust);
        print DimensionDefaultingEngine::getLedgerDimensionFromAccountAndDim(MainAccount::findByMainAccountId('110150').RecId,
            DimensionHierarchy::getAccountStructure(MainAccount::findByMainAccountId('110150').RecId),
            52565471266); //default dimension
        pause;
    }

    static void CreateLedgerDimension2(Args _args)
    {
        LedgerDimensionAccount ledgerDimension;

        ledgerDimension = DimensionDefaultingService::serviceCreateLedgerDimension(DimensionStorage::getDefaultAccountForMainAccountNum('110150'),
            52565471266); //default dimension
        info(strFmt("%1: %2", ledgerDimension, DimensionAttributeValueCombination::find(ledgerDimension).DisplayValue));
    }

    [Modify]

    static void ReplaceLedgerDimensionValue(Args _args)
    {
        LedgerDimensionAccount ledgerDimension, ledgerDimensionReplaced;
        DimensionDefault dimSource, dimTarget, dimReplaced;

        ledgerDimension = DimensionDefaultingService::serviceCreateLedgerDimension(DimensionStorage::getDefaultAccountForMainAccountNum('110150'),
            52565471266); //default dimension
        info(strFmt("%1: %2", ledgerDimension, DimensionAttributeValueCombination::find(ledgerDimension).DisplayValue));

        dimTarget = DimensionStorage::getDefaultDimensionFromLedgerDimension(ledgerDimension);
        dimSource = AxdDimensionUtil::getDimensionAttributeValueSetId([1, 'Department', '022']);
        dimReplaced = DimensionDefaultingService::serviceReplaceAttributeValue(dimTarget,
            dimSource,
            DimensionAttribute::findByName('Department').RecId);

        ledgerDimensionReplaced = DimensionDefaultingService::serviceCreateLedgerDimension(DimensionStorage::getDefaultAccountForMainAccountNum('110150'),
            dimReplaced); //default dimension
        info(strFmt("%1: %2", ledgerDimensionReplaced, DimensionAttributeValueCombination::find(ledgerDimensionReplaced).DisplayValue));
    }

    Note that this actually creates a new ledger dimension. Updating the existing records of a ledger dimension or a default dimension directly may cause data consistency issues.

  5. Tips for Financial dimensions.
  • Useful classes:

    DimensionDefaultingEngine

    DimensionDefaultingService

    DimensionStorage

    AxdDimensionUtil

    These classes have some static methods that are very useful. You can get the details from MSDN.

  • Sometimes we may need to debug what kind of account and default dimensions are used when posting a certain transaction. You can try setting a breakpoint in the following method:

    DimensionDefaultingService::serviceCreateLedgerDimension()

  • A ledger dimension default account is used for the settings on parameter forms that have no segments.

    For example, the accounts used for posting profiles, inventory posting, and so on.

    static void accountNum2Dimension(Args _args)
    {
        print DimensionStorage::getLedgerDefaultAccountFromLedgerDim(DimensionDefaultingEngine::getLedgerDimensionFromAccountAndDim(MainAccount::findByMainAccountId('140200').RecId,
            DimensionHierarchy::getAccountStructure(MainAccount::findByMainAccountId('140200').RecId),
            0));
        print DimensionStorage::getDynamicAccount('1001', LedgerJournalACType::Vend);
        print DimensionStorage::accountNum2LedgerDimension('1001', LedgerJournalACType::Vend);
        pause;
    }

[Sample Of Dec. 29] How to set video full screen mode in Universal Windows Platform (UWP)


Sample : https://code.msdn.microsoft.com/How-to-set-video-full-f51df67e

This sample demonstrates how to set video full screen mode in Universal Windows Platform(UWP).


You can find more code samples that demonstrate the most typical programming scenarios by using Microsoft All-In-One Code Framework Sample Browser or Sample Browser Visual Studio extension. They give you the flexibility to search samples, download samples on demand, manage the downloaded samples in a centralized place, and automatically be notified about sample updates. If it is the first time that you hear about Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage http://1code.codeplex.com/.


Algorithm Basics in Small Basic


An algorithm is a combination of code patterns.  Today, I’d like to introduce three basic code patterns.  I also wrote a sample program for this blog: LMR321.

Sequence

A sequence of statements is the simplest code pattern.  But sometimes the order of the statements becomes very important.

The following two code blocks show different results.

Turtle.Move(100)  ' move first
Turtle.Turn(90)

Turtle.Turn(90)  ' turn first
Turtle.Move(100)

Loop

In a program, we sometimes need to repeat something.  We could write the same lines of code over and over, but a loop makes it simpler.  The following two code blocks show the same results.

TextWindow.WriteLine("Hello World!")
TextWindow.WriteLine("Hello World!")
TextWindow.WriteLine("Hello World!")
TextWindow.WriteLine("Hello World!")

For i = 1 To 4
  TextWindow.WriteLine("Hello World!")
EndFor

Selection

There are some cases where we’d like to change what the program does based on conditions such as input data or the current status.  We can do this kind of selection with an If statement, like the following code.

If Turtle.Y < yCenter Then
  TextWindow.WriteLine("UPPER")
Else
  TextWindow.WriteLine("LOWER")
EndIf
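The three patterns are usually combined.  Here is a small sketch (not part of the LMR321 sample) that uses a sequence inside a loop together with a selection:

For i = 1 To 10
  If Math.Remainder(i, 2) = 0 Then
    TextWindow.WriteLine(i + " is even")
  Else
    TextWindow.WriteLine(i + " is odd")
  EndIf
EndFor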

See Also

.NET Framework 4.6.2 Release Announcement


[Original post]: Announcing .NET Framework 4.6.2

[Original publication date]: August 2, 2016

Today, we are happy to announce the release of the .NET Framework 4.6.2! Many of the changes are based on your feedback, including suggestions submitted on UserVoice and Connect. Thank you for your continued help and engagement!

This release has significant improvements in the following areas:

You can see the complete set of changes in the .NET Framework 4.6.2 change list and API diff.

Download now

You can download the .NET Framework 4.6.2 right now from the following:

 

Base Class Library (BCL)

The following improvements were made in the BCL:

Long path support (MAXPATH)

The System.IO APIs no longer enforce the MAXPATH limit of 260 characters on file name length. More than 4500 users raised this problem on UserVoice.

This limitation does not usually affect consumer applications (for example, loading a file from “My Documents”), but it is more common on developer machines that build deeply nested source trees or use specialized tools that also run on Unix (where long paths are common).

This new feature is enabled for applications that target the .NET Framework 4.6.2 (or later). You can configure your application to target .NET 4.6.2 with the appropriate setting in a configuration file such as app.config or web.config.

You can enable the feature for applications that target earlier versions of the .NET Framework by setting an AppContext switch in the configuration file. The switch is honored only when the application is running on the .NET Framework 4.6.2 (or later).

Long paths beyond MAXPATH remain blocked for applications that neither target the .NET Framework 4.6.2 nor set the AppContext switch. This is done to maintain backward compatibility for existing applications.
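The original post showed both settings as screenshots. As a sketch, assuming the documented System.IO switch names, the two configuration fragments would look roughly like this:

<!-- target the .NET Framework 4.6.2 -->
<configuration>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.6.2" />
  </startup>
</configuration>

<!-- opt in from an app that targets an earlier version (only honored when running on 4.6.2 or later) -->
<configuration>
  <runtime>
    <AppContextSwitchOverrides value="Switch.System.IO.UseLegacyPathHandling=false;Switch.System.IO.BlockLongPaths=false" />
  </runtime>
</configuration>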

Long paths are enabled by the following improvements:

  • Allow paths that are longer than 260 characters (MAX_PATH). The BCL now allows paths that exceed MAX_PATH. The BCL APIs rely on the underlying Win32 file APIs for limit checks.
  • Enable extended path syntax and file namespaces (\\?\, \\.\). Windows exposes several file namespaces that enable alternate path schemes. The extended path syntax, for example, allows paths of more than 32k characters. The BCL now supports such paths, for example: \\?\long path. The .NET Framework now primarily relies on Windows for path normalization, to avoid inadvertently blocking legitimate paths. The extended path syntax is a good workaround for Windows versions that do not support long paths in the regular form (for example, C:\long path).
  • Performance improvements. Adopting the Windows path normalization in the BCL and reducing similar logic has improved the overall performance of file-path-related logic.

You can find more details in Jeremy Kuhne’s blog.

 

X509 certificates now support FIPS 186-3 DSA

The .NET Framework 4.6.2 adds support for the FIPS 186-3 Digital Signature Algorithm (DSA). It supports X509 certificates with keys that exceed 1024 bits. It also supports computing signatures with the SHA-2 family of hash algorithms (SHA256, SHA384, and SHA512).

The .NET Framework 4.6.1 supported FIPS 186-2, which limited keys to no more than 1024 bits.

You can take advantage of the FIPS 186-3 support by using the new DSACng class, as you can see in the example in the original post.

The DSA base class has also been updated, so you can use the FIPS 186-3 support without casting to the new DSACng class. This is the same approach used to update the RSA and ECDsa algorithm implementations in the previous two .NET Framework releases.

 

Improved elliptic curve Diffie-Hellman key derivation routines

The usability of the ECDiffieHellmanCng class has been improved. The elliptic curve Diffie-Hellman (ECDH) key agreement implementation in the .NET Framework includes three different key derivation function (KDF) routines. These KDF routines are now represented by, and supported with, three different methods, as you can see in the example in the original post.

In earlier versions of the .NET Framework, you had to know which subset of properties on the ECDiffieHellmanCng class needed to be set for each of the three routines.

 

Persisted-key symmetric encryption support

The Windows cryptography library (CNG) supports persisting symmetric keys in software and in hardware. The .NET Framework now exposes this CNG capability, as demonstrated in the example in the original post.

You need to use the implementation-specific classes, such as AesCng, to use this new feature, rather than the more common factory approach, such as Aes.Create(). This requirement is due to the implementation-specific key names and key providers.

Persisted-key symmetric encryption has been added to the AesCng and TripleDESCng classes, for the AES and 3DES algorithms respectively.

 

SignedXml support for SHA-2 hashing

The .NET Framework SignedXml implementation supports the following SHA-2 hash algorithms.

You can see an example of signing an XML document with SHA-256 below.
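The code example in the original post did not survive extraction. As a rough sketch (assuming the 4.6.2 URI constants on SignedXml and an RSA key that supports SHA-256, such as RSACng), signing a document might look like this:

using System.Security.Cryptography;
using System.Security.Cryptography.Xml;
using System.Xml;

static class Sha256XmlSigningSketch
{
    public static void Sign(XmlDocument doc, RSA key)
    {
        SignedXml signedXml = new SignedXml(doc) { SigningKey = key };
        signedXml.SignedInfo.SignatureMethod = SignedXml.XmlDsigRSASHA256Url;

        // Sign the whole document with an enveloped-signature transform.
        Reference reference = new Reference("");
        reference.AddTransform(new XmlDsigEnvelopedSignatureTransform());
        reference.DigestMethod = SignedXml.XmlDsigSHA256Url;
        signedXml.AddReference(reference);

        signedXml.ComputeSignature();
        doc.DocumentElement.AppendChild(doc.ImportNode(signedXml.GetXml(), true));
    }
}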

New SignedXml URI constants have been added as new fields on SignedXml.

Applications that have registered custom SignatureDescription handlers in CryptoConfig to support these algorithms will continue to work as they did before, but since there are now platform defaults, the CryptoConfig registration is no longer necessary.

 

Common Language Runtime (CLR)

The following improvements were made in the CLR.

NullReferenceException improvements

Most of us have experienced, and had to investigate, the cause of a NullReferenceException. We have been working with the Visual Studio team to enable a better debugging experience for null references in a future Visual Studio release.

The debugging experience in Visual Studio relies on the CLR debugging APIs, which interact with your code at a low level. Today, the NullReferenceException experience in Visual Studio looks like the screenshot in the original post.

In this release, we extended the CLR debugging APIs so that the debugger can request more information and do additional analysis when a NullReferenceException is raised. With that information, the debugger can determine which reference is null and surface it to you, making your job easier.

 

ClickOnce

The following improvements were made in ClickOnce.

Support for TLS 1.1 and 1.2

We added support for the TLS 1.1 and 1.2 protocols to ClickOnce for the .NET Framework 4.5.2, 4.6, 4.6.1, and 4.6.2. Thank you to those of you who voted for this on UserVoice! You don’t need to do any extra steps to enable TLS 1.1 or 1.2, because ClickOnce automatically detects which TLS protocol is required at runtime.

Secure Sockets Layer (SSL) and TLS 1.0 are no longer recommended or supported by some organizations. For example, the Payment Card Industry Security Standards Council is working toward requiring TLS 1.1 or later to meet its specification for online transactions.

ClickOnce continues to support TLS 1.0 for compatibility with applications that will not or cannot be upgraded. We recommend analyzing all of your uses of SSL and TLS 1.0. See the KB articles and use the links in them to download the hotfixes for the .NET Framework 4.6, 4.6.1, and 4.5.2.

Client certificate support

ClickOnce applications can now be hosted in a virtual directory that requires SSL and has client certificates enabled. In this configuration, users will be prompted to select their certificate when they access the application. If the client certificate setting is set to “Ignore”, ClickOnce will not prompt for a certificate.

In earlier versions, ClickOnce deployment of an application hosted this way would terminate with an access denied error.

[screenshot: ClickOnce over SSL with client certificates]

 

ASP.NET

The following improvements were made in ASP.NET. See the ASP.NET Core 1.0 announcement to learn about ASP.NET Core-specific improvements.

Localization of data annotations

Localization is now easier when you use model binding and data annotation validation. ASP.NET adopts a simple convention for the resx resource files that contain the data annotation validation messages:

  • Located in the App_LocalResources folder.
  • Named following the DataAnnotation.Localization.{locale}.resx convention.

With the .NET Framework 4.6.2, you can specify data annotations in your model files just as you would in a non-localized application. For the error messages, you specify the name used in the resource file.

Following the new convention, the localized resource files are placed in the “App_LocalResources” folder.

You can also plug in your own string localizer provider to store the localized strings in a different path or in different file types.

In earlier versions of the .NET Framework, you had to specify the ErrorMessageResourceType and ErrorMessageResourceName values.
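The screenshots in the original post did not survive extraction. As a rough sketch of the difference (assuming the ErrorMessage string doubles as the key looked up in the DataAnnotation.Localization.{locale}.resx file; the Resources.Messages type in the old-style example is hypothetical):

using System.ComponentModel.DataAnnotations;

// New convention in .NET Framework 4.6.2: write the annotation as in a
// non-localized app; the ErrorMessage text is used as the resource lookup name.
public class RegisterViewModel
{
    [Required(ErrorMessage = "The {0} field is required.")]
    [EmailAddress(ErrorMessage = "Invalid email address.")]
    public string Email { get; set; }
}

// Older approach: point at the resource type and entry explicitly.
public class RegisterViewModelOld
{
    [Required(ErrorMessageResourceType = typeof(Resources.Messages),
              ErrorMessageResourceName = "EmailRequired")]
    public string Email { get; set; }
}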

 

Async improvements

SessionStateModule and the output cache module have been improved to enable async scenarios. The team is releasing async versions of both modules through NuGet; they need to be imported into existing projects in order to be used. The two NuGet packages are expected to be published in the coming weeks; when that happens, we will update this post.

SessionStateModule interface

Session state can store and retrieve a user’s session data as the user navigates an ASP.NET web site. You can now use the new SessionStateModule interface to create your own async session state module, so you can store session data in your own way and use async methods.

Output cache module

Output caching can significantly improve the performance of an ASP.NET application by caching the results returned from controller actions, which avoids unnecessarily generating the same content for every request.

You can now use async APIs in output caching by implementing a new interface called OutputCacheProviderAsync. Doing so reduces thread blocking on the web server and improves the scalability of an ASP.NET service.

 

SQL

The following improvements were made in the SQL client.

Always Encrypted enhancements

Always Encrypted is a feature designed to protect sensitive data, such as credit card numbers or national identification numbers, stored in a database. It allows clients to encrypt sensitive data inside their applications and never reveal the encryption keys to the database engine. As a result, Always Encrypted separates those who own the data (and can view it) from those who manage the data (but should have no access to it).

The SQL Server data provider in the .NET Framework (System.Data.SqlClient) has been improved for Always Encrypted in both performance and security.

Performance

To improve the performance of parameterized queries against encrypted database columns, the query parameters’ metadata is now cached. When the SqlConnection::ColumnEncryptionQueryMetadataCacheEnabled property is set to true (the default value), the client retrieves the parameter metadata from the server only once, even if the same query is called multiple times.

Security

Column encryption key entries in the key cache are now evicted after a configurable time interval, which you can set with the SqlConnection::ColumnEncryptionKeyCacheTtl property.
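As an illustrative sketch (both properties are static members of SqlConnection in System.Data.SqlClient; the two-hour TTL is only an example value):

using System;
using System.Data.SqlClient;

class AlwaysEncryptedClientSettings
{
    static void Configure()
    {
        // Cache query parameter metadata for encrypted columns (true is the default).
        SqlConnection.ColumnEncryptionQueryMetadataCacheEnabled = true;

        // Evict column encryption keys from the client-side key cache after two hours.
        SqlConnection.ColumnEncryptionKeyCacheTtl = TimeSpan.FromHours(2);
    }
}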

 

Windows Communication Foundation (WCF)

The following improvements were made in WCF.

NetNamedPipeBinding best match

NetNamedPipeBinding has been enhanced in .NET 4.6.2 to support a new pipe lookup known as “best match”. When using “best match”, the NetNamedPipeBinding service forces clients to search for the service listening on the URI that best matches the requested endpoint, rather than the first matching service found.

“Best match” is especially useful if WCF client applications that use the default “first match” behavior sometimes try to connect to the wrong URI. In some cases, when multiple WCF services are listening on named pipes, a WCF client using “first match” can connect to the wrong service. This can also happen when some of the services are hosted under an administrator account.

To enable this feature, add the AppSetting shown in the original post to the App.config or Web.config file of your client application.

DataContractJsonSerializer improvements

DataContractJsonSerializer has been improved to better support multiple daylight saving time adjustment rules. When enabled, DataContractJsonSerializer uses the TimeZoneInfo class instead of the TimeZone class. TimeZoneInfo supports multiple adjustment rules, which makes it possible to work with historical time zone data. This is useful when a time zone has had different daylight saving adjustment rules, such as (UTC+2) Istanbul.

You can enable this feature by adding the AppSetting shown in the original post to your app.config file.

 

TransportDefaults no longer supports SSL 3

SSL 3 is no longer a default protocol when establishing a secure connection using NetTcp with transport security and the certificate credential type. In most cases existing applications should not be affected, because TLS 1.0 has always been included in the default protocol list for NetTcp; all existing clients should be able to negotiate a connection using at least TLS 1.0.

SSL 3 was removed as a default because it is no longer considered secure. While not recommended, if SSL 3 is required for your deployment, you can add it back to the list of negotiated protocols with one of the following configuration mechanisms:

 

Transport security with the Windows cryptography library (CNG)

Transport security now supports certificates stored using the Windows cryptography library (CNG). Currently, this support is limited to certificates with a public key that has an exponent no more than 32 bits in length.

This new feature is enabled for applications that target the .NET Framework 4.6.2 (or later). You can configure your application to target the .NET Framework 4.6.2 with the appropriate setting in the app.config or web.config file, as in the long path example earlier in this post.

You can enable this feature for applications that target earlier .NET versions by setting the corresponding AppContext switch. Note that the switch is honored only for applications running on .NET 4.6.2 (or later).

You can also enable this feature programmatically, as shown in the original post.

 

OperationContext.Current async improvements

WCF can now make OperationContext.Current part of the ExecutionContext. With this improvement, WCF allows CurrentContext to propagate from one thread to another. That means that even if there is a context switch between calls to OperationContext.Current, its value is passed correctly throughout the executing method.

The example in the original post demonstrates OperationContext.Current flowing correctly across a thread transition.

Previously, the internal implementation of OperationContext.Current stored CurrentContext in a ThreadStatic variable, using the thread’s local storage to hold the data associated with CurrentContext. If the context of the executing method changed (that is, it was continued by a different thread after awaiting another operation), any later calls ran on a different thread with no reference to the original value. With this fix, the second call to OperationContext.Current gets the expected value, even though threadId1 and threadId2 may differ.

 

Windows Presentation Foundation (WPF)

The following improvements were made in WPF:

Group sorting

An application that uses a CollectionView to sort data can now explicitly declare how to sort the groups. This overcomes some counter-intuitive ordering that can occur when the application dynamically adds or removes groups, or changes the value of item properties that are involved in grouping. Sorting groups by comparing the grouping data, rather than the data of the whole collection, also improves the performance of group creation.

The feature includes two new properties on the GroupDescription class: SortDescriptions and CustomSort. These properties describe how to sort the collection of groups produced by the GroupDescription, analogous to the properties with the same names on ListCollectionView that describe how to sort the data items. The PropertyGroupDescription class has two new static properties for the most common cases: CompareNameAscending and CompareNameDescending.

For example, suppose an application wants to group by Age, sorting the groups in ascending order, and to sort the items within each group by last name.

With this new feature, the application declares the grouping and group sorting as shown in the first example in the original post.

Before this feature, the application would have declared it as in the second example in the original post.

Per-monitor DPI awareness

WPF applications are now enabled for per-monitor DPI awareness. This improvement is critical for scenarios where multiple displays with different DPI levels are attached to one machine. As all or part of a WPF application’s content is shown across monitors, we expect WPF to automatically match the DPI of the application to the screen, and now it does.

You can learn more about how to enable per-monitor DPI awareness in your WPF application in the WPF samples and the developer guide on GitHub.

In earlier versions, you had to write additional code to make WPF applications per-monitor DPI aware.

Soft keyboard support

Soft keyboard support enables automatic invocation and dismissal of the touch keyboard in WPF applications, without disabling WPF stylus/touch support, on Windows 10.

In earlier versions, WPF applications could not invoke or dismiss the touch keyboard without disabling WPF stylus/touch support. This was due to a change in the way applications track the focus of the touch keyboard, starting with Windows 8.

[screenshot: touch keyboard in a WPF application]

 

Your feedback

Finally, I’d like to thank once again everyone who provided feedback on the 4.6.2 preview releases! It helped shape the 4.6.2 release. Please keep giving us feedback through the following channels:

The evolution of the text size limits related to the standard static control



Michael Quinlan wondered about
the text size limits related to the standard static control.



We start with the resource format, since that was the limiting
factor in the original problem.
The

original 16-bit resource format

represented strings as null-terminated sequences of bytes,
so in theory they could have been arbitrarily large.
However,

16-bit string resources
were limited to 255 characters
because they used a byte count for string length.
My guess is that the resource compiler took this as a cue that
nobody would need strings longer than 255 characters,
so it avoided the complexity of having to deal with a dynamic
string buffer,
and when it needed to parse a string in the resource file,
it did so into a fixed-size 256-byte buffer.



I happen to still have a copy of the original 16-bit resource compiler,
so I can actually verify my theory.
Here’s what I found:



There was a “read next token” function that placed the result
into a global variable.
Parsing was done by asking to read the next token
(making it the current token), and then studying the current token.
If the token was a string,
the characters of the string
went into a buffer of size MAXSTR + 1.
And since string resources have a maximum length of 255,
MAXSTR was 255.



Although the limit of 255 characters did not apply to dialog
controls,
the common string parser stopped at 255 characters.
In theory, the common string parser could have used dynamic
memory allocation to accommodate the actual string length,
but remember that we’re 16-bit code here.
Machines had 256KB of memory,
and no memory block could exceed 64KB.
Code in this era did relatively little dynamic memory allocation;
static memory allocation was the norm.
It’s like everybody was working on an embedded system.



Anyway, that’s where the 255-character limit for strings
in resource files comes from.
But that’s not a limit of the resource file format or of static
text controls.
It’s just a limit of the resource compiler.
You can write your own resource compiler that
generates long strings if you like.



Okay, so what about the static text control?
The original 16-bit static text control had a text size
limit of 64KB
because 16-bit.
This limit carried forward to Windows 95 because the
static text control in Windows 95 was basically a 16-bit
control with some 32-bit smarts.



On the other hand, Windows NT’s standard controls were
32-bit all the way through (and also Unicode).
The limits consequently went up from 64KB to 4GB.
Some messages needed to be revised in order to be able
to express strings longer than 64KB.
For example,
the old EM_GET­SEL message returned
the start and end positions of the selection as two
16-bit integers packed into a single 32-bit value.
This wouldn’t work for strings longer than 65535 characters,
so the message was redesigned so that the wParam
and lParam are pointers to 32-bit integers
that receive the start and end of the selection.



Anyway, now that the 16-bit world is far behind us,
we don’t need to worry about the 64KB limit for static
and edit controls.
The controls can now take all the text you give them.²



¹ And then for some reason
Erkin Alp Güney said that I’m
“employed as a PR guy.”
I found this statement very strange,
because not only am I not employed as a PR guy,
I have basically no contact with PR at all.
The only contact I do have is that
occasionally they will send me a message
saying that they are upset at something I wrote.
I remember that they were very upset about my story
that shared

some trivia about the //build 2011 conference

because it (horrors) talked about some things
that went wrong.
(And as Tom West noted,
it wouldn’t seem to be a good idea for PR to employ someone
with the social skills of a thermonuclear device.)



² Well, technically no.
If you give a string longer than 4GB, then it won’t be able
to handle that.
So more accurately, it can handle all the text you would
give it, provided you’re not doing something ridiculous.
I mean, you really wouldn’t want to manipulate 4GB of data
in the form of one giant string.
And no human being would be able to read it all anyway.

Forum Moderation Best Practice – Propose an answer and then mark it 7 days later


Forum Ninjas Blog

Hello! This post continues the conversation from last week’s blog post:


I ended it by pointing to one article in particular, where we (Microsoft TechNet/MSDN forum owners) hammered out some hard guidelines:


So what are those guidelines? I thought you’d never ask!

Today we’ll dig into the first two:

  1. Propose an answer first. Give the Asker/OP a chance to select the right answer.
  2. After proposing an answer, wait one week (7 days), and then mark the answer(s). This gives the OP more than enough time to return. More often than not, the OP will not mark an answer and will not reply again. After waiting the week, then mark the answer. The Asker/OP is your client, and you want to help him and make him happy. Many OPs have gotten angry when Moderators mark answers without waiting a few days (waiting 7 days sets a clear message that the Asker/OP is the client and that you are patient). Plus the people who answer the questions get 5 more points (15 Recognition Points instead of 10) if the Asker/OP is the one who marks the reply as an answer. One exception (to proposing first) is if the thread hasn’t been responded to for over 6 months (you’re cleaning up a forum). But even then, it’s better to propose first if you’re uncertain about an answer.

Well, it kind of answers itself as to “why” we ask this. First, we want the OP to mark it, but if the OP isn’t going to return (which too often is the case), then we still want to mark it. Second, this makes people feel valued, which they are. They don’t answer questions to get stats, points, or medals, but it just simply feels good to be appreciated, and we greatly appreciate the community contributions!

And the bottom line is that the more questions are marked as answered, the more people answer questions. If we don’t mark answers, most often, the forum dries up. A lot of people still ask questions, but fewer and fewer people answer them. That’s not the kind of community we want.

Remember, the Asker/OP is the client. So if they unmark or unpropose, then that’s okay. We just want to make sure they’re willing to come back on, explain why, and help us move the topic forward.

Ideally, we build a moderation team for that particular forum, so that different moderators/answerers can propose and mark answers.

Of course, this will lead to the debate of whether someone should propose their own answers. This is a meaty enough topic for another day. While we do allow that capability for a reason (it is by design that you can do this), it should be used as a last resort. Ideally, the moderation teams work together, so that it’s not necessary. So that’s the short explanation. But we’ll dig into it some more later.

If you feel overwhelmed, like you don’t have a moderator team (you’re the team) for your forum, then please reply to this post with a nominee in your forum to help you out! We make people Answerers if they have at least 6 months of experience in that forum (so the forum community knows them), they have 100 answered questions, and they have 1K Recognition Points. That’s the bar that can be equally measured. To be made a Moderator, we’d like to see you faithful in the Answerer role for 3+ months, or an MVP or Microsoft employee. And for both roles, you have to agree to follow the Forum Moderation Guidelines (in the article linked below).

Read all the related guidelines on TechNet Wiki.

May the Forums Be With You! (Don’t be a rogue one.)

– Ninja Ed

How non-admin users of SSIS 2012/2014 can view SSIS Execution Reports


 

There can be a few scenarios where the requirement demands that developers have full access to the SSIS execution reports. However, by design, SSIS 2012 and SSIS 2014 do not support this. By default, a non-admin user can see only the reports for executions that they started themselves; they won't be able to see the reports for executions started by other users. Non-admin here means they only have public access to the databases (master, msdb, SSISDB, etc.).

Admin users [either members of the 'sysadmin' server role or of the ssis_admin database role in SSISDB] can see the SSIS execution reports for all users. The SSIS execution reports internally call the view [SSISDB].[catalog].[executions]. If we look at its code, we can see a filter condition that restricts what a non-admin user can see:

WHERE      opers.[operation_id] in (SELECT id FROM [internal].[current_user_readable_operations])

           OR (IS_MEMBER('ssis_admin') = 1)

           OR (IS_SRVROLEMEMBER('sysadmin') = 1)

 

Resolution / Workarounds:

  1. Upgrading to SSIS 2016 can be an option here. SSIS 2016 introduced a new role in SSISDB: the ssis_logreader database-level role, which can be used to grant users who aren't administrators permission to access the views that contain logging output.

          Ref: https://msdn.microsoft.com/en-us/library/bb522534.aspx#LogReader

 

  2. If upgrading to SSIS 2016 is not an option, you can use a SQL-authenticated login to view the reports after granting it the ssis_admin role. That login won't be able to execute packages, but it will be able to see all the reports. The moment it tries to execute a package, it will get the error below:

          The operation cannot be started by an account that uses SQL Server Authentication. Start the operation with an account that uses Windows Authentication. (.Net SqlClient Data Provider).

I believe this option is risky because we are giving admin permission to non-admin users. Although they won't be able to execute packages, they would be able to change configurations because they have the ssis_admin permission.

 

  3. There is one more option: changing the code of the view [SSISDB].[catalog].[executions].

[Please note that Microsoft does not support this solution, as it involves changing the code of the SSISDB views. Also, this change can be overwritten if any patches/fixes are applied.]

a. Create a SQL-authenticated login with minimal permissions; testSSIS in my case:

SQL Server Instance -> Security-> Logins-> New

1

b. Go to User Mapping under the same login and check the SSISDB database. You can grant read permission as shown below.

2

c. Create an SSISDB database role (SSISTestSSISDBRole in my case) and add the testSSIS user to it.

d. You can also add other Windows accounts as members of this role.

3

4

e. Alter the view by adding one more filter condition at the end. You need to go to [SSISDB].[catalog].[executions] and alter the script.

5

 

         Change the filter condition at the end as shown below.

WHERE      opers.[operation_id] in (SELECT id FROM [internal].[current_user_readable_operations])

           OR (IS_MEMBER('ssis_admin') = 1)

           OR (IS_MEMBER('SSISTestSSISDBRole') = 1) -- Extra filter condition.

           OR (IS_SRVROLEMEMBER('sysadmin') = 1)

All the non-admin users in this role would then be able to see the reports for all executions. Please note that only the basic reports will be visible; the drill-through reports will not work in this case.
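
If you prefer to script steps a through d instead of clicking through SSMS, a rough T-SQL sketch run from PowerShell could look like the following. It only covers the login, user, and role setup; the view change itself still has to be made by altering [SSISDB].[catalog].[executions] as described in step e. The password is a placeholder, and db_datareader is an assumed choice for the read permission granted in step b.

$sql = @"
USE [SSISDB];
CREATE LOGIN [testSSIS] WITH PASSWORD = N'<StrongPasswordHere>';   -- step a (placeholder password)
CREATE USER  [testSSIS] FOR LOGIN [testSSIS];                      -- step b: map the login to SSISDB
ALTER ROLE [db_datareader] ADD MEMBER [testSSIS];                  -- step b: read permission (assumed)
CREATE ROLE [SSISTestSSISDBRole];                                  -- step c
ALTER ROLE [SSISTestSSISDBRole] ADD MEMBER [testSSIS];             -- steps c/d: add more members as needed
"@
Invoke-Sqlcmd -ServerInstance 'localhost' -Query $sql   # requires the SqlServer (or SQLPS) PowerShell module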

Testing:

Go to:

6

NOTE:  Microsoft CSS does not support the above workaround. We recommend that you move to SQL Server 2016 and make use of the new ssis_logreader database-level role.

 

 

Author:      Samarendra Panda – Support Engineer, SQL Server BI Developer team, Microsoft

Reviewer:  Krishnakumar Rukmangathan – Support Escalation Engineer, SQL Server BI Developer team, Microsoft

 

Creating dynamic SSIS package [Object model] and using OleDBSource & OleDBDestination internally fails in SSIS 2016


 

Issue:

While dynamically creating SSIS packages using the object model and referencing the following SSIS libraries, you may receive the exception below.

SSIS Libraries referenced:

C:\Program Files (x86)\Microsoft SQL Server\130\SDK\Assemblies

  1. SqlServer.DTSPipelineWrap.dll
  2. SQLServer.ManagedDTS.dll
  3. SQLServer.DTSRuntimeWrap.dll

 

 Error Message:

An exception of type 'System.Runtime.InteropServices.COMException' occurred in ConsoleApplication1.exe

Additional information: Exception from HRESULT: 0xC0048021

{“Exception from HRESULT: 0xC0048021”}

   at Microsoft.SqlServer.Dts.Pipeline.Wrapper.IDTSDesigntimeComponent100.ProvideComponentProperties()

   at ConsoleApplication1.Program.Main(String[] args) in c:\Users\Administrator\Documents\Visual Studio 2013\Projects\ConsoleApplication1\ConsoleApplication1\Program.cs:line 27

   at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly, String[] args)

   at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)

   at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()

   at System.Threading.ThreadHelper.ThreadStart_Context(Object state)

   at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)

   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)

   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)

   at System.Threading.ThreadHelper.ThreadStart()

 

Steps to reproduce the issue:

  1. Use the below C# code in a Console Application:

—————————————————————————————————————————————————

using System; 

using Microsoft.SqlServer.Dts.Runtime; 

using Microsoft.SqlServer.Dts.Pipeline; 

using Microsoft.SqlServer.Dts.Pipeline.Wrapper;

 namespace ConsoleApplication1

{

    class Program

    {

        static void Main(string[] args)

        {

            Package package = new Package();

            Executable e = package.Executables.Add("STOCK:PipelineTask");

            TaskHost thMainPipe = e as TaskHost;

            MainPipe dataFlowTask = thMainPipe.InnerObject as MainPipe;

 

            // Create the source component.   

            IDTSComponentMetaData100 source =

              dataFlowTask.ComponentMetaDataCollection.New();

            source.ComponentClassID = "DTSAdapter.OleDbSource";

           CManagedComponentWrapper srcDesignTime = source.Instantiate();

            srcDesignTime.ProvideComponentProperties();

 

            // Create the destination component. 

            IDTSComponentMetaData100 destination =

              dataFlowTask.ComponentMetaDataCollection.New();

            destination.ComponentClassID = "DTSAdapter.OleDbDestination";

            CManagedComponentWrapper destDesignTime = destination.Instantiate();

            destDesignTime.ProvideComponentProperties();

 

            // Create the path. 

            IDTSPath100 path = dataFlowTask.PathCollection.New();

            path.AttachPathAndPropagateNotifications(source.OutputCollection[0],

              destination.InputCollection[0]);

        }

    }

}

—————————————————————————————————————————————————

  2. Add the references from:
  •           C:\Program Files (x86)\Microsoft SQL Server\130\SDK\Assemblies\Microsoft.SQLServer.ManagedDTS.dll
  •           C:\Program Files (x86)\Microsoft SQL Server\130\SDK\Assemblies\Microsoft.SQLServer.DTSRuntimeWrap.dll
  •           C:\Program Files (x86)\Microsoft SQL Server\130\SDK\Assemblies\Microsoft.SQLServer.DTSPipelineWrap.dll
  3. Debug the code and you may receive the above exception in the function: srcDesignTime.ProvideComponentProperties();

 

Cause:

The reason for the exception is that the version-independent COM ProgID was not registered to point to the latest version, so loading the OLE DB Source connection manager threw the above error. The code is using the version-independent ProgIDs

"DTSAdapter.OleDbSource" and "DTSAdapter.OleDbDestination". The COM spec says that version-independent ProgIDs should always load the latest version, but these ProgIDs are not registered.

 

Resolution/Workaround:

As a workaround, modify the ProgIDs to the SSIS 2016 names and use version-specific ProgIDs, viz.

DTSAdapter.OleDbSource.5 and DTSAdapter.OleDbDestination.5 rather than DTSAdapter.OleDbSource and DTSAdapter.OleDbDestination in the above code sample.

 

You can find the information about these ProgIDs in the system registry.

For example:

The ProgID "DTSAdapter.OleDbSource.5" is registered to point to the SSIS 2016 OLE DB Source

under HKEY_CLASSES_ROOT\CLSID\{657B7EBE-0A54-4C0E-A80E-7A5BD9886C25}

Similarly, the ProgID "DTSAdapter.OLEDBDestination.5" is registered to point to the SSIS 2016 OLE DB Destination under

HKEY_CLASSES_ROOT\CLSID\{7B729B0A-4EA5-4A0D-871A-B6E7618E9CFB}
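
If you want a quick, scripted check of how these ProgIDs are registered on a given machine, a small PowerShell sketch like the one below can help. It only reads the registry; the key paths follow the standard COM ProgID layout, and the exact results depend on which SSIS versions are installed.

# False if the version-independent ProgID is not registered (the cause described above)
Test-Path 'Registry::HKEY_CLASSES_ROOT\DTSAdapter.OleDbSource'
# True when the SSIS 2016 version-specific ProgID is registered
Test-Path 'Registry::HKEY_CLASSES_ROOT\DTSAdapter.OleDbSource.5'
# Read the CLSID that the version-specific ProgID resolves to
(Get-ItemProperty 'Registry::HKEY_CLASSES_ROOT\DTSAdapter.OleDbSource.5\CLSID').'(default)'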

 

If you still have issues, please contact the Microsoft CSS team for further assistance.

 

DISCLAIMER:

Any Sample code is provided for the purpose of illustration only and is not intended to be used in a production environment.  ANY SAMPLE CODE AND ANY RELATED INFORMATION ARE PROVIDED “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.

 

 

Author:       Ranjit Mondal – Support Engineer, SQL Server BI Developer team, Microsoft

Reviewer:   Krishnakumar Rukmangathan – Support Escalation Engineer, SQL Server BI Developer team, Microsoft

Switching the Device Manager [View] to [Devices by connection]


Have you ever wanted to know, for troubleshooting purposes, which drivers are running between a connected device and the PC's bus?

 

Hello everyone. This is Tsuda from the Windows Driver Kit support team. In this post, I will show you how to switch the Device Manager [View] setting from the default [Devices by type] to [Devices by connection] so that you can check the drivers at each device node. I will also show you how to confirm the same driver configuration by following the device nodes in the kernel debugger.

 

In this post, Windows 10 (1607) x86 is used as the example.

 

1. Right-click the [Start] menu and click [Device Manager] to launch Device Manager.

2. Click any device. As an example, click [Virtual HD ATA Device] under [Disk drives].

 

clip_image002

 

3. As shown above, click [View]; the current setting is [Devices by type], so click [Devices by connection].

 

clip_image004

 

4. As shown above, the location where the target device is connected is displayed as a tree. In this example, the tree is as follows.

 

ACPI x86-based PC

Microsoft ACPI-Compliant System

   PCI Bus

     Intel(R) 82371AB/EB PCI Bus Master IDE Controller

         ATA Channel 0

            Virtual HD ATA Device

 

5. Right-click each node, click [Properties], open the [Driver] tab, and click [Driver Details].

 

5-1. First, let's look at the Virtual HD ATA Device. As shown below, it contains disk.sys, EhStorClass.sys, partmgr.sys, and vmstorfl.sys, and clicking each file shows that it is provided by Microsoft.

 

clip_image006

 

5-2. Next, let's look at ATA Channel 0, the node one level up. You can see atapi.sys and ataport.sys.

 

clip_image008

 

5-3. Next, let's look at the Intel(R) 82371AB/EB PCI Bus Master IDE Controller. You can see atapi.sys, ataport.sys, intelide.sys, and pciidex.sys.

 

clip_image010

 

5-4. Next, let's look at the PCI bus. As the name suggests, it has pci.sys.

 

clip_image012
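
Incidentally, on Windows 10 you can also check the same parent/child chain from PowerShell. This is only a rough sketch, assuming the built-in PnpDevice module and the device name used above:

# Find the disk device and read its parent device instance path
$disk = Get-PnpDevice -FriendlyName 'Virtual HD ATA Device'           # may return more than one match
Get-PnpDeviceProperty -InstanceId $disk.InstanceId -KeyName 'DEVPKEY_Device_Parent' |
    Select-Object -ExpandProperty Data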

 

 

6. Now let's look at the above with the kernel debugger.

 

6-1. First, find the device object for disk.sys.

 

kd> !drvobj disk

Driver object (8ebe86f8) is for:

\Driver\disk

Driver Extension List: (id , addr)

(8a657bd0 8ebeee20) 

Device Object list:

8d700a80 

 

6-2. Using the device object address shown last, let's look at the device stack. You can see that partmgr.sys sits above the disk.sys we confirmed in 5-1, and that in this device node the device object of atapi.sys is the PDO. For more on device objects and device stacks, please refer to K里-san's entry (Device Object and Device Stack).

 

kd> !devstack 8d700a80

  !DevObj   !DrvObj            !DevExt   ObjectName

  8d7006e0  \Driver\partmgr    8d700798 

> 8d700a80  \Driver\disk       8d700b38  DR0

  8d6c1790  \Driver\storflt    8d6c1f10 

  8ebd9920  \Driver\ACPI       8eb0b568 

  8ebd7878  \Driver\atapi      8ebd7930  IdeDeviceP0T0L0-0

!DevNode 8ebdd008 :

  DeviceInst is "IDE\DiskVirtual_HD______________________________1.1.0___\5&35dc7040&0&0.0.0"

  ServiceName is “disk”

 

6-3. Let's look at the device node shown as !DevNode. You can see that the PDO is the same as the atapi.sys device object above (0x8ebd7878). Since this device node is a leaf, its Child address is NULL (0). You can also see the address of its Parent node (0x8ebd3e30).

 

kd> !DevNode 8ebdd008

DevNode 0x8ebdd008 for PDO 0x8ebd7878

  Parent 0x8ebd3e30   Sibling 0000000000   Child 0000000000  

  InstancePath is "IDE\DiskVirtual_HD______________________________1.1.0___\5&35dc7040&0&0.0.0"

  ServiceName is “disk”

  State = DeviceNodeStarted (0x308)

  Previous State = DeviceNodeEnumerateCompletion (0x30d)

  StateHistory[08] = DeviceNodeEnumerateCompletion (0x30d)

  StateHistory[07] = DeviceNodeEnumeratePending (0x30c)

  StateHistory[06] = DeviceNodeStarted (0x308)

  StateHistory[05] = DeviceNodeStartPostWork (0x307)

  StateHistory[04] = DeviceNodeStartCompletion (0x306)

  StateHistory[03] = DeviceNodeResourcesAssigned (0x304)

  StateHistory[02] = DeviceNodeDriversAdded (0x303)

  StateHistory[01] = DeviceNodeInitialized (0x302)

  StateHistory[00] = DeviceNodeUninitialized (0x301)

  StateHistory[19] = Unknown State (0x0)

  StateHistory[18] = Unknown State (0x0)

  StateHistory[17] = Unknown State (0x0)

  StateHistory[16] = Unknown State (0x0)

  StateHistory[15] = Unknown State (0x0)

  StateHistory[14] = Unknown State (0x0)

  StateHistory[13] = Unknown State (0x0)

  StateHistory[12] = Unknown State (0x0)

  StateHistory[11] = Unknown State (0x0)

  StateHistory[10] = Unknown State (0x0)

  StateHistory[09] = Unknown State (0x0)

  Flags (0x20000130)  DNF_ENUMERATED, DNF_IDS_QUERIED,

                      DNF_NO_RESOURCE_REQUIRED, DNF_NO_UPPER_DEVICE_FILTERS

  UserFlags (0x00000008)  DNUF_NOT_DISABLEABLE

  DisableableDepends = 1 (including self)

 

6-4. Let's look at the parent node with !devnode. You can see that its Child address is the same as the one in 6-3 (0x8ebdd008).

 

kd> !devnode 8ebd3e30

DevNode 0x8ebd3e30 for PDO 0x8ebd2ce0

  Parent 0x8eb7fa80   Sibling 0x8ebd3c58   Child 0x8ebdd008  

  InstancePath is "PCIIDE\IDEChannel\4&10bf2f88&0&0"

  ServiceName is “atapi”

  State = DeviceNodeStarted (0x308)

  Previous State = DeviceNodeEnumerateCompletion (0x30d)

  StateHistory[09] = DeviceNodeEnumerateCompletion (0x30d)

  StateHistory[08] = DeviceNodeEnumeratePending (0x30c)

  StateHistory[07] = DeviceNodeStarted (0x308)

  StateHistory[06] = DeviceNodeStartPostWork (0x307)

  StateHistory[05] = DeviceNodeStartCompletion (0x306)

  StateHistory[04] = DeviceNodeStartPending (0x305)

  StateHistory[03] = DeviceNodeResourcesAssigned (0x304)

  StateHistory[02] = DeviceNodeDriversAdded (0x303)

  StateHistory[01] = DeviceNodeInitialized (0x302)

  StateHistory[00] = DeviceNodeUninitialized (0x301)

  StateHistory[19] = Unknown State (0x0)

  StateHistory[18] = Unknown State (0x0)

  StateHistory[17] = Unknown State (0x0)

  StateHistory[16] = Unknown State (0x0)

  StateHistory[15] = Unknown State (0x0)

  StateHistory[14] = Unknown State (0x0)

  StateHistory[13] = Unknown State (0x0)

  StateHistory[12] = Unknown State (0x0)

  StateHistory[11] = Unknown State (0x0)

  StateHistory[10] = Unknown State (0x0)

  Flags (0x6c0000f0)  DNF_ENUMERATED, DNF_IDS_QUERIED,

                      DNF_HAS_BOOT_CONFIG, DNF_BOOT_CONFIG_RESERVED,

                      DNF_NO_LOWER_DEVICE_FILTERS, DNF_NO_LOWER_CLASS_FILTERS,

                      DNF_NO_UPPER_DEVICE_FILTERS, DNF_NO_UPPER_CLASS_FILTERS

  UserFlags (0x00000008)  DNUF_NOT_DISABLEABLE

  DisableableDepends = 2 (including self)

 

6-5. Let's look at the device stack from the PDO address. The atapi.sys device object is there as the FDO, so you can tell that this device node is the same as the "ATA Channel 0" we saw in 5-2. Since the PDO is the device object of intelide.sys, we can infer that it is connected to the "Intel(R) 82371AB/EB PCI Bus Master IDE Controller" from 5-3.

 

kd> !devstack 0x8ebd2ce0

  !DevObj   !DrvObj            !DevExt   ObjectName

  88695028  \Driver\atapi      886950e0  IdePort0

  8ebc9620  \Driver\ACPI       8eb0b7a0 

> 8ebd2ce0  \Driver\intelide   8ebd2d98  PciIde0Channel0

!DevNode 8ebd3e30 :

  DeviceInst is "PCIIDE\IDEChannel\4&10bf2f88&0&0"

  ServiceName is “atapi”

 

6-6. Similarly, following Parent and displaying the device stack of each PDO gives the following.

 

kd> !devnode 0x8eb7fa80

DevNode 0x8eb7fa80 for PDO 0x8eb7e030

  Parent 0x887eccc0   Sibling 0x8eb7f8a8   Child 0x8ebd3e30  

  InstancePath is "PCI\VEN_8086&DEV_7111&SUBSYS_00000000&REV_01\3&267a616a&0&39"

  ServiceName is “intelide”

  State = DeviceNodeStarted (0x308)

  Previous State = DeviceNodeEnumerateCompletion (0x30d)

  StateHistory[09] = DeviceNodeEnumerateCompletion (0x30d)

  StateHistory[08] = DeviceNodeEnumeratePending (0x30c)

  StateHistory[07] = DeviceNodeStarted (0x308)

  StateHistory[06] = DeviceNodeStartPostWork (0x307)

  StateHistory[05] = DeviceNodeStartCompletion (0x306)

  StateHistory[04] = DeviceNodeStartPending (0x305)

  StateHistory[03] = DeviceNodeResourcesAssigned (0x304)

  StateHistory[02] = DeviceNodeDriversAdded (0x303)

  StateHistory[01] = DeviceNodeInitialized (0x302)

  StateHistory[00] = DeviceNodeUninitialized (0x301)

  StateHistory[19] = Unknown State (0x0)

  StateHistory[18] = Unknown State (0x0)

  StateHistory[17] = Unknown State (0x0)

  StateHistory[16] = Unknown State (0x0)

  StateHistory[15] = Unknown State (0x0)

  StateHistory[14] = Unknown State (0x0)

  StateHistory[13] = Unknown State (0x0)

  StateHistory[12] = Unknown State (0x0)

  StateHistory[11] = Unknown State (0x0)

  StateHistory[10] = Unknown State (0x0)

  Flags (0x6c0000f0)  DNF_ENUMERATED, DNF_IDS_QUERIED,

                      DNF_HAS_BOOT_CONFIG, DNF_BOOT_CONFIG_RESERVED,

                      DNF_NO_LOWER_DEVICE_FILTERS, DNF_NO_LOWER_CLASS_FILTERS,

                      DNF_NO_UPPER_DEVICE_FILTERS, DNF_NO_UPPER_CLASS_FILTERS

  UserFlags (0x00000008)  DNUF_NOT_DISABLEABLE

  DisableableDepends = 2 (including self)

 

// Display the device stack of the PDO

 

kd> !devstack 0x8eb7e030

  !DevObj   !DrvObj            !DevExt   ObjectName

  8ebd2030  \Driver\intelide   8ebd20e8  PciIde0

  8eb7e8f8  \Driver\ACPI       8eb0b9d8 

> 8eb7e030  \Driver\pci        8eb7e0e8  NTPNP_PCI0002

!DevNode 8eb7fa80 :

  DeviceInst is "PCI\VEN_8086&DEV_7111&SUBSYS_00000000&REV_01\3&267a616a&0&39"

  ServiceName is “intelide”

 

// Display the Parent device node

 

kd> !devnode 0x887eccc0

DevNode 0x887eccc0 for PDO 0x8e3fd1e0

  Parent 0x8869b860   Sibling 0x8eb0ce00   Child 0x8eb7fe30  

  InterfaceType 0x5  Bus Number 0

  InstancePath is "ACPI\PNP0A03"

  ServiceName is “pci”

  State = DeviceNodeStarted (0x308)

  Previous State = DeviceNodeEnumerateCompletion (0x30d)

  StateHistory[09] = DeviceNodeEnumerateCompletion (0x30d)

  StateHistory[08] = DeviceNodeEnumeratePending (0x30c)

  StateHistory[07] = DeviceNodeStarted (0x308)

  StateHistory[06] = DeviceNodeStartPostWork (0x307)

  StateHistory[05] = DeviceNodeStartCompletion (0x306)

  StateHistory[04] = DeviceNodeStartPending (0x305)

  StateHistory[03] = DeviceNodeResourcesAssigned (0x304)

  StateHistory[02] = DeviceNodeDriversAdded (0x303)

  StateHistory[01] = DeviceNodeInitialized (0x302)

  StateHistory[00] = DeviceNodeUninitialized (0x301)

  StateHistory[19] = Unknown State (0x0)

  StateHistory[18] = Unknown State (0x0)

  StateHistory[17] = Unknown State (0x0)

  StateHistory[16] = Unknown State (0x0)

  StateHistory[15] = Unknown State (0x0)

  StateHistory[14] = Unknown State (0x0)

  StateHistory[13] = Unknown State (0x0)

  StateHistory[12] = Unknown State (0x0)

  StateHistory[11] = Unknown State (0x0)

  StateHistory[10] = Unknown State (0x0)

  Flags (0x6c0000f0)  DNF_ENUMERATED, DNF_IDS_QUERIED,

                      DNF_HAS_BOOT_CONFIG, DNF_BOOT_CONFIG_RESERVED,

                      DNF_NO_LOWER_DEVICE_FILTERS, DNF_NO_LOWER_CLASS_FILTERS,

                      DNF_NO_UPPER_DEVICE_FILTERS, DNF_NO_UPPER_CLASS_FILTERS

  UserFlags (0x00000008)  DNUF_NOT_DISABLEABLE

  CapabilityFlags (0x000000c0)  UniqueID, SilentInstall

  DisableableDepends = 4 (including self)

 

// Display the device stack of the PDO. We have reached the PCI bus.

 

kd> !devstack 0x8e3fd1e0

  !DevObj   !DrvObj            !DevExt   ObjectName

  8ebb9020  \Driver\pci        8ebb90d8 

> 8e3fd1e0  \Driver\ACPI       8eb0bc10  0000000f

!DevNode 887eccc0 :

  DeviceInst is "ACPI\PNP0A03"

  ServiceName is “pci”

 

 

By doing the above, you can sometimes isolate differences in driver configuration between a healthy environment (for example, a cleanly installed OS) and the problematic one (the environment where the issue occurs). I hope this helps with your troubleshooting.


Sample code using the Microsoft Graph – OneDrive API (C#)


Hello, this is Kengo Mori (kenmori) from Office Developer Support.

In this post, I walk through what it is like to actually develop against the Microsoft Graph – OneDrive API in C#.

This is written as a walkthrough, so even readers who are not familiar with the API should be able to gain hands-on development experience and understanding by working through it from start to finish. Rather than aiming at a realistic implementation scenario, this post keeps the code as simple as possible so that the OneDrive API itself is easy to understand.

Exception handling and the like are not included, so when you write real code, please treat this sample only as a reference.

 

Prerequisites

Following the earlier post, complete the registration of your application in Azure AD. At a minimum, the following two delegated permissions are required.

・Have full access to all files user can access
・Sign users in

Then make a note of the client ID and the redirect URI.

 

Development steps

1. Start Visual Studio and create a Windows Forms application.
2. In Solution Explorer, right-click [References] and click [Manage NuGet Packages].

odapi1

3. Search for ADAL and install Microsoft.IdentityModel.Clients.ActiveDirectory.

odapi2

4. Click [OK], and then click [I Accept].
5. Next, in the same way, search for Newtonsoft and install Newtonsoft.Json.
6. Next, design the form.

odapi3

Control list

  • OneDriveTestForm (Form)
  • fileListLB (ListBox)
  • fileNameTB (TextBox)
  • uploadBtn (Button)
  • renameBtn (Button)
  • deleteBtn (Button)
  • openFileDialog1 (OpenFileDialog)

 

7. Right-click the project and click [Add] – [New Item].
8. Add MyFile.cs.
9. Write definitions like the following (used for JSON conversion).
 * Definitions that are not used in this sample are left in as well. Try modifying the code to read or change that data.

using Newtonsoft.Json;
using System.Collections.Generic;

namespace OneDriveDemo
{
    public class MyFile
    {
        public string name { get; set; }
        // The following properties are not used this time, but it is worth inspecting their values while debugging.
        public string webUrl { get; set; }
        public string createdDateTime { get; set; }
        public string lastModifiedDateTime { get; set; }
    }

    public class MyFiles
    {
        public List<MyFile> value;
    }

    // Used when moving a file.
    public class MyParentFolder
    {
        public string path { get; set; }
    }

    public class MyFileModify
    {
        public string name { get; set; }
        // Used when moving a file.
        public MyParentFolder parentReference { get; set; }
    }
}

Note:
If you want to give an object property a different name when converting JSON, you can do so by specifying the JsonProperty attribute as shown below.

[JsonProperty("name")]
public string FileName { get; set; }

10. Move to the form's code.
11. Add the following using directives.

using Microsoft.IdentityModel.Clients.ActiveDirectory;
using Newtonsoft.Json;
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;

12. Add the following member variables to the form.
 Use the clientid and redirecturi that you registered in Azure AD beforehand.

        const string resource = "https://graph.microsoft.com";
        const string clientid = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx";
        const string redirecturi = "urn:getaccesstokenfordebug";
        // Uncomment to test a user from a tenant other than the SSO domain in an ADFS environment
        // const string loginname = "admin@tenant.onmicrosoft.com";

        string AccessToken;

13. Double-click the form in the form designer and implement the Load event.

        private async void Form1_Load(object sender, EventArgs e)
        {
            AccessToken = await GetAccessToken(resource, clientid, redirecturi);
            DisplayFiles();
        }

        // Get an access token
        private async Task<string> GetAccessToken(string resource, string clientid, string redirecturi)
        {
            AuthenticationContext authenticationContext = new AuthenticationContext("https://login.microsoftonline.com/common");
            AuthenticationResult authenticationResult = await authenticationContext.AcquireTokenAsync(
                resource,
                clientid,
                new Uri(redirecturi),
                new PlatformParameters(PromptBehavior.Auto, null)
                // Uncomment to test a user from a tenant other than the SSO domain in an ADFS environment
                //, new UserIdentifier(loginname, UserIdentifierType.RequiredDisplayableId)
                );
            return authenticationResult.AccessToken;
        }

        // Display the file list
        private async void DisplayFiles()
        {
            using (HttpClient httpClient = new HttpClient())
            {
                httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", AccessToken);
                HttpRequestMessage request = new HttpRequestMessage(
                    HttpMethod.Get,
                    new Uri("https://graph.microsoft.com/v1.0/me/drive/root/children?$select=name,weburl,createdDateTime,lastModifiedDateTime")
                );
                var response = await httpClient.SendAsync(request);
                MyFiles files = JsonConvert.DeserializeObject<MyFiles>(response.Content.ReadAsStringAsync().Result);

                fileListLB.Items.Clear();
                foreach (MyFile file in files.value)
                {
                    fileListLB.Items.Add(file.name);
                }
            }
            if (!string.IsNullOrEmpty(fileNameTB.Text))
            {
                fileListLB.SelectedItem = fileNameTB.Text;
            }
        }

14. Double-click the SelectedIndexChanged event of fileListLB and implement the handler.

        // Sync the file selected in the list box to the text box
        private void fileListLB_SelectedIndexChanged(object sender, EventArgs e)
        {
            fileNameTB.Text = ((ListBox)sender).SelectedItem.ToString();
        }

15. Double-click uploadBtn and implement the button click event.
* Note that there is a limit on the file size that can be uploaded with this method.

        // Upload a file
        private async void uploadBtn_Click(object sender, EventArgs e)
        {
            if (openFileDialog1.ShowDialog() == DialogResult.OK)
            {
                fileNameTB.Text = openFileDialog1.FileName.Substring(openFileDialog1.FileName.LastIndexOf("\\") + 1);
                using (HttpClient httpClient = new HttpClient())
                {
                    httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", AccessToken);
                    httpClient.DefaultRequestHeaders.TryAddWithoutValidation("Content-Type", "octet-stream");
                    HttpRequestMessage request = new HttpRequestMessage(
                        HttpMethod.Put,
                        new Uri(string.Format("https://graph.microsoft.com/v1.0/me/drive/root:/{0}:/content", fileNameTB.Text))
                    );
                    request.Content = new ByteArrayContent(ReadFileContent(openFileDialog1.FileName));
                    var response = await httpClient.SendAsync(request);
                    MessageBox.Show(response.StatusCode.ToString());
                }
                DisplayFiles();
            }
        }

        // Read the local file
        private byte[] ReadFileContent(string filePath)
        {
            using (FileStream inStrm = new FileStream(filePath, FileMode.Open))
            {
                byte[] buf = new byte[2048];
                using (MemoryStream memoryStream = new MemoryStream())
                {
                    int readBytes = inStrm.Read(buf, 0, buf.Length);
                    while (readBytes > 0)
                    {
                        memoryStream.Write(buf, 0, readBytes);
                        readBytes = inStrm.Read(buf, 0, buf.Length);
                    }
                    return memoryStream.ToArray();
                }
            }
        }

16. Double-click renameBtn and implement the click event.

        // Rename a file
        private async void renameBtn_Click(object sender, EventArgs e)
        {
            foreach (string fileLeafRef in fileListLB.SelectedItems)
            {
                using (HttpClient httpClient = new HttpClient())
                {
                    httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", AccessToken);
                    httpClient.DefaultRequestHeaders.TryAddWithoutValidation("Content-Type", "application/json");

                    HttpRequestMessage request = new HttpRequestMessage(
                        new HttpMethod("PATCH"),
                        new Uri(string.Format("https://graph.microsoft.com/v1.0/me/drive/root:/{0}", fileLeafRef))
                    );

                    MyFileModify filemod = new MyFileModify();
                    filemod.name = fileNameTB.Text;
                    request.Content = new StringContent(JsonConvert.SerializeObject(filemod), Encoding.UTF8, "application/json");

                    var response = await httpClient.SendAsync(request);
                    MessageBox.Show(response.StatusCode.ToString());
                }
            }
            DisplayFiles();
        }

17. Double-click deleteBtn and implement the click event.

        // Delete files
        private async void deleteBtn_Click(object sender, EventArgs e)
        {
            foreach (string fileLeafRef in fileListLB.SelectedItems)
            {
                using (HttpClient httpClient = new HttpClient())
                {
                    httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", AccessToken);

                    HttpRequestMessage request = new HttpRequestMessage(
                        HttpMethod.Delete,
                        new Uri(string.Format("https://graph.microsoft.com/v1.0/me/drive/root:/{0}", fileLeafRef))
                    );
                    var response = await httpClient.SendAsync(request);
                    MessageBox.Show(response.StatusCode.ToString());
                }
            }
            fileNameTB.Text = "";
            DisplayFiles();
        }

18. Build the solution and check how it works.

 A sign-in screen is displayed first, and an access token is acquired for the signed-in user.
* When the dialog is shown for a directory-synchronized domain, the user name/password prompt may be skipped and you may be signed in automatically.

odapi4

A list of the files in the root directory of that user's OneDrive for Business is displayed.

odapi5

Try operations such as uploading, renaming, and deleting files in that folder.

 

References

The following is a reference site for the OneDrive API in Microsoft Graph.

Title : OneDrive API Documentation in Microsoft Graph
Address : https://graph.microsoft.io/en-us/docs/api-reference/v1.0/resources/drive

For the OneDrive API reference, see the following page and try the various methods that the OneDrive API provides.

Title : Develop with the OneDrive API
Address : https://dev.onedrive.com/README.htm#

At this time, there are differences between the consumer version of OneDrive and OneDrive for Business in what can be implemented with the OneDrive API. See the following for details.

Title : Release notes for using OneDrive API with OneDrive for Business and SharePoint
Address : https://dev.onedrive.com/sharepoint/release-notes.htm

The following is a Channel 9 video introducing the OneDrive API.

Title : Office Dev Show – Episode 38 – OneDrive APIs in the Microsoft Graph
Address : https://channel9.msdn.com/Shows/Office-Dev-Show/Office-Dev-Show-Episode-38-OneDrive-APIs-in-the-Microsoft-Graph

For documentation on Json.NET, see the following.

Title : Json.NET Documentation
Address : http://www.newtonsoft.com/json/help/html/Introduction.htm

Title : Serializing and Deserializing JSON
Address : http://www.newtonsoft.com/json/help/html/SerializingJSON.htm

As described in another post, to reduce development effort, I recommend using Graph Explorer, Fiddler, Postman, or similar tools to establish the REST calls you will use before developing the application. For debugging techniques, see the following.

Title : Useful tools for development with Microsoft Graph
Address : https://blogs.msdn.microsoft.com/office_client_development_support_blog/2016/12/13/tools-for-development-with-microsoft-graph/
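
As a small supplement to those tools, the file-listing request used in DisplayFiles above can also be tried from PowerShell before writing any C#. This is only a rough sketch; acquiring the access token itself (assumed here to be in $accessToken) is out of scope.

$headers = @{ Authorization = "Bearer $accessToken" }
$uri = 'https://graph.microsoft.com/v1.0/me/drive/root/children?$select=name,webUrl,createdDateTime,lastModifiedDateTime'
$response = Invoke-RestMethod -Uri $uri -Headers $headers -Method Get
$response.value | Select-Object name, webUrl, lastModifiedDateTime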

That's all for this post.

 

 

Azure AD – How to register your own SAML-based application using new Azure Portal


With the new Azure Portal (https://portal.azure.com/), Azure AD provides very flexible SAML-based configuration, but some folks ask me where to do that.
In this post, I show you the answer to this question using a bit of SAML-based federation sample code in PHP and Node.js.

Note : For the settings using Azure Classic Portal (Azure Management Portal), see my previous posts “Azure AD Web SSO with PHP (Japanese)” and “Azure AD Web SSO with Node.js (Japanese)“.

Settings with Azure Portal

First of all, I'll show you how the SAML settings page is improved in the new Azure Portal.

When you want to register your own SAML-based application, select "Azure Active Directory" in the Azure Portal, click the "Enterprise applications" menu, and push the "add" button.
You can select a lot of pre-defined (registered) applications (like Salesforce, Google, etc.), but here you click the "add your own" link at the top of this page.

In the next screen, select "Deploying an existing application" in the drop-down and input your app name.

http://i1155.photobucket.com/albums/p551/tsmatsuz/20170101_Set_AppName_zpsojgxd9e4.jpg

After you’ve added your application, select “Single sign-on” menu in your app settings page and select “SAML-based Sign-on” in “Mode” drop-down menu. (see the following screenshot)
By these steps, you can configure several SAML settings in this page.

First, you must specify your application identifier (which is used as entityID in SAML negotiation), and your app’s reply url. (Here we set “mytestapp” as identifier. We use this identifier in the following custom federation applications.)
You can also specify the relay state in this section.

In the next "attributes" section, you can set the value of the user identifier (which is returned as NameID by Azure AD in the SAML negotiation), and you can also select the claims that should be returned.
When you were using the Azure Classic Portal (https://manage.windowsazure.com/), you could not specify this value, and Azure AD always returned the original pairwise identifier as NameID. Applications that need the e-mail-format user principal name as NameID used to have trouble federating with Azure AD, but with the new Azure Portal settings we no longer have this kind of trouble.

In the next "certificate" section, you can create the certificate and make the rollover certificate active. Here we create this certificate and make it active for the following custom code.

Custom code by PHP (simpleSAMLphp)

Now let’s start to create the code and federate with Azure AD.
First we use PHP, and here we use simpleSAMLphp for the SAML federation.

You first install IIS and PHP on your dev machine, and make sure that the following extensions are enabled in the PHP.ini file.

extension=php_openssl.dll
extension=php_ldap.dll

Next you download simpleSAMLphp (see here), and publish the {simplesamlphp install location}/www folder using IIS Manager.

Remember that the page is redirected to https://{published simpleSAMLphp site}/module.php/saml/sp/saml2-acs.php/default-sp when the user has successfully logged in to Azure AD with SAML federation. You must therefore set this url as the "Reply URL" in your app settings in the Azure Portal. (see the following screenshot)

Open {simplesamlphp location}/config/config.php and change "baseurlpath" to the url you published above. Moreover, you must change "auth.adminpassword" to your favorite password. (The default password value is "123".)

<?php
$config = array (
  . . .

  'baseurlpath'           => 'simplesaml/',
  'certdir'               => 'cert/',
  'loggingdir'            => 'log/',
  'datadir'               => 'data/',
  . . .

  /**
   * This password must be kept secret, and modified from the default value 123.
   * This password will give access to the installation page of simpleSAMLphp with
   * metadata listing and diagnostics pages.
   * You can also put a hash here; run "bin/pwgen.php" to generate one.
   */
  'auth.adminpassword'    => 'test',
  'admin.protectindexpage'  => false,
  'admin.protectmetadata'    => false,
  . . .

Edit {simplesamlphp location}/config/authsources.php and make sure to change entityID to the application identifier set above.

$config = array(
  . . .

  'default-sp' => array(
    'saml:SP',

    'entityID' => 'mytestapp',

    'idp' => NULL,

    'discoURL' => NULL
  ),
  . . .

Next you set the federation information using simpleSAMLphp UI, and you must copy the setting information in Azure Portal beforehand.
First you must click “Configure {your app name}” in your app single sign-on settings page in Azure Portal.

In the configuration page, click the "SAML XML Metadata" link (see the following screenshot), and the metadata file is downloaded to your local machine. Please copy the content (text) of the downloaded file.
Note that this string content includes the digital signature by the certificate. For this reason, you should never change this text, not even a space character.

http://i1155.photobucket.com/albums/p551/tsmatsuz/20170101_Download_Metadata_zpsv7qaawcz.jpg

Next you go to the simpleSAMLphp www site (in this example, https://localhost/simplesaml/index.php) using your web browser.
In the simpleSAMLphp settings page, click the "Federation" tab and the "Login as administrator" link. When the login screen is prompted, enter "admin" as the user id and the password you specified above.

http://i1155.photobucket.com/albums/p551/tsmatsuz/20170101_SimpleSaml_Login_zpsx2lvg0sk.jpg

After logging in, click the "XML to simpleSAMLphp metadata converter" link in the page (see the above screenshot), and the following metadata parser page is displayed.
Please paste the metadata you copied previously into this textbox and push the "Parse" button. The converted metadata settings (written in PHP) are then displayed at the bottom of this page. (See the following screenshot.)
Copy this PHP code, and paste it into {simplesamlphp location}/metadata/saml20-idp-remote.php.

<?php
...

$metadata['https://sts.windows.net/16d103a1-a264-4d36-9b52-51fa01ce5c2e/'] = array (
  'entityid' => 'https://sts.windows.net/16d103a1-a264-4d36-9b52-51fa01ce5c2e/',
  'contacts' =>
  array (
  ),
  'metadata-set' => 'saml20-idp-remote',
  'SingleSignOnService' =>
  array (
    0 =>
    array (
      'Binding' => 'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect',
      'Location' => 'https://login.windows.net/16d103a1-a264-4d36-9b52-51fa01ce5c2e/saml2',
    ),
    1 =>
    array (
      'Binding' => 'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST',
      'Location' => 'https://login.windows.net/16d103a1-a264-4d36-9b52-51fa01ce5c2e/saml2',
    ),
  ),
  'SingleLogoutService' =>
  array (
    0 =>
    array (
      'Binding' => 'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect',
      'Location' => 'https://login.windows.net/16d103a1-a264-4d36-9b52-51fa01ce5c2e/saml2',
    ),
  ),
  'ArtifactResolutionService' =>
  array (
  ),
  'keys' =>
  array (
    0 =>
    array (
      'encryption' => false,
      'signing' => true,
      'type' => 'X509Certificate',
      'X509Certificate' => 'MIIC8DC...',
    ),
  ),
);

Note : Conversely, if you want to set the SAML federation SP (service provider) metadata (which includes the value of SingleLogoutService, etc.) into Azure AD, you can get this XML from simpleSAMLphp and set it in Azure AD using the application manifest in the Azure AD settings.

The simpleSAMLphp settings are now all done!

Let's create your own PHP (.php) code with simpleSAMLphp like the following. This sample code just shows all the claims returned by Azure AD.

<?php
  require_once("../simplesamlphp-1.11.0/lib/_autoload.php");
  $as = new SimpleSAML_Auth_Simple('default-sp');
  $as->requireAuth();
  $attributes = $as->getAttributes();
?>
<div style="font-weight: bold;">Hello, PHP World</div>
<table border="1">
<?php  foreach ($attributes as $key => $value): ?>
  <tr>
    <td><?=$key;?></td>
    <td><?=$value[0];?></td>
  </tr>
<?php endforeach;?>
</table>

Let's see how it works.
If you access this PHP page with your web browser, the page is redirected to the idp selector. In this page, please select the metadata of Azure AD and push the "Select" button.

The page is then redirected to the Azure AD login (sign-in) page. Please input your login id and password.

When the login succeeds, the returned claims are shown in your custom PHP page as follows.

Custom code by Node.js (express, passport)

When you use Node.js, the concept is the same as before. You can just use your favorite SAML library with your custom code, and configure the library with the registered Azure AD app settings.
Here we use the well-known passport module with the express framework in Node.js.

First, install the express framework and the express command-line generator.

npm install express -g
npm install -g express-generator

Create the project directory and provision the express project with the "express" command as follows. (The files and folders of the template project are deployed, and all related packages are installed.)
After that, you can start and view the express project with your web browser. (Run "npm start" and access the site with your web browser.)

mkdir sample01
express -e sample01
cd sample01
npm install

Install passport and related modules with the following commands.

npm install express-session
npm install passport
npm install passport-saml

Open and edit app.js (the start-up js file for this express framework) and add the following code (the additions were shown in bold in the original post).
I explain this code later.

var express = require('express');
var path = require('path');
var favicon = require('serve-favicon');
var logger = require('morgan');
var cookieParser = require('cookie-parser');
var bodyParser = require('body-parser');
var passport = require('passport');
var session = require('express-session');
var fs = require('fs');

var SamlStrategy = require('passport-saml').Strategy;
passport.serializeUser(function (user, done) {
  done(null, user);
});
passport.deserializeUser(function (user, done) {
  done(null, user);
});
passport.use(new SamlStrategy(
  {
    path: '/login/callback',
    entryPoint: 'https://login.windows.net/16d103a1-a264-4d36-9b52-51fa01ce5c2e/saml2',
    issuer: 'mytestapp',
    cert: fs.readFileSync('MyTestApp.cer', 'utf-8'),
    signatureAlgorithm: 'sha256'
  },
  function(profile, done) {
    return done(null,
    {
      id: profile['nameID'],
      email: profile['http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress'],
      displayName: profile['http://schemas.microsoft.com/identity/claims/displayname'],
      firstName: profile['http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname'],
      lastName: profile['http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname']
    });
  })
);

var index = require('./routes/index');
var users = require('./routes/users');

var app = express();

// view engine setup
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'ejs');

// uncomment after placing your favicon in /public
//app.use(favicon(path.join(__dirname, 'public', 'favicon.ico')));
app.use(logger('dev'));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));
app.use(cookieParser());
app.use(session(
  {
    resave: true,
    saveUninitialized: true,
    secret: 'this shit hits'
  }));
app.use(passport.initialize());
app.use(passport.session());
app.use(express.static(path.join(__dirname, 'public')));

app.use('/', index);
app.use('/users', users);

app.get('/login',
  passport.authenticate('saml', {
    successRedirect: '/',
    failureRedirect: '/login' })
  );
app.post('/login/callback',
  passport.authenticate('saml', {
    failureRedirect: '/',
    failureFlash: true }),
  function(req, res) {
    res.redirect('/');
  }
);

// catch 404 and forward to error handler
app.use(function(req, res, next) {
  var err = new Error('Not Found');
  err.status = 404;
  next(err);
});

// error handler
app.use(function(err, req, res, next) {
  // set locals, only providing error in development
  res.locals.message = err.message;
  res.locals.error = req.app.get('env') === 'development' ? err : {};

  // render the error page
  res.status(err.status || 500);
  res.render('error');
});

module.exports = app;

I explain about this sample code :

  • When SAML authentication and idp redirection are needed, the entryPoint url (here, https://login.windows.net/16d103a1-a264-4d36-9b52-51fa01ce5c2e/saml2) is used. Please copy this value from your app configuration page in Azure AD and paste it in. (see the following screenshot)
  • Please see the routing code, app.get('/login', ...);
    When the user goes to /login in the web browser, the SAML flow proceeds and the user is redirected to the entryPoint url.
  • The path /login/callback is the reply url. When the authentication succeeds at the identity provider (Azure AD), the result (SAML response) is returned to this url and the claims (here nameID, emailaddress, displayname, givenname, and surname) are parsed. (see return done(null, { id: ..., email: ..., displayName:..., ... }); in the above code.)
    After that, the page is redirected to /. (Please see the routing code, app.post('/login/callback', ...);)
    Thus, please set this url as the reply url in the Azure AD app settings beforehand.
  • Please copy the X509 cert from the Azure AD app settings or download the cert (see the following screenshot), and set this cert in the passport-saml strategy.
    If you set this cert, the passport-saml module validates the incoming SAML response. (passport-saml checks that the response has not been altered by malicious code.)

Finally, let's get the returned claims and show these values in your page.
Here we edit routes/index.js and modify it as follows. We retrieve the user's displayName and email address and pass them to the view page.

var express = require('express');
var router = express.Router();

router.get('/', function(req, res, next) {
  if(req.isAuthenticated())
    res.render('index', { username: req.user.displayName, mail: req.user.email });
  else
    res.render('index', { username: null});
});

module.exports = router;

Edit views/index.ejs (the view page of the previous index.js) and modify it as follows.

<!DOCTYPE html>
<html>
  <head>
    <title>SAML Test</title>
    <link rel='stylesheet' href='/stylesheets/style.css' />
  </head>
  <body>
    <% if (!username) { %>
    <h2>Not logged in...</h2>
    <% } else { %>
    <h2>Hello, <%= username %>.</h2>
    (your e-mail is <%=mail %>)
    <% } %>
  </body>
</html>

Your programming is finished!

Note : Your app must be hosted over https, so please configure it to use https. (I don't describe those steps here.)

Please start your app using the following command.

npm start

When you access /login using your web browser, the page is redirected and the Azure AD sign-in page is displayed.

When your login succeeds, your display name and email are displayed in the top page (index.ejs) as follows.

 

If you're an ISV, you can submit your own custom app (federated with Azure AD) to the Azure AD gallery. Everyone can then start using your app (ISV app) federated with Azure AD with a few clicks!

Performing Application Upgrades on Azure VM Scale Sets


Virtual Machine Scale Sets (VMSS) are an awesome solution available on Azure, providing autoscale capabilities and elasticity for your application. I was recently working with a customer who was interested in leveraging VMSS on their web tier and one of the points we focused on was how to do an upgrade of an application running on a VMSS and what the capabilities were in this regard. In this post, we’ll walk through one method of upgrading an ASP.NET application that is running on a VMSS.

For the creation and upgrade of the scale set, we’ll utilize an Azure Resource Manager (ARM) template from the Azure Quickstart GitHub repository. If you’re not familiar with the Quickstart repo, it’s a large and ever-growing repo of ARM templates that can be used to deploy almost every service on Azure. Check it out further when you have some time! The ARM template for this exercise can be found at https://github.com/Azure/azure-quickstart-templates/tree/master/201-vmss-windows-webapp-dsc-autoscale.

Create VMSS via portal and PowerShell

To start, let’s create the VMSS. The application we’ll be deploying will be a very simple ASP.NET web application, essentially the default app when you create a new ASP.NET project with a minor text modification to display a version number so we can validate the upgrade itself. Super simple, but can easily be extended to more complex and complete applications. There are two ways to kick off the VMSS ARM template deployment; through the Azure portal and through PowerShell. We’ll go through both methods.

Deployment via Azure Portal

Kicking off the deployment through the Azure Portal is easy, as you can use the “Deploy To Azure” link in the Quickstart repo.
deploy-to-azure-button
Click the link and that will open the portal and land you in the template deployment dialog for this template, should look something like this:

portal-deploy-1

From there, you’ll want to fill out the various fields. Most of them are self explanatory, but I’ll call out a couple of items. The _artifacts Location parameter is the base location for the artifacts we’ll be using (ASP.NET WebDeploy package and DSC scripts) which points us to the raw storage in the Quickstart repo. In this case we can leave the _artifacts Location Sas Token blank as this is only needed if you need to provide a SAS token, and all of the artifacts here are publicly available, no token needed. We will then specify the rest of the path to each artifact in the Powershelldsc Zip and Web Deploy Package parameters. The Powershelldsc Update Tag Version parameter will be used in the upgrade, so hold tight and I’ll go through that shortly. For this deployment you’ll want to enter or select a resource group, provide a name for the VMSS and enter a password. The rest of the values can be left at their defaults unless you want to change them.

Click purchase when you’re ready to go and wait for the deployment to complete, which may take 30 minutes or so. Once complete you can validate that everything is working by pulling up the web page the VMSS is hosting. To get this, pull up your VMSS in the portal and you’ll see the public IP address. The web page can be found at http://x.x.x.x/MyApp replacing x.x.x.x with your public IP. Pull up that page and you should see the home page indicating you’re running version 1.0.

web-site-1

Deployment via PowerShell

For deployment via PowerShell, you’ll need two files locally for the deployment. You’ll need the ARM template and the parameter file. Save these to a local directory, in this case we’ll use C:VMSSDeployment.

Open the parameter file and we’ll want to make a few updates. Update the vmssName parameter to be the name you want for your VMSS (3-61 characters and globally unique across Azure). Next, update the adminPassword parameter with the password of your choice. Finally, update the _artifactsLocationSasToken parameter to "", empty quotes (the null value is part of the Quickstart repo requirements). Save and exit this file.

Now we’re ready to kick off deployment. I’ve simplified this and am leaving out some error checking and pre-flight validation. If you want more details on how to ensure you properly handle errors there is a great blog post from Scott Seyman that walks you through all these details. In our case, we’ll create a new resource group and then kick off the ARM template deployment into that resource group. Open a PowerShell session, log in to your Azure account and run the following commands.

$resourceGroupName = "VMSSDeployment"
$location = "West Central US"
New-AzureRmResourceGroup -Name $resourceGroupName -Location $location

$templateFilePath = "C:VMSSDeploymentazuredeploy.json"
$parametersFilePath = "C:VMSSDeploymentazuredeploy.parameters.json"
New-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateFile $templateFilePath -TemplateParameterFile $parametersFilePath -Verbose

Once the template has deployed successfully you’ll get a response back with the details on the deployment.

DeploymentName          : azuredeploy
ResourceGroupName       : VMSSDeployment
ProvisioningState       : Succeeded
Timestamp               : 12/29/2016 7:08:43 PM
Mode                    : Incremental
TemplateLink            :
Parameters              :
                          Name             Type                       Value
                          ===============  =========================  ==========
                          vmSku            String                     Standard_A1
                          windowsOSVersion  String                     2016-Datacenter
                          vmssName         String                     vmssjb2
                          instanceCount    Int                        3
                          adminUsername    String                     vmssadmin
                          adminPassword    SecureString
                          _artifactsLocation  String                     https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/201-vmss-windows-webapp-dsc-autoscale
                          _artifactsLocationSasToken  SecureString
                          powershelldscZip  String                     /DSC/IISInstall.ps1.zip
                          webDeployPackage  String                     /WebDeploy/DefaultASPWebApp.v1.0.zip
                          powershelldscUpdateTagVersion  String                     1.0

Outputs                 :
DeploymentDebugLogLevel :

To validate the web site let’s get the public IP address.

Get-AzureRmPublicIpAddress -ResourceGroupName VMSSDeployment

Now you can plug in your IP address (http://x.x.x.x/MyApp) and confirm that the page comes up successfully.

Upgrade the application

So now we've got a web site running version 1.0, but we want to upgrade it to the newly released version 2.0. Let's go through the process to make this happen, in both the Azure Portal and through PowerShell.

Upgrade via Azure Portal

To kick off the deployment in the Azure Portal we’ll want to redeploy the template. In the portal, navigate to your resource group, select Deployments and click the Redeploy button. This will pop open the familiar custom template deployment dialog with some information pre-populated, we’ll make a few updates here to do the upgrade.

Update the Resource group parameter to use the existing resource group your VMSS currently resides in. Validate that the Vmss Name parameter is the same as you specified on the original deployment. These are both important so that the deployment is to the existing VMSS and not to a new VMSS in a new resource group. In the Admin Password parameter enter the same password you originally entered. Now, to update the application we’ll change two additional parameters. Update the Web Deploy Package to reference /WebDeploy/DefaultASPWebApp.v2.0.zip, and update the Powershelldsc Update Tag Version to 2.0.
portal-upgrade-1

Once that’s all set, click Purchase to deploy the updated template. Since all of our resources already exist (storage accounts, load balancer, VMSS, etc.) the only updates will be to the VMs in the scale set. Once the deployment is complete, pull up your web page and you should see the newly deployed version 2.0.

web-site-2

Upgrade via PowerShell

The process to upgrade through PowerShell is equally easy. Pop open the parameters file you used on your original deployment. Update the webDeployPackage parameter to reference /WebDeploy/DefaultASPWebApp.v2.0.zip and set the powershelldscUpdateTagVersion to 2.0. Save and exit the file.
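If you'd rather script that edit than do it by hand, a sketch along these lines should work, assuming the standard parameters-file layout (a top-level "parameters" object with "value" properties, named as in the deployment output above):

$params = Get-Content $parametersFilePath -Raw | ConvertFrom-Json
$params.parameters.webDeployPackage.value = "/WebDeploy/DefaultASPWebApp.v2.0.zip"
$params.parameters.powershelldscUpdateTagVersion.value = "2.0"
$params | ConvertTo-Json -Depth 10 | Set-Content $parametersFilePath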
Next, re-run the command to deploy the template.

New-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateFile $templateFilePath -TemplateParameterFile $parametersFilePath -Verbose

Once finished, pull up your web site and validate that it’s now running version 2.0.

Under the hood

So how does this all work? Let’s go through several key pieces to this solution.

WebDeploy

We’re using a WebDeploy package to deploy the ASP.NET application on the servers. This gives us a nice, self-contained application package that makes it easy to deploy to one or more web servers. I won’t go into too much detail on this other than to say I essentially followed the steps referenced in this document. I saved this file locally and uploaded it to the repo to make it available in the deployment. In this case there are two versions with a slightly different configuration to illustrate the upgrade process as described, version 1.0 and version 2.0.

PowerShell DSC script

The servers themselves are configured with a PowerShell DSC script. This script installs IIS and all the necessary dependencies, installs WebDeploy and deploys the WebDeploy package that gets passed as a script parameter from the ARM template itself.
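The full IISInstall.ps1 lives in the quickstart repo; as a rough idea of its shape, a heavily trimmed sketch of such a configuration might look like this (the configuration name and parameters match what the template passes in below, but the resources shown are only illustrative — the real script also installs WebDeploy and deploys the package):

Configuration InstallIIS
{
    param(
        [string] $nodeName = "localhost",
        [string] $WebDeployPackagePath
    )
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node $nodeName
    {
        # IIS and the ASP.NET support.
        WindowsFeature WebServer
        {
            Ensure = "Present"
            Name   = "Web-Server"
        }
        WindowsFeature AspNet45
        {
            Ensure = "Present"
            Name   = "Web-Asp-Net45"
        }
        # The real script would also install WebDeploy and run msdeploy.exe
        # against the package downloaded from $WebDeployPackagePath.
    }
}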

You can use the Publish-AzureRmVmDscConfiguration cmdlet to create the zip file needed for the deployment. This can either create the file locally or upload it to Azure storage for you so it’s available in an Internet accessible location. In this case I created the file locally and uploaded it to the Quickstart repo.
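For example, to produce the zip locally (a sketch; the paths are just placeholders):

Publish-AzureRmVMDscConfiguration -ConfigurationPath .\IISInstall.ps1 -OutputArchivePath .\IISInstall.ps1.zip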

PowerShell DSC extension

The PowerShell DSC VM extension is used to run the aforementioned DSC script on each of the VMs as they are provisioned. We take the path for the WebDeploy package and pass that through as a parameter to the script so it knows where to get it from. The upgrade process is triggered when the forceUpdateTag parameter is updated. The DSC extension sees the different value and will re-run the extension on all the VMs. When we update the path to the WebDeploy package as part of the upgrade process, this pulls down the 2.0 version of the web site and deploys it.

"extensionProfile": {
            "extensions": [
              {
                "name": "Microsoft.Powershell.DSC",
                "properties": {
                  "publisher": "Microsoft.Powershell",
                  "type": "DSC",
                  "typeHandlerVersion": "2.9",
                  "autoUpgradeMinorVersion": true,
                  "forceUpdateTag": "[parameters('powershelldscUpdateTagVersion')]",
                  "settings": {
                    "configuration": {
                      "url": "[variables('powershelldscZipFullPath')]",
                      "script": "IISInstall.ps1",
                      "function": "InstallIIS"
                    },
                    "configurationArguments": {
                      "nodeName": "localhost",
                      "WebDeployPackagePath": "[variables('webDeployPackageFullPath')]"
                    }
                  }
                }
              }
            ]
          }

VMSS upgrade process

There are two modes of upgrades for VMSS, Automatic and Manual. Automatic will perform the upgrade across all VMs at the same time and may incur downtime. Manual gives the administrator the ability to roll out the upgrade one VM at a time, allowing you to minimize any possible downtime. In this case we’re using Automatic since we’re not actually redeploying the VMs, we’re just re-running the DSC script on each one to deploy a new application. You can read more about these options and how to perform a manual upgrade here. Do note that you may see the VMs scale depending on what you specify in the template and what the current running state is. These will scale back up or down based on your metrics once the deployment is complete.

"upgradePolicy": {
          "mode": "Automatic"
}

Wrap up

That’s about it. I hope this provided you with a good example of how to perform an upgrade of an application across a VMSS. Be sure to read through the referenced documentation and browse through the Quickstart repo for other ARM templates that can be used across Azure.

booting Windows from a VHD


The easiest way to have multiple Windows versions available on the same machine is to place some of them into VHDs, and then you can boot an OS directly from a VHD. The boot loader stays shared between all of them on the original C: drive (which might or might not have its own Windows too), just each VHD gets its own entry created in the Boot Configuration Database, and the OS can be selected through a menu at boot time. The drive letters will usually shift when you boot from a VHD: the VHD with the OS would be assigned the letter C:, and the other drives will move, although it’s possible to tell an image to use a different drive letter.

Before we go into the mechanics of it, an important note: the image in the VHD must be generalized. When Windows boots for the first time, it configures certain things, like the machine name, various initial values for the generation of random GUIDs, and some hardware configuration information, which are commonly known as the specialization. Duplicating specialized images is a bad idea, and might not work at all. The right way to do it is by either generating a fresh VHD that has never been booted yet or by taking a booted image and generalizing it with the Sysprep tool.
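For reference, the generalization is typically done by running Sysprep inside the booted image, something like:

C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown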

An easy way to add a VHD to the boot menu is to mount it on some drive, say E:, and run:

bcdboot e:\windows /d /addlast

Bcdboot will create an entry for it. Along the way it will make sure that the boot loader on the disk is at least as new as the image in the VHD, updating the boot loader from the VHD if necessary. An older boot loader might not be able to load a newer version of Windows, so this update is a good thing. The option /d says to keep the default boot OS rather than changing it to the new VHD, and /addlast tells it to add the new OS to the end of the list rather than to the front.

A caveat is that for bcdboot to work, the VHD must be mounted on a drive letter, not on a directory. If you try to do something like

bcdboot.exe e:\vhd\img1\mountdir\Windows /d

then bcdboot will create an incorrect path in the BCD entry that includes all the current mount path, and the VHD won’t boot.

By the way, if you use BitLocker on your machine, make sure to temporarily disable it before messing with the BCD modifications, or you’ll have to enter the full long decryption key on the next reboot. The disabling is done with the PowerShell command:

Suspend-BitLocker -MountPoint "C:" -RebootCount 1

This command temporarily disables the BitLocker until the next reboot, when it gets auto-enabled back. The reason for this is that normally this key gets stored in a TPM which requires the boot sequence signature to match the remembered one to divulge this information. Changing the boot loader code or data changes this signature. And no, apparently there is no way to generate this signature other than by actually booting the machine and remembering the result. So the magic suspension command copies the key to a place on disk, and on the next reboot puts the key back into TPM along with the new boot signature, removing the key from the disk.
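You can confirm the state before rebooting with Get-BitLockerVolume (part of the BitLocker module); a quick check might look like:

Get-BitLockerVolume -MountPoint "C:" | Select-Object MountPoint, VolumeStatus, ProtectionStatus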

Now about what goes on inside, and what else can be done. BCD contains a number of sections of various types. Two entry types are particularly important for this discussion: the Boot Manager and the Boot Loader. You can see them by running

bcdedit /enum

There is one Boot Manager section. Boot manager is essentially the first part of the boot loader, and the selection of the image to boot happens there. And there is one Boot Loader section per each configured OS image, describing how to load that image.

The more interesting entries in the Boot Manager section are:

default is the GUID of the Boot Loader section that will be booted by default. On a booted system the value of default is usually shown as {current}. This is basically because bcdedit defines two symbolic GUIDs for convenience: {current} is the GUID of the Boot Loader section of the currently booted OS, {default} is the GUID of the Boot Loader section that is selected as default in the Boot Manager section. There also are some other pre-defined GUIDs used for specific section types, like {bootmgr} used for the Boot manager section.

By the way, be careful with the curly braces when calling bcdedit from PowerShell: PowerShell has its own syntactic meaning for them, so make sure to always put the strings that contain the curly braces into quotes.

displayorder is the list of Boot Loader section GUIDs for the boot menu.

timeout is the timeout in seconds before the default OS is booted automatically.
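For example, to tweak these from an elevated prompt (the GUID here is just a placeholder for one of the Boot Loader sections):

bcdedit /timeout 5
bcdedit /default "{01234567-89ab-cdef-0123-456789abcdef}"
bcdedit /displayorder "{01234567-89ab-cdef-0123-456789abcdef}" /addlast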

The Boot Loader section has quite a few interesting settings:

device and osdevice tell which disk contains the OS. They are usually set to the same value, although technically I think device is the disk that contains the Winloader (the last stage of the boot loader that then loads the kernel) while osdevice is the disk that contains the OS itself. Their values are formatted as “subtype=value”, like “partition=C:” to load the OS directly from a partition or “vhd=[locate]\vhds\img1.vhd” to boot from a VHD.  The partition names in this VHD string have multiple possible formats. “[locate]” means that the boot loader will automatically go through all the drives it finds and try to find a file at this path. A string like “[e:]” means the specific drive E: at the time when you run bcdedit. This is an important distinction, since when booting from different VHDs the drive mappings may be different (and very probably will be different at least between the VHDs and the OS on the main partition). In this format bcdedit finds and stores in its database the resolved partition ID, not the letter as such, so it can find the partition later no matter what letter it gets. If you run “bcdedit /enum” later when booted from a different VHD, the letter shown will match the mapping in that OS. And finally a string like “e:” means the partition that is seen as E: by the boot manager, which might be difficult to predict right, so it’s probably better left unused. For all I can tell, in the “partition=” specification the letter is always treated similarly to the “[e:]” format for VHD, i.e. the stored value is the resolved identity of the partition.

path is the path of Winloader (the last stage of the boot loader that loads the kernel) on the device. It comes in two varieties: winload.exe for the classically-partitioned disks with MBR and winload.efi for the machines with the UEFI BIOS that use the new GPT format of the partition table. If you use the wrong one, it won’t boot, so the best bet is to copy the type from another Boot Loader entry that is known to be right. The path to it might also come in two varieties: either “\WINDOWS\system32” or “\Windows\System32\Boot”. The first path is a legacy one that would still work on the Windows client or full server. The second one is the new one that would work on all the Windows versions, including the tiny ones like NanoServer, IOT or Phone.

description is the name shown in the boot menu.

systemroot is the location of the OS on the osdevice, usually just “WINDOWS”.

hypervisorlaunchtype enables Hyper-V; “Auto” is a good value for it.

bootmenupolicy selects how the menu for the OS selection is displayed. The value placed there by the usual Windows install is “standard”, which does the selection in the graphical mode and is quite slow and painful: basically, the graphical-mode selection is done in Winloader, so if you select a different OS, it has to get the different Winloader that matches that OS, which is done by remembering the selection somewhere on disk and then rebooting the machine, so that next time the right Winloader is picked. The much better value is “legacy”, which does the selection in the basic text mode directly in the Boot Manager, so the boot happens fast.

bootstatuspolicy can be set to “DisplayBootFailures” for the better diagnostics.

bootlog and sos enable some kinds of extra diagnostics when set to “yes”. I’m not sure where exactly this diagnostic output goes.

detecthal forces the re-enumeration of the available hardware on boot when set to “yes”. It doesn’t matter for the generalized images that would do this anyway. But it might help when moving a VHD with an OS from one physical machine to another.
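Putting a few of these together, configuring a new VHD entry by hand might look like this (a sketch of what the Add-BcdVhd script below automates; {NEW-GUID} stands for whatever GUID the /copy command prints, and the VHD path is a placeholder):

bcdedit /copy "{current}" /d "Image 1"
bcdedit /set "{NEW-GUID}" device "vhd=[locate]\vhds\img1.vhd"
bcdedit /set "{NEW-GUID}" osdevice "vhd=[locate]\vhds\img1.vhd"
bcdedit /set "{NEW-GUID}" hypervisorlaunchtype auto
bcdedit /set "{NEW-GUID}" bootmenupolicy Legacy
bcdedit /set "{NEW-GUID}" detecthal yes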

By the way, bcdedit has two ways of specifying the settings, one for the current section, another one for a specific section. For the current section it looks simply like:

bcdedit /set detecthal yes

For a specific section (identified by a GUID or one of the symbolic pseudo-GUIDs) this becomes:

bcdedit /set {SectionGuid} detecthal yes

You can also select the BCD store that bcdedit acts on. For an MBR machine the store is normally located in C:\Boot\BCD.  For an EFI machine the BCD store is located in a separate EFI System partition, under \EFI\Microsoft\Boot\BCD. If you’re really interested in looking at the System partition, you can mount it with Disk Manager or with PowerShell. There is a bit of a caveat with mounting the System partitions: it can’t be mounted to a path, only to a drive letter, and if you unmount it, that drive letter becomes lost until the next reboot. If you want to, say, look at the system partitions on a lot of VHDs, a better strategy is to change the partition type from System to Basic, mount it, do your thing, then unmount it and change the type back to System. This way you won’t leak the drive letters.
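Bcdedit can be pointed at another store directly with the /store option; for example, against a mounted EFI System partition (the drive letter is a placeholder):

bcdedit /store S:\EFI\Microsoft\Boot\BCD /enum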

Returning to the subject,  I’ve made a script that helps create the BCD entries for the VHDs at will. It uses my sed for PowerShell for parsing the output of bcdedit. The main function is Add-BcdVhd and is used like this:

Add-BcdVhd -Path C:\vhd\img1.vhd -Description "Image 1" -Reset

Here is the implementation:

$bindir = Split-Path -parent $PSCommandPath
Import-Module -Force -Scope Global "$bindir\TextProc.psm1"

$ErrorActionPreference = "Stop"

function Get-BootLoaderGuid
{
<#
.SYNOPSIS
Extracts the GUID of a Boot Loader entry from the output of
bcdedit /v or /enum. The entry is identified by its description or by its
device, or otherwise just the first entry.
#>
    param(
        ## The output from bcdedit /v.
        [string[]] $Text,
        ## Regexp pattern of the description used in the boot menu, to identify the section.
        [string] $DescMatch,
        ## Regexp pattern of the device used in this section.
        [string] $DevMatch
    )

    $script:cur_desc = $DescMatch
    $script:cur_dev = $DevMatch
   
    $Text | xsed -Select "START",{
        if ($_ -match "^Windows Boot Loader") {
            $script:found_desc = !$cur_desc
            $script:found_dev = !$cur_dev
            $script:ident = $null
            skip-textselect
        }
    },{
        if ($_ -match "^identifier ") {
            $script:ident = $_
        }

        if ($cur_desc -and $_ -match "^description ") {
            $d = $_ -replace "^description *", ""
            if ($d -match $cur_desc) {
                $script:found_desc = $true
            }
        }

        if ($cur_dev -and $_ -match "^device ") {
            $d = $_ -replace "^device *", ""
            if ($d -match $cur_dev) {
                $script:found_dev = $true
            }
        }

        if ($ident -and $found_desc -and $found_dev) {
            Set-OneLine $ident
            Skip-TextSelect "END"
        }

        if ($_ -match "^$") {
            Skip-TextSelect "START"
        }
    },"END" | % { $_ -replace "^.*({[^ ]+}).*", '$1' }
}
Export-ModuleMember -Function Get-BootLoaderGuid

function Add-BcdVhd
{
    <#
    .SYNOPSIS
    Add a new VHD image to the list of the bootable images.
    #>

    param(
        ## Path to the VHD image (can be relative, will be automatically
        ## converted to the absolute path without a drive letter).
        [Parameter(
            Mandatory = $true
        )]
        [string] $Path,
        ## The user-readable description that will be used in the boot menu.
        [Parameter(
            Mandatory = $true
        )]
        [string] $Description,
        ## Enable the debugging mode
        [switch] $BcdDebug,
        ## Enable the eventing mode
        [switch] $BcdEvent,
        ## For a fresh VHD that was never booted, there is no need to
        ## force the detection of HAL.
        [switch] $Fresh,
        ## Enable the boot diagnostic settings.
        [switch] $Diagnose,
        ## If the entry exists, delete it and create from scratch.
        [switch] $Reset
    )

    # Convert the path to absolute and drop the drive letter
    $Path = Split-Path -NoQualifier ((Get-Item $Path).FullName)

    # Escape the regexp characters
    $pathMatch = $Path -replace '([\[\]\\.()*+])', '\$1'
    $pathMatch = "^vhd=.*\]$pathMatch`$"

    $descMatch = $Description -replace '([\[\]\\.()*+])', '\$1'
    $descMatch = "^$descMatch`$"

    $bcd = @(bcdedit /enum)
    if (!$?) { throw "Bcdedit error: $bcd" }

    # Check if this section is already defined
    $guid_by_descr = Get-BootLoaderGuid -Text $bcd -DescMatch $descMatch
    $guid_by_path = Get-BootLoaderGuid -Text $bcd -DevMatch $pathMatch

    #Write-Host "DEBUG Path match: $pathMatch"
    #Write-Host "DEBUG Descr match: $descMatch"
    #Write-Host "$guid_by_descr by description, $guid_by_path by path"

    if ($guid_by_descr -ne $guid_by_path) {
        throw "Found conflicting definitions of existing sections: $guid_by_descr by descriprion, $guid_by_path by path"
    }

    $guid = $guid_by_descr

    if ($guid -and $Reset) {
        bcdedit /delete "$guid"
        if (!$?) { throw "Bcdedit error." }
        $guid = $null
    }

    if (!$guid) {
        Write-Host "Copying the current entry"
        $bcd = $(bcdedit /copy "{current}" /d $Description)
        if (!$?) { throw "Bcdedit error: $bcd" }
        $guid = $bcd -replace "^The entry was successfully copied to {(.*)}.*", '{$1}'
        if ($guid) {
            Write-Host "The new entry has GUID $guid"
        } else {
            throw "Bcdedit error: $bcd"
        }
    }

    $oldentry = @(bcdedit /enum $guid)
    if (!$?) { throw "Bcdedit error: $bcd" }

    bcdedit /set $guid device "vhd=[locate]$Path"
    if (!$?) { throw "Bcdedit error." }
    bcdedit /set $guid osdevice "vhd=[locate]$Path"
    if (!$?) { throw "Bcdedit error." }
    if (!$Fresh) {
        bcdedit /set $guid detecthal yes
        if (!$?) { throw "Bcdedit error." }
    }

    # Enable debugging.
    if ($BcdDebug) {
        bcdedit /set $guid debug yes
        if (!$?) { throw "Bcdedit error." }
        bcdedit /set $guid bootdebug yes
        if (!$?) { throw "Bcdedit error." }
    }
    if ($BcdEvent) {
        bcdedit /set $guid debug no
        if (!$?) { throw "Bcdedit error." }
        bcdedit /set $guid event yes
        if (!$?) { throw "Bcdedit error." }
    }
    bcdedit /set $guid inherit "{bootloadersettings}"
    if (!$?) { throw "Bcdedit error." }

    # enable Hyper-v start if it's installed
    bcdedit /set $guid hypervisorlaunchtype auto
    if (!$?) { throw "Bcdedit error." }

    # The more sane boot menu.
    bcdedit /set $guid bootmenupolicy Legacy
    if (!$?) { throw "Bcdedit error." }
    bcdedit /set $guid bootstatuspolicy DisplayBootFailures
    if (!$?) { throw "Bcdedit error." }

    # Other useful diagnostic settings
    if ($Diagnose) {
        bcdedit /set $guid bootlog yes
        if (!$?) { throw "Bcdedit error." }
        bcdedit /set $guid sos on
        if (!$?) { throw "Bcdedit error." }
    }

    # This is strictly needed only for CSS but doesn't hurt on other SKUs,
    # must use the path with "Boot", but preserve .exe vs .efi.
    $oldpath = $oldentry | ? { $_ -match "^path " } | % { $_ -replace "^path *","" }
    if (!$oldpath) {
        throw "The current BCD entry doesn't have a path value???"
    }
    $leaf = Split-Path -Leaf $oldpath

    bcdedit /set $guid path "\Windows\System32\Boot\$leaf"
    if (!$?) { throw "Bcdedit error." }

    # Print the settings after editing.
    bcdedit /enum $guid
}
Export-ModuleMember -Function Add-BcdVhd

expect in PowerShell


Like the other text tools I’ve published here, this one is not a full analog of the Unix tool. It does only the very basic thing that is sufficient in many cases. It reads the output from a background job, looking for patterns. This is a very typical thing if you want to instruct some system to do some action (through WMI or such) and then look at its responses or logs to make sure that the action was completed before starting a new one.

It’s used like this:

# Suppose that the job that will be sending the output $myjob has been somehow created.
$ebuf = New-ExpectBuffer $myjob $LogFile
$line = Wait-ExpectJob -Buf $ebuf -Pattern "some .* text"
...
Skip-ExpectJob -Buf $ebuf -WaitStop

New-ExpectBuffer creates an expect object. It takes the job to read from, and the file name to write the received data to (which can be later used to debug any unexpected issues). It can do a couple of other tricks too: if the job is $null, it will read the input from the file instead. The reading from the file is not very smart, the file is read just once. This is intended for testing new patterns on the results of a previous actual expect. The second trick is that this whole set of functions auto-detects and corrects the corruption from the Unicode mistreatment.

Wait-ExpectJob polls the output of the job until it either gets a line with the pattern, or a timeout expires, or the job exits. The timeout and polling frequency can be specified in the parameters. A gross simplification here is that unlike the real expect, only one job is polled at a time. It would be trivial to extend to multiple buffers and multiple patterns, it’s just that in reality so far I’ve needed only the very basic functionality. This function returns the line that contained the pattern, so that it can be examined further.

Skip-ExpectJob’s first purpose is to skip (but write into the log file) any input received so far. This allows you to skip over the repeated patterns in the output before sending the next request. This is not completely fool-proof but with judiciously used timeouts it is good enough. The second purpose for it is to wait for the job to exit, with the flag -WaitStop. In this use it just makes sure that by the time it returns, the job has exited and all its output has been logged. This use also has a timeout.
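For completeness, here is a sketch of a full round trip with these functions, using a toy background job (the command and log path are just placeholders; the functions rely on my ConvertFrom-Unicode text tool being loaded):

$myjob = Start-Job -ScriptBlock {
    "starting the action"
    Start-Sleep -Seconds 2
    "the action has completed"
}
$ebuf = New-ExpectBuffer $myjob "C:\temp\action.log"
$line = Wait-ExpectJob -Buf $ebuf -Pattern "has completed" -Timeout 30
Skip-ExpectJob -Buf $ebuf -WaitStop
Remove-Job $myjob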

That’s basically it, here is the implementation (relying on my other text tools):

function New-ExpectBuffer
{
<#
.SYNOPSIS
Create a buffer object (returned) that would keep the data for
expecting the patterns in the job's output.
#>
    param(
        ## The job object to receive from.
        ## May be $null, then the data will be read from the file.
        $Job,
        ## Log file name to append the received data to (with a job)
        ## or read the data from (without a job).
        [parameter(Mandatory=$true)]
        $LogFile,
        ## Treat the input as Unicode corrupted by PowerShell,
        ## don't try to auto-detect.
        [switch] $Unicode
    )
    $result = @{
        job = $Job;
        logfile = $LogFile;
        buf = New-Object System.Collections.Queue;
        detect = (!$Unicode);
    }
    if (!$Job) {
        $data = (Get-Content $LogFile | ConvertFrom-Unicode -AutoDetect:$result.detect)
        if ($data) {
            foreach ($val in $data) {
                $result.buf.enqueue($val)
            }
        }
    }
    $result
}

function Wait-ExpectJob
{
<#
.SYNOPSIS
Keep receiving output from a background job until it matches a pattern.
The output will be appended to the log file as it's received.
When a match is found, the line with it will be returned as the result.

The wait may be limited by a timeout. If the match is not received within
the timeout, throws an error (unless the option -Quiet is used, then
just returns).

If the job completes without matching the pattern, the reaction is the same
as on the timeout.
#>
    [CmdletBinding()]
    param(
        ## The buffer that keeps the job reference and the unmatched lines
        ## (as created with New-ExpectBuffer).
        [parameter(Mandatory=$true)]
        $Buf,
        ## Pattern (as for -match) to wait for.
        [parameter(Mandatory=$true)]
        $Pattern,
        ## Timeout, in fractional seconds. If $null, waits forever.
        [double] $Timeout = 10.,
        ## When the timeout expires, don't throw but just return nothing.
        [switch] $Quiet,
        ## Time in milliseconds for sleeping between the attempts.
        ## If the timeout is smaller than the step, the step will automatically
        ## be reduced to the size of timeout.
        [int] $StepMsec = 100
    )
   
    $deadline = $null
    if ($Timeout -ne $null) {
        $deadline = (Get-Date).ToFileTime();
        $deadline += [int64]($Timeout * (1000 * 1000 * 10))
    }

    while ($true) {
        while ($Buf.buf.Count -ne 0) {
            $val = $Buf.buf.Dequeue();
            if ($val -match $Pattern) {
                return $val
            }
        }
        if (!$Buf.job) {
            if ($Quiet) {
                return
            } else {
                throw "The pattern '$Pattern' was not found in the file '$($Buf.logfile)"
            }
        }
        $data = (Receive-Job $Buf.job | ConvertFrom-Unicode -AutoDetect:$Buf.detect)
        Write-Verbose "Job sent lines:`r`n$data"
        if ($data) {
            foreach ($val in $data) {
                $Buf.buf.enqueue($val)
            }
            # Write the output to file as it's received, not as it's matched,
            # for the easier diagnostics of things that get mismatched.
            $data | Add-Content $Buf.logfile
            continue
        }

        if (!($Buf.job.State -in ("Running", "Stopping"))) {
            if ($Quiet) {
                Write-Verbose "Job found stopped"
                return
            } else {
                throw "The pattern '$Pattern' was not received until the job exited"
            }
        }

        if ($deadline -ne $null) {
            $now = (Get-Date).ToFileTime();
            if ($now -ge $deadline) {
                if ($Quiet) {
                    Write-Verbose "Job reading deadline expired"
                    return
                } else {
                    throw "The pattern '$Pattern' was not received within $Timeout seconds"
                }
            }

            $sleepmsec = ($deadline - $now) / (1000 * 10)
            if ($sleepmsec -eq 0) { $sleepmsec = 1 }
            if ($sleepmsec -gt $StepMsec) { $sleepmsec = $StepMsec }
            Sleep -Milliseconds $sleepmsec
        }
    }
}

function Skip-ExpectJob
{
<#
.SYNOPSIS
Receive whatever output is available from a background job without any
pattern matching.

The output will be appended to the log file as it's received.

Optionally, may wait for the job completion first.
The wait may be limited by a timeout. If the match is not received within
the timeout, throws an error (unless the option -Quiet is used, then
just returns).
#>
    param(
        ## The buffer that keeps the job reference and the unmatched lines
        ## (as created with New-ExpectBuffer).
        [parameter(Mandatory=$true)]
        $Buf,
        ## Wait for the job to stop before skipping the output.
        ## This guarantees that all the job's output is written to the log file.
        [switch] $WaitStop,
        ## Timeout, seconds. If $null and requested to wait, waits forever.
        [int32] $Timeout = 10,
        ## When the timeout expires, don't throw but just return nothing.
        [switch] $Quiet
    )

    if ($WaitStop) {
        Wait-Job -Job $Buf.job -Timeout $Timeout
    }

    Receive-Job $Buf.job | ConvertFrom-Unicode -AutoDetect:$Buf.detect | Add-Content $Buf.logfile

    if ($WaitStop) {
        if (!($Buf.job.State -in ("Stopped", "Completed"))) {
            if ($Quiet) {
                return
            } else {
                throw "The job didn't stop within $Timeout seconds"
            }
        }
    }
}

 

 
