
Dancing waters, Kinect style


Fountains: indoors or out, these decorative displays of water in motion have long mesmerized onlookers. France’s Sun King, Louis XIV, had fountains placed throughout his palace gardens at Versailles; groundskeepers would activate their assigned fountain as the king approached. If only Louis were around today, he might opt to trigger the fountains and control their displays himself, using the latest Kinect sensor.

That’s because Beijing Trainsfer Technology Development, a Chinese high-tech company, has developed a system that lets onlookers control a fountain’s display by gesturing with their arms and legs. The latest Kinect sensor and the Kinect for Windows SDK are at the heart of the system, working together to capture precise body positions and gestures, which are then translated into instructions for the computer that controls the fountain display. For example, when you raise your left arm, water might spray from the left side of the fountain; kick out your right leg, and a series of smaller water jets might come to life. The controlling gestures can be customized for each fountain, and, as the video below demonstrates, the system can be used in both indoor and outdoor settings.

(Please visit the site to view this video)
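The post doesn’t include code, but to give a feel for how such a gesture-to-fountain mapping might be wired up with the Kinect for Windows SDK v2, here is a small, hypothetical C# sketch. Only the body-tracking calls are real SDK APIs; the ActivateFountain method and the "raised left hand" threshold are invented placeholders, not details from Trainsfer’s system.

using System;
using Microsoft.Kinect;

class GestureToFountain
{
    // Hypothetical stand-in for whatever actually drives the fountain hardware.
    static void ActivateFountain(string zone) => Console.WriteLine("Activate " + zone);

    static void Main()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        sensor.Open();

        Body[] bodies = new Body[sensor.BodyFrameSource.BodyCount];
        BodyFrameReader reader = sensor.BodyFrameSource.OpenReader();

        reader.FrameArrived += (s, e) =>
        {
            using (BodyFrame frame = e.FrameReference.AcquireFrame())
            {
                if (frame == null) return;
                frame.GetAndRefreshBodyData(bodies);

                foreach (Body body in bodies)
                {
                    if (!body.IsTracked) continue;

                    // Left hand raised above the left shoulder -> trigger the left-side jets.
                    if (body.Joints[JointType.HandLeft].Position.Y >
                        body.Joints[JointType.ShoulderLeft].Position.Y)
                    {
                        ActivateFountain("left jets");
                    }
                }
            }
        };

        Console.ReadLine(); // keep the sensor running
    }
}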

Trainsfer began development of Kinect-controlled fountains in February 2013 and 11 months later received a patent from the State Intellectual Property Office of the People’s Republic of China. Around the same time, Trainsfer learned that Microsoft Research was working on Kinect-controlled fountains as well. Trainsfer’s engineers contacted Microsoft Research and received significant support in perfecting their Kinect-based system. “Our product would not exist without Kinect technology,” says Hui Wang, general manager of Trainsfer. “It has enabled us to create a revolutionary product for the eruptive fountain industry.”

The system became commercially available in April 2015, and Trainsfer is currently negotiating to install it in one of China’s largest theme parks. The company also expects the product to be popular in outdoor parks and plazas, as well as indoors in major office buildings. Trainsfer also anticipates selling advertising space at the fountains, where the allure of controlling the water display should attract and engage passersby.

So maybe it won’t be too long before you can do something that would make you the envy of Louis XIV—thanks to Kinect.

The Kinect for Windows Team


Dynamics CRM Integration Training – Register Now!


Harness the power of Dynamics CRM 2015 Integration!

Join us for a fantastic classroom-based, instructor-led training session focusing on Dynamics CRM advanced integration. This training is intended for CRM consultants with a strong understanding of CRM development concepts and architecture. Additionally, a basic understanding of the business needs for integration between various systems will offer students additional insights throughout the course.

This four-day course is based on the following curriculum:

  • Getting started with CRM Integration
  • SharePoint Integration
  • CRM and SharePoint Search
  • Dynamic Connector
  • USD
  • UII Integration
  • Azure Integration
  • Azure AD/OAuth
  • Services Beyond the Basics
  • Azure BizTalk Server Integration
  • CRM and Power BI
  • CRM and Dynamics Suite
  • Data Integration and Migration

Location: Microsoft Sydney Office, 1 Epping Road, North Ryde NSW 2113

Date: 16th–19th June

Cost: $599 including GST

Pre-Requisites

  • Working knowledge of Dynamics CRM integration
  • Basic knowledge of infrastructure on networking, load balancing, database, Active Directory, ADFS and Microsoft Office 365.
  • This training is BYOD (Bring Your Own Device). Attendees are asked to bring their own Windows 8 PCs to use during the training.

REGISTER NOW!

Questions: Email msaupr@microsoft.com

Talk: architecture for a connected device+cloud app


How do you write a "connected device+cloud" app that works, and doesn't crash when the user walks out of network coverage? How can you make an "offline mode" that still syncs properly with the cloud? I'm passionate about correct distributed programming. Here are some slides I made:

  • slides.pptx [500k]
  • the transcript is in the "notes" section of the slides.

 

1. Network failures will happen even in the best-written programs and you have to deal with them. I wrote a NuGet package called "Async Mr Flakey" to help you write robust code.

2. Retry-on-failure is a terrible idea, leading to bad user experiences and buggy code. Please don't do it.

3. Getting good async callstacks out of exceptions. I wrote a NuGet package "AsyncStackTraceEx" to help.

4. Architecture - I strongly advocate for breaking your work down into idempotent, resumable/retryable nuggets, and having "workitem queues" (in memory, on disk, or in the cloud). Then have a single WorkAsync method whose job is to do the work to move an item from any one of those queues into the appropriate next queue (see the sketch after these points).

5. Use the UI thread for your WorkAsync. It's easier, and it won't hurt responsiveness.
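Here is a minimal, hypothetical C# sketch of point 4; the WorkItem type, the stage names, and the simulated work are my own illustrations, not from the talk. Work items live in named queues, and WorkAsync performs one idempotent step and moves the item to its next queue, re-enqueueing it unchanged on failure so it can safely be retried later.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// A hypothetical work item; Stage names the queue it currently belongs to.
class WorkItem
{
    public string Stage;   // e.g. "pending-upload", "uploaded", "done"
    public string Payload;
}

class WorkQueues
{
    // In-memory queues keyed by stage name; these could equally be files on disk or cloud queues.
    readonly ConcurrentDictionary<string, ConcurrentQueue<WorkItem>> _queues =
        new ConcurrentDictionary<string, ConcurrentQueue<WorkItem>>();

    public void Enqueue(WorkItem item) =>
        _queues.GetOrAdd(item.Stage, _ => new ConcurrentQueue<WorkItem>()).Enqueue(item);

    public bool TryDequeue(string stage, out WorkItem item)
    {
        item = null;
        return _queues.TryGetValue(stage, out var q) && q.TryDequeue(out item);
    }

    // One idempotent, retryable step: move an item from its current queue to the next one.
    public async Task WorkAsync(string stage)
    {
        if (!TryDequeue(stage, out var item)) return;
        try
        {
            // Do the (idempotent) work for this stage, e.g. upload the payload.
            await Task.Delay(10); // stand-in for the real async call
            item.Stage = NextStage(stage);
        }
        catch (Exception)
        {
            // Leave item.Stage unchanged; because the step is idempotent,
            // retrying it later is safe.
        }
        Enqueue(item); // goes to the next queue on success, back to the same queue on failure
    }

    static string NextStage(string stage) =>
        stage == "pending-upload" ? "uploaded" : "done";
}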

 

 

Truncate Table referenced by a Foreign key


Last night I was truncating some tables in an alien database, and while truncating one table I got the error below, which is logical because the tables were related by foreign key constraints:

Msg 4712, Level 16, State 1, Line 1

Cannot truncate table '' because it is being referenced by a FOREIGN KEY constraint.

 

That particular database contains 100 tables, and on creating a database diagram I found that more than 15 of them are related to each other.

Now, to truncate all the tables, I have written a recursive procedure that performs the following operations, starting from the leaf tables (tables that are not referenced by any other table):

  1. Drop the constraint
  2. Truncate the table
  3. Recreate the constraints

 

/****** Object:  StoredProcedure [dbo].[uspTruncateTableWithForeignKey]    Script Date: 5/20/2015 10:14:05 PM ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

/*****************************************************************************
PROCEDURE NAME:     [uspTruncateTableWithForeignKey]
AUTHOR:             Siddharth Tandon
CREATED:            04/10/2015
DESCRIPTION:        This will truncate the table (sent as parameter) and all the tables related to it
PARAMETERS
       @TableName: Name of the table to truncate
       @ConstraintName: Pass NULL
       @SchemaName: Schema of the table

EXEC [dbo].[uspTruncateTableWithForeignKey] 'TableName', NULL, 'dbo'
****************************************************************************/

CREATE PROC [dbo].[uspTruncateTableWithForeignKey]
    @TableName varchar(100)
    ,@ConstraintName varchar(1000) = NULL
    ,@SchemaName varchar(100)
AS
BEGIN
    DECLARE
        @Index int = 1
        ,@Count int = 0
        ,@TmpTableName varchar(100)
        ,@Sql nvarchar(1000)
        ,@TmpConstraintName varchar(1000)
        ,@TmpSchemaName varchar(100)
        ,@TmpColumnName varchar(100)
        ,@TmpPKTableName varchar(100)
        ,@TmpPKSchemaName varchar(100)
        ,@TmpPKColumnName varchar(100)
        ,@BaseTableName varchar(100)

    DECLARE @Tables TABLE
    (
        ID int IDENTITY(1,1),
        TableName varchar(100),
        ConstraintName varchar(1000),
        SchemaName varchar(1000)
    )

    -- Find all foreign keys that reference the table being truncated
    INSERT INTO @Tables (TableName, ConstraintName, SchemaName)
    SELECT OBJECT_NAME(parent_object_id), name, SCHEMA_NAME(schema_id)
    FROM sys.foreign_keys
    WHERE OBJECT_NAME(referenced_object_id) = @TableName
    AND OBJECT_SCHEMA_NAME(referenced_object_id) = @SchemaName

    IF object_id('tempdb..#tmpRelationship') IS NULL
    BEGIN
        SET @BaseTableName = @TableName
        SET @ConstraintName = NULL
        CREATE TABLE #tmpRelationship
        (
            ID int IDENTITY(1,1)
            ,ConstraintName nvarchar(1000)
            ,FKTableName varchar(100)
            ,FKColumnName varchar(100)
            ,FKTableSchemaName varchar(100)
            ,PKTableName varchar(100)
            ,PKColumnName varchar(100)
            ,PKTableSchemaName varchar(100)
        )
    END

    SELECT @Count = COUNT(*)
    FROM @Tables

    IF (@Count > 0)
    BEGIN
        WHILE (@Index <= @Count)
        BEGIN
            SELECT
                @TmpTableName = TableName
                ,@TmpConstraintName = ConstraintName
                ,@TmpSchemaName = SchemaName
            FROM @Tables
            WHERE ID = @Index

            -- Recurse into each referencing table first
            EXEC [dbo].[uspTruncateTableWithForeignKey] @TmpTableName, @TmpConstraintName, @TmpSchemaName

            SET @Index += 1
        END
    END

    IF (@ConstraintName IS NOT NULL)
    BEGIN
        -- Remember the constraint definition so it can be recreated later
        INSERT INTO #tmpRelationship
        (
            ConstraintName
            ,FKTableName
            ,FKColumnName
            ,FKTableSchemaName
            ,PKTableName
            ,PKColumnName
            ,PKTableSchemaName
        )
        SELECT
             KCU1.CONSTRAINT_NAME AS FKConstraint
            ,KCU1.TABLE_NAME AS FKTable
            ,KCU1.COLUMN_NAME AS FKColumn
            ,KCU1.TABLE_SCHEMA AS FKSchema
            ,KCU2.TABLE_NAME AS ReferencedTable
            ,KCU2.COLUMN_NAME AS ReferencedColumn
            ,KCU2.TABLE_SCHEMA AS ReferencedSchema
        FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS AS RC
        INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE AS KCU1
            ON KCU1.CONSTRAINT_CATALOG = RC.CONSTRAINT_CATALOG
            AND KCU1.CONSTRAINT_SCHEMA = RC.CONSTRAINT_SCHEMA
            AND KCU1.CONSTRAINT_NAME = RC.CONSTRAINT_NAME
        INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE AS KCU2
            ON KCU2.CONSTRAINT_CATALOG = RC.UNIQUE_CONSTRAINT_CATALOG
            AND KCU2.CONSTRAINT_SCHEMA = RC.UNIQUE_CONSTRAINT_SCHEMA
            AND KCU2.CONSTRAINT_NAME = RC.UNIQUE_CONSTRAINT_NAME
            AND KCU2.ORDINAL_POSITION = KCU1.ORDINAL_POSITION
        WHERE KCU1.TABLE_NAME = @TableName

        -- Drop the constraint so the table can be truncated
        SET @Sql = N'ALTER TABLE [' + @SchemaName + '].[' + @TableName + '] DROP CONSTRAINT [' + @ConstraintName + ']'

        EXEC sp_executesql @Sql
    END

    SET @Sql = 'TRUNCATE TABLE ' + ISNULL(@SchemaName, 'dbo') + '.[' + @TableName + ']'

    EXEC sp_executesql @Sql

    IF (@BaseTableName = @TableName)
    BEGIN
        SELECT @Count = COUNT(*)
        FROM #tmpRelationship

        SET @Index = 1

        IF (@Count > 0)
        BEGIN
            WHILE (@Index <= @Count)
            BEGIN
                SELECT
                    @TmpTableName = FKTableName
                    ,@TmpConstraintName = ConstraintName
                    ,@TmpSchemaName = FKTableSchemaName
                    ,@TmpColumnName = FKColumnName
                    ,@TmpPKTableName = PKTableName
                    ,@TmpPKColumnName = PKColumnName
                    ,@TmpPKSchemaName = PKTableSchemaName
                FROM #tmpRelationship
                WHERE ID = @Index

                -- Recreate the foreign key constraint
                SET @Sql = N'ALTER TABLE [' + @TmpSchemaName + '].[' + @TmpTableName + ']
                    WITH NOCHECK ADD FOREIGN KEY ([' + @TmpColumnName + '])
                    REFERENCES [' + @TmpPKSchemaName + '].[' + @TmpPKTableName + '] ([' + @TmpPKColumnName + '])'

                EXEC sp_executesql @Sql
                SET @Index += 1
            END
        END
    END
END

 

 

 

 

 https://siddharthtandon.wordpress.com/2015/05/20/truncate-table-referenced-by-a-foreign-key/

Windows 10 Developer Readiness – Free Online Conference

Transfer Type is not showing as SAN while trying to Migrate a VM even though the Hosts have SAN storage configured


 

This is another interesting case I worked on recently, where we needed to dig beneath the surface in VMM to find the root cause.

 

We were trying to migrate VMs in SCVMM from one host to another. As both hosts had SAN storage configured, most of the VMs were showing the correct “Transfer Type”. But while migrating a few other VMs we could see the “Transfer Type” as “Network” instead of “SAN”:

 

[Screenshot: the migration wizard showing “Transfer Type” as “Network”]

 

We were also getting the message for “Deployment and Transfer Explanation” shown above, which said “Virtual Machine resides on a LUN which is not visible to any of the Storage Providers.” That was actually very strange, because all the storage settings were configured correctly.

So to understand what exactly was happening here, we collected VMM tracing while reproducing the issue, as per the article below:

https://support.microsoft.com/en-us/kb/2913445/en-us 

 

After we analysed the VMM trace, we saw the message below under the function SANMigrationBetweenHAandNonHAHostPrecheck.RunPrecheck:

 

DeployLUNNotFoundUsingStorageProvider (26207)

We also saw the path below being referenced there:

[Microsoft-VirtualMachineManager-Debug]4,4,SANStorageManagementHelper.cs,1342,filePath C:\Windows\system32\vmguest.iso accessPath \\?\Volume{21fd0518-ea71-11e4-80cb-bc305bee5327}\

 

As you can see in the above path, it is somehow trying to access VMGuest.ISO for this VM, which is of course not SAN capable or on SAN storage. So we went back and checked the VM settings in VMM to see if that ISO was mounted on the VM. But to our surprise, the VM did not have any ISO mounted.

 

Our further investigation revealed that the VM had a few checkpoints configured, and those checkpoints had this VMGuest.ISO mounted, as shown below:

 

[Screenshot: the VM checkpoint settings showing VMGuest.ISO mounted]

 

 

So we deleted those checkpoints from the VM configuration in VMM, and after that, when we tried to migrate, we saw the “SAN” option showing up as the “Transfer Type”.

 

As you can see, even if everything looks fine on the surface, there can still be issues deep inside the configuration causing problems like this.

I will catch you later in my upcoming posts. Happy reading!

 

Author:

Nitin Singh

Support Escalation Engineer

Microsoft Security and Manageability Division

Using CLR to replace xp_cmdshell for specific tasks


As we have discussed before, xp_cmdshell is a mechanism to execute arbitrary calls into the system, and because of its flexible nature it is typically abused and leads to serious security problems in the system.

  In most cases, what the sysadmin really wants to do is to enable only a handful of specific tasks on the system, without the whole flexibility that comes from running xp_cmdshell directly.

One approach to achieve this constrained access to specific tasks on the system is to enable xp_cmdshell through a signed module. For detailed information on this approach, we have some articles in the SQL Server Security blog, but Erland Sommarskog also has a very nice article on the subject that I would recommend: http://www.sommarskog.se/grantperm.html.

An alternative that I personally prefer is to create SQL CLR modules that accomplish the specific tasks. Using SQL CLR modules, it is possible to create a finely targeted escalation path that enables users to do exactly what they need, while at the same time making it easy to write parameter verification and a clear parameterization that helps you avoid command injection.

For example, let’s create a library that helps copy and delete files in the OS; you could easily add checks for specific paths and make sure that the behavior of the module is exactly what you would expect:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.IO;

public class SqlClrUserDefinedModules
{
    // Only files under this directory may be touched by these procedures.
    readonly private static string _approvedDirectory = @"C:\temp";

    [SqlProcedure()]
    public static void DeleteFile(SqlString filename)
    {
        FileInfo fi = new FileInfo(filename.Value);

        if (!fi.DirectoryName.Equals(_approvedDirectory, StringComparison.InvariantCultureIgnoreCase))
        {
            throw new Exception(@"File is not located in an approved directory");
        }

        File.Delete(filename.Value);
    }

    [SqlProcedure()]
    public static void CopyFile(SqlString filename, SqlString destinationFilename)
    {
        FileInfo fi = new FileInfo(filename.Value);
        if (!fi.DirectoryName.Equals(_approvedDirectory, StringComparison.InvariantCultureIgnoreCase))
        {
            throw new Exception(@"Source file is not located in an approved directory");
        }

        fi = new FileInfo(destinationFilename.Value);
        if (!fi.DirectoryName.Equals(_approvedDirectory, StringComparison.InvariantCultureIgnoreCase))
        {
            throw new Exception(@"Destination file is not located in an approved directory");
        }

        File.Copy(filename.Value, destinationFilename.Value);
    }
}

Because it accesses the OS file system, in order to use it in your database you will need the EXTERNAL ACCESS permission set. This gives you two choices:

1) Trust the DB where you are storing it

2) Trust the module via a strong name or Authenticode

In our example, I decided to use a strong name, so I will create an ASYMMETRIC KEY in the master DB by extracting it from the DLL file itself, grant the right permission (EXTERNAL ACCESS ASSEMBLY), and enable CLR in case it was not enabled:

USE [master]
GO

CREATE ASYMMETRIC KEY [snk_external_access_clr] FROM EXECUTABLE FILE = 'E:\temp\SqlUserDefinedModules.dll'
go

CREATE LOGIN [snk_external_access_clr] FROM ASYMMETRIC KEY [snk_external_access_clr]
GO

GRANT EXTERNAL ACCESS ASSEMBLY TO [snk_external_access_clr]
go

EXEC sp_configure 'clr enabled', 1; RECONFIGURE;
go

The next step would be to access the DB where we are going to host the assembly and create it.

NOTE: because we are trusting the module via a digital signature (Strong Name), we are trusting the module as a whole in the system, regardless of the database where it is hosted. Make sure to take this into account when using EXTERNAL ACCESS or UNSAFE assemblies.

CREATE ASSEMBLY [SqlUserDefinedModules] FROM 'E:\temp\SqlUserDefinedModules.dll'
WITH PERMISSION_SET = EXTERNAL_ACCESS
go

CREATE SCHEMA [SqlClrUserDefinedModules]
go

CREATE PROCEDURE [SqlClrUserDefinedModules].[DeleteFile]
(@filename nvarchar(2048))
AS EXTERNAL NAME [SqlUserDefinedModules].[SqlClrUserDefinedModules].[DeleteFile];
go

CREATE PROCEDURE [SqlClrUserDefinedModules].[CopyFile]
(@filename nvarchar(2048), @destinationFilename nvarchar(2048))
AS EXTERNAL NAME [SqlUserDefinedModules].[SqlClrUserDefinedModules].[CopyFile];
go

At this point, you could simply grant access to execute the module to normal users in the database. For example:

CREATE USER [clr_test_user] WITHOUT LOGIN
go

GRANT EXECUTE ON SCHEMA::[SqlClrUserDefinedModules] TO [clr_test_user]
go
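As a quick test of the deployment, the new procedures can be called like any other stored procedure. Here is a minimal, hypothetical C# sketch using ADO.NET; the connection string and file paths are illustrative assumptions, not part of the original article:

using System;
using System.Data;
using System.Data.SqlClient;

class Program
{
    static void Main()
    {
        // Hypothetical connection string; point it at the database hosting the assembly.
        using (var conn = new SqlConnection(@"Server=.;Database=MyDb;Integrated Security=true"))
        using (var cmd = new SqlCommand("[SqlClrUserDefinedModules].[CopyFile]", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@filename", @"C:\temp\source.txt");
            cmd.Parameters.AddWithValue("@destinationFilename", @"C:\temp\copy.txt");

            conn.Open();
            cmd.ExecuteNonQuery(); // throws if either path is outside the approved directory
        }
    }
}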

Hopefully this example will be useful for creating targeted CLR modules that can replace whatever xp_cmdshell usage you may have in a way that is more secure.

Extensibility in Dynamics AX 2012 R3 CU8 (CRT, RetailServer, MPOS) Part 2 – New data entity


Overview

This blog post expands on the knowledge you gained in part 1 of the series (http://blogs.msdn.com/b/axsa/archive/2015/02/17/extensibility-in-dynamics-ax-2012-r3-cu8-crt-retailserver-mpos-part-1.aspx). In case you get stuck, I recommend that you make yourself familiar with part 1 first. Some of the information from there is required and assumed.

The steps are based on the Dynamics AX 2012 R3 CU8 VM image that can be requested via LCS (https://lcs.dynamics.com/Logon/Index) which most partners have access to (Contoso sample).  Alternatively, PartnerSource (https://mbs.microsoft.com/partnersource/northamerica/) can be used to download the VM as well. Make sure you get the CU8 version.

It is recommended to review some of the online resources around the Retail solution, either now or during the processes of following this blog (https://technet.microsoft.com/en-us/library/jj710398.aspx).

The areas this blog covers are:

-         AX: Adding a new data entity, related to a retail store, and populating it by means of a job (no UI)

-         CDX: configuring CDX in order to include the new table in data synchronizations

-         CRT: adding a new data entity and new service

-         RetailServer: exposing a new controller for the new entity; adding new ODATA action

-         MPOS: adding plumbing to call RetailServer; updating UI to expose data

A future blog will cover topics and suggestions for changing existing CRT code.  Stay tuned for that.

The changed code is available in ZIP format, which includes only the files that have been added or changed. It can be applied (after backing up your existing SDK) on top of the “Retail SDK CU8” folder. Note that the ZIP file includes the changes from part 1 as well.

This sample customization will update the MPOS terminal to show more detailed opening times for a store.  Remember that a store worker can look up item availability across multiple stores. Imagine that as part of that flow, the worker would like to advise the customer if a particular store is open or not. See the screen shot below for the UI flow:

 


 

Notes:

-         The sample is to illustrate the process of a simple customization. It is not intended for production use.

-         All changes are being made under the login contoso\emmah. If you use a different account, or different demo data altogether please adjust the below steps accordingly.

 

High-level steps

 

The following steps need to be carried out:

 

  1. Setup the Retail SDK CU8 for development (see part 1)
  2. Prepare MPOS to be run from Visual Studio from unchanged SDK code (see part 1)
  3. Activate the MPOS device (see part 1)
  4. Include new entity in AX
  5. Configure CDX to sync new entity
  6. Channel schema update and test
  7. Add CRT entity, service, request, response, datamanager and RetailServer controller with new action
  8. Update client framework to call RetailServer endpoint
  9. Update client framework channel manager with new functionality
  10. Update client framework’s view model
  11. Update MPOS’s view to consume updated view model
  12. Test

 

Detailed steps

 

Setup the Retail SDK CU8 for development

See part 1 at http://blogs.msdn.com/b/axsa/archive/2015/02/17/extensibility-in-dynamics-ax-2012-r3-cu8-crt-retailserver-mpos-part-1.aspx.

 

Prepare MPOS to be run from Visual Studio from unchanged SDK code

See part 1 at http://blogs.msdn.com/b/axsa/archive/2015/02/17/extensibility-in-dynamics-ax-2012-r3-cu8-crt-retailserver-mpos-part-1.aspx.

 

Activate the MPOS device

See part 1 at http://blogs.msdn.com/b/axsa/archive/2015/02/17/extensibility-in-dynamics-ax-2012-r3-cu8-crt-retailserver-mpos-part-1.aspx.

  

Include new entity in AX

 

In order to store the store hours per store, we will be using a new table called ISVRetailStoreDayHoursTable. It will store the day, and open and closing times for each store.

As part of the ZIP folder you can find the xpo file at SampleInfo\Sample2_StoreDayHours.xpo. Import this file into AX. It includes 2 items: the table and a simple job that populates sample data for the Houston store.

Run the job named Temp_InsertData at least once. Then inspect the table with SQL Server Management Studio:

Configure CDX to sync new entity

In AX, add a new location table and the appropriate columns to the AX 2012 R3 schema (USRT/Retail/Setup/Retail Channel Schema)

 

 Create a new scheduler subjob (USRT/Retail/Setup/Scheduler subjobs)

 

Click transfer field list and make sure that the fields match as above.

Add the new subjob to the 1070 Channel configuration job (USRT/Retail/Setup/Scheduler Job)

 

Edit the table distribution XML to include the new table (USRT/Retail/Setup/Retail Channel Schema)

 

The easiest approach is to copy the XML out of the text box, edit it in an external XML editor, and then paste it back in. The change you need to make is to add this XML fragment:

  <Table name="ISVRETAILSTOREDAYHOURSTABLE">

     <LinkGroup>

     <Link type="FieldMatch" fieldName="RetailStoreTable" parentFieldName="RecId" />

    </LinkGroup>

  </Table>

in two places. Both times, it needs to be added inside the RetailStoreTable table XML node.

At the end, click Generate Classes (USRT/Retail/Setup/Retail Channel Schema/AX 2012 R3)

Channel schema update and test

The equivalent change to the table schema must be made on the channel side. This has to be done to all channel databases. Use SQL Server Management Studio and create the table. Since this is a sample, we won’t add stored procedures; we just do that in code. However, it is recommended to use stored procedures for performance and security reasons.

The file can be found in the ZIP folder at SampleInfo\ChannelSchemaUpdates.txt

Now, go back to AX and run the 1070 job (USRT/Retail/Periodic/Distribution Schedule/1070/Run now)

Then, verify in AX that the job succeeded (USRT/Retail/Inquiries/Download Sessions/Process status messages). You should see a status of “Applied” for the stores. 

 

Add CRT entity, service, request, response, datamanager and RetailServer controller with new action

Use the solution in the ZIP file at SampleInfo\RSCRTExtension\RSCRTExtension.sln and inspect the code.

Since this part is based on part 1, I assume you have:

-         already configured the pre.settings file (for rapid deployment as part of the build into RetailServer’s bin directory),

-         already configured RetailServer’s version of commerceRuntime.config to include the new CRT extension dll, and

-         already configured RetailServer’s web.config file to include our new extension dll.

Here is a code map view of the code changes required:

You can see that we need a CRT request, response, service, data accessor, and entity. Additionally, RetailServer is customized to include a new StoreDayHoursController that exposes a new OData endpoint, GetStoreDaysByStore. That endpoint uses the CRT and the request object to get a response. It does not use the data service directly.

If you have configured everything correctly, compiled the solution, and browsed to the OData metadata URL of RetailServer (http://ax2012r2a.contoso.com:35080/RetailServer/v1/$metadata), you should see the new action:

 

 

Update client framework to call RetailServer endpoint

 

The first step is to make MPOS aware of the new entity and the new endpoint. This is basically proxy code, similar to what tools like wsdl.exe would generate for .NET web services. The Retail team is investigating providing a tool for automatic regeneration in a future release.

CommerceTypes.ts

This is a class that specifies the new entity, both as an interface and a class.

    export interface StoreDayHours {
        DayOfWeek: number;
        OpenTime: number;
        CloseTime: number;
        ExtensionProperties?: Entities.CommerceProperty[];
    }
    export class StoreDayHoursClass implements StoreDayHours {
        public DayOfWeek: number;
        public OpenTime: number;
        public CloseTime: number;
        public ExtensionProperties: Entities.CommerceProperty[];

        /**
         * Construct an object from odata response.
         *
         * @param {any} odataObject The odata result object.
         */
        constructor(odataObject?: any) {
            odataObject = odataObject || {};
            this.DayOfWeek = odataObject.DayOfWeek ? odataObject.DayOfWeek : null;
            this.OpenTime = odataObject.OpenTime ? odataObject.OpenTime : null;
            this.CloseTime = odataObject.CloseTime ? odataObject.CloseTime : null;
            this.ExtensionProperties = undefined;
            if (odataObject.ExtensionProperties) {
                this.ExtensionProperties = [];
                for (var i = 0; i < odataObject.ExtensionProperties.length; i++) {
                    this.ExtensionProperties[i] = odataObject.ExtensionProperties[i] ? new CommercePropertyClass(odataObject.ExtensionProperties[i]) : null;
                }
            }
        }
    }

 

CommerceContext.ts

This is a class that exposes the ODATA data service to the rest of MPOS.

 public storeDayHoursEntity(storeId?: string): StoreDayHoursDataServiceQuery {
  return new StoreDayHoursDataServiceQuery(this._dataServiceRequestFactory, "StoreDayHoursCollection", "StoreDayHours", Entities.StoreDayHoursClass, storeId);
 }

    export class StoreDayHoursDataServiceQuery extends DataServiceQuery<Entities.StoreDayHours> {

        constructor(dataServiceRequestFactory: IDataServiceRequestFactory, entitySet: string, entityType: string, returnType?: any, key?: any) {
            super(dataServiceRequestFactory, entitySet, entityType, returnType, key);
        }

        public getStoreDaysByStoreAction(storeId: string): IDataServiceRequest {
            var oDataActionParameters = new Commerce.Model.Managers.Context.ODataActionParameters();
            oDataActionParameters.parameters = { StoreNumber: storeId};

            return this.createDataServiceRequestForAction('GetStoreDaysByStore', Entities.StoreDayHoursClass, 'true', oDataActionParameters);
        }
    }

 

Update client framework channel manager with new functionality

Now that we have the low-level proxy code done, we need to expose the new functionality in a more consumable way to the rest of the application framework. An appropriate location for the new functionality is the IChannelManager, as it already encompasses similar functionality of a more global, channel-related nature.

IChannelManager.ts:

    getStoreDayHoursAsync(storeId: string): IAsyncResult<Entities.StoreDayHours[]>;

ChannelManager.ts:

  public getStoreDayHoursAsync(storeId: string): IAsyncResult<Entities.StoreDayHours[]> {
  Commerce.Tracer.Information("ChannelManager.getStoreDayHoursAsync()");

  var query = this._commerceContext.storeDayHoursEntity();
  var action = query.getStoreDaysByStoreAction(storeId);

  return action.execute<Entities.StoreDayHours[]>(this._callerContext);
 }

 

Update client framework’s view model

The view model is an abstraction of the view that exposes public properties and commands for any view implementation to use.  Here are the 3 things we need to do in order to customize the existing StoreDetailsViewModel:

  • a variable that holds the result for the view to bind to, and a variable called isStoreDayHoursVisible that the view can use to toggle the visibility of the UI:

        public storeDayHours: ObservableArray<Model.Entities.StoreDayHours>;
        public isStoreDayHoursVisible: Computed<boolean>;

  • data initialization in the constructor:

    // empty array
  this.storeDayHours = ko.observableArray([]);
  this.isStoreDayHoursVisible = ko.computed(() => {
   return ArrayExtensions.hasElements(this.storeDayHours());
  });

  • data retrieval function to be called by the view

        public getStoreDayHours(): IVoidAsyncResult {
            var asyncResult = new VoidAsyncResult(this.callerContext);
            Commerce.Tracer.Information("StoreDetailsViewModel.getStoreDayHours()");

            this.channelManager.getStoreDayHoursAsync(this._storeId)
                .done((foundStoreDayHours: Model.Entities.StoreDayHours[]) => {
                    this.storeDayHours(foundStoreDayHours);
                    Commerce.Tracer.Information("StoreDetailsViewModel.getStoreDayHours() Success");
                    asyncResult.resolve();
                })
                .fail((errors: Model.Entities.Error[]) => {
                    asyncResult.reject(errors);
                });

            return asyncResult;
        }

 

Update POS’s view to consume updated view model

The StoreDetailsView.ts already calls into the view model to get store distance. For simplicity, we just hook into the done() event handler to call the new function:

                    this.storeDetailsViewModel.getStoreDistance()
                        .done(() => {
                            this._storeDetailsVisible(true);
                            this.indeterminateWaitVisible(false);

                            this.storeDetailsViewModel.getStoreDayHours()
                                .done(() => {
                                    this._storeDetailsVisible(true);
                                    this.indeterminateWaitVisible(false);
                                })
                                .fail((errors: Model.Entities.Error[]) => {
                                    this.indeterminateWaitVisible(false);
                                    NotificationHandler.displayClientErrors(errors);
                                });

Lastly, we update the html to expose the data:

 

Please use the sample code in the ZIP archive as mentioned above. This also includes a few other changes not detailed here, for example in resources.resjson and Converters.ts.
 

Issues and solutions:

If you cannot run MPOS from the Pos.sln file because it is already installed, uninstall the app first. This link may also be helpful: http://blogs.msdn.com/b/wsdevsol/archive/2013/01/28/registration-of-the-app-failed-another-user-has-already-installed-a-packaged-version-of-this-app-an-unpackaged-version-cannot-replace-this.aspx 

Happy coding,

Andreas

 

Original link: http://blogs.msdn.com/b/axsa/archive/2015/05/20/extensibility-in-dynamics-ax-2012-r3-cu8-crt-retailserver-mpos-part-2-new-data-entity.aspx (go back to it for zip file download...)

 


A new utility for upgrading reports when you upgrade TFS


When upgrading TFS to a new version, one set of items that is not upgraded is the already deployed reports for existing projects. The existing reports will continue to work, but we’ve made a large number of performance enhancements and other minor tweaks, especially to the “X” Overview reports (in each project there are a couple of rollup reports that are specific to the template but which all work the same way).

The TFS Reporting Bulk Update tool is now available on CodePlex. Currently the Team Foundation Power Tools (tfpt.exe) has an addprojectreports command which will create a reporting site for an existing project and upload the report files for that process template (or reload the existing reports in case they were overridden). But what about when you have to do that for tens or hundreds of projects? That’s where the TFS Reporting Bulk Update tool comes in handy.

In order to use this tool the following must be true:

  • You have at least the TFS 2013 Team Foundation Power Tools installed
  • Your server has been upgraded to at least TFS 2013 Cumulative Update 3
    • This is the version in which the updated reports were released

After running this script and executing the generated batch file, all projects created in TFS 2012 through TFS 2013 Cumulative Update 2 will have their report rdl files upgraded to the newer version of the reports.

To get the PowerShell script, go to the Source Code page and click Download. The PowerShell script is freely modifiable, but in short it automatically generates all of the tfpt addprojectreports commands for all projects on your server.

As noted in the PowerShell script, if the project was upgraded from a TFS 2010 project, the correct meta-data to figure out what process template to pull the reports from is unavailable. However, the logic in the PowerShell script can be changed so that it uses a specific version or you can simply add the values to the generated file after the fact.

Hopefully this will make the reporting upgrade process easier and faster as you migrate to TFS 2013 and beyond!

Intermittent SignIn issues with Visual Studio Online - 5/20 - Mitigated


Initial Update: Monday, 20 May 2015 20:00 UTC

We had identified an issue earlier where some of our customers may have seen intermittent sign-in issues while accessing Azure Active Directory backed accounts. We have worked with our partners in Azure and have resolved this issue.

Customers should no longer see any issues while accessing their Azure Active Directory backed accounts in Visual Studio Online.

We apologize for any inconvenience this may have caused.

Sincerely,
VS Online Service Delivery Team

VSO Status Inspector – A walk through on creating a Visual Studio extension to track VSO Status, by Utkarsh Shigihalli


Utkarsh Shigihalli recently developed a new Visual Studio extension to track the Visual Studio Online status. Here is his story …


Introduction

Recently I wrote a Visual Studio extension called VSO Status Inspector to monitor the status of Visual Studio Online (VSO). If you haven’t yet, I would encourage you to download it from the Visual Studio extension gallery. It supports both Visual Studio 2013 and 2015.

The VSO Status Inspector extension polls the Visual Studio Support Overview page periodically and parses the overall status into an icon that is rendered on the Visual Studio status bar.

The extension aims to raise awareness of VSO issues for developers working in the IDE. While what the extension does is trivial, there is a lot happening under the hood to hook into the IDE. In this blog post I endeavour to walk you through the mechanics of developing this extension and integrating it with different parts of Visual Studio like the status bar, output window, and options window.

vso_status_inspector


I assume that you are already aware of basics of writing Visual Studio extensions and know how to create a Visual Studio Package. If you are new to extending Visual Studio, suggest you to start from this page on MSDN.

For developing this extension I have used the following components:

I will break this blog post into the following sections:

  • How to integrate with Visual Studio status bar and display a custom icon
  • How to write messages to VS Output window
  • How to integrate with Options window
  • Parsing the VSO status from support view page
  • Putting everything together

1 - How to integrate with Visual Studio status bar and display a custom icon

The Visual Studio status bar is one of the important components of the Visual Studio IDE. It provides subtle but clear visual indications of the current context and state of the IDE. For example, the status bar turns orange when you are debugging, blue when you are loading a solution, and violet when it’s idle.

vso_status_inspector_statusbar

Internally, Visual Studio status bar is composed of four different regions as in the screen shot below.

vso_status_inspector_statusbar_regions

The VSO Status Inspector extension uses the animated icon area, as we display an icon based on the VSO status. So let’s see how to do that.

In our extension package class, we first need to get the instance of the status bar. The Visual Studio SDK (VS SDK) exposes the interface IVsStatusbar to access the VS status bar. This interface provides different methods for accessing the different status bar regions. So to get the instance of the status bar, we use the GetService method of our extension’s Package class and get an instance of IVsStatusbar.

I am declaring a property which will return the instance of IVsStatusbar.


// Backing field for the lazily resolved status bar service.
private IVsStatusbar bar;

public IVsStatusbar StatusBar
{
    get
    {
        if (bar == null)
        {
            bar = GetService(typeof(SVsStatusbar)) as IVsStatusbar;
        }
        return bar;
    }
}


Once we have the status bar instance, we can start calling the methods exposed by the interface to perform actions on the status bar.

For this extension I have used the Animation method, as we need to display an icon in the animation area.

Note that the VS SDK provides a few default icons (like Build, Save, etc.) which can also be used with the Animation method.


object icon = (short)Microsoft.VisualStudio.Shell.Interop.Constants.SBAI_Deploy;
StatusBar.Animation(1, ref icon);


To display a custom icon, we need to take the custom icon we want to display, create a GDI bitmap from it, and pass the reference to the Animation method, which is exactly what we are doing in the code snippet below.


IntPtr _hdcBitmap = IntPtr.Zero;
Bitmap b = ResizeImage(icon, 16);
_hdcBitmap = b.GetHbitmap();
object hdcObject = (object)_hdcBitmap;
StatusBar.Animation(1, ref hdcObject);


The GetHbitmap() method creates a GDI bitmap object from the given System.Drawing.Bitmap object. The ResizeImage(...) method resizes the given image to the defined height (16px in this case), draws a high-quality image, and returns the bitmap.


public static Bitmap ResizeImage(Bitmap imgToResize, int newHeight)
{
    int sourceWidth = imgToResize.Width;
    int sourceHeight = imgToResize.Height;
    float nPercentH = ((float)newHeight / (float)sourceHeight);
    int destWidth = Math.Max((int)Math.Round(sourceWidth * nPercentH), 1);
    int destHeight = newHeight;
    Bitmap bitmap = new Bitmap(destWidth, destHeight);
    using (Graphics graphics = Graphics.FromImage(bitmap))
    {
        graphics.SmoothingMode = SmoothingMode.HighQuality;
        graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
        graphics.PixelOffsetMode = PixelOffsetMode.HighQuality;
        graphics.DrawImage(imgToResize, 0, 0, destWidth, destHeight);
    }
    return bitmap;
}


2 - How to write messages to Output window

The VSO Status Inspector extension also outputs the complete information retrieved from the support overview page to the output window.

vso_status_inspector_output

The Output window consists of different panes, which can be selected from the drop-down. If you look at the screenshot above, we have selected a custom pane called VSO Status Inspector. A custom pane is used to display information related only to this extension.

To interact with the output window, the VS SDK provides another interface, IVsOutputWindow. Let’s access that by defining another property as below.


// Backing field for the lazily resolved output window service.
private IVsOutputWindow _outputWindow;

public IVsOutputWindow OutputWindow
{
    get
    {
        if (_outputWindow == null)
        {
            _outputWindow = (IVsOutputWindow)GetService(typeof(SVsOutputWindow));
        }
        return _outputWindow;
    }
}


Now to write to the output window we use the below code snippet.


private Guid _paneGuid = new Guid("{170638A1-CFD7-47C8-975A-FBAA9E532AD5}");
private void WriteToOutputWindow(string message)
{
    IVsOutputWindowPane outputPane;
    OutputWindow.GetPane(ref _paneGuid, out outputPane);
    if (outputPane == null)
    {
        // Create a new pane if not found
        OutputWindow.CreatePane(ref _paneGuid, EXTENSION_NAME, Convert.ToInt32(true),
                                Convert.ToInt32(false));
    }
    // Retrieve the new pane.
    OutputWindow.GetPane(ref _paneGuid, out outputPane);
    outputPane.OutputStringThreadSafe(string.Format("[{0}]\t{1}",
                                      DateTime.Now.ToString("hh:mm:ss tt"), message));
    outputPane.OutputStringThreadSafe(Environment.NewLine);
}


In the above code, we first try to get our custom (VSO Status Inspector) pane, identified by a unique GUID. If we cannot find any pane with our GUID, we create a new pane. Finally, we write the message passed via the parameter to the output window.

3 - How to integrate with Options window

By default, VSO Status Inspector polls for status every 60 seconds. However, we allow the user to customize this interval in our extension. To do that, the user goes to Tools -> Options -> VSO Inspector and can change the interval.

vso_status_inspector_options

To achieve this, we need to integrate our extension with the Visual Studio options window. To do that, we first need to define a custom class that inherits from the DialogPage class of the VS SDK.


[ClassInterface(ClassInterfaceType.AutoDual)]
[CLSCompliant(false), ComVisible(true)]
public class VSOStatusInspectorOptions : DialogPage
{
    private int _interval = 60;
    [Category("General")]
    [DisplayName(@"Polling Interval (in seconds)")]
    [Description("Number of seconds between each poll.")]
    public int Interval
    {
        get { return _interval; }
        set { _interval = value; }
    }
}


The class above is simple, with one property to hold the interval. If no value is specified, we initialize the interval to 60 seconds. Note that the property has a few annotations: category (under which this property will be visible), display name (the display name for this property), and description (displayed in the help section of the options window).

Finally, we also need to let Visual Studio know that our extension provides an options page, so that Visual Studio makes certain registry changes when our extension is installed. We do that by decorating our package class with the ProvideOptionPage attribute as below.


[ProvideOptionPage(typeof(VSOStatusInspectorOptions), EXTENSION_NAME, "General", 0, 0, true)]
public sealed class VSOStatusInspectorPackage : Package
{
    private const string EXTENSION_NAME = "VSO Status Inspector";
...
}


The interval value set in the options window can be accessed in the extension as below.


//get interval from options
var _options = (VSOStatusInspectorOptions)GetDialogPage(typeof(VSOStatusInspectorOptions));
var interval = _options.Interval;


4 - Parsing the VSO status from support view page

As noted at the beginning of this article, the status is retrieved from the VSO Support Overview page. The support page contains a div with the status information and an icon. The markup is as below.


<div class="TfsServiceStatus">
    <div data-fragmentname="StatusAvailable" id="Fragment_StatusAvailable" xmlns="http://www.w3.org/1999/xhtml">
        <div class="DetailedImage" style="position: relative">
            <img id="GREEN" alt="Green-Service is up"
                 src=https://i3-vso.sec.s-msft.com/dynimg/IC711323.png title="Green-Service is up" xmlns="">
            <div class="RichText" style="position:absolute;top:18px;left:62px" xmlns="http://www.w3.org/1999/xhtml">
                <h1 xmlns="">Visual Studio Online is up and running</h1>
                <p xmlns="">Everything is looking good</p>
                <p xmlns="">For details and history, check out the
                           
<a href="http://blogs.msdn.com/b/vsoservice/">Visual Studio Service Blog.</a></p>
            </div>
        </div>
    </div>
</div>


To parse the above markup, I used the very popular HTML parser HtmlAgilityPack.

The HtmlAgilityPack library makes parsing HTML very easy. We parse the above HTML from the support overview page as below.


HtmlWeb htmlWeb = new HtmlWeb();
HtmlDocument doc = htmlWeb.Load("https://www.visualstudio.com/en-us/support/support-overview-vs.aspx");
var div = doc.DocumentNode.SelectSingleNode("//div[@class='TfsServiceStatus']");
var img = div.SelectSingleNode("//img[@id]");
var h1 = div.SelectSingleNode("//div[@class='RichText']/h1");
var p = div.SelectSingleNode("//div[@class='RichText']/p");


In the code above, we are loading the page markup into a document and then getting the id and other tags under the div with the class named TfsServiceStatus. I decide the status of VSO based on the id attribute of the img tag, and then also get the text within the header (h1) and paragraph (p) tags to display it in the VS output window.

So the code for displaying the different icons is as below: if the img id is GREEN, VSO is up, and so on.


if (imageId == "GREEN")
// display green icon
else if (imageId == "YELLOW")
// display yellow icon
else if (imageId == "RED")
// display red icon
else
// display different icon when status cannot be identified


5 - Putting everything together

Now, only two more steps remain in our extension:

  • Polling the status page to get up-to-date status and,
  • Auto load our extension when Visual Studio is launched.

Polling the status page to get up to date status

To poll for status, we first get the interval defined in the Options page and then initialize the timer in the package’s Initialize method as below.


protected override void Initialize()
{
    base.Initialize();
    //Set to unknown icon till we find the status
    SetIcon(Resources.unknown);
    //get interval from options
    _options = (VSOStatusInspectorOptions)GetDialogPage(typeof(VSOStatusInspectorOptions));
    //call the timer code first without waiting for timer trigger
    OnTimerTick(null, null);
    //Set the timer
    var timer = new Timer();
    timer.Interval = TimeSpan.FromSeconds(_options.Interval).TotalMilliseconds;
    timer.Elapsed += OnTimerTick;
    timer.Start();
}


Auto load our extension when Visual Studio is launched

Finally, by default Visual Studio loads extensions on demand, or when a context they depend on is initialized. For example, when you click a menu item contributed by your extension, the extension is loaded on demand because the menu click requires it to be initialized. Similarly, if you perform an action that triggers a context change in which your extension operates (like loading a solution), all extensions depending on the SolutionExists context are loaded.

For our extension, we wanted it to always load, either when Visual Studio is opened in an empty environment (no solution) or when the user is working in a solution (solution exists). So we decorate our package class with two more attributes to set these contexts as below.


[ProvideAutoLoad(UIContextGuids80.NoSolution)]
[ProvideAutoLoad(UIContextGuids80.SolutionExists)]


Our extension is now always loaded by Visual Studio.

That's it for this post. You can download the complete source code from GitHub.

I hope you got a good overview of how every part of Visual Studio (the status bar, options window, and output window, for example) can be extended seamlessly using the Visual Studio SDK. So until next time, happy extending Visual Studio.

 


Web deployment & NAS Devices !!!!


Folks,

This is my first blog post; please feel free to comment if you need additional info on the content below. :)

In this post we will be focusing on issues that occur with NAS devices when we deploy using the Web Deploy API.

A deployment done with the Web Deploy APIs to a file share on a NAS device fails to write or update existing files.

It first deletes the file share content and then writes the new content, which was not expected.

Note: There is a -enableRule:DoNotDeleteRule switch available with MSDeploy, but in our case the customer was also deleting some folders through code, so it did not help us.

More information on DoNotDeleteRule is here.

===================================================================================

In one customer scenario similar to this, we found that the file share was on a NAS device (NetApp Storage Systems), and Web Deploy APIs were used to deploy content to the NAS device.

There was also a console application which uses the Web Deploy APIs; it is internally called from another ASP.NET website which has the user interface to deploy the content of one server to a file share.

(It was a custom-built application to deploy the content via ASP.NET code directly to the file share.)

 

Note: This will also fail if we are deploying to a file share on a NAS device using Visual Studio or from the command line using the Web Deploy tool.

With the NAS device, when we deploy, the content existing on the file share is first deleted and then the new content is freshly copied.

On a Windows Server 2003 file share, or any Windows file share, it was working fine, not deleting the content, and running as expected.

The problem started when they upgraded to a NAS device and decommissioned the Windows boxes.

 

 

Fix for this Issue :

====================================================================================

If the file system is NTFS, then Web Deploy will rely on the system to keep things sorted.  If it isn’t, then Web Deploy will sort it first. 

However if the system is not NTFS and for some reason it can’t discover this, then it will assume it’s NTFS. 

We learned from the Web Deploy dev team that Windows file shares return directory listings sorted by default, so content is updated correctly, but a NAS device's file listing is not sorted by default. We need to set a registry value to force Web Deploy to sort before it updates the files on the NAS share.

Run "dir" from a command line on the NAS to see if everything is in alphabetical order. If it isn't, then you need to force Web Deploy to do the sorting.

Before the registry change we see:

09/25/2014  03:00 PM    <DIR>          .
09/25/2014  03:00 PM    <DIR>          ..
09/25/2014  03:00 PM               829 About.aspx
09/25/2014  03:00 PM               171 ApplicationError.htm
09/25/2014  03:00 PM             1,072 Bundle.config
09/25/2014  03:00 PM               545 CheckSite.aspx
09/25/2014  03:00 PM             2,270 Default.aspx
09/25/2014  03:00 PM            32,038 favicon.ico
09/25/2014  03:00 PM               122 Global.asax
09/25/2014  03:00 PM             2,026 packages.config
09/25/2014  03:00 PM             3,507 Site.Master
09/25/2014  03:00 PM             4,632 Web.config
09/25/2014  03:00 PM             1,304 Web.Debug.config
09/25/2014  03:00 PM             1,365 Web.Release.config
09/25/2014  03:00 PM    <DIR>          bin
09/25/2014  03:00 PM    <DIR>          Content
09/25/2014  03:00 PM    <DIR>          Images
09/25/2014  03:00 PM    <DIR>          Scripts
              12 File(s)         49,881 bytes
              

 

To do this, you can create a DWORD called "AlwaysSortDirectories" and set it to 1 under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\IIS Extensions\MSDeploy\3 in the registry on the source machine (the server from where the deployment is happening), in our case the IIS server.
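If you prefer to script the change instead of editing the registry by hand, here is a minimal C# sketch; the registry path and value name are the ones from this post, the rest is just an illustration and should be run elevated on the source machine:

using Microsoft.Win32;

class SetMsDeploySortFlag
{
    static void Main()
    {
        // Force Web Deploy to sort directory listings before syncing.
        // Run as a 64-bit process so the write is not redirected to Wow6432Node.
        Registry.SetValue(
            @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\IIS Extensions\MSDeploy\3",
            "AlwaysSortDirectories",
            1,
            RegistryValueKind.DWord);
    }
}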

Doing the above resolved the issue.

After making the registry setting and restarting the box, we see the files sorted on the NAS:

 

09/25/2014  04:15 PM    <DIR>          .
09/25/2014  04:15 PM    <DIR>          ..
09/25/2014  03:00 PM               829 About.aspx
09/25/2014  03:00 PM               171 ApplicationError.htm
09/25/2014  04:15 PM    <DIR>          bin
09/25/2014  03:00 PM             1,072 Bundle.config
09/25/2014  03:00 PM               545 CheckSite.aspx
09/25/2014  04:15 PM    <DIR>          Content
09/25/2014  03:00 PM             2,270 Default.aspx
09/25/2014  03:00 PM            32,038 favicon.ico
09/25/2014  03:00 PM               122 Global.asax
09/25/2014  04:15 PM    <DIR>          Images
09/25/2014  03:00 PM             2,026 packages.config
09/25/2014  04:15 PM    <DIR>          Scripts
09/25/2014  03:00 PM             3,507 Site.Master
09/25/2014  03:00 PM             4,632 Web.config
09/25/2014  03:00 PM             1,304 Web.Debug.config
09/25/2014  03:00 PM             1,365 Web.Release.config
              12 File(s)         49,881 bytes
              

The DWORD registry change fixed the issue by making Web Deploy sort before the content is copied to the NAS device.

 

I hope this helps :)

 

Irfank

 

Error handling, part 5: an error infrastructure for Windows


<<Part 4

Overview

The first error infrastructure I built was for my Triceps project. Its description can be found in the Triceps manual. It has proven itself so convenient and useful that I wanted something similar on Windows. And that Windows implementation is what I want to describe here. Some of the features from Triceps got dropped, to reduce the time spent and because of a somewhat different focus (Triceps is really a programming language, so it needs good support for error nesting for its compilation error reports, but that's something that can be lived without in other uses, though I'd still like to add it to the Windows implementation at some point). Some features, such as localization and IDs, got added. Like I've said before, it's not everything I could think of but it's a good approximation, and it might get extended in the future.

I want to show other examples of Windows code as well, such as an easy way to write a Windows service. But that code uses the error reporting library, which makes it all the more important to present the error handling library first.

Even though a lot of my posts here are about PowerShell, I really write most of the code in C/C++. The library I'm about to show is in C++. I've been experimenting with better error reporting in PowerShell too, but that's a separate subject to be discussed later.

The source code can be found in the attached file. Since it has turned out that only one file can be attached to a post, I've combined both files ErrorHelpers.hpp and ErrorHelpers.cpp into ErrorHelpers.txt. You'd need to split them apart manually if you download it.

Basic errors

Let's start with some examples of how the library gets used. The most widely used class is Erref ("error reference"). It's really a C++11 counted reference with a few helper methods added:

class Erref: public std::shared_ptr<ErrorMsg>

One reason to add the methods to it is that the reference may contain a NULL (such as when there are no errors), and some methods are much more convenient when they can work on NULL references too. Another reason is the methods that build the chains of errors. Since these methods may need to change the original reference, they must apply to the reference and not to the error object stored within it. I'll tell more about the methods in a moment.

The typical usage goes like this:

void SomeMethod(arg1, arg2, Erref &err);
...
Erref err;
SomeMethod(x, y, err);
if (err) {
  // handle or report the error ...
}

If the method experiences an error, it leaves a reference to it in the Erref.

The actual error is represented with the class ErrorMsg. It contains:

    const Source *source_; // The source of this error. NULL means "Windows NT errors."
    DWORD code_; // The error code, convenient for the machine checking.
    std::wstring msg_; // The error message in a human-readable format.
    std::shared_ptr<ErrorMsg> chain_; // The chained error, or NULL.

The contents represent the ideas I've described in the previous installments. msg_ describes the error in a human-readable way. code_ is the machine-readable error code; the code has a meaning within the module that reported the error, and different modules may have overlapping code spaces. source_ represents the code space (or, if you prefer, "namespace") of the errors in a module. A module obviously can't cross the DLL boundaries, but nothing stops you from having multiple error sources in one DLL or one program. I'll tell more about the sources in a moment. Finally, the error messages can be chained, with the head of the chain providing the high-level description and more detail provided the deeper you go into the chain.

The ErrorMsg objects can be constructed directly, but that's mostly used by the internals of the implementation; normally the Source acts as a factory for the messages. The copying of message chains is normally done with the Erref method:

Erref err1, err2;
err2 = err1.copy(); 

Here we come back to the class ErrorMsg::Source. Each module that will report errors has a static ErrorMsg::Source object. Each source has a name, and optionally a GUID. You define it as:

ErrorMsg::Source MyErrorSource(L"MyModuleName", &MyModuleErrorGuid);

If you don't care about GUIDs, you can use NULL for the GUID pointer. Personally, I haven't found much use for the GUIDs, and the library doesn't use them in any way at the moment. They've been put there more as a placeholder that might become useful in the future, and so far they haven't. The basic Source lets you create non-localized error messages, which is convenient for things like simple debugging messages or as a last resort if you can't get the localization. The basic method that creates an error message uses printf()-like formatting:

Erref err = MyErrorSource.mkString(errorCode, L"some error message with numbers %d 0x%x", 1, 2);

It's also often necessary to create an error that details a system error code. And there is a special method for that:

Erref err = MyErrorSource.mkSystem(GetLastError(), INPUT_FILE_OPEN_FAILED, L"failed to open the input file \"%ls\"", fileName);

It takes two error codes: the system error returned by the Windows functions and the application-level code for the application-level explanation (that code is up to you to define), plus, as usual, the explanatory string created with the printf()-like formatting. Since there is space for only one error code in an ErrorMsg, you might wonder where the second error code goes. The answer is that this method creates not a single ErrorMsg but a chain of two of them. The first one contains the application-level explanation and code as usual. The second, chained one contains the system error code along with the error string for that code extracted from Windows. That system error string will be localized, as is normally done by Windows.

The ErrorMsg objects for the Windows system errors are a bit of a special case: they contain NULL in the source_ pointer. There is also a special case for the errno errors returned from the C stdlib; I'll describe it later.

Localized errors

Proper programs need to use localized error messages, and the subclass ErrorMsg::MuiSource helps with that. MUI is the Windows term for the subsystem of localized messages. You start by defining the messages in an .mc file that looks like this:

;//
;//  Status values are 32 bit values layed out as follows:
;//
;//   3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1
;//   1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
;//  +---+-+-------------------------+-------------------------------+
;//  |Sev|C|       Facility          |               Code            |
;//  +---+-+-------------------------+-------------------------------+
;//
;//  where
;//
;//      Sev - is the severity code
;//
;//          00 - Success
;//          01 - Informational
;//          10 - Warning
;//          11 - Error
;//
;//      C - is the Customer code flag
;//
;//      Facility - is the facility code
;//
;//      Code - is the facility's status code
;//

MessageIdTypedef=ULONG

SeverityNames=(
    Success=0x0:STATUS_SEVERITY_SUCCESS
    Informational=0x1:STATUS_SEVERITY_INFORMATIONAL
    Warning=0x2:STATUS_SEVERITY_WARNING
    Error=0x3:STATUS_SEVERITY_ERROR
)

FacilityNames=(
    MyFacilityName1=0x0001:MY_FACILITY_Name1
    MyFacilityName2=0x0002:MY_FACILITY_Name2
)

MessageId=0x0001
Facility=MyFacilityName1
Severity=Error
SymbolicName=MUI_INPUT_FILE_OPEN_FAILED
Language=English
Failed to open the input file  "%1!ls!".
.

There you define each message for each language you plan to support.

Alternatively, you can define a .man file and define the error strings there along with things like the ETW events that your program may send. Personally, I find the .mc files a lot more readable. Either way, you then compile the messages into binary form:

mc -h DirectoryForGeneratedHeaders -r DirectoryForBinaries MyMessages.mc

It would compile an .mc file, or a .man file, or (a little-known fact) both together but no more than one of each:

mc -h DirectoryForGeneratedHeaders -r DirectoryForBinaries MyMessages.mc MyManifest.man

This will produce DirectoryForGeneratedHeaders\MyMessages.h (and/or DirectoryForGeneratedHeaders\MyManifest.h) with the macro definitions for the message codes, and in DirectoryForBinaries it will place MyMessages.rc and MSG*.bin. The .bin files contain the actual messages for the various languages; the .rc file (for the resource compiler) contains the references to the .bin files. Then the compiled resource file (and the binary messages referenced by it) is fed into the linker to create the executable myprogram.exe and the localized files for it, myprogram.exe.mui. I'm a bit fuzzy on how exactly that happens; the build system takes care of it for me, so I'll leave the details of that step to the inquisitive reader.

Coming back from the detour into the creation of the message files, the MuiSource for reading the message files gets defined like this:

ErrorMsg::MuiSource MyMuiErrorSource(L"MyModuleName", &MyModuleErrorGuid);

Again, feel free to use NULL instead of the pointer to the GUID:

ErrorMsg::MuiSource MyMuiErrorSource(L"MyModuleName", NULL);

Then the ErrorMsg object can be created with:

Erref err = MyMuiErrorSource.mkMui(MUI_INPUT_FILE_OPEN_FAILED, fileName);

It's the same as working with plain strings, only the MUI message ID is used instead of the direct format string. The arguments follow it.

And the localized wrapper messages for the system errors are created like this:

Erref err = MyMuiErrorSource.mkMuiSystem(GetLastError(), MUI_INPUT_FILE_OPEN_FAILED, fileName);

The MUI error sources can't be used with mkString(), to avoid accidental misuse. You must always use the mkMui() methods with them.

Error chaining

How do you chain the errors together? Copying an example from above, if we have some function that receives an error report from some other function it calls, how would it add the extra information and return the enhanced error? The basic logic will go like this, only we need to fill in the bit for combining the errors:

void SomeOtherMethod(int x, int y, Erref &err)
{
  Erref nesterr;
  SomeMethod(x, y, nesterr);
  if (nesterr) {
    // enhance and report the error
    err = MyMuiErrorSource.mkMui(SOME_METHOD_FAILED, x, y);
    // here we need to chain nesterr to err
    return; // the error indication is in err
  }
  ...
}

The two Erref methods typically used to chain together two errors are wrap() and append(). Append() appends the second error to the first one:

err.append(nesterr);

Wrap() does the opposite; it prepends the second error to the existing chain:

nesterr.wrap(MyMuiErrorSource.mkMui(SOME_METHOD_FAILED, x, y));
err = nesterr;

Note that this way you don't really need the extra variable nesterr; you can call SomeMethod(x, y, err) directly, and then add the wrapping directly to err, as in the sketch below.
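
Filled in with wrap(), the earlier SomeOtherMethod() might look like this. This is only a sketch, reusing the same hypothetical SOME_METHOD_FAILED code and MyMuiErrorSource object from the examples above:

void SomeOtherMethod(int x, int y, Erref &err)
{
  SomeMethod(x, y, err);
  if (err) {
    // prepend the higher-level explanation to whatever SomeMethod() reported
    err.wrap(MyMuiErrorSource.mkMui(SOME_METHOD_FAILED, x, y));
    return; // the whole enhanced chain is now in err
  }
  // ... the rest of the normal processing ...
}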

Technically, the working of wrap() is a bit more complicated than append(): if the argument error is a chain itself, only its first error will be prepended, and the rest will be appended to the end of the chain. This is done because normally the argument of wrap() is expected to contain only one message, and if there are more messages, they would be explanations of some internal error, such as the library being unable to open the MUI file. Because of that, these explanations of the internal errors get placed at the end of the chain. There is also the method splice() that splices another object into the current one, but wrap() is more convenient to use. With splice() the same meaning can be achieved with:

err = MyMuiErrorSource.mkMui(SOME_METHOD_FAILED, x, y);
err.splice(nesterr);

All the methods append(), wrap(), and splice() can handle NULLs in both the argument and in the reference itself. A NULL argument will leave the current reference unchanged. A NULL in the current reference will have it changed to point to the same unchanged error chain as the argument.

Printing the errors

Now that you've built an error chain, what do you do with it? You can convert it to a string, and then do whatever you please with it:

wstring s = err->toString();

It works even if err is a NULL reference; in that case it returns an empty string. The string conversion adds a bit of indenting to the messages under the top one, making them easier to read. Each error will start with its source name and the error code in decimal and hex, followed by the text of the message. The errors are separated by \n. The source name for the system errors is printed as "NT".

Since the error chains can be pretty long, if you have to deal with writing to some limited-size buffers (such as constructing the ETW messages from the errors), you may need to break up a single error into multiple buffers. The method toLimitedString() helps with that:

Erref cont;
wstring s = err->toLimitedString(MY_LIMIT_IN_CHARS, cont);

It will take as many messages from the chain as fit within the limit and convert them to a string. If there are messages left over, the reference cont will point somewhere in the middle of the chain; it can be passed to the next call of toLimitedString() to produce the continuation buffers. After the whole chain is converted, cont will be set to NULL.
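
For illustration, here is a minimal sketch of draining a whole chain into pieces; MY_LIMIT_IN_CHARS and WriteOneBuffer() are hypothetical placeholders for your buffer limit and for whatever consumes each piece:

Erref cur = err;
while (cur) {
    Erref cont;
    std::wstring s = cur->toLimitedString(MY_LIMIT_IN_CHARS, cont);
    WriteOneBuffer(s); // e.g. emit one ETW event per piece
    cur = cont; // becomes NULL once the whole chain has been converted
}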

Other helper methods in Erref

Erref has a few more helper methods:

bool v = err.hasError();

Checks that the reference is not empty and the error code in it is not 0. The code 0 (ERROR_SUCCESS) can be used to indicate some warning or informational message without an error. In practice, this didn't work out too well. It's not flexible enough to indicate the warning or informational (or verbose or debug) level, and the presence of errors doesn't propagate all the way up through the chain (unlike the error handling in Triceps). And it just doesn't mesh with the concept of each message having its code, even if it's an informational message, so it doesn't coexist well with MUI. This is something to consider for the future; for now, when I need messages of different levels, I keep them in separate chains:

SomeFunction(x, y, Erref &err, Erref &warn, Erref &info);

The next helper method returns the code from the referenced error:

DWORD c = err.getCode();

If err is NULL, it returns 0 (ERROR_SUCCESS).

The next error in the chain can be read with:

Erref tail = err.getChain();

As usual, it's safe to call if err is NULL, and will return NULL in this case.

And if you know that the message contains a chained message of a known type in it (such as if it was created with mkSystem() or mkMuiSystem()), you can get the nested error code in one go:

DWORD c = err.getChainCode();
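
For example, a caller could special-case a missing input file reported through mkSystem(). This is just a sketch; INPUT_FILE_OPEN_FAILED is the hypothetical application-level code from the earlier example:

if (err.getCode() == INPUT_FILE_OPEN_FAILED
    && err.getChainCode() == ERROR_FILE_NOT_FOUND) {
    // the input file simply doesn't exist; fall back to the defaults
}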

I've already mentioned the copying:

Erref err2 = err.copy();

Copying comes in handy if you want to use an error in two places, chaining it to two chains. Just chaining it to two chains would cause the two chains to merge, with all kinds of confusing side effects. Instead make a copy for one chain, and use the original error for the other one. The copying is full-depth: it copies the whole chain growing from the error message.
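
A sketch of what that looks like; errA and errB stand for two hypothetical chains being built independently, and SOME_CODE is a made-up error code:

Erref detail = MyErrorSource.mkString(SOME_CODE, L"shared detail");
errA.append(detail.copy()); // chain A gets its own full-depth copy
errB.append(detail);        // chain B takes the original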

And for a simple test program, the following method comes in handy:

err.printAndExitOnError();

If the code in the error is not 0, this method converts it to string, prints to stdout, and exits with the value 1.

Errno errors

The errors from the C standard library (errno) are completely independent from the normal Windows errors. The errno values reuse the same numbers with completely different meanings, and their translation to strings is also different. This is where the concept of different namespaces as defined by the error sources becomes useful. A special pre-defined source ErrnoSource (with the name "Errno") can be used to report the errno errors. There are also a couple of helper static methods in the class ErrorMsg:

static std::shared_ptr<ErrorMsg> mkErrno(DWORD code);
static std::shared_ptr<ErrorMsg> mkErrno(); // calls _get_errno() to get the code

They create the errors from the ErrnoSource and include the human-readable text of the error. The function _get_errno() is the nicer thread-safe way to get the value of the errno for the current thread. The typical use goes like this:

if ( (f = fopen(filename, "rt")) == NULL) {
  err = ErrorMsg::mkErrno();
  err.wrap(MyMuiErrorSource.mkMui(MUI_INPUT_FILE_OPEN_FAILED, filename));
  return;
}

Internal errors

The error subsystem itself may experience errors. For example, if the MUI file is not present, it won't be able to print the MUI messages. In this case the original error will be left with the proper error code and source but an empty text, and the explanation of the internal error will be chained to it. The internal errors are reported on a private error source, with the name "ErrorMsg".

Other methods of ErrorMsg

If you want to define some wrappers for the ErrorMsg creation, you would need to use varargs, and then pass the va_list to the low-level ErrorMsg factory methods. They are:

mkStringVa()
mkSystemVa()
mkMuiVa()
mkMuiSystemVa()

I won't describe them here in detail; just know that they exist if you need them. A sketch of one such wrapper follows.
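
As an illustration only, a wrapper over mkStringVa() might look like the sketch below. It assumes that mkStringVa() takes the error code, the format string, and the va_list in that order, which you should verify against ErrorHelpers.hpp:

// A hypothetical convenience wrapper forwarding varargs to mkStringVa().
// MyErrorSource is the same static source object defined earlier.
std::shared_ptr<ErrorMsg> MyError(DWORD code, const WCHAR *fmt, ...)
{
    va_list args;
    va_start(args, fmt);
    std::shared_ptr<ErrorMsg> err = MyErrorSource.mkStringVa(code, fmt, args);
    va_end(args);
    return err;
}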

Helper functions for formatted printing to C++ strings

I'm very surprised that we don't have standard functions that work like printf(), only producing a C++ string. The <iostream> formatted output outright sucks. So everywhere I go, I end up writing a version of printf() that prints to a C++ string. The error library uses them, so I've got them included, and you can use them directly as well:

// Like [v]sprintf() but returns a string with the result.
std::wstring __cdecl wstrprintf(
    _In_z_ _Printf_format_string_ const WCHAR *fmt,
    ...
    );
std::wstring __cdecl vwstrprintf(
    _In_z_ const WCHAR *fmt,
    __in va_list args);

// Like wstrprintf, only appends to an existing string object
// instead of making a new one.
void __cdecl wstrAppendF(
    __inout std::wstring &dest, // destination string to append to
    _In_z_ _Printf_format_string_ const WCHAR *fmt,
    ...
    );
void __cdecl vwstrAppendF(
    __inout std::wstring &dest, // destination string to append to
    _In_z_ const WCHAR *fmt,
    __in va_list args);

An interesting point of their implementation on Windows is that, unlike the standard sprintf() family, _vsnwprintf_s() doesn't return the needed buffer size when the string is longer than the available buffer. Instead, the buffer size has to be pre-calculated with _vscwprintf().
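
To show the idea, here is a sketch of how vwstrprintf() can be implemented with that two-pass approach (an illustration of the technique, not necessarily the exact code in ErrorHelpers.cpp; it needs <vector>, <cstdarg>, <stdio.h> and the Windows headers):

std::wstring __cdecl vwstrprintf(
    const WCHAR *fmt,
    va_list args)
{
    va_list args2;
    va_copy(args2, args); // keep the original list intact for the second pass
    int len = _vscwprintf(fmt, args2); // pre-calculate the needed length
    va_end(args2);
    if (len < 0)
        return std::wstring(); // formatting error, return an empty string

    std::vector<WCHAR> buf(len + 1);
    _vsnwprintf_s(&buf[0], buf.size(), _TRUNCATE, fmt, args);
    return std::wstring(&buf[0], len);
}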

The internals of reading the MUI strings

Reading the localized strings from the .mui file is another somewhat interesting subject. I haven't found a ready recipe anywhere and had to piece it together by myself.

It starts by getting the handle of the loadable module (DLL or EXE) where the strings  are defined:

        if (!GetModuleHandleExW(
            GET_MODULE_HANDLE_EX_FLAG_FROM_ADDRESS
            | GET_MODULE_HANDLE_EX_FLAG_UNCHANGED_REFCOUNT,
            anchor,
            &m)
        ) {
            // handle the error ... 
        }

        muiModule_ = m;

Here anchor is any address within the DLL's code or static data. I use the address of the MuiSource object for this purpose, since it obviously would be defined in the same module where the strings are defined.

Then this handle can be fed to FormatMessageW:

        DWORD res = FormatMessageW(
            FORMAT_MESSAGE_ALLOCATE_BUFFER
            | FORMAT_MESSAGE_FROM_HMODULE,
            source->muiModule_,
            code,
            MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT),
            (LPWSTR)&buf,
            0, &args);

By the way, there is another special kind of error that could use its own handling: the WMI errors. The errors from the WMI subsystem can be translated by reading them from the module C:\Windows\System32\wbem\wmiutils.dll. The module handle for it can be obtained with LoadMUILibraryW(). But I haven't yet gotten around to adding support for it similar to errno.

<<Part 4 (To be continued...) 

Use Azure custom routes to enable KMS activation with forced tunneling


Previously, if customers enabled forced tunneling on their subnets, OS activation for versions prior to Windows Server 2012 R2 would fail because the VMs could not reach the Azure KMS server through their cloud service VIP. However, thanks to the newly released Azure custom route feature, this is no longer the case. The custom route feature can be used to route activation traffic to the Azure KMS server via the cloud service public VIP, which then enables activation to succeed.

To configure this, use the Set-AzureRoute command (this is only possible with Azure PowerShell version 0.9.1 and higher) to add an entry for the prefix 23.102.135.246/32 and specify the next hop type as "Internet".

Example:

PS C:\> $r = Get-AzureRouteTable -Name "WUSForcedTunnelRouteTable"
PS C:\> Set-AzureRoute -RouteName "To KMS" -AddressPrefix 23.102.135.246/32 -NextHopType Internet -RouteTable $r

Name     : WUSForcedTunnelRouteTable
Location : West US
Label    : Routing Table for Forced Tunneling
Routes   :
          Name                 Address Prefix    Next hop type        Next hop IP address
          ----                 --------------    -------------        -------------------
          defaultroute         0.0.0.0/0         VPNGateway
          to kms               23.102.135.246/32 Internet

 

The "to kms" entry above is the newly added route.

 As you can see in my example VM, the Guest OS activates successfully.

  

We can also create TCP connections to the KMS server:

C:\Users\dave>psping kms.core.windows.net:1688

PsPing v2.01 - PsPing - ping, latency, bandwidth measurement utility
Copyright (C) 2012-2014 Mark Russinovich
Sysinternals - www.sysinternals.com
 
TCP connect to 23.102.135.246:1688:
5 iterations (warmup 1) connecting test:
Connecting to 23.102.135.246:1688 (warmup): 39.32ms
Connecting to 23.102.135.246:1688: 35.94ms
Connecting to 23.102.135.246:1688: 36.21ms
Connecting to 23.102.135.246:1688: 36.23ms
Connecting to 23.102.135.246:1688: 38.57ms
 
TCP connect statistics for 23.102.135.246:1688:
  Sent = 4, Received = 4, Lost = 0 (0% loss),
  Minimum = 35.94ms, Maximum = 38.57ms, Average = 36.74ms 

As expected, attempting to connect to other Internet resources fails due to forced tunneling (or may go out via an on-prem gateway):

C:\Users\dave>psping www.msn.com:80
PsPing v2.01 - PsPing - ping, latency, bandwidth measurement utility
Copyright (C) 2012-2014 Mark Russinovich
Sysinternals - www.sysinternals.com
 
TCP connect to 204.79.197.203:80:
5 iterations (warmup 1) connecting test:
Connecting to 204.79.197.203:80 (warmup): This operation returned because the timeout period expired.
Connecting to 204.79.197.203:80: This operation returned because the timeout period expired.
Connecting to 204.79.197.203:80: This operation returned because the timeout period expired.
Connecting to 204.79.197.203:80: This operation returned because the timeout period expired.
Connecting to 204.79.197.203:80: This operation returned because the timeout period expired.
 
TCP connect statistics for 204.79.197.203:80:
  Sent = 4, Received = 0, Lost = 4 (100% loss),
  Minimum = 0.00ms, Maximum = 0.00ms, Average = 0.00ms

Dynamics CRM Online 2015 Update 1: New features for rollup fields and calculated fields


Hello, everyone.

This time I'll introduce the new rollup field and calculated field features added in Dynamics CRM Online 2015 Update 1.

New features for rollup fields

Rolling up related activity records

Previously, if you wanted to aggregate the activities related to a record, you had to aggregate phone calls, emails, tasks, and so on separately and then total the results in a calculated field. With this release, they can be aggregated all at once.

You can use this with the following steps.

1. Add a new field on any entity. Here, let's add a field to the Account entity.

image

2. For "Data Type" select "Whole Number", for "Field Type" select "Rollup", and click the "Edit" button.

image

3. Under Related, select "Activities (related)".

image

4. Configure whatever aggregation you want and save.

New features for calculated fields

Support for the NOW() date function

With this release, you can now get the date at the time the calculation is run.

Support for the DIFF() functions

You can now get the difference between two date values at the following granularities:

- DIFFMINUTES(): minutes
- DIFFHOURS(): hours
- DIFFDAYS(): days
- DIFFWEEKS(): weeks
- DIFFMONTHS(): months
- DIFFYEARS(): years

Let's combine these right away and create a field.

This time we'll calculate the time elapsed since a case was created.

1. Add a field from the Case entity customizations.

image

2. Give it any display name and name you like. For "Data Type" select "Whole Number", for "Field Type" select "Calculated", and click "Edit".

image

3. In the condition, select cases whose status is Researching or In Progress.

image

4. Add the formula below to the action.

image

5. Click "Save and Close".

6. Place the new field anywhere on the form and publish the customization.

7. Open any case record and confirm that the intended result appears.

image

Summary

The features released this time make it possible to implement scenarios that couldn't be implemented before. Personally, I'm especially happy about the NOW() function and the DIFF family of functions. Please give them a try!

- 中村 憲一郎


Seven COOL features we noticed on Visual Studio Online today


We dogfood Visual Studio Online (VSO) 24x7, explore as we evolve, innovate, and continuously fine-tune our process. Our recent Managing agile open-source software projects with Microsoft Visual Studio Online eBook is already dated, thanks to the cool features that are released as part of the regular service updates.

Here are our top seven (7) features we noted today, while working on the setup of the VSO Extensibility Ecosystem, App Sample and Guidance project, which we will cover in more detail in one of the upcoming posts.

  1. After creating a new team project, you can fast-track to your Kanban board to get organised, or to your source control system to manage your code.
    image
  2. Your backlog visually indicates which work items are managed by your team or by another team. For example, 10516-10528 are owned by another team and cannot be reordered on the ALM team board.
    image
    If we show the area path, this becomes even more evident.
    image
  3. The Value Area allows us to define a business or architecture (runway) value for each work item, as used by the Scaled Agile Framework (SAFe).
  4. We can opt in to the Epic backlog level and stop decorating some of our Features with an Epic tag to simulate an Epic.
  5. Board columns can be customised, allowing us to introduce our favourite “in flight” analogy for active projects.
    image
  6. Definition of Done (DoD) can be specified for each column and instead of searching for the DoD in a document or work item, you simply click image.
  7. Part of the customisation is the ability to split columns into Doing and Done.

image

For more information on these and other productivity features, please refer to Visual Studio Online, Get Started, and Visual Studio Online Updates.

Win10 apps in .NET - common library issues


This is Part 2 of my "VB Win10 Apps" series.

  • Part 1: getting started
  • > Part 2: issues with common libraries - JSON.Net, SignalR, SharpDX, SQLite, LiveSDK
  • Part 3: ... (please let me know what you'd like me to write about next!)

 

Json.NET

In VS2015 RC, at runtime, when you invoke JsonConvert.SerializeObject or JsonConvert.DeserializeObject, you might get a FileNotFoundException saying that it can't load the file "System.Runtime.Serialization".

Fix1: use the "latest stable 6.0.8" version of Newtonsoft.Json, not the preview 7.* version.

Explanation: the 7.* versions of Newtonsoft.Json are using a new way of structuring their NuGet package. It's a fine way, and will work in VS2015 RTM, but just doesn't work yet in RC. The exact text of the error message is

An exception of type 'System.IO.FileNotFoundException' occurred in Newtonsoft.Json.dll but was not handled in user code. Additional information: Could not load file or assembly 'System.Runtime.Serialization, Version=2.0.5.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e, Retargetable=Yes' or one of its components

 

Fix2: if you're trying to serialize/deserialize a type that's defined in a PCL, then change the PCL targets to exclude Silverlight5, and to only use .NET >= 4.5.1

Explanation: If your PCL targets .NET4.0.3 or older, or it targets Silverlight5, then it is a so-called "old style PCL". These will work fine in UWP apps when we get to VS2015 RTM, but they're not quite fully working in RC. You'll find this for instance if you try to reflect upon or serialize a class that's defined in such an old style PCL and has <DataContract> on it. The exact text of the error message is similar to the above, but comes from a different assembly:

An exception of type 'System.IO.FileNotFoundException' occurred in mscorlib.ni.dll but was not handled in user code. Additional information: Could not load file or assembly 'System.Runtime.Serialization, Version=2.0.5.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e, Retargetable=Yes' or one of its components

 

 

SignalR

SignalR Client is currently an "old-style PCL" and suffers from the same problem as above. The fix is the same as Fix2 above: rebuild SignalR and remove the Silverlight target. This is a difficult task :( but MVP Joost van Schaik has helpfully provided detailed instructions:

http://dotnetbyexample.blogspot.it/2015/05/getting-signalr-clients-to-work-on.html

 

 

SharpDX

SharpDX is a set of managed wrappers to use DirectX from within VB and C# apps. It's what underpins other high-level games libraries like MonoGame.

SharpDX works fine in UWP apps in VS2015 RC. It's a NuGet package: you add a reference to it via Project References > Manage NuGetReferences.

  • SharpDX.Toolkit doesn't install on VS2015 RC for UWP apps. I'm not sure if it's going to be able to install in VS2015 RTM. Note that the SharpDX toolkit is deprecated and no longer exists in SharpDX preview-3.0 going forwards.

 

 

SQLite

SQLite is the best way to have a local database in your UWP app. It works fine in VS2015 RC.

  1. Install the "pre-release Sqlite for UAP apps" from the Sqlite download page http://www.sqlite.org/download.html. This is only needed once per developer machine.
  2. Within your project, right click on References > AddReference > WindowsUniversal > Extensions > SQLite for Universal App Platform

That installs the low-level Sqlite native library. (Don't worry about the current build-time warning "No SDKs found".) On top of that you need a VB-accessible wrapper for it. There are a few common wrappers:

 

Sqlite-net. This NuGet wrapper is currently delivered as C#-source-code only. So you'll need to create a trivial C# PCL to call it from VB...

  1. Right click on Solution > Add > NewProject > C# > Windows > Windows8 > Class Library (Portable for 8.1 Universal).
  2. Within that C# PCL, right click on References > Manage Nuget References > sqlite-net
  3. Within that C# PCL, go to Project > Properties > Build. Change the dropdown at the top to "Configuration: All Configurations", and inside "Conditional compilation symbols" type NETFX_CORE
  4. Within your app, right click on References > AddReference > Projects > Solution, and add a reference to the PCL you just created.

Here's some example sqlite-net code. I've shown two techniques to query the database, using "db.QueryAsync" and "db.Table.Where". The second db.Table.Where form currently has a limitation in sqlite-net, which doesn't understand string equality tests from VB (until they accept a pull-request I sent them!)

Async Function TestSqlAsync() As Task
    Dim db As New SQLiteAsyncConnection("db_name_1")
    Await db.CreateTableAsync(Of Customer)
    Await db.InsertAsync(New Customer With {.Name = "Fred", .Age = 41})
    Dim results1 = Await db.QueryAsync(Of Customer)("select * from Customer")
    Dim results2 = Await db.Table(Of Customer).Where(Function(c) c.Age > 20).ToListAsync()
End Function

Public Class Customer
    <PrimaryKey, AutoIncrement> Public Property Id As Integer
    Public Property Name As String
    Public Property Age As Integer
End Class

 

Sqlite.WinRT.UAP - I've not personally used this NuGet package and don't know what it's like

 

LiveSDK

The LiveSDK hasn't yet been updated for UWP. Also in VS2015 RC you can't yet right-click on your project > Store > Associate App With Store. So there's not much you could do anyway.

 

 

IAsyncAction

There's an error message that a few people have encountered:

The type 'IAsyncOperation<>' is defined in an assembly that is not referenced. You must add a reference to assembly 'Windows,Version=255.255.255.255, Culture=neutral, PublicKeyToken=null, ContentType=WindowsRuntime'.

This error should never occur! It's usually a sign that you tried to upgrade from an earlier version of VS! If you get this error on a clean install of VS2015 RC, please let me know urgently. Thanks.

After applying a security update, the Lync 2010 display becomes corrupted


Hello, this is the Japan Lync support team.
A security update was recently published.

https://support.microsoft.com/ja-jp/kb/3057110

Among its contents, a problem has been reported where applying the update to Lync 2010 breaks the display as shown below.
(There are other places where the display breaks as well.)

Microsoft recognizes this as a product defect and is currently working on a fix.
We will report again once the release timing has been decided, but we wanted to let you know right away.

We apologize for the inconvenience, and thank you for your understanding.

Windows 10 - App to App communication in Universal Windows Applications


Windows 10 provides many techniques that developers can use for inter-app communication. Publisher Cache Folders is one such technique. The idea is that a publisher can have multiple applications in the Windows Store, and they may share common data, settings, and other artifacts that could be leveraged across multiple applications.

While using Publisher Cache Folders in your application, here are a few things to keep in mind:

- The apps reading, writing, or otherwise using Publisher Cache Folders need to be from the same publisher.

- The shared storage location is provisioned automatically when the first app for a given publisher is installed, and it is removed automatically when the last app for that publisher is uninstalled.

- You are responsible for data management and versioning inside the Publisher Cache Folder.

- The Publisher Cache Folder can also live on an SD card.

Implementing Publisher Cache Folders is really easy. In the following example I have implemented a simple read/write of a text file living inside the Publisher Cache Folder from two different applications.

As a first step we will declare the Publisher Cache Folder in the application manifest of each application wanting to use the folder.

<Extensions>
  <Extension Category="windows.publisherCacheFolders">
    <PublisherCacheFolders>
      <Folder Name="CommonFolder"/>
    </PublisherCacheFolders>
  </Extension>
</Extensions>

 

Now the folder is available to be used from different applications. Writing to the folder from App1:

async void WritetoPublisherFolder()
{
    StorageFolder SharedFolder = Windows.Storage.ApplicationData.Current.GetPublisherCacheFolder("CommonFolder");
    StorageFile newFile = await SharedFolder.CreateFileAsync("SharedFile.txt", CreationCollisionOption.OpenIfExists);
    await FileIO.WriteTextAsync(newFile, textBox.Text.ToString());
}

Reading from the folder from App2:

async void ReadFromSharedFolder()
{
    StorageFolder SharedFolder = Windows.Storage.ApplicationData.Current.GetPublisherCacheFolder("CommonFolder");
    StorageFile newFile = await SharedFolder.GetFileAsync("SharedFile.txt");
    var text = await FileIO.ReadTextAsync(newFile);
    textBlock.Text = text;
}

Publisher Cache Folders open up many scenarios for developers who build multiple applications and share data among those applications.

Major updates to the Azure SQL Database preview announced at BUILD

This post is a translation of Azure SQL Database previews major updates for BUILD, published on April 29. Azure SQL Database has been updated with new support for SaaS and enterprise applications. One study found that customers using Azure SQL Database achieved a 406% return on investment, and with this update we have taken another step forward in the effort to make Azure the best platform for data and applications. Last year (in English), Microsoft introduced many new features (in English) and technologies in Azure SQL Database to improve its security and reliability and meet the needs of modern mission-critical cloud applications. The changes made this time improve security, performance, and productivity, and add support for building SaaS applications. As a result of these updates, Azure SQL...(read more)