
Small Basic Guru – Winning Articles for July 2017


Welcome to the July 2017 Small Basic Guru awards!

Nonki's article about graduating to Visual Studio 2017 takes the top spot this month!

Small Basic Technical Guru - July 2017 
Gold Award Winner Nonki Takahashi Small Basic: Instructions to Graduate and Debug with Visual Studio 2017 SYEDSHANU: "Wow this is great post Nonki, on Small Basic. This article has very detailed explanation from how to install Visual Studio 2017 with working steps and outputs"
Silver Award Winner Nonki Takahashi Small Basic Sample: Line Editor SYEDSHANU: "Simple but very useful post .If introduction and code part explanations added it will be more great post."
Bronze Award Winner Nonki Takahashi Small Basic: Overflow SYEDSHANU: "All you can learn about Small Basic Overflow. One more helpful post from Nonki."

 

Big thanks to Nonki for the great content!

You can find the other technology area winners, here, on the Wiki Ninjas blog!

 

Small and Basically awesome,

- Ninja Ed


Scaling SharePoint 2013 Search databases


A few months ago, I had the need to scale out a SharePoint 2013 Search topology.  TechNet has a very detailed collection of articles regarding search architecture, scaling, etc., which I recommend reviewing as the first step in the process of any search architecture changes.  The scaling "landing page" can be found here:  https://technet.microsoft.com/en-us/library/dn727115.aspx.

As part of my scale-out, I needed to add more Analytics Reporting and Crawl databases.  It was at this point that I simply could not find documentation on how to add Analytics Reporting databases to an existing topology.  To save you the same headache, this post covers adding the databases mentioned above.

To scale out the crawl databases...
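A minimal sketch of the usual PowerShell approach for adding an additional crawl database, assuming a single Search service application (the database and server names are illustrative, and these are not the exact steps from the original walkthrough):

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$ssa = Get-SPEnterpriseSearchServiceApplication
New-SPEnterpriseSearchCrawlDatabase -SearchApplication $ssa `
    -DatabaseName "SP2013_Search_CrawlDB2" `
    -DatabaseServer "SQL01"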

To scale out the analytics reporting databases (and redistribute the data)...

Hope that helps!

Automating Lab Environments with LabInaBox on Prem or in the Cloud.


Today, building out repeatable demo/lab environments quickly has become a necessity.  Earlier this year I released LabInaBox, which provided a quick way to build out lab environments on a Windows 10 machine using Hyper-V.  As more and more customers look at Azure, I wanted to provide a similar experience for cloud lab deployment.  Today I will walk through the new GUI interface that ships with this release and discuss how you can leverage LabInaBox to build test environments in Hyper-V or in Azure.

Install LabInaBox

First things first, we need to install LabInaBox.  The installation MSI can be found on GitHub here.  Simply download the MSI to your machine.  After downloading, you will need to unblock the file before you can run the MSI.  You can do this by right-clicking the file, selecting Properties, checking the Unblock box, and clicking OK.
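If you prefer to do the unblock from PowerShell, the same result can be achieved with Unblock-File (the file name below is illustrative):

Unblock-File -Path .\LabInaBox.msi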

After running the installer you will be prompted for the installation folder; by default it will install to C:\LabInaBox.  The installation path can be modified; however, I recommend not placing it in the Program Files folder, because the application creates files in this directory, which would require administrative privileges.  After setup completes you should have a folder structure with supporting files that resembles the below:

At this point we have all the files required to move forward with the Azure deployment.  If we also want to leverage the Hyper-V lab scenario, we need to provide a sysprepped Windows Server 2016 image in the ParentVMDisks folder.  Due to licensing and the size of the file, I was not able to package this into the installation.  If you need further instructions on creating the sysprepped image, details can be found in the ReadMe under Installation.

LabInaBox Layout

LabInaBox has a GUI configuration utility which helps you generate the files you will use to build out your labs.  To explain the layout, I will walk through the steps for creating these files and then go into the detail of what the files are and how to leverage them.  So let's get started by launching LabInaBox.  Once you launch it, you will be presented with the screen below.

You have the option of starting from scratch, selecting which type of lab you want to build and filling out the appropriate fields; however, for simplicity I have provided templates which you can open, modify, and then use to generate your lab files.  We will walk through the Azure one first.  To use it, click File..Open and select DefaultAzureConfig.json.

Azure LabInaBox

Generate Azure Lab Automation Scripts

Below you see the values which have been pre-populated from the template.  Notice there are three values which are either blank or contain XXXX, for which you will need to provide the details: Azure Login Certificate Name, Azure Application ID, and Azure Tenant ID.  The Azure lab scripts require you to have created a service principal with a certificate so that you can log into Azure from your machine automatically, without user input.  If you have not read about and completed this step, please refer to my previous blog where I walk you through it.

Once you provide the three required values discussed above, you could click Finish and the files would be generated with the values provided.  However, you may want to change a few other things first, so let's walk through some of these items.

Lab Machine Prefix is used to create a folder structure on disk where the scripts are stored.  You will also see that this same prefix is what I chose to pass for my Domain Name, and I prepended it to each machine name.  I prefer this because, when looking at a machine, I know which lab it is tied to.  Other items you will likely want to change are the Domain Admin Username and Password and the VMUserName and Password.  Domain Admin is, as it sounds, the domain administrator for the domain, and VMUserName is the local administrator on the machine.  Azure Publisher, Azure Offer, Azure SKU, and OS are all values which determine which Azure template is used to build the VM.

Three additional items of interest which you may not immediately follow are Azure Automation Account, Azure Automation Resource Group, and DSC Configuration.  Azure Automation Account is the name of the Automation account which will be generated by the PowerShell scripts, and it will be created in the Azure Automation Resource Group.  All DSC scripts, modules, etc. will be uploaded to this location.  A separate resource group is leveraged here so that, if we have a nightly destroy script to remove resource groups, we can easily filter out our Automation account.  DSC Configuration is the final item of interest: when you install LabInaBox, two configurations are provided, DomainConfig and SQLAzureConfig.  Additional ones can be added over time as the community sees the need.

Make your changes and click Finish.  Upon clicking Finish, the application will close and open Windows Explorer showing the files it has generated.  You should see something similar to below.  Notice the files are created in the LabConfig directory under TST, which was our lab prefix, and each file is prefaced with that prefix.

What are these generated files, and how do I leverage them?

Notice we have three files here: one JSON file and two .ps1 files.  Let's take a look at the JSON file first.  I am using Visual Studio Code to analyze the file.  Below we see the contents of the JSON file.  Notice that I did not provide the Certificate Subject, Application ID, or Tenant ID, but I was still able to generate the scripts.  If I ran the scripts now they would error, because these values are required to connect to Azure.  Throughout the JSON file we see all the information we entered in the GUI tool, plus some additional values which are generated automatically.

There is nothing we need to change here; I just wanted to show that this file is generated and that the PowerShell scripts consume these values.  We could modify or create this file manually if we wanted to, or we could open the file in the tool, modify it, and click Finish, and the scripts will be updated.

The next file listed is TST_Create.ps1; its contents are shown below:

The first item in the script loads a custom module, LabInaBox.psm1.  You will notice it uses a WarningAction of SilentlyContinue.  This suppresses the warning message that is generated because Login-AzureCert doesn't comply with PowerShell's verb naming standards.  Next we see a series of cmdlet calls which build out our Azure lab environment and configure it.  Notice each cmdlet takes a configuration parameter.  Configuration is the JSON file we looked at previously.  Within each cmdlet this JSON file is converted into a PSCustomObject so that each property can be referenced with dot notation.  A quick rundown of each cmdlet and what it is used for is listed below for reference:

New-AzureLab

  • Creates ResourceGroup for Lab
  • Creates Azure Virtual Network
  • Creates Each Azure VM requested

Publish-AzureDSModules

  • Creates ResourceGroup for Automation Account
  • Creates Automation Account
  • Creates Storage Account
  • Creates StorageContainer
  • Uploads DSC modules from the install location (e.g., C:\LabInaBox\DSCResources\AzureAutomation) to the container
  • Creates AzureAutomationModule from uploaded file

New-AzureDSCConfigurations

  • Creates Credentials in Automation account
  • Imports DSC Configurations to AutomationAccount

Compile-AzureDSCConfiguration

  • Compiles each configuration for each node requested

Set-AzureDSCNodeConfigurations

  • Registers each Node with AzureAutomation DSC
  • Applies configuration requested
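Putting the pieces together, TST_Create.ps1 presumably looks something like the sketch below; the module and configuration paths are illustrative, and the exact parameter handling may differ from the shipped script:

# Sketch of the generated create script (paths are illustrative).
Import-Module 'C:\LabInaBox\LabInaBox.psm1' -WarningAction SilentlyContinue

# The JSON file generated by the GUI; each cmdlet converts it to a PSCustomObject internally.
$configuration = 'C:\LabInaBox\LabConfig\TST\TST.json'

New-AzureLab -configuration $configuration
Publish-AzureDSModules -configuration $configuration
New-AzureDSCConfigurations -configuration $configuration
Compile-AzureDSCConfiguration -configuration $configuration
Set-AzureDSCNodeConfigurations -configuration $configuration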

The final file, TST_Remove.ps1, is listed below.

 

Below is a description of what each of these cmdlets does:

Remove-AzureDSCNodeConfiguration

  • Removes Configuration from server
  • Unregisters server from Azure Automation DSC

Remove-AzureLab

  • Removes the lab resource group and all items in that resource group
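As with the create script, TST_Remove.ps1 presumably boils down to something like the following sketch (paths are illustrative):

Import-Module 'C:\LabInaBox\LabInaBox.psm1' -WarningAction SilentlyContinue
$configuration = 'C:\LabInaBox\LabConfig\TST\TST.json'

Remove-AzureDSCNodeConfiguration -configuration $configuration   # unregister nodes from Azure Automation DSC
Remove-AzureLab -configuration $configuration                    # delete the lab resource group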

Hyper-V LabInaBox

Similarly, you can generate a lab on-premises on a Hyper-V server.  In this case, as mentioned before, we need Windows 10 as well as a sysprepped image before we can move on.  Once we have these, we can again click File..Open and select Default_Private_Config.json.  You should now be presented with a screen that looks like the following:

Similar to the Azure LabInaBox, you may want to change a few properties here.  To start, Lab Machine Prefix, as in the Azure case, drives how the folder structure for the scripts is created.  Unlike the Azure scenario, this prefix is prepended by default to all VMs that are created; because we do not have a resource group to separate them, we need to ensure we don't overlap VMs.  Moving down the list, in this example I name the switch based on the subnet I am using for the lab; this is not required, but it is something I found easier to identify later.  As you move down the list there are folder paths to set.  These allow you to move things around if you have more than one drive in your machine and want to segregate the I/O workload.  Servers is just a comma-separated list of server names you want created; keep in mind the lab prefix will be prepended to each name.  ISO selection allows you to mount ISOs as drives on the domain controller so you can install additional software later.  Once you select the ISO Folder Path, the values will populate in the drop-down.

If we stop here, in this example we will get three machines: one domain controller named PRV-DC and two Windows servers named PRV-SQL1 and PRV-SQL2.  Our domain controller will have DHCP, DNS, AD, and certificate services configured, so we are ready to begin playing with DSC configurations.  If we are looking for additional functionality, we look at the last four boxes.  Developer Machine will create another Windows machine, PRV-DEV, which can be leveraged for any scripting or development.  DSC Central will create an additional VM named PRV-DSC.  PRV-DSC will have a DSC configuration applied to it which installs SQL Server and sets up the DSC Data Driven Deployment solution found on GitHub.  Clicking Finish, as before, will present us with the scripts which are generated.

Notice that with Hyper-V several additional scripts are generated.  With Hyper-V we have scripts to start, stop, update, create, remove, and checkpoint a lab.  The Hyper-V scripts are set up the same as the Azure LabInaBox ones, in that each cmdlet also takes configuration as a parameter, where configuration is the JSON file.

Wrap-Up

All of the cmdlet code can be found in the single module LabInaBox.psm1.  In general, my approach here was to be able to easily and quickly generate a lab in Hyper-V or Azure.  With that being said, all of the PowerShell cmdlets can be used as examples of how you could automate workloads in your own environment.  Please feel free to comment or contribute to the solution on GitHub.

Happy Automating till Next Time!

TFS 2018 RC1 is available


Team Foundation Server 2018 RC1 is now available for download.  This is the first available build for what we have been calling TFS V.next.  And, as you can tell from the title of this post, the official name will be TFS 2018.  Here are all the important links:

There's a lot I want to say about this release...

Like all of our Release Candidates, this is a "go-live" release, meaning that it has been tested and is ready to be used in a production environment.  At the same time, it's not done and there's a much higher chance you'll hit a bug than with a more final release.  However, we've been using it in production and it's reasonably stable.  On our current trajectory, we'll ship an RC2 in a month or so and then a final release at some point after that.  Precise timing depends somewhat on the feedback we get along the way.  This RC *is* localized, though you will find some English strings.  We will have complete localization by the time we release.

This is a "major" release of TFS.  The primary thing that means, in these days of continuous delivery, is that there are breaking changes and system-requirement updates.  Our customer promise for our Updates is that they are the same as RTM, just better.  However, our roughly annual "major releases" are our opportunity to make bigger changes that might be more disruptive.  Here's a link that includes the latest TFS system requirements and another that focuses on the requirements changes between TFS 2017 and TFS 2018.

Keeping in mind that the last major feature release was TFS 2017 Update 2 (released only just over a month ago), this TFS 2018 release candidate has a bunch of VERY nice improvements.  You can read the release notes for details but let me summarize the highlights...

A new Release Definition Editor

We've done a major update of our release management UI.  The highlight for me is the new visual release definition editor that allows you to visualize and configure your code release pipeline.  We've also improved the task editing experience, adding templates for common app patterns, etc.

 

Multi-machine deployments with Deployment Groups

Our release management solution now supports deployment agents on the target machines, which greatly simplifies the deployment of multi-VM applications.  It also eases the authentication issues associated with deployment and includes the ability to do rolling deployments so your app can stay available during upgrade.

 

Wiki

We now have a Wiki built into the product.  You can write pages with a mix of markdown and HTML.  You can organize pages into a table of contents.  It's a simple and fantastic way to share project information with your team and with visitors.

 

Maven Packages

We've added support for Maven packages to our package management solution - so now you can use TFS to manage your Nuget, NPM and Maven packages.

 

Pull Request improvements

This release contains innumerable improvements for pull requests.  We continue to iterate rapidly to make them better and better - making code easier to review, notifications better, policies and validations better and much more.  The release notes have lots of details.

 

Git Forks

We've included the first generation of Git Forks.  It allows users without write permission to a repository to fork it, iterate on the fork independently and, ultimately, submit a pull request to have their changes included in the original.  For now, all forks of a repo need to be in the same Team Project Collection as the original.  We will relax that requirement in a future update.

 

Mobile work items

This release brings our first installment in making the TFS experience good on mobile devices: mobile work items.  You can view a work item, launched from an email or any other location, in a nice experience optimized for a phone form factor.  You can also get simple lists, like work assigned to me or work I'm following, and view work items from there.  Over time, we'll expand our mobile experience to other parts of the product too.

 

Much, much more

My summary, of course, only captures a few of the highlights.  You can check out the release notes for more.  This RC1 release is mostly feature complete.  There was only one sprint of feature work that missed RC1 and will first show up in RC2.  Probably the biggest news in that set will be support for GVFS, enabling enterprises with even very large, complex, intertwined codebases to adopt Git if they choose.

As always, we really appreciate you taking the opportunity to install it and give us feedback.  The best place to report problems is on our Developer Community site.  I'm excited about all the improvements in this release.  And when you combine that with the improvements that shipped in Update 1 and Update 2 (since TFS 2017 RTM), it's a pretty amazing leap above what was available just a year ago.

Thanks a ton and looking forward to hearing your feedback,

Brian

Experiencing Data Gaps for Metric Data Type – 08/30 – Mitigating

Update: Wednesday, 30 August 2017 19:52 UTC

We are aware of issues within Application Insights and are actively working on mitigation. The root cause has been isolated to one of our back-end processing services, which impacted the Application Insights Service Overview blade in the Azure Portal.

Starting at 08/30 12:00 UTC, some customers may experience data loss and see incorrect data values for some of the metrics while using the Service Overview Blade.  We estimate another 2 hours before the issue is completely addressed.
  • Workaround: Customers can retrieve this data for the impact window using the AppAnalytics portal.
  • Next Update: Before 08/30 22:00 UTC

-Sapna

Free ebook: Introduction to Windows Containers


We’re happy to announce the availability of our newest free ebook, Introduction to Windows Containers, by John McCabe and Michael Friis. Enjoy!

download PDF here

INTRODUCTION

With the introduction of container support in Windows Server 2016, we open a world of opportunities that takes traditional monolithic applications on a journey to modernize them for better agility. Containers are a stepping stone that can help IT organizations understand what key items in modern IT environments, such as DevOps, Agile, Scrum, Infrastructure as Code, Continuous Integration, and Continuous Deployment, to name just a few, can do and how these organizations can adopt all of these elements and more to their enterprises.

As a result of Microsoft’s strong strategic partnership with Docker—the de facto standard in container management software—enterprises can minimize the time required to onboard and run Windows Containers. Docker presents a single API surface and standardizes tooling for working across public and private container solutions as well as Linux and Windows Container deployments.

This is the next phase in IT evolution in which a direct replatform of code cannot be achieved and truly begins to bring the power of the cloud to any enterprise.

ABOUT THE AUTHORS

John McCabe works for Microsoft as a senior premier field engineer. In this role, he has worked with the largest customers around the world, supporting and implementing cutting-edge solutions on Microsoft Technologies. In this role, he is responsible for developing core services for the Enterprise Services Teams. John has been a contributing author to several books, including Mastering Windows Server 2012 R2 from Sybex, Mastering Lync 2013 from Sybex, and Introducing Microsoft System Center 2012 from Microsoft Press.

John has spoken at many conferences around Europe, including TechEd and TechReady. Prior to joining Microsoft, John was an MVP in Unified Communications with 15 years of consulting experience across many different technologies such as networking, security, and architecture.

Michael Friis is a product manager at Docker where he works on Docker for Amazon Web Services and Azure. He also focuses on integrating Docker with Microsoft technology. Previously he was at Heroku, and, before that, AppHarbor, a .NET platform as a service.

SQL Updates Newsletter – August 2017


Recent Releases and Announcements

Issue Alert

Recent Blog Posts and Articles

Recent Training and Technical Guides

Monthly Script and Tool Tips

 

Fany Carolina Vargas | SQL Dedicated Premier Field Engineer | Microsoft Services

Error while detaching the Collection database : TF246017: Team Foundation Server could not connect to the database.


Environment: TFS 2017

Last weekend, while I was with a customer, I observed the error message below. Interestingly, the error happened while we were detaching the collection database to take a backup, and we could never attach it back again.

TF246017: Team Foundation Server could not connect to the database. Verify that the instance is specified correctly, that the server that is hosting the database is operational, and that network problems are not blocking communication with the server.

When we reran the job, it failed without a clue. Looking at various parameters and logs kept landing us at the above error.

Eventually, we found that the TfsWarehouse database (from the old 2008 days) was in a recovery state. While detaching the collection database, the scripts looked for it, and since it was not reachable we got the error message above.
Once the TfsWarehouse database was brought out of the recovery state, the detach and attach worked seamlessly. So it is a good idea to check the state of the databases in the SQL instance.
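One quick way to do that check is from PowerShell with the SqlServer module's Invoke-Sqlcmd (a sketch; the instance name is illustrative):

# List every database on the instance along with its current state (ONLINE, RECOVERING, SUSPECT, ...).
Invoke-Sqlcmd -ServerInstance 'TFSSQLINSTANCE' -Query 'SELECT name, state_desc FROM sys.databases;'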

Content: Vimal Thiagaraj 


Tool to generate DebugDiag rule for SharePoint based on event id (Tag) and optionally a partial message


When I updated my post on how to create a DebugDiag rule based on event id (Tag), I received feedback that modifying the scripts is quite error prone, and more often than not the resulting script would not work as expected. To that end, I decided to create a visual tool that creates the rule template based on the parameters that matter. This rule template can be imported directly into DebugDiag. Please use DebugDiag 2 Update 2 (or later), which you can download from Microsoft. So let’s look at two different scenarios.

Download the tool here: Tool Binaries

Unzip the files to your tool folders.

NOTE: Normally, when you download an EXE (zipped or not) from the Internet, Windows will block its execution. To resolve this, right-click CreateSPDebugRule.exe, check ‘Unblock’, and click OK:

image

 

When the ULS log entry points to w3wp.exe

We plan to generate a dump file based on this fresh ULS entry:

08/30/2017 14:48:18.80    w3wp.exe (0x3820)    0x2714    SharePoint Foundation    DistributedCache    air4a    Monitorable    Token Cache: Failed to get token from distributed cache for '0).w|s-1-5-21-3258600628-1467426315-3527673178-500'.(This is expected during the process warm up or if data cache Initialization is getting done by some other thread).    aa74149e-4d3f-b0b8-39ea-4e89c1505cc6

 

IMPORTANT: The entry needs to be fresh because the process id changes when IIS resets.

 

Along with the process name, ULS logs include the process id (PID) in hex format. In this case, the PID is 0x3820, which is 14368 in decimal. There is an easy way to identify the application pool via the command prompt.

Run from an elevated command prompt (IT WILL NOT WORK ON POWERSHELL):

%windir%\system32\inetsrv\appcmd list wp

 

image

 

So we learned the application pool name is ‘SharePoint – 80’.
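If you would rather stay in PowerShell, a hedged alternative is to convert the hex PID yourself and look at the worker process command line, which contains the application pool name after the -ap switch:

# Convert the hex PID from the ULS entry (0x3820) to decimal and find the owning w3wp.exe.
$procId = [Convert]::ToInt32('3820', 16)   # 14368
Get-WmiObject Win32_Process -Filter "ProcessId = $procId" |
    Select-Object ProcessId, CommandLine   # the -ap argument shows the application pool name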

You can fill out the form as below. Notice that I added a ‘partial message’ – this is necessary when a tag is too generic and you only want to capture a dump when the message contains some pattern. The tag bz7l, which I have discussed on several occasions, for example, is very specific; it only occurs when a claim credential cannot be converted to a Windows credential, so there is no need to add a partial message. If in doubt, add a partial message. I can also choose the number of dump files to be created by the rule (the default is 5).

image

 

You do not need to run the tool on the server where the dump will be captured, but if you do, you have the advantage of selecting the application pool by clicking the Select… button (you need to run the tool in administrator mode for this). The selection window looks like this:

 

image

 

If you have everything in place, click Generate Template… and choose a file name (the file name will not affect the rule name in DebugDiag).

image

 

To import the rule into DebugDiag

  • Run ‘DebugDiag 2 Collect’.
  • Click ‘Cancel’ to close the Rule Wizard.
  • Click the ‘Import’ button to load the rule you created with the tool

image

  • Click OK to accept the rule settings.

image

 

  • Notice that the rule will not be active when you import it. You need to right-click the rule and choose Activate Rule. If this is the first time you have activated a rule, there will be several pop-ups from DebugDiag; just accept all defaults.
  • Once the rule is active, the dump will be generated when the condition occurs.

 

When the ULS log entry points to a process

For this scenario, we plan to generate a dump file based on this ULS entry:

08/30/2017 14:48:29.28    OWSTIMER.EXE (0x1A94)    0x04E4    SharePoint Server Search    Administration    ajzc7    Medium    Cleanup of Orphan Systems in server CONTOSOWFE    ac74149e-4d87-b0b8-39ea-409643feb4b7

 

Notice that this time you do not need a fresh ULS entry, because we will capture based on a service (actually, based on a process name). OWSTIMER.EXE is the executable name for the SPTimerV4 service. The tool needs the process name, which is how it already appears in the ULS log. You can fill out this rule as below. Notice I chose ‘Application/Service’ this time. You can also use the Select… button to choose the process if you are running the tool on the server. This is the form filled out:

image

 

Feel free to play around and adapt the tool by changing the source code if you want to. The project zip is somewhere at: https://github.com/rodneyviana/blogdemos

Handling a TFS 2018 Upgrade from Old Form to New Form


As of TFS 2017.2, the old work item form <Layout> tag has been deprecated, and it is no longer supported in TFS 2018. If you are upgrading your server and have a collection where the new work item form has not been enabled, you will encounter the following severe warning during the readiness checks:

[VS403364]: This release introduces major updates to the work item form layout and functionality and deprecates legacy custom controls. Consequently, the upgrade process will update all work item type definitions to use the new work item form WebLayout element and remove all custom controls. For additional information and recommended upgrade steps, see the Deployment Guide.

Best Effort Transformation of Layout to WebLayout Tag

Any work item type definition that does not have the <WebLayout> tag will undergo a best-effort transformation to the new form layout. Legacy custom controls will no longer load. To test the transformation before completing the server upgrade, we recommend using a pre-production environment to refine your templates, along with a manual post-upgrade step to upload any changes to your server.

1. Use a Pre-Production Environment (PPE) to refine your template

To see the results of the transformation, stand up a pre-production environment and check all work item types. Customize the form layout in the PPE if the transformation did not meet your needs.

2. Export refined process template(s) from PPE

After you have made modifications to your process template(s), export the template and save to a location you can access after the upgrade is complete.

3. Post upgrade, manually upload refined template(s)

Following the upgrade, upload the template(s) you saved from PPE and validate that the work item form looks the way you want before announcing the server upgrade is complete.
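For individual work item type definitions, the export in step 2 and the upload in step 3 can also be scripted with witadmin from a Developer Command Prompt or PowerShell session. This is only a sketch; the collection URLs, project name, and type name are illustrative:

# Export a work item type definition from the PPE collection (step 2).
witadmin exportwitd /collection:http://ppe-tfs:8080/tfs/DefaultCollection /p:MyProject /n:Bug /f:C:\Temp\Bug.xml

# After the production upgrade, import the refined definition (step 3).
witadmin importwitd /collection:http://prod-tfs:8080/tfs/DefaultCollection /p:MyProject /f:C:\Temp\Bug.xml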

Upgrading a Server with Legacy Custom Controls

If you are upgrading a server with legacy custom controls built by Microsoft, you can go to the VSTS Marketplace, download the new control, and add it to your work item type definition. This includes:

Other legacy custom controls will need to be converted to the new REST based extensibility model. Get started by extending the work item form using the Work Item Form Service.

Thank you.

Graph Poker Hand Distributions


Last time I showed how to Calculate Poker hand probabilities with code that counted how often a particular hand occurred, such as a Pair or a Full House.
Now that we can calculate these probabilities, let’s graph them.

You’ll need to add references to some assemblies: System.Windows.Forms, System.Windows.Forms.DataVisualization, WindowsFormsIntegration

First we’ll accumulate the results of dealing 5 card hands into a Dictionary<string, int>, where entries might be “Pair” = 223, “Straight”=3

Windows Forms has a nice charting assembly, System.Windows.Forms.DataVisualization, which calls a graph a “Chart”.
Because the charting code is written for Windows Forms, and the Poker code uses Windows Presentation Foundation, we need to host the Windows Forms chart as a child of a WindowsFormsHost control.
We’ll create a ChartArea and add it to the chart.
Then we set the chart’s DataSource property to the dictionary and add a Series to the chart, with the type “Bar” for a bar chart. (Try changing it to a Line chart for fun.)

Here’s a screenshot of 14 million deals, indicating visually that a Pair occurs just a little less than “Nothing” (42% vs 50% of the deals).
image

<code>

using System;
using System.Linq;
using System.IO;
using System.Runtime.InteropServices;
using System.Threading;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Interop;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Threading;
using System.Collections.Generic;
using System.Windows.Forms.DataVisualization.Charting;
using System.Windows.Forms.Integration;
using System.Threading.Tasks;
using System.Text;

/*
 File->new->Project->C#->WPF App "Poker"
 download cards.dll from https://onedrive.live.com/redir?resid=D69F3552CEFC21!74629&authkey=!AGaX84aRcmB1fB4&ithint=file%2cDll
 Solution->Add Existing Item Cards.dll (Properties: Copy to Output Directory=Copy If Newer)
 Add Project->Add Reference to System.Drawing, WindowsFormsIntegration, System.Windows.Forms, System.Windows.Forms.DataVisualization
     *
     * * */
namespace Poker
{
  /// <summary>
  /// Interaction logic for MainWindow.xaml
  /// </summary>
  public partial class MainWindow : Window
  {
    public int NumDealsPerGroup { get; set; } = 1000;
    int hghtCard = 100;
    int wdthCard = 80;
    public MainWindow()
    {
      InitializeComponent();
      Width = 1100;
      Height = 800;
      WindowState = WindowState.Maximized;
      Title = "CardDist";
      this.Loaded += MainWindow_Loaded;
    }
    void AddStatusMsg(string msg, params object[] args)
    {
      if (_txtStatus != null)
      {
        // we want to read the threadid 
        //and time immediately on current thread
        var dt = string.Format("[{0}],{1,2},",
            DateTime.Now.ToString("hh:mm:ss:fff"),
            Thread.CurrentThread.ManagedThreadId);
        _txtStatus.Dispatcher.BeginInvoke(
            new Action(() =>
            {
                      // this action executes on main thread
                      var str = string.Format(dt + msg + "\r\n", args);
              _txtStatus.AppendText(str);
              _txtStatus.ScrollToEnd();
            }));
      }
    }

    private void MainWindow_Loaded(object sender, RoutedEventArgs e)
    {
      try
      {
        this.DataContext = this;
        var sp = new StackPanel() { Orientation = Orientation.Vertical };
        sp.Children.Add(new Label() { Content = "Card Dealing Program." });
        var spControls = new StackPanel() { Orientation = Orientation.Horizontal };
        sp.Children.Add(spControls);
        spControls.Children.Add(new Label() { Content = "nDeals" });
        var txtnDeals = new TextBox()
        {
          Width = 100,
          ToolTip = ""
        };
        txtnDeals.SetBinding(TextBox.TextProperty, nameof(NumDealsPerGroup));
        spControls.Children.Add(txtnDeals);

        _txtStatus = new TextBox()
        {
          IsReadOnly = true,
          VerticalScrollBarVisibility = ScrollBarVisibility.Auto,
          HorizontalScrollBarVisibility = ScrollBarVisibility.Auto,
          IsUndoEnabled = false,
          FontFamily = new FontFamily("Courier New"),
          FontSize = 10,
          Height = 200,
          MaxHeight = 200,
          HorizontalContentAlignment = HorizontalAlignment.Left
        };

        var dictHandValues = new Dictionary<string, int>(); // Hand value, cnt. Like "Pair" = 4
        foreach (var k in Enum.GetValues(typeof(PokerHand.HandValues)))
        {
          dictHandValues[k.ToString()] = 0;
        }
        var chart = new Chart();
        //                chart.Height = 500;
        chart.Width = 200;
        chart.Dock = System.Windows.Forms.DockStyle.Fill;
        var chartArea = new ChartArea("ChartArea");
        chart.ChartAreas.Add(chartArea);
        var wfh = new WindowsFormsHost();
        wfh.Height = 600;
        wfh.Child = chart;
        sp.Children.Add(wfh);

        var dictPairDist = new Dictionary<int, int>(); // #hands that are a Pair, count
        var series1 = new Series();
        chart.Series.Add(series1);

        /* add a "/" at the beginning of this line to use alternate code
         // insert code here later
        /*/
        chart.DataSource = dictHandValues;
        series1.ChartType = SeriesChartType.Bar;
        series1.XValueMember = "Key";
        series1.YValueMembers = "Value";
        chartArea.AxisX.Interval = 1;
        chartArea.AxisY.LabelStyle.Format = "#,#";
        //*/

        var canvas = new Canvas() { Height = hghtCard + 3 };
        sp.Children.Add(canvas);
        sp.Children.Add(_txtStatus);

        this.Content = sp;

        var deck = new Card[52];
        for (var suit = 0; suit < 4; suit++)
        {
          for (var denom = 0; denom < 13; denom++)
          {
            var card = CardDeck.GetCard((CardDeck.Suit)suit, denom);
            deck[suit * 13 + denom] = card;
          }
        }
        for (int i = 0; i < PokerHand.HandSize; i++)
        {
          var img = new Image()
          {
            Height = hghtCard
          };
          // add it to the canvas
          canvas.Children.Add(img);
          // set it's position on the canvas
          Canvas.SetLeft(img, i * wdthCard);
          //                    Canvas.SetTop(img, suit * (1 + hghtCard));
        }
        var numHands = 0;
        var rand = new Random(1);
        PokerHand hand = null;
        Action UpdateUI = () =>
        {
          series1.ToolTip = $"Num deals = {numHands:n0} {_txtStatus.Text}";
                  // draw the cards
                  for (int i = 0; i < PokerHand.HandSize; i++)
          {
            var img = (Image)canvas.Children[i];
            img.Source = hand.Cards[i].bmpSource;
          }
          chart.DataBind();
          var sb = new StringBuilder();
          foreach (var kvp in dictHandValues)
          {
            sb.Append($" {kvp.Key}={kvp.Value * 100.0 / numHands,9:f6}");
          }
          _txtStatus.Text = $@"{hand.PokerValue().ToString(),-15}";
          _txtStatus.AppendText($"\r\nN={numHands:n0} {sb.ToString()}");
        };
        Action<int> ShuffleAndDeal = (nDeals) =>
        {
          int nPairs = 0;
          for (int nDeal = 0; nDeal < nDeals; nDeal++)
          {
            numHands++;

                    // shuffle
                    for (int n = 0; n < 52; n++)
            {
                      //get a random number 0-51
                      var tempNdx = rand.Next(52);
              var tmp = deck[tempNdx];
              deck[tempNdx] = deck[n];
              deck[n] = tmp;
            }
                    //var newdeck = new Card[52];
                    //for (int i = 0; i < 26; i++)
                    //{
                    //    newdeck[2 * i] = deck[i];
                    //    newdeck[2 * i + 1] = deck[i + 26];
                    //}

                    //for (int n = 0; n < 52; n++)
                    //{
                    //    deck[n] = newdeck[n];
                    //}
                    // deal
                    hand = new PokerHand(deck.Take(5).ToList());
            var val = hand.PokerValue();
            if (val == PokerHand.HandValues.Pair)
            {
              nPairs++;
            }
            dictHandValues[val.ToString()]++;
          }
          if (!dictPairDist.ContainsKey(nPairs))
          {
            dictPairDist[nPairs] = 1;
          }
          else
          {
            dictPairDist[nPairs]++;
          }
        };

        var btnGo = new Button()
        {
          Content = "_Go",
          Width = 20
        };
        spControls.Children.Add(btnGo);
        bool fIsGoing = false;
        btnGo.Click += async (ob, eb) =>
        {
          fIsGoing = !fIsGoing;
          if (fIsGoing)
          {
            dictPairDist.Clear();
            while (fIsGoing)
            {
              await Task.Run(() =>
                      {
                      ShuffleAndDeal(NumDealsPerGroup);
                    });
              UpdateUI();
            }
          }
        };
        this.MouseUp += (om, em) =>
        {
          if (!fIsGoing)
          {
            ShuffleAndDeal(1);
            UpdateUI();
          }
        };
        btnGo.RaiseEvent(new RoutedEventArgs(Button.ClickEvent, null));
      }
      catch (Exception ex)
      {
        this.Content = ex.ToString();
      }
    }

    // http://www.math.hawaii.edu/~ramsey/Probability/PokerHands.html
    internal class PokerHand
    {
      public static int HandSize = 5;
      public enum HandValues
      {
        Nothing,
        Pair, // .422569
        TwoPair, // .047539
        ThreeOfAKind, // .021128
        Straight,
        Flush,
        FullHouse,
        FourOfAKind,
        StraightFlush,
        RoyalFlush
      }
      public List<Card> Cards;

      public PokerHand(List<Card> cards)
      {
        this.Cards = cards;
      }
      public HandValues PokerValue()
      {
        // using Linq is probably not the most efficient
        var value = HandValues.Nothing;
        var groupsBySuits = Cards.GroupBy(c => c.suit);
        var groupsByRank = Cards.OrderBy(c => c.denom).GroupBy(c => c.denom);
        if (groupsBySuits.Count() == 1) // only one suit= Flush. See if it's a straight
        {
          if (IsStraight(groupsByRank))
          {
            if (groupsByRank.First().Min().denom == 8) //it's a ten
            {
              value = HandValues.RoyalFlush;
            }
            else
            {
              value = HandValues.StraightFlush;
            }
          }
          else
          {
            value = HandValues.Flush;
          }
        }
        else
        {
          // if the hand has none of a kind: e.g. a 3,5,6,9,J, then there are 5 groups by denom
          // if there's a pair, then there will be 4 groups
          // 2 pair: there will be 3
          // if there's a triple, then there will be 2 (full house) or 3 groups
          // 4 of a kind: 2 groups
          switch (groupsByRank.Count())
          {
            case 5: // there are 5 groups of denoms, so can only be straight
              if (IsStraight(groupsByRank))
              {
                value = HandValues.Straight;
              }
              break;
            case 4:
              value = HandValues.Pair;
              break;
            case 3: // 2 pair or triple
              {
                var nn = groupsByRank.OrderByDescending(g => g.Count());
                int nMaxCount = nn.First().Count();
                if (nMaxCount == 3)
                {
                  value = HandValues.ThreeOfAKind;
                }
                else
                {
                  value = HandValues.TwoPair;
                }
              }
              break;
            case 2: // full house or 4 of a kind
              {
                var nn = groupsByRank.OrderByDescending(g => g.Count());
                int nMaxCount = nn.First().Count();
                if (nMaxCount == 4)
                {
                  value = HandValues.FourOfAKind;
                }
                else
                {
                  value = HandValues.FullHouse;
                }
              }
              break;
          }
        }
        return value;
      }

      private bool IsStraight(IEnumerable<IGrouping<int, Card>> groupsByRank)
      {
        bool isStraight = false;
        if (groupsByRank.Count() == HandSize)
        {
          // the groups are sorted
          var first = groupsByRank.First().First().denom;
          var last = groupsByRank.Last().First().denom;
          if (first + HandSize - 1 == last)
          {
            isStraight = true;
          }
          else
          { // special case: Ace low
            if (last == 12) // Ace
            {
              int ndx = 0;
              foreach (var g in groupsByRank)
              {
                var c = g.First().denom;
                if (ndx++ != c)
                {
                  break;
                }
              }
              if (ndx == HandSize)
              {
                isStraight = true;
              }
            }
          }
        }

        return isStraight;
      }
    }

    public class Card : Image, IComparable
    {
      public CardDeck.Suit suit;
      public int denom; // 0-12
      public int Value => (int)suit * 13 + denom;// 0-51
      public BitmapSource bmpSource { get; private set; }
      /// <summary>
      /// Create a new card
      /// </summary>
      /// <param name="suit"></param>
      /// <param name="denom"> 12=A, 11=K, Q=10, J=9,... 0=2</param>
      /// <param name="bmpSource">can be null</param>
      public Card(CardDeck.Suit suit, int denom, BitmapSource bmpSource = null)
      {
        this.suit = suit;
        this.denom = denom;
        this.bmpSource = bmpSource;
      }
      public string GetDenomString()
      {
        var result = (denom + 2).ToString();
        switch (denom)
        {
          case 9:
            result = "J";
            break;
          case 10:
            result = "Q";
            break;
          case 11:
            result = "K";
            break;
          case 12:
            result = "A";
            break;

        }
        return result;
      }
      public override string ToString()
      {
        return $"{GetDenomString()} of {suit}";
      }

      public int CompareTo(object obj)
      {
        if (obj is Card)
        {
          var c = (Card)obj;
          if (c.suit == this.suit)
          {
            return c.denom.CompareTo(this.denom);
          }
          return ((int)c.suit).CompareTo((int)(this.suit));
        }
        throw new InvalidOperationException();
      }
    }
    public class CardDeck
    {
      public enum Suit
      {
        Clubs = 0,
        Diamonds = 1,
        Hearts = 2,
        Spades = 3
      }
      // put cards in 2 d array, suit, rank (0-12 => 2-A)
      public Card[,] _Cards;
      public BitmapSource[] _bitmapCardBacks;
      private static CardDeck _instance;

      public static int NumCardBacks => _instance._bitmapCardBacks.Length;

      public CardDeck()
      {
        _Cards = new Card[4, 13];
        var hmodCards = LoadLibraryEx("cards.dll", IntPtr.Zero, LOAD_LIBRARY_AS_DATAFILE);
        if (hmodCards == IntPtr.Zero)
        {
          throw new FileNotFoundException("Couldn't find cards.dll");
        }
        // the cards are resources from 1 - 52.
        // here is a func to load an int rsrc and return it as a BitmapSource
        Func<int, BitmapSource> GetBmpSrc = (rsrc) =>
        {
                  // we first load the bitmap as a native resource, and get a ptr to it
                  var bmRsrc = LoadBitmap(hmodCards, rsrc);
                  // now we create a System.Drawing.Bitmap from the native bitmap
                  var bmp = System.Drawing.Bitmap.FromHbitmap(bmRsrc);
                  // we can now delete the LoadBitmap
                  DeleteObject(bmRsrc);
                  // now we get a handle to a GDI System.Drawing.Bitmap
                  var hbmp = bmp.GetHbitmap();
                  // we can create a WPF Bitmap source now
                  var bmpSrc = Imaging.CreateBitmapSourceFromHBitmap(
                      hbmp,
                      palette: IntPtr.Zero,
                      sourceRect: Int32Rect.Empty,
                      sizeOptions: BitmapSizeOptions.FromEmptyOptions());

                  // we're done with the GDI bmp
                  DeleteObject(hbmp);
          return bmpSrc;
        };
        // now we call our function for the cards and the backs
        for (Suit suit = Suit.Clubs; suit <= Suit.Spades; suit++)
        {
          for (int denom = 0; denom < 13; denom++)
          {
            // 0 -12 => 2,3,...j,q,k,a
            int ndx = 1 + 13 * (int)suit + (denom == 12 ? 0 : denom + 1);
            _Cards[(int)suit, denom] = new Card(suit, denom, GetBmpSrc(ndx));
          }
        }
        //The card backs are from 53 - 65
        _bitmapCardBacks = new BitmapSource[65 - 53 + 1];
        for (int i = 53; i <= 65; i++)
        {
          _bitmapCardBacks[i - 53] = GetBmpSrc(i);
        }
      }
      public static double MeasureRandomness(Card[] _cards)
      {
        var dist = 0.0;
        for (int suit = 0; suit < 4; suit++)
        {
          for (int denom = 0; denom < 13; denom++)
          {
            var ndx = suit * 13 + denom;
            int curval = _cards[ndx].Value;
            dist += Math.Pow((ndx - curval), 2);
          }
        }
        return dist;
      }
      /// <summary>
      /// Return a card
      /// </summary>
      /// <param name="nSuit"></param>
      /// <param name="nDenom">1-13 = A, 2,3,4,J,Q,K</param>
      /// <returns></returns>
      public static Card GetCard(Suit nSuit, int nDenom)
      {
        if (_instance == null)
        {
          _instance = new CardDeck();
        }
        if (nDenom < 0 || nDenom > 12)
        {
          throw new ArgumentOutOfRangeException();
        }
        return _instance._Cards[(int)nSuit, nDenom];
      }

      internal static ImageSource GetCardBack(int i)
      {
        return _instance._bitmapCardBacks[i];
      }
    }

    public const int LOAD_LIBRARY_AS_DATAFILE = 2;
    private TextBox _txtStatus;

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr LoadLibraryEx(string lpFileName, IntPtr hFileReserved, uint dwFlags);

    [DllImport("User32.dll")]
    public static extern IntPtr LoadBitmap(IntPtr hInstance, int uID);

    [DllImport("gdi32")]
    static extern int DeleteObject(IntPtr o);
  }

}

</code>

Notes on deploying Windows from a USB drive to Surface Studio


Hello, this is Iwamatsu from the Surface commercial support team.

Surface Studio uses a "hybrid drive" configuration, and because of this, deploying Windows from a USB drive can behave unexpectedly.

This article introduces the points to watch out for in that situation, summarizing the English article published by our development team.

 

Deploy Windows from USB drive to Surface Studio

 

If you are using the Microsoft Deployment Toolkit (MDT) to deploy Surface Studio, please also refer to the following article:

 

Deploy Surface Studio using MDT

 

 

Scenario #1: Installing Windows 10 from a USB drive

In this scenario, you put the Windows 10 source files on a USB drive, boot from that drive, and install Windows on the Surface Studio (Surface Studio requires RS1 or later). This is a very simple scenario, but on Surface Studio a few additional steps are required.

To complete this scenario, the driver needs to be added to BOOT.WIM, which setup boots from, as well as to INSTALL.WIM, which Windows boots from.

Prerequisites:

  • The latest Windows ADK is installed.
  • The latest Surface Studio .MSI has been downloaded and extracted.
    • msiexec /a SurfaceStudio_Win10_15063_1701606_0.msi targetdir=C:\SurfaceStudioDrivers /qn
    • Note: When extracting the MSI file, set targetdir (the extraction destination) to a folder other than the one containing the MSI file.
  • The contents of a Windows 10 Version 1607 (or later) ISO have been copied to USB drive E:.

Steps:

  1. Mount the boot image with index #2 (this is the image setup boots from).
    • DISM /MOUNT-IMAGE /IMAGEFILE:E:\SOURCES\BOOT.WIM /INDEX:2 /MOUNTDIR:C:\MOUNT
  2. Add the Intel Chipset SATA RAID Controller driver (iastora.sys) to the boot image.
    • DISM /IMAGE:C:\MOUNT /ADD-DRIVER /DRIVER:C:\SURFACESTUDIODRIVERS\SURFACEPLATFORMINSTALLER\SURFACESTUDIO_WIN10_15063_1701606_0\DRIVERS\SYSTEM\RST_AHCI\IASTORAC.INF
  3. Unmount the boot image and commit the changes.
    • DISM /UNMOUNT-IMAGE /MOUNTDIR:C:\MOUNT /COMMIT
  4. Mount the install.wim image.
    • DISM /MOUNT-IMAGE /IMAGEFILE:E:\SOURCES\INSTALL.WIM /INDEX:1 /MOUNTDIR:C:\MOUNT
  5. Add the Intel Chipset SATA RAID Controller driver (iastora.sys) to the install image.
    • DISM /IMAGE:C:\MOUNT /ADD-DRIVER /DRIVER:C:\SURFACESTUDIODRIVERS\SURFACEPLATFORMINSTALLER\SURFACESTUDIO_WIN10_15063_1701606_0\DRIVERS\SYSTEM\RST_AHCI\IASTORAC.INF
  6. Unmount install.wim and commit the changes.
    • DISM /UNMOUNT-IMAGE /MOUNTDIR:C:\MOUNT /COMMIT

The steps above are specifically for the storage controller driver. Once Windows has been installed, you can install the full .MSI to get all of the remaining Surface Studio drivers.

 

 

Scenario #2: Creating a generic Windows PE image

If you use other deployment tools and work with Windows PE, the procedure is largely the same. The following steps create a generic Windows PE image that supports Surface Studio.

  1. Create the Windows PE image.
    • COPYPE.CMD AMD64 C:\WINPE_X64
  2. Mount the Windows PE image.
    • DISM /MOUNT-WIM /WIMFILE:C:\WINPE_X64\MEDIA\SOURCES\BOOT.WIM /INDEX:1 /MOUNTDIR:C:\WINPE_X64\MOUNT
  3. Add the Intel Chipset SATA RAID Controller driver (iastora.sys) to the WinPE boot image.
    • DISM /IMAGE:C:\WINPE_X64\MOUNT /ADD-DRIVER /DRIVER:C:\SURFACESTUDIODRIVERS\SURFACEPLATFORMINSTALLER\SURFACESTUDIO_WIN10_15063_1701606_0\DRIVERS\SYSTEM\RST_AHCI\IASTORAC.INF
  4. Unmount the image and commit the changes.
    • DISM /UNMOUNT-WIM /MOUNTDIR:C:\WINPE_X64\MOUNT /COMMIT
  5. Create the Windows PE USB drive.
    • MAKEWINPEMEDIA /UFD C:\WINPE_X64 E:

 

Setting up SQL Server R service (In Database) and R Server(Standalone)


Hello all,

Recently, I worked on a requirement to install R services during a SQL Server 2016 installation. In this blog, I cover the steps followed to install and configure R Services (In-Database). I have also included steps to install the R Server (Standalone) shared feature as part of SQL Server setup.

 

Please note that this blog applies to SQL Server 2016 installations only. Starting with SQL Server 2017, the R Services (In-Database) feature is called Machine Learning Services (In-Database), and R Server (Standalone) is named Machine Learning Server (Standalone). Also note that SQL Server 2017 supports Python as well as R as a programming language for statistics.

 

R Services (In-Database) and R Server (Standalone) are new components introduced in SQL Server 2016 installations.  The purposes of these components are entirely different:

R Services (In-Database): Enables secure execution of R scripts on the local SQL Server computer. When you select this feature, extensions are installed in the database engine to support execution of code written in R. If you need to run your R code in SQL Server, either by using stored procedures or by using the SQL Server instance as the compute context, the R Services (In-Database) feature needs to be installed.

Microsoft R Server (Standalone): If you do not need to use SQL Server as the compute context for developing R solutions, the R Server (Standalone) feature can be installed instead. It is recommended to install R Server (Standalone) on a laptop or other remote computer used for development.

Avoid installing both R Services (In-Database) and R Server (Standalone) on the same computer.

 

Procedure to Install/Configure R Services (In database):

If the requirement is to use SQL Server as the compute context and to execute R code in SQL Server, select the component as shown below:

 

R Services (In-Database) creates a new SQL Server service called "SQL Server Launchpad".

Once setup is complete, ensure that the SQL Server Launchpad service is started. The Launchpad service has a dependency on the SQL Server Database Engine service.

Once the installation is complete, to enable external scripts (R scripts) inside SQL Server, the "external scripts enabled" server configuration setting must be set to 1. The default value is 0.

 

sp_configure 'external scripts enabled'

 

-- Enabling the configuration setting

sp_configure 'external scripts enabled', 1

reconfigure with override

 

Once the above setting is set to 1, restart the SQL Server service; otherwise, "external scripts enabled" will remain in a disabled state until the SQL Server service is restarted. Restarting the SQL Server service will also restart the SQL Server Launchpad service.
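For a default instance, the restart can be done from an elevated PowerShell prompt (a sketch; substitute your service name if you use a named instance):

# -Force also restarts dependent services, such as SQL Server Launchpad.
Restart-Service -Name 'MSSQLSERVER' -Force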

After the SQL Server service restart, run a test query to check whether an external R script can be executed.
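A typical smoke test looks like the following (a sketch run via the SqlServer module's Invoke-Sqlcmd; the instance name is illustrative, and the same T-SQL can be run directly in SSMS):

Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
EXEC sp_execute_external_script
    @language = N'R',
    @script = N'OutputDataSet <- InputDataSet;',
    @input_data_1 = N'SELECT 1 AS hello'
WITH RESULT SETS ((hello INT));
"@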

If the query, which executes an external R script, runs successfully, it indicates that the configuration of R Services (In-Database) is successful.

 

Procedure to Install R Server (Standalone):

If the requirement is not to use SQL Server as the compute context to develop R solutions, install R server (Standalone) feature.

Applying a Service Pack or Cumulative Update when R Server (Standalone) is installed is tricky. If the machine where R Server (Standalone) is being installed doesn't have internet access to download Microsoft R Open and Microsoft R Server, an offline installation option is available.

The SRO (R Open) and SRS (R Server) cab files can be downloaded from the locations mentioned below. Download the files, copy them to the server, and provide the path via the Browse option.

Click Next and complete the installation. If this step is not followed, setup will fail for the R Server (Standalone) feature. If the server has internet access, setup automatically downloads the cab files during the installation.

 

Known issues reported with R services and R Server:

FIX: Version of R Client is incompatible with the Microsoft R server version 8.0.3
https://support.microsoft.com/en-us/help/3210262/fix-version-of-r-client-is-incompatible-with-the-microsoft-r-server-ve

FIX: Cannot install SQL Server R Services during an offline installation of SQL Server 2016 updates
https://support.microsoft.com/en-us/help/3210708/fix-cannot-install-sql-server-r-services-during-an-offline-installatio

 

Hope the above steps help you configure SQL Server R Services.

Please share your feedback, questions and/or suggestions.

Thanks,

Don Castelino | Premier Field Engineer | Microsoft

 

Disclaimer: All posts are provided AS IS with no warranties and confer no rights. Additionally, views expressed here are my own and not those of my employer, Microsoft.

Event tip: A practical introduction to deep learning


Hello everyone,

Here is another event tip for you. Our Student Partner Simon Rühle is presenting a practical introduction to deep learning!

Topics

  • Deep learning

Who is it for?

Anyone who already works with AI, or would like to start.

What exactly?

A workshop with a talk & live coding.

Snacks and drinks are free.

When & where?

Wednesday, 21 November, 5:30 PM, Universitätsstraße 38 (V38), Room 0.363, Stuttgart

Registration

Praktische Einführung in DeepLearning

Tuesday, Sep 5, 2017, 5:30 PM


38 members attending

It has been a long time since the last meetup, but it is finally happening again. Based on the feedback from the last meetup, this time there will be a more hands-on introduction to the topic of deep learning. A short theoretical introduction will be accompanied by simple examples that serve as an entry point to the topic. This time the talk will be given by me...

Check out this Meetup →

How to add the “Payroll electronic NACHA PPD” File format for US Payroll in D365 for Finance and Operations


If you are trying to set up Direct Deposit (NACHA) payments for Payroll in Microsoft Dynamics 365 and don't see "Payroll electronic NACHA PPD" as an option in the Export format list on the method of payment, you need to add the file format by clicking the Setup link. I created this blog as a quick reference, since it's easy to forget if you don't set up methods of payment often.

To add the Payroll electronic NACHA PPD file format and make it available in the drop-down menu of the Electronic format field on the Payroll electronic method of payment:
  • In Accounts Payable > Methods of Payments > File formats, click Setup.

  • When the File format for methods of payment page opens, click Payroll electronic NACHA PPD from the Export options.
  • Click the arrow -> to add the selected format.

NOTE: The export format Payroll electronic NACHA PPD is the same as Standard NACHA - PPD format, but required for Payroll. Customization is required if your commercial bank has format variations from the Standard NACHA - PPD format. If your company has customizations on the Standard NACHA - PPD format, the same considerations will need to be made for the Payroll electronic NACHA PPD export format.

 

Find more for US Payroll Direct Deposit setup in my post: How to set up Direct Deposit Payments for US Payroll in Microsoft Dynamics 365

 

 


Setting up Direct Deposit Payments for US Payroll in Microsoft Dynamics 365 for Finance and Operations


US Payroll in Microsoft Dynamics 365 for Finance and Operations gives you the option to pay workers by check or by electronic (Direct Deposit) payment. To set up the Direct Deposit NACHA feature for US Payroll, you need to make sure the following setups are complete.

  • Payroll Bank account
  • Payroll electronic payments - Method of Payment
  • Direct Deposit & account setup for Employees
  • Payroll journal in General ledger
  • Payment issuance for Electronic method of payment

Here is a reference for the required setups to get you going with Direct Deposit NACHA payments in US Payroll.

Create a new bank account

In Cash and bank management > Bank accounts, click New and enter the following:

  • Routing Number
  • Bank Account Number
  • Company Statement Name
  • Company ID - on the Additional identification tab of the Bank account form.

Set up a method of payment for Payroll electronic payments

Accounts Payable > Setup > Payment Setup > Methods of Payment - click New. Enter a Description and the following:

  • In the Payment status field, select Sent to avoid posting the payroll payment journal without first generating the payment.
  • In the Period field, select Invoice.
  • In the Grace period field, enter Zero.
  • In the Payment type field, select Other.
  • On the File Formats tab, click the Export format dropdown menu to select Payroll electronic NACHA PPD from the Export format list.
    • If you don't see the Payroll electronic NACHA PPD export format in the list, please see quick steps in my post "How to set up the US Payroll NACHA payments File format in D365 for Finance and Operations"
    • Direct Deposit (NACHA) Payments requires the export format Payroll electronic NACHA PPD for US Payroll, however it is the same as the Standard NACHA - PPD format. If your company has customizations on the Standard NACHA - PPD format, the same considerations will need to be made for the Payroll electronic NACHA PPD export format.

Set up Employees to receive Direct Deposit payments.

Worker bank accounts and bank account disbursements need to be set up for each worker.

Add Bank accounts for each worker.

  • Select a worker and on the Action Pane, click Personal information > Bank accounts. Open the Worker bank account form and click +New to create a new line.
  • Complete the Account identification, Routing number, Bank account number, and Bank account type fields.

Set up bank account disbursements for each worker.

  • Select a worker and on the Action Pane, click Personal information > Bank account disbursements.
  • Select the Remainder field if the remainder of the payment is allocated to this account. Only one account can have this selected.
  • Enter the amount and currency if a specific amount is allocated to the account.
  • Select the Prenote Status field to indicate the status of the Prenote.
    • This field must be cleared after the bank validates the account before the employee can start receiving payments through direct deposit.

Payroll journal must exist in General ledger

If a Payroll journal does not already exist, create a new one in General Ledger > Journal Setup > Journal names with the following:

  • Name: Payroll
  • Description: Payroll Journal.
  • The correct Voucher series for Payroll journals

Electronic method of payment field on the Payment issuance tab of the Payroll parameters page.

In Payroll parameters, select the Payroll - Electronic method of payment on the Payment issuance tab.

  • Payroll > Setup > Parameters > Payroll parameters > Payment issuance tab
  • In the Electronic method of payment field, enter or select a value.

To Students Interested in the IT Industry and in Being an IT Developer (Programmer) – What Is Microsoft, and What Does an Evangelist Do?


Summer vacation for schools (elementary through high school) is probably coming to an end. A junior high school student told me about a recent summer homework assignment to research and summarize an industry or profession they are interested in, and sent me some questions.

In the hope that this will be useful for students interested in the IT industry and in IT development (developers, programmers), I am publishing the Q&A with the permission of the student who asked.

Note: These answers are based on my personal opinions; please understand in advance that the situation may differ even within the same organization or the same role.


About Microsoft's Business

Q1. Your website says the business is "sales and marketing of software, cloud services, and devices." Please tell us what makes Microsoft different from other companies in the same industry.

A1. In Japan there are two corporate entities: Microsoft Japan Co., Ltd. (sales and marketing of software-related services) and Microsoft Development (a development site). "Sales and marketing of software, cloud services, and devices" describes the business of Microsoft Japan Co., Ltd. I will answer this question about Microsoft as a whole (Microsoft Corporation in the US (headquarters) and all of its organizations worldwide).

Microsoft provides software and online (cloud) services, together with the tools for developing applications and services that use them. For example, for consumers there are PCs (Surface and related devices), the OS (Windows), apps that run on Windows and on other devices (such as Office), and services (Outlook, Skype, OneDrive, and so on). For IT businesses (companies or individuals that develop and sell software and apps), there are cloud services (Azure), development products (Visual Studio), and development technologies (.NET, including languages such as C# and Visual Basic). In other words, Microsoft offers a broad range of products and services for consumers, for enterprises, and for IT businesses.

In recent years, Microsoft has moved beyond Windows: it has released development tools and technologies that can be used on various OSes (such as Visual Studio Code and .NET Core) free of charge, and it offers environments where Linux can be used (Azure Virtual Machines). Its open-source efforts (publishing the technology so that anyone can modify and extend it) have also been well received.

Q2. Please tell us about the business relationship with Microsoft organizations overseas.

A2. Microsoft Japan Co., Ltd. is the Japanese subsidiary of Microsoft Corporation in the US, and they work together on marketing and sales.

Microsoft Development is one part of the development organization; together with sites around the world, the company as a whole develops software-related services.

Q3. Unlike Apple, why does Microsoft not keep its OS to itself, but instead make it available on other companies' hardware?

A3. Originally Microsoft did not make hardware; it developed and sold an OS (Windows) that could be installed on machines from various manufacturers. (Today it does sell PCs such as Surface to consumers, though.)

In the early days of computing, it was normal to develop applications specific to each machine. That let you take advantage of each machine's strengths, but an application developed for one machine could not be used on other kinds of machines.

With Windows, once you develop an application that runs on Windows, it can be used on many different kinds of machines that have Windows installed. I think that convenience is the value Microsoft has provided. (Even today, PCs are manufactured by companies such as Fujitsu, NEC, Toshiba, and Lenovo, but the same applications run on Windows on all of them.)

Q4. How do you think Microsoft will have changed 20 years from now?

A4. I don't know. The IT industry changes so quickly that people talk about "dog years" (one year for a fast-growing dog corresponds to seven human years; in other words, seven years' worth of change happens in one year).

If the company still exists, I believe it will continue to be one that delivers new value to the world amid ongoing technological progress in IT.


About the Work of an Evangelist

Q5. Please tell us about the work you are responsible for.

A5. I am responsible for Microsoft Azure, an online cloud service, and I explain and sell it to customers (people involved in developing IT products and services). (The actual sales contracts are handled by sales representatives.) Recently I have mainly been covering Cognitive Services, which make AI easy to use via APIs.

Because these are foundational services on which IT products and services are built, and are often new and unfamiliar to customers, we need to show how they are actually developed and used. I also do work that helps customers understand how to use the services and what the benefits are, such as step-by-step demos and sample development based on how customers would actually use them.

Q6. Why did you decide to join Microsoft Japan?

A6. Because I thought it was a company that has, within a single company, all the technology needed to develop any kind of software or (online) service. Sometimes combining several companies' technologies is the best approach, but if you can standardize on one company's technology, the pieces fit together well and you can get support from a single company, which is a big advantage.

When proposing and selling to customers, Microsoft's strong reputation for reliability is also a plus.

Q7. What is difficult or demanding about your work?

A7. Because I work with things that are "invisible" – software and online services that users may not even realize they are using because they serve as underlying technology – it can be hard to explain how Microsoft's products and services are helping people. (This seems especially hard to convey to people outside the IT industry.)

Q8. When do you feel your work is rewarding?

A8. When I explain Microsoft's products and services to a customer, they come to appreciate the value, and they actually start using them.


About Approaching Work as a Professional

Q9. What do you think is the most important ability for working in society?

A9. I think it is the ability to see the essence of things.

To put it a little more concretely: grasping what the facts are amid all the information flying around, identifying root causes instead of being distracted by surface symptoms, and being able to reason logically about what cause led to what result.

Q10. What do you think is the most important ability for working at Microsoft Japan?

A10. I think it is passion.
The people who thrive are those who do their work convinced that, by getting people to use Microsoft's products and services, Japan's IT industry (the people developing IT products and services) and the people who use them can become more capable and better off.

Extensible X++ – Method signatures


 

Method signatures are not extensible – and will not be.

 

Extensible parameters would be intrusive – in so many ways:

  • Breaks derived classes (signatures of overridden methods must match),
  • Requires recompilation,
  • No side-by-side support.

 

Here are some alternatives.

 

The request to add a new parameter to a method as an extension surfaces quite frequently. A method's signature is the contract between caller and method. When new parameters are added, callers can provide additional data that the method can act on. In isolation, a request for an extra parameter makes no sense: an extra parameter helps no one unless someone is passing it in and someone is acting on it.

In the land of extensibility there are restrictions:

  • You cannot change the places a given method is called; that would require overlayering. But you can call the method from your own code (with full control over the parameters).
  • You cannot change the implementation of the method; that would require overlayering. But often you can add logic before/after it, or wrap it using Chain of Command.

With these constraints, here are some options:

  1. "Method overloading"

Scenario

You need to pass extra information from your code to an existing method and act on it in a pre/post handler.

Description

X++ doesn't support method overloading – but you can mimic the behavior by creating a new method (with a new name) in an extension class. The extension method can take additional parameters and call the original method, with your own logic before and after the call.

Example

Here is an example of "overloading" the insert method on CustTable. Notice the first 3 parameters are identical to the parameters on the original insert() method.
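The example code does not survive in this extract; the sketch below is a hedged reconstruction of the pattern. The parameter names _partyType, _name, and _updateSmmBusRel stand in for the real parameters of CustTable.insert() – verify the actual signature in your application – and insertWithMyInfo and _myExtraInfo are made-up names for this illustration.

[ExtensionOf(tableStr(CustTable))]
final class MyCustTable_Extension
{
    // "Overloaded" insert: the first three parameters mirror the original insert(),
    // the last one carries the extra information that only my own code passes in.
    public void insertWithMyInfo(DirPartyType _partyType, Name _name, boolean _updateSmmBusRel, str _myExtraInfo)
    {
        // Pre-logic acting on _myExtraInfo goes here.

        this.insert(_partyType, _name, _updateSmmBusRel);

        // Post-logic acting on _myExtraInfo goes here.
    }
}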


  2. Class state

Scenario

You need to pass extra information from your code to a method on a class that is called via other method(s) on the class.

Description

X++ now supports adding class state via an extension. You can use this to store the extra information.

Example

Here is an example adding state and wrapping a method.
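The original code is not included in this extract; here is a minimal sketch of the idea using illustrative names (myExtraInfo and parmMyExtraInfo are made up). State is added to a SalesLine extension class and read again in a Chain-of-command wrapper:

[ExtensionOf(tableStr(SalesLine))]
final class MySalesLine_Extension
{
    // Extension class state: extra information set by my own calling code.
    private str myExtraInfo;

    public str parmMyExtraInfo(str _myExtraInfo = myExtraInfo)
    {
        myExtraInfo = _myExtraInfo;
        return myExtraInfo;
    }

    // Chain-of-command wrapper. The state set above is visible here because the
    // extension instance lives alongside the table buffer instance.
    public void initFromInventTable(InventTable _inventTable)
    {
        next initFromInventTable(_inventTable);

        if (myExtraInfo)
        {
            // Act on the extra information after the standard initialization has run.
        }
    }
}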

Here is an example using this. The standard implementation of initFromItemOrCategory() calls initFromInventTable().
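Again a sketch rather than the original example – the calling code sets the state on the buffer and then invokes the standard entry point, which eventually reaches the wrapper above:

SalesLine salesLine;

// ... initialize the buffer as usual ...

salesLine.parmMyExtraInfo('Value from my code');

// The standard initFromItemOrCategory() implementation calls initFromInventTable(),
// where the wrapper above picks the value up (parameters omitted here for brevity).
salesLine.initFromItemOrCategory();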

 

  3. Disposable context

Scenario

You need to pass extra information from your code to a pre/post handler somewhere downstream.

Description

It can be tempting to store the extra information in a global cache or variable. There is a better approach, which avoids stale data, and is type-safe. Create a singleton class for the context. The class must implement System.IDisposable, so it is disposed when it goes out of scope. The receiving code can access the singleton instance to extract the information.

Example

Here is an example of the context class:
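The original code is not included in this extract; the following is a hedged reconstruction of the pattern with made-up names (MyPartyInfoContext, parmExtraInfo). The class is a singleton that only exists while a using block is active:

final class MyPartyInfoContext implements System.IDisposable
{
    private static MyPartyInfoContext instance;
    private str extraInfo;

    public void new()
    {
        if (instance)
        {
            throw error("Nested MyPartyInfoContext instances are not supported.");
        }
        instance = this;
    }

    // Called automatically when the using block ends.
    public void dispose()
    {
        instance = null;
    }

    public static MyPartyInfoContext current()
    {
        return instance;
    }

    public str parmExtraInfo(str _extraInfo = extraInfo)
    {
        extraInfo = _extraInfo;
        return extraInfo;
    }
}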

Here is an example of the calling code:
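A sketch of the calling code – the context lives only for the duration of the using block, so it cannot leak stale data into later calls:

using (MyPartyInfoContext context = new MyPartyInfoContext())
{
    context.parmExtraInfo('Value from my code');

    // Somewhere inside this call the standard code reaches DirPartyTable::createNew().
    custTable.insert();
}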

Here is an example of the consuming code. The standard implementation of CustTable.Insert() calls DirPartyTable::createNew() – the example uses Chain-of-command to wrap the createNew method, and then accesses the context to get the information.
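The exact signature of DirPartyTable::createNew() is not reproduced here; the sketch only shows the part that matters – the wrapper (or a pre/post handler) reading the context:

// Inside a Chain-of-command wrapper or pre/post handler of DirPartyTable::createNew():
MyPartyInfoContext context = MyPartyInfoContext::current();

if (context)
{
    str extraInfo = context.parmExtraInfo();
    // Act on the extra information passed down from the caller.
}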

Caution

Transferring state from one arbitrary place to another using a global variable (which the context class effectively is) can lead to logical errors later – errors that can be very hard to detect and fix. My recommendation is to limit this approach to situations where the scope and consequences are manageable.

  4. Request new extension points

Now, the above mechanisms cannot solve the cases where existing callers need to provide more data, or where the existing method implementation needs to act on the data – that would require overlayering. If you discover a need for this, or another case where you are not comfortable with the above options, please log an extension request explaining your scenario. If you have read this far, it should be obvious that you need more than a change to a method's signature – you need an extension point. The implementation of that extension point might require changing a method's signature, but that is just a technical detail.

 

P.S. If you have other suggestions for how to solve situations like these, please leave a comment.

 

THIS POST IS PROVIDED AS-IS AND CONFERS NO RIGHTS.

The Alpha AXP, epilogue: A correction about file system compression on the Alpha AXP




Some time ago, I claimed that Windows file compression had to be dumbed down in order for the Alpha AXP to hit its performance targets.

I have since been informed by somebody who worked on file system compression for Windows NT that the information was badly sourced. (My source was somebody who worked on real-time file compression, but not specifically on the Windows NT version.)



This is a bit of a futile correction because the wrong information has already traveled around the world [citation needed], posted some selfies to Instagram, and renewed its passport.



Windows NT file compression worked just fine on the Alpha AXP. It probably got a lot of help from its abundance of registers and its ability to perform 64-bit calculations natively.



We regret the error.



Bonus chatter: Real-time file compression is a tricky balancing act.



Compression unit: If you choose a small compression unit, then you don't get to take advantage of as many compression opportunities. But if you choose a large compression unit, then reads become more inefficient in the case where you needed only a few bytes out of the large unit, because you had to read the entire unit and decompress it, only to get a few bytes. Updates also become more expensive the larger the compression unit, because you have to read the entire unit, update the bytes, compress the whole unit, and then write the results back out. (Possibly to a new location if the new data did not compress as well as the original data.) Larger compression units also tend to require more memory for auxiliary data structures in the compression algorithm.



Compression algorithm: Fancier algorithms will give you better compression, but cost you in additional CPU time and memory.



What makes twiddling the knobs particularly difficult is that the effects of the knobs aren't even monotonic! As you make the compression algorithm fancier and fancier, you may find at first that things get slower and slower, but when the compression ratio reaches a certain level, then you find that the reduction in I/O starts to dominate the extra costs in CPU and memory, and you start winning again. This crossover point can vary from machine to machine because it is dependent upon the characteristics of the hard drive as well as those of the CPU. A high-powered CPU on a slow hard drive is more likely to see a net benefit, whereas a low-powered CPU may never reach the breakeven point.

Azure Content Spotlight – Reactive event programming with Azure Event Grid


Welcome to another Azure Content Spotlight! These articles are used to highlight items in Azure that could be more visible to the Azure community.

Azure Event Grid is a new Azure offering for building event-based applications on serverless infrastructure. It provides event publishing and subscription support so that applications can react to events from integrated services. Built for scale, it enables intelligent filtering and routing on event metadata, giving you the flexibility to adapt event-based applications in a serverless solution.

The Azure Event Grid has a pay-per-event pricing model, and while it is in preview, the cost is $0.30 per million operations.  The first 100,000 operations per month are free.

The following image summarizes the currently supported services:

Happy eventing!
