Channel Details:
  • Title: MSDN Blogs
  • Channel Number: 16291381
  • Language: English
  • Registered On: June 20, 2013, 3:48 am
  • Number of Articles: 29128
  • Latest Snapshot: October 4, 2019, 1:29 am
  • RSS URL: http://blogs.msdn.com/b/mainfeed.aspx?type=blogsonly/rss.aspx
  • Publisher: https://blogs.msdn.microsoft.com
  • Description: Get the latest information, insights, announcements, and news from Microsoft experts and...
  • Catalog: //blogs3805.rssing.com/catalog.php?indx=16291381

Investigating Socket exception for ASP.NET applications in Azure App Service

October 13, 2017, 11:01 am

In Azure App Service, the number of outbound connections is limited based on the size of the VM. Below are the machine-wide TCP connection limits (as documented here):

Limit Name     Description                               Small (A1)   Medium (A2)   Large (A3)
Connections    Number of connections across entire VM    1920         3968          8064

When an application initiates an outbound connection to a database or a remote service, it uses a TCP connection from the range of allowed TCP connections on the machine. In some scenarios the available outbound ports can get exhausted, and when the application tries to initiate a new connection it may fail with this error:

Exception Details: System.Net.Sockets.SocketException: An attempt was made to access a socket in a way forbidden by its access permissions.

COMMON REASONS THAT CAUSE THIS ERROR:

  • Using client libraries that are not implemented to reuse TCP connections.
  • Application code or the client library is leaking TCP socket handles.
  • A burst of requests opening too many TCP socket connections at once.
  • For higher-level protocols such as HTTP, this is encountered when the Keep-Alive option is not used.

TROUBLESHOOTING:

For a .NET application, the code snippet below will log the active outbound connections and details about the external services to which it is connecting. It will not log database-related connections, as they do not use System.Net. You can create a page named PrintConnectionsSummary.aspx under the site and place it under the /wwwroot folder. When the application encounters the socket exceptions, users can browse to PrintConnectionsSummary.aspx to see the connection count to the remote services.

Download source file: Click here

NOTE: This works only when the application uses the System.Net namespace to make outbound connections.


The screenshot below shows the code snippet above in action, printing the external connections that the app is making (via System.Net).

image


Open Source Private Traffic Manager and Regional Failover

October 13, 2017, 11:53 am

Many web applications cannot tolerate downtime in the event of regional data center failures. Failover to a secondary region can be orchestrated with solutions such as Azure Traffic Manager, which monitors multiple end-points and, based on availability and performance, directs users to the region that is available or closest to them.

The Azure Traffic Manager uses a public end-point (e.g., https://myweb.trafficmanager.net). There are, however, many mission critical, line of business web applications that are private. They are hosted in the cloud on private virtual networks that are peered with on-premises networks. Since they are accessed through a private end-point (e.g., http://myweb.contoso.local), the Azure Traffic Manager cannot be used to orchestrate failover for such applications. One solution is to use commercial appliances such as BIG-IP from F5. These commercial solutions are sophisticated and can be hard to configure. Moreover, they can be expensive to operate, so for development and testing scenarios or when budgets are tight, it is worth looking at open source alternatives.

In this blog post, I describe how to use the Polaris Global Server Load Balancer to orchestrate regional failover on a private network consisting of three virtual networks in three different regions. In this hypothetical scenario, the "Contoso" organization has a private network extended into the cloud. They would like to deploy a web application to the URL http://myweb.contoso.local. Under normal circumstances, entering that URL into their browser should take them to a cloud deployment in a private virtual network (e.g., 10.1.0.5), but in the event that this entire region is down, they would like for the user to be directed to another private network in a different region (e.g., 10.2.0.5). Here we are simulating the on-premises network that is peered with the cloud regions by deploying a jump box with a public IP (so that we have easy access here) in a third region. The topology looks something like this:

 

This entire topology with name servers, web servers, control VM, etc. can be deployed using the PrivateTrafficManager Azure Resource Manager Template, which I have uploaded to GitHub.com. I have deployed this template successfully in both Azure Commercial Cloud and Azure US Government Cloud. Be aware that it takes a while to deploy this template. It uses three (!) Virtual Network Gateways, which can take a while (45 minutes) to deploy. Also be aware that the VM SKUs must be available in the regions that you choose, so your deployment may fail depending on your regional choices. In my testing, I have used eastus, westus2, westcentralus in the commercial cloud and usgovvirginia, usgovtexas, usgovarizona for the regions, but you can experiment.

After deployment, you should have 3 virtual networks (10.0.0.0/16, 10.1.0.0/16, 10.2.0.0/16) and they are connected via the VPN Gateways. In the future, one would probably consider Global VNet Peering, which is currently in public preview in Azure Commercial Cloud. In the first of the networks, we are hosting a Jump Box (or ControlVM), which we are simply using here to test the configuration. It is accessible via RDP using a public IP address. A simple modification to this would be to eliminate the public IP address and use Point-To-Site VPN to connect to the first virtual network. In the other two networks, we have a name server and a web server in each network. The first set is considered the primary one.

When the user enters http://myweb.contoso.local into their browser, traffic should go to http://www1.contoso.local (10.1.0.5), the first web server, unless it is unreachable. In that case, traffic goes to http://www2.contoso.local (10.2.0.5), the second web server. Both name servers (10.1.0.4 and 10.2.0.4) are listed as DNS servers on all networks and both run the Polaris GSLB, so if one of them is down, name resolution requests go to the other.
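
One quick sanity check from the jump box is to query each name server directly with nslookup; while www1 is healthy, both name servers should answer with 10.1.0.5:

nslookup myweb.contoso.local 10.1.0.4
nslookup myweb.contoso.local 10.2.0.4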

The Polaris wiki has sufficient information to set up the configuration; the main configuration file is the polaris-lb.yaml file:

pools:
    www-example:
        monitor: http
        monitor_params:
            use_ssl: false
            hostname: myweb.contoso.local
            url_path: /
        lb_method: fogroup
        fallback: any
        members:
        - ip: 10.1.0.5
          name: www1
          weight: 1
        - ip: 10.2.0.5
          name: www2
          weight: 1

globalnames:
    myweb.contoso.local.:
        pool: www-example
        ttl: 1

This establishes the pool of endpoints that are being monitored. In the example we also install a polaris-topology.yaml file, which maps the endpoints to the two regions:

datacenter1:
- 10.1.0.5/32

datacenter2:
- 10.2.0.5/32
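
Adding a third region would only require another member in the www-example pool and another datacenter entry in the topology file, along the lines of this sketch (the 10.3.0.5 web server is hypothetical and not part of the template):

# extra member under pools -> www-example -> members in polaris-lb.yaml
        - ip: 10.3.0.5
          name: www3
          weight: 1

# extra entry in polaris-topology.yaml
datacenter3:
- 10.3.0.5/32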

The name servers are configured with the setup_ns.sh script:

#!/bin/bash

sql_server_password=$1

debconf-set-selections <<< "mysql-server mysql-server/root_password password $sql_server_password"
debconf-set-selections <<< "mysql-server mysql-server/root_password_again password $sql_server_password"

debconf-set-selections <<< "pdns-backend-mysql pdns-backend-mysql/password-confirm password $sql_server_password"
debconf-set-selections <<< "pdns-backend-mysql pdns-backend-mysql/app-password-confirm password $sql_server_password"
debconf-set-selections <<< "pdns-backend-mysql pdns-backend-mysql/mysql/app-pass password $sql_server_password"
debconf-set-selections <<< "pdns-backend-mysql pdns-backend-mysql/mysql/admin-pass password $sql_server_password"
debconf-set-selections <<< "pdns-backend-mysql pdns-backend-mysql/dbconfig-remove select true"
debconf-set-selections <<< "pdns-backend-mysql pdns-backend-mysql/dbconfig-reinstall select false"
debconf-set-selections <<< "pdns-backend-mysql pdns-backend-mysql/dbconfig-upgrade select true"
debconf-set-selections <<< "pdns-backend-mysql pdns-backend-mysql/dbconfig-install select true"

apt-get update
apt-get -y install mysql-server mysql-client  pdns-server pdns-backend-mysql pdns-server pdns-backend-remote memcached python3-pip

#Add recursor pointing to Azure DNS
sudo sed -i.bak 's/# recursor=no/recursor=168.63.129.16/' /etc/powerdns/pdns.conf

#Install polaris
git clone https://github.com/polaris-gslb/polaris-gslb.git
cd polaris-gslb
sudo python3 setup.py install

wget https://raw.githubusercontent.com/hansenms/PrivateTrafficManager/master/PrivateTrafficManager/scripts/pdns/pdns.local.remote.conf
wget https://raw.githubusercontent.com/hansenms/PrivateTrafficManager/master/PrivateTrafficManager/scripts/polaris/polaris-health.yaml
wget https://raw.githubusercontent.com/hansenms/PrivateTrafficManager/master/PrivateTrafficManager/scripts/polaris/polaris-lb.yaml
wget https://raw.githubusercontent.com/hansenms/PrivateTrafficManager/master/PrivateTrafficManager/scripts/polaris/polaris-pdns.yaml
wget https://raw.githubusercontent.com/hansenms/PrivateTrafficManager/master/PrivateTrafficManager/scripts/polaris/polaris-topology.yaml
wget https://raw.githubusercontent.com/hansenms/PrivateTrafficManager/master/PrivateTrafficManager/scripts/polaris.service

cp pdns.local.remote.conf /etc/powerdns/pdns.d/
cp *.yaml /opt/polaris/etc/
systemctl restart pdns.service

#Copy startup to ensure proper reboot behavior
cp polaris.service /etc/systemd/system
systemctl enable polaris.service
systemctl start polaris.service

 pdnsutil create-zone contoso.local
 pdnsutil add-record contoso.local ns1 A 10.1.0.4
 pdnsutil add-record contoso.local ns2 A 10.2.0.4
 pdnsutil add-record contoso.local www1 A 10.1.0.5
 pdnsutil add-record contoso.local www2 A 10.2.0.5

 exit 0

A few different things are achieved in that script:

  1. PowerDNS with the MySQL backend is installed. This involves installing MySQL Server and setting its root password. So that the script can run unattended during template deployment, we feed in the answers needed during installation.
  2. Polaris GSLB is installed.
  3. The contoso.local zone is established.
  4. A records are created for the name servers and the web servers.
  5. We ensure that Polaris GSLB runs on reboot of the name servers.

For the web servers, we use the setup_www.sh script:

#!/bin/bash

machine_message="<html><h1>Machine: $(hostname)</h1></html>"

apt-get update
apt-get -y install apt-transport-https ca-certificates curl software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get -y update
apt-get -y install docker-ce

mkdir -p /var/www
echo $machine_message >> /var/www/index.html
docker run --detach --restart unless-stopped -p 80:80 --name nginx1 -v /var/www:/usr/share/nginx/html:ro -d nginx

This script installs Docker (Community Edition), creates an HTML page displaying the machine name, and runs nginx in a Docker container to serve this page. This is just so that we can see whether the page is served from one region or the other in this example. In a real scenario, the user should not notice a difference between the two regions.

As mentioned, we are not using the Azure Docker Extension here. It is not yet available in Azure Government and the approach above ensures it will deploy in either Azure Commercial Cloud or Azure US Government Cloud.

After deploying the template, you can log into the jump box using RDP; a first test is to ping myweb.contoso.local:

Windows PowerShell
Copyright (C) 2016 Microsoft Corporation. All rights reserved.

PS C:\Users\mihansen> ping myweb.contoso.local

Pinging myweb.contoso.local [10.1.0.5] with 32 bytes of data:
Reply from 10.1.0.5: bytes=32 time=28ms TTL=62
Reply from 10.1.0.5: bytes=32 time=28ms TTL=62
Reply from 10.1.0.5: bytes=32 time=29ms TTL=62
Reply from 10.1.0.5: bytes=32 time=29ms TTL=62

Ping statistics for 10.1.0.5:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 28ms, Maximum = 29ms, Average = 28ms
PS C:\Users\mihansen>

We can also open a browser and verify that we are getting the page from the first web server (www1):

Now we can go to the Azure Portal (https://portal.azure.us for Azure Gov used here, https://portal.azure.com for Azure Commercial) and cause the primary web server to fail. In this case, we will simply stop it:

After stopping it, wait a few seconds and repeat the ping test:

PS C:\Users\mihansen> ping myweb.contoso.local

Pinging myweb.contoso.local [10.2.0.5] with 32 bytes of data:
Reply from 10.2.0.5: bytes=32 time=49ms TTL=62
Reply from 10.2.0.5: bytes=32 time=46ms TTL=62
Reply from 10.2.0.5: bytes=32 time=53ms TTL=62
Reply from 10.2.0.5: bytes=32 time=48ms TTL=62

Ping statistics for 10.2.0.5:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 46ms, Maximum = 53ms, Average = 49ms
PS C:\Users\mihansen>

As we can see, it is now sending clients to the secondary region and if we go to the browser (you may have to close your browser and force a refresh to clear the cache), you should see that we are going to our secondary region:

Of course if you restart www1, the clients will be directed to the primary region.

In summary, we have established a private network with a private DNS zone. A private traffic manager has been established with open source components and we have demonstrated failing over to a secondary region. All of this was done without any public end-points and could be entirely private between an on-prem network and Azure.

Let me know if you have comments/suggestions/experiences to share.


Query Data Store Forced Plan behavior on AlwaysOn Readable Secondary

October 13, 2017, 12:10 pm

 

We have been getting the following questions repeatedly from many customers, so I thought of writing a quick blog to explain the behavior.

Question: If you have Query Data Store (QDS) enabled for a user database participating in an Always On Availability Group and you have forced a plan for a specific query, what happens when the same query runs on a readable secondary? Will it use the forced plan?

Answer: QDS is not supported on a readable secondary, so even though you have forced a plan on the primary replica, you “may” see a different plan on the secondary database, because QDS does not force the same plan on the readable secondary.

Please note that SQL Server does not create a plan guide to force the execution plan in QDS.

QDS needs to store runtime stats in the database, which is not possible while the database is in a read-only (readable secondary) state.

I am sharing a diagram from the following article for your reference: https://docs.microsoft.com/en-us/sql/relational-databases/performance/how-query-store-collects-data

 

Question: Will QDS retain forced plan information when the database fails over from the primary replica to the secondary replica?

Answer: Yes. QDS stores forced plan information in the sys.query_store_plan catalog view, so in case of a failover you will continue to see the same behavior on the new primary.

select * from sys.query_store_plan where is_forced_plan=1

Question: How does QDS force a query plan? Does it create a plan guide to force the plan?

Answer: As I mentioned earlier, QDS does not use a plan guide. Instead it uses the “USE PLAN” query hint, which takes an XML plan as an argument; QDS pulls this XML plan from the query_plan column of sys.query_store_plan.
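
A minimal T-SQL sketch of how a plan is forced and un-forced through Query Store (the @query_id and @plan_id values below are placeholders; look them up in sys.query_store_query and sys.query_store_plan for your own query):

-- plans currently forced through Query Store
select plan_id, query_id, is_forced_plan from sys.query_store_plan where is_forced_plan = 1

-- force a specific plan for a query (placeholder IDs)
exec sp_query_store_force_plan @query_id = 42, @plan_id = 7

-- remove the forcing again
exec sp_query_store_unforce_plan @query_id = 42, @plan_id = 7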

You can refer to the following diagram to see how SQL Server forces the plan.

 

 

 

Vikas Rana | Twitter | Linkedin | Support Escalation Engineer
Microsoft India GTSC


Service Fabric 6.0 Refresh Release for Windows Server Standalone, Azure and .NET SDK

October 13, 2017, 6:38 pm

Today we have released a new version of the SDK and are in the process of rolling out a new runtime to Azure clusters.

This release is a refresh for clusters running in Azure (Linux and Windows), a refresh of the .NET SDK, and the first 6.0 release of Service Fabric for Windows Server (Standalone).

At this time we have made the SDK available for download and are about to start the upgrade and roll-out to clusters in Azure. The upgrade will roll out over the next few days, so if you are running with automatic upgrade in Azure, expect your clusters to be upgraded soon.

Check out the release notes for the details: Microsoft-Azure-Service-Fabric-Release-Notes-SDK-2.8.CU1-Runtime-6.0.CU1

Cheers,

The Service Fabric Team


Postmortem – Availability issues with Visual Studio Team Services on 10 October 2017

October 13, 2017, 1:23 pm

On 10 October 2017 we had a global incident with Visual Studio Team Services (VSTS) that had a serious impact on the availability of our service (incident blog here). We know how important VSTS is to our users and deeply regret the interruption in service and are working hard to avoid similar outages in the future.

 
Customer Impact
This was a global incident that caused performance issues and errors across all instances of VSTS, impacting many different scenarios. The incident occurred within Shared Platform Services (SPS), which contains identity and account information for VSTS.

The incident started on October 10th at 7:16 UTC and ended at 14:10 UTC.

The graph below shows the number of impacted users during the incident:

What Happened
This incident was caused by a change delivered as part of our sprint 124 deployment of SPS. In that deployment, we pulled in version 5.1.4 of the System.IdentityModel.Tokens.Jwt library. This library had a performance regression in it that caused CPU utilization on the SPS web roles to spike to 100%.

The root cause of the spike was the use of a compiled regex in the JWT token parsing code. The regex library maintains a fixed-length cache for all compiled regular expressions, and the regexes in use by the JWT token parser are marked as culture-sensitive, so we were getting a regex cache entry for each locale being used by our users. Compiled regexes are fast so long as they are cached (compiled once and reused), but they are computationally expensive to generate and compile, which is why they are normally cached. Because of the wide variety of locales of our users who came online at about 07:00 UTC, we exceeded the capacity of the regex cache, causing us to thrash on this cache and peg the CPUs due to excessive regex compilation. Additionally, the compilation is serialized, leading to lock contention.

Ideally, we would have been able to roll back the SPS deployment; however, we had already upgraded our databases and rollback was not an option. Going forward we will add a 24-hour pause before upgrading the databases, enabling us to observe the behavior of the service under peak load.

To mitigate the issue, the web roles were scaled out; however, in retrospect it is clear we should have attempted this mitigation much earlier. During the incident there was some level of concern that increasing the web role instance count could cause downstream issues. As part of our post mortem we agreed that defining standard mitigation steps for common scenarios like this (e.g., high CPU on web roles) will help address such issues faster.

There was an additional delay in mitigating the issue due to challenges adding the new web role instances. One of the existing web roles was stuck in a “busy” state which prevented the new instances from coming online. While investigating how to resolve this issue the problematic web role self-healed allowing the new capacity to come online at 13:26 UTC.

As the additional capacity became active the service started to drain the request backlog. This overwhelmed the CPU of two backend databases, and triggered concurrency circuit breakers on the web roles causing a spike of customer visible errors for approximately 1 hour:

Once the backlog of requests was drained the CPU utilization on the two databases returned to normal levels.

After ensuring the customer impact was fully mitigated, we prepared a code change which eliminated the use of the constructor containing the regex for the JWT tokens. This fix has been deployed to SPS.

 
Next Steps
In retrospect, the biggest mistake in this incident was not that it occurred, but that it lasted so long and had such wide impact. We should have been able to mitigate it in minutes, but instead it took hours.

  1. We will ensure we can rollback binaries in SPS for the first 24 hours after a sprint deployment by delaying the database servicing changes 24 hours.
  2. We are updating our standard operating procedures to define prescriptive mitigations steps for common patterns such as high CPU on web roles.
  3. We are following up with Azure to understand what caused the delay in adding additional web role capacity.
  4. We are going to further constrain compute on our internal dogfood instance of SPS so that this class of issue is more likely to surface before we ship to external customers.
  5. We are working on partitioning SPS. We currently have a dogfood instance in production, though the access pattern to trigger the issue was not present there (insufficient number of locales to trigger the issue). We have engineers dedicated to implementing a partitioned SPS service, which will allow for an incremental, ring-based deploy that limits the impact of issues.
  6. We have updated the AAD library to fix the JWT token regex parsing code.
  7. We are fixing every other place in our code where compiled regexes are used incorrectly, especially compiled expressions that are not using the Culture Invariant flag.

We again apologize for the disruption this caused you. We are fully committed to improving our service so that we can limit the damage from an issue like this and mitigate the issue more quickly if it occurs.

Sincerely,
Ed Glas
Group Engineering Manager, VSTS Platform


New InfoPath Calculation Timeout in SharePoint 2016

October 13, 2017, 6:50 pm

SharePoint 2016 was born from the cloud - that is, SharePoint Online in Office 365. In SharePoint Online, we added a performance throttling mechanism so that complex InfoPath forms do not cause performance issues for the web servers. The same code base was shipped with SharePoint Server 2016 (on-premises), and you may notice that InfoPath forms that you used to be able to publish successfully in SharePoint 2013 fail to publish with the following error:

Expression evaluation timeout (<timeout> ms) exceeded, closing form urn:schemas-microsoft-com:office:infopath:list.

You can set the timeout to the largest possible value, which is equivalent to removing the timeout check. Just know that this can have a negative performance impact on your SharePoint web front ends.

Here is the PowerShell script that you can use to change the timeout:

$farm = Get-SPFarm
$service = $farm.Services | where {$_.TypeName -eq "Forms Service"}
$service.Properties["CalculationTimeoutMs"] = [Int32]::MaxValue
$service.Update()
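
If you prefer not to disable the check entirely, the same property can be read back or set to a finite value instead of Int32.MaxValue. A sketch (the 120000 ms value is only an example; tune it for your forms):

$farm = Get-SPFarm
$service = $farm.Services | where {$_.TypeName -eq "Forms Service"}
$service.Properties["CalculationTimeoutMs"]            # show the current value (empty if never set)
$service.Properties["CalculationTimeoutMs"] = 120000   # e.g. a two-minute timeout
$service.Update()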

Happy SharePointing!



                       

Professionalization

October 14, 2017, 12:59 am

Perhaps a path that comes closer to professionalization, in the sense of fully practicing a science or an art, is precisely not to pursue professionalization understood as accumulating academic degrees and climbing hierarchical structures, but to remain a devoted amateur of one's own curiosity and of learning; especially if that learning is cooperative, carried out in conversation and discussion with others, without academic orthodoxies that are self-absorbed and detached from the rest of the world.

Fortunately, I did not waste time studying philosophy at any institution. What happened to Jonathan Keats would have happened to me, which is the same thing I have already done on several occasions: walking away from institutions that claim to hold a monopoly on truth, whether in politics, history, science, religion, or in other areas of great interest to me, such as building software-based business solutions.

What the World Needs is More Curious Amateurs.


Git and Visual Studio 2017, Part 1: Creating a Repository

October 14, 2017, 1:12 am

There is already a lot of information about Git and about Git support in Visual Studio 2017, so I wasn't sure there was much point in writing about it, but then I remembered how uneasy I felt when I first started using Git from Visual Studio. In this series I summarize what Git is doing behind the scenes when you use source control in Visual Studio 2017. I have already finished the English version, so if you are in a hurry, please read that instead.

How Git Works

First, it is important to understand how Git fundamentally works. There is plenty of material available, but personally I recommend the following videos.

Advanced Git Tutorial [video]
GOTO 2015 • Deep Dive into Git • Edward Thomson [video]

They are in English, but they explain things very well; please watch them at least once. For something more introductory, see here.

Working Locally for a While

Unlike a centralized system, Git is a distributed source control system. Since it works without a server, for a while we will work locally only.

Creating a Console App Project

To use Git, first create a project.

1. In Visual Studio, create a C# console app project. Do not check "Create new Git repository".

image

2. Click [OK] to create it. Now we are ready.

Creating the Repository: Git

Before working in Visual Studio, let's first see how this works with Git itself.

1. Open a command prompt and change to the folder where you created the solution.

image

2. Run 'git init'. A hidden .git folder is created. If you don't see it, configure Explorer to show hidden folders.

image

image

3. Check the branches at this point by running 'git branch'. You might expect master to exist by default, but it does not.

image

4. A branch is really just a reference to a commit. Since there are no commits yet, there appear to be no branches either. The .git directory contains a HEAD file that holds the current branch, so display its contents with 'type .git\HEAD'. type is the Windows equivalent of the Linux cat command and prints the file contents.

image

5. The HEAD file points to a master file under refs\heads, but that file does not actually exist yet.

image

Checking Status, Staging, and Committing

To commit, you first need to add the target folders and files to the staging area. This concept is important: Git has three areas in which it keeps your code (a quick mapping of commands to these areas follows the list).

  1. Working directory: the folder you are currently working in.
  2. Staging area: the area that holds what will be committed. Thanks to this area you can choose which files and folders you want to commit. Once a file has been staged, it becomes "tracked".
  3. Local repository: the area where committed items are stored.
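
Roughly, the Git commands used in the numbered steps below map onto these three areas as follows:

git add .            (working directory -> staging area)
git commit -m "msg"  (staging area -> local repository)
git status           (reports how the three areas differ)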

1. Run 'git status' to check the current state. Nothing has been staged yet.

image

2. Run 'git add .' to add everything. To specify items individually, run 'git add .vs VS_Git.sln VS_Git/*'. Run 'git status' again and confirm that the items have been added to the staging area. In practice we would want to exclude some files and folders, such as the obj folder; how to do that is described later.

image

3. Run 'git commit -m "initial commit"' to commit the staged items. The commit ID aa28ae9 is displayed.

image

4. Run 'git status' again. This time it says "nothing to commit", so there is nothing left to commit. When you see this, the working directory, the staging area, and the latest commit are all in the same state.

image

5. Run 'git log' to check the commit history. The output contains garbled characters.

image

6. To work around this, run 'set LESSCHARSET=utf-8' and then run 'git log' again. The yellow text is the SHA-1 hash that serves as the commit ID, and HEAD points to master.

image

Git Objects

So what actually happens when you add files to the staging area or make a commit? Everything becomes a Git object, and a SHA-1 hash is computed for it. There are three kinds of objects in total (a quick way to check an object's type follows the list).

  1. Blob: corresponds to a file.
  2. Tree: corresponds to a folder.
  3. Commit: holds information such as who made the commit and when, as well as the SHA-1 hash of the tree object associated with the commit.
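
You can check the type of any of these objects with 'git cat-file -t'; for example, using the ID of the commit created earlier (your hash will differ):

git cat-file -t aa28ae9
commit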

1. Since we have now committed once, check whether the master file exists. This time it does. Its contents are just a SHA-1 hash, and the value matches the one shown when we committed earlier. In other words, the HEAD file points to the current branch, and the file it points to points to the current commit.

image

image

2. Check the .git\objects folder. It contains several folders, each named after the first two characters of an object's hash. For this commit the folder name is aa, and the file name is the remainder of the hash.

image

image

3. Run 'git cat-file commit aa28ae9' to display the commit object. Git objects are not text files, but they can be displayed with the cat-file command. You can pass the full SHA-1 hash, but the first 6 or 7 characters are enough to identify it, so here we use the first 7.

image

4. The commit contains the hash of a tree object, so run 'git ls-tree 17806763' to display that tree object. Matching the folder structure we committed, it contains two trees and a blob object.

image

5. Check the VS_Git tree as well. Based on the result of 'git ls-tree 361e2e7', run 'git cat-file blob bd87f23' to display Program.cs.

image

Deleting the Git Repository

If you want to stop managing the project with Git, simply delete the .git folder. After deleting the folder, restart Visual Studio 2017.

Creating the Repository: Visual Studio

Next, let's create a Git repository from Visual Studio 2017.

1. Open the solution in Visual Studio 2017.

2. Right-click the solution and click "Add Solution to Source Control". Visual Studio creates the .git folder.

image

3. Once the solution is connected, the icons in Solution Explorer change.

image

4. Open a command prompt and change to the solution directory. Running 'git status' shows, unlike before, that there is nothing to commit. Why is that?

image

5. Run 'git log' to check the commit history. There are already two commits: the first added the .gitignore and .gitattributes files, and the second added all of the project items.

image

6. The folder also contains additional files besides the .git folder.

image

7. Opening the .gitignore file shows a list of files and folders that Git will not track. Many items are listed by default.

8. Run 'git cat-file commit 2390071' to check the tree contained in the latest commit.

image

9. Running 'git ls-tree cb09f33' shows that the .vs folder is not included, because the .vs folder is listed in .gitignore.

image

10. Run 'git ls-tree b2dfadc' as well and confirm that the obj folder is not included either.

image

Summary

This time we confirmed that the moment you add a solution to source control from Visual Studio, several Git commands have already been run, and additional files such as .gitignore are created. Next time we will look further at how code changes are saved with Git.

References

Git Basics - Recording Changes to the Repository

中村 憲一郎


Using Microsoft Cognitive Emotion API with Android App Studio

October 14, 2017, 11:23 am

Guest blog by Suyash Kabra. Microsoft Student Partner at University College London

About me

clip_image002

I am a 2nd year Computer Science student at University College London. I am passionate about hackathons, playing video games, listening to music and watching movies. I am really interested in cognitive services as well as virtual/augmented reality and machine learning. You can find me on LinkedIn

Introduction

As a human it is very easy for us to see someone’s face and understand their emotions. This helps us to act appropriately towards them. But imagine if your phone was able to do the same thing. Depending on your mood it could play you the appropriate songs or show you the appropriate cat pictures (or dog ones if you love dogs instead). You might think I am joking, but believe me I’m not making this up! Using Microsoft’s Emotion API, we can detect the emotion of the people in an image or even in real time!

Emotion API is one of the many cognitive services provided by Microsoft. Using the other cognitive services is similar to using the Emotion API, so feel free to check out the other services after this tutorial.

In this tutorial we are going to create an Android application which lets the user select an image from their gallery and find out the emotions of the people in the selected image. Isn't that awesome?

Prerequisite

You should have basic knowledge of how to create a basic Android application (know about XML and Java) and know how REST APIs work. You can find this project on GitHub – https://github.com/skabra07/MSP-Emotion-API.

What we will use:

· Android Studio to create your android application. You can download that from https://developer.android.com/studio/index.html

· Any Android device. If you don't have one, that's fine; you can use the Android Virtual Device (AVD) provided with Android Studio. Here is a link which shows you how to set up an AVD: https://developer.android.com/studio/run/managing-avds.html

By the end of this article you will be able to:

· Create a basic android application which can select an image from your gallery.

· Be able to use the Emotion API to get emotions of the people in the image

How will this work:

1. In the main activity page present the user with an option to select an image from the gallery.

2. Once they have selected the image click on “Get emotion”

3. The emotion we get will be displayed.

If you don’t understand what is happening, do not fear. Below is a step by step guide on how to create the whole application.

Getting your Emotion API Subscription key

We can get a free 30-day standard Emotion API subscription. Note that this subscription only allows you to make 20 calls per minute or 30,000 calls per month.

First go to https://azure.microsoft.com/en-us/try/cognitive-services/

clip_image004

Then select the “Get API Key” for the Emotion API. After that accept the conditions and select your country/region

clip_image006

Finally log in with your preferred choice of account.

clip_image008

Once logged in you will be sent to the confirmation page. Here you will have your subscription key, which you should write down as we will use it later.

Creating a new Android Studio Project.

Open Android Studio and select “Start a new Android Studio Project”:

clip_image010

Fill in the details about the project. Ensure the project location is where you want to place the project in your PC. Then click next.

clip_image012

Select “Phone and Tablet” with the minimum SDK as API 19 (KitKat) and then click on next

clip_image014

Select “Empty Activity” and click next

clip_image016

Leave the new page as it is and click on “Finish” to create the new project

clip_image018

The App Layout

Now that we have the android project ready we will first create a layout for the “MainActivity” page of the application.

Open the activity_main.xml file (found under the res -> layout folder):

clip_image020

Remove the existing Text View code. In our view we will have an Image View (to show the image picked from the gallery), 2 buttons and a Textbox for the result. The layout will look like:

clip_image022

The code for the layout:

<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
     xmlns:app="http://schemas.android.com/apk/res-auto"
     xmlns:tools="http://schemas.android.com/tools"
     android:layout_width="match_parent"
     android:layout_height="match_parent"
     tools:context="emotionapi.emotionapi.MainActivity"
     tools:layout_editor_absoluteY="81dp"
     tools:layout_editor_absoluteX="0dp">


     <ImageView
         android:id="@+id/imageView"
         android:layout_width="0dp"
         android:layout_height="0dp"
         app:srcCompat="@mipmap/ic_launcher"
         tools:layout_constraintTop_creator="1"
         tools:layout_constraintRight_creator="1"
         tools:layout_constraintBottom_creator="1"
         app:layout_constraintBottom_toTopOf="@+id/getEmotion"
         android:layout_marginStart="3dp"
         android:layout_marginEnd="2dp"
         app:layout_constraintRight_toRightOf="@+id/getEmotion"
         android:layout_marginTop="16dp"
         tools:layout_constraintLeft_creator="1"
         android:layout_marginBottom="12dp"
         app:layout_constraintLeft_toLeftOf="@+id/getImage"
         app:layout_constraintTop_toTopOf="parent" />

     <Button
         android:id="@+id/getImage"
         android:layout_width="wrap_content"
         android:layout_height="wrap_content"
         android:text="Get Image"
         android:onClick="getImage"
         android:layout_marginStart="60dp"
         tools:layout_constraintTop_creator="1"
         android:layout_marginTop="12dp"
         app:layout_constraintTop_toBottomOf="@+id/imageView"
         tools:layout_constraintLeft_creator="1"
         app:layout_constraintLeft_toLeftOf="parent" />

     <Button
         android:id="@+id/getEmotion"
         android:layout_width="wrap_content"
         android:layout_height="wrap_content"
         android:text="Get Emotion"
         android:onClick="getEmotion"
         android:layout_marginEnd="36dp"
         tools:layout_constraintRight_creator="1"
         tools:layout_constraintBottom_creator="1"
         app:layout_constraintBottom_toTopOf="@+id/resultText"
         app:layout_constraintRight_toRightOf="parent"
         android:layout_marginBottom="37dp" />

     <TextView
         android:id="@+id/resultText"
         android:layout_width="0dp"
         android:layout_height="136dp"
         android:ems="10"
         android:text=""
         android:textAlignment="viewStart"
         android:textAppearance="@style/TextAppearance.AppCompat.Widget.PopupMenu.Large"
         android:textSize="18sp"
         android:typeface="normal"
         android:layout_marginStart="18dp"
         android:layout_marginEnd="20dp"
         tools:layout_constraintRight_creator="1"
         tools:layout_constraintBottom_creator="1"
         app:layout_constraintBottom_toBottomOf="parent"
         app:layout_constraintRight_toRightOf="parent"
         tools:layout_constraintLeft_creator="1"
         android:layout_marginBottom="24dp"
         app:layout_constraintLeft_toLeftOf="@+id/getImage" />

     <TextView
         android:id="@+id/textView"
         android:layout_width="84dp"
         android:layout_height="26dp"
         android:ems="10"
         android:text="Result:"
         android:textAlignment="viewStart"
         android:textAppearance="@style/TextAppearance.AppCompat.Widget.PopupMenu.Large"
         android:textSize="18sp"
         android:typeface="normal"
         android:layout_marginStart="4dp"
         app:layout_constraintBaseline_toBaselineOf="@+id/resultText"
         tools:layout_constraintBaseline_creator="1"
         tools:layout_constraintLeft_creator="1"
         app:layout_constraintLeft_toLeftOf="parent" />

</android.support.constraint.ConstraintLayout>

From the code you can see we have two buttons with the ids “getEmotion” and “getImage”. These buttons also have a method name for the “android:onClick” field; the relevant method will be called when the user clicks on the button. The rest of the code sets the font size of the text and the layout position of the elements.

Application Permissions and External Libraries

Since we are accessing the gallery and using the internet, we will need to request these permissions from the user. We will open the AndroidManifest.xml…

clip_image024

...and add the following lines of code:

<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />

The above two lines get permission from the user to access the storage and internet of the device we are using. Without this we will not be able to access the gallery or send the picture which may crash our application. Your AndroidManifest.xml will look as follows:

clip_image026

Also, since we are making a POST call, we will need to use a library which can handle these calls and get a response. I will explain more about this later. To add the external library, open "build.gradle (Module: app)" (found under the "Gradle Scripts" section of the left navigation bar) and add the following line inside the "dependencies" block of code:

compile group: 'cz.msebera.android' , name: 'httpclient', version: '4.4.1.1'

The new file should look as follows:

clip_image028

How the application works

Before we continue I will explain what exactly is going to happen (the logic behind the application) and then show you how to do it using a step by step guide.

1. When the user opens the application they will see the layout we created.

2. They will then click on the “GET IMAGE” button.

3. Upon clicking the button, the application will check if it has permission to access the gallery. If not, it will ask the user for permission. This ensures that the application does not crash if the user does not provide the application permission.

4. Once we get the permission, the user will select an image from their gallery.

5. After they have selected their image, the user will be sent back to the main page and here they will see the image they have selected.

6. If they are happy with the image they will click on the “GET EMOTION” button.

7. Once we get the emotion we will display it to the user.

But how exactly do we get the emotion?

When the user clicks on the “GET EMOTION” button, we execute an asynchronous class in the background. This class first converts the selected image into a byte array (via the toBase64() helper) so that we can send the image to the Emotion API. After that we make a POST call to https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize, which is the endpoint of the Emotion API. An endpoint is a URL to which the request is being made. Along with the image we add 2 headers to the POST call. Headers are “key-value” pairs providing more detailed information about the request we are making. The headers we will send are “Content-Type” (application/octet-stream) and the subscription key (“Ocp-Apim-Subscription-Key”). Finally we add a body to the POST call. A body is the data we send to the server, and here the POST body is the image bytes.
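
For reference, the same request can be exercised outside the app with a tool such as curl (photo.jpg is a placeholder file name; substitute your own subscription key):

curl -X POST "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize" \
     -H "Content-Type: application/octet-stream" \
     -H "Ocp-Apim-Subscription-Key: <subscription key here>" \
     --data-binary @photo.jpg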

Once we have all of this we make the POST call and then read the response from the server. The response will be in JSON format and will look something like:

[
  {
    "faceRectangle": {
      "left": 68,
      "top": 97,
      "width": 64,
      "height": 97
    },
    "scores": {
      "anger": 0.00300731952,
      "contempt": 5.14648448E-08,
      "disgust": 9.180124E-06,
      "fear": 0.0001912825,
      "happiness": 0.9875571,
      "neutral": 0.0009861537,
      "sadness": 1.889955E-05,
      "surprise": 0.008229999
    }
  }
]

As you can see, the API returns a probability for each emotion it detects for the faces in the image. We will parse all the emotions and pick the one with the highest probability, and then display that emotion in the results section of the main page.

I know that was a lot of information but here is a step by step guide on how to achieve the above.

Step-by-Step guide

1. Open MainActivity.java:

clip_image030

2. Declare class variables which will hold the Image View and Text View from our layout so that we can show the image selected by the user and display the emotion of the image. (We declare class variables inside the MainActivity class but outside any of the class's methods.) Once we have declared the variables, we initialize them inside the "onCreate" method:

public class MainActivity extends AppCompatActivity {

     private ImageView imageView; // variable to hold the image view in our activity_main.xml
     private TextView resultText; // variable to hold the text view in our activity_main.xml


     @Override
     protected void onCreate(Bundle savedInstanceState) {
         super.onCreate(savedInstanceState);
         setContentView(R.layout.activity_main);

         // initiate our image view and text view
         imageView = (ImageView) findViewById(R.id.imageView);
         resultText = (TextView) findViewById(R.id.resultText);
     }
}

3. Next we will write the methods which check whether we have permission to access the gallery. If we don't have permission (checkPermission returns false), we have a method which asks for permission:

public boolean checkPermission() {
     int result = ContextCompat.checkSelfPermission(getApplicationContext(),READ_EXTERNAL_STORAGE);
     return result == PackageManager.PERMISSION_GRANTED;
}
 
private void requestPermission() {
     ActivityCompat.requestPermissions(MainActivity.this,new String[]{READ_EXTERNAL_STORAGE}, REQUEST_PERMISSION_CODE);
}

4. Then we will create a function called “getImage”. This method is executed when the user clicks on the “GET IMAGE” button. Along with that we will also write a method called “onActivityResult”. This method will display the image the user selected on the Image View.

private static final int RESULT_LOAD_IMAGE  = 100;
private static final int REQUEST_PERMISSION_CODE = 200;
 
// when the "GET IMAGE" Button is clicked this function is executed
public void getImage(View view) {
         // check if user has given us permission to access the gallery
         if(checkPermission()) {
             Intent choosePhotoIntent = new Intent(Intent.ACTION_PICK, android.provider.MediaStore.Images.Media.EXTERNAL_CONTENT_URI);
             startActivityForResult(choosePhotoIntent, RESULT_LOAD_IMAGE);
         }
         else {
             requestPermission();
         }
}

// This function gets the selected picture from the gallery and shows it on the image view
protected void onActivityResult(int requestCode, int resultCode, Intent data) {

     // get the photo URI from the gallery, find the file path from URI and send the file path to ConfirmPhoto
     if (requestCode == RESULT_LOAD_IMAGE && resultCode == RESULT_OK && null != data) {

         Uri selectedImage = data.getData();
         String[] filePathColumn = { MediaStore.Images.Media.DATA };
         Cursor cursor = getContentResolver().query(selectedImage,filePathColumn, null, null, null);
         cursor.moveToFirst();
         int columnIndex = cursor.getColumnIndex(filePathColumn[0]);
        
         String picturePath= cursor.getString(columnIndex);
         cursor.close();
         Bitmap bitmap = BitmapFactory.decodeFile(picturePath);
         imageView.setImageBitmap(bitmap);
     }
}

In the onActivityResult() we get the path to the image the user has selected. Then we create a bitmap image of the selected image and finally show the bitmap image in our Image View

5. Now we will write a method to convert an image to base 64

// convert image to base 64 so that we can send the image to Emotion API
public byte[] toBase64(ImageView imgPreview) {

     Bitmap bm = ((BitmapDrawable) imgPreview.getDrawable()).getBitmap();
     ByteArrayOutputStream baos = new ByteArrayOutputStream();
     bm.compress(Bitmap.CompressFormat.JPEG, 100, baos);
     return baos.toByteArray();
}

6. Then create a method “getEmotion()” which will be called when the user clicks on the “GET EMOTION” button. This method will create an object of the GetEmotionCall class (which we will create in the next step) and then call the execute method of the class (it is an asynchronous class)

// when the "GET EMOTION" Button is clicked this function is executed
public void getEmotion(View view) {
     // run the GetEmotionCall class in the background
     GetEmotionCall emotionCall = new GetEmotionCall(imageView);
     emotionCall.execute();
}

7. Now we will create a new inner asynchronous class called "GetEmotionCall" which will handle the API call. An asynchronous class is a class which runs in the background. This prevents the task that the class is executing from interrupting the main thread. It ensures our application doesn't freeze while we are waiting for a response from our API call and also ensures the application doesn't crash if we don't get a response. To read more about how asynchronous tasks work, check out https://developer.android.com/reference/android/os/AsyncTask.html

private class GetEmotionCall extends AsyncTask<Void, Void, String> {
private final ImageView img;

GetEmotionCall(ImageView img) {
     this.img = img;
}
 
// this function is called before the API call is made
@Override
protected void onPreExecute() {
     super.onPreExecute();
}
 
// this function is called when the API call is made
@Override
protected String doInBackground(Void... params) {
 
}
 
// this function is called when we get a result from the API call
@Override
protected void onPostExecute(String result) {
 
}
}
 

8. From the above you can see that onPreExecute, doInBackground and onPostExecute are empty. We will first write the method body for onPreExecute. It is a simple line which displays "Getting results…" in the result text view of the layout:

protected void onPreExecute() {
     super.onPreExecute();
     resultText.setText("Getting results...");
}

9. The doInBackground method will make the POST call using the library we added before. (NB: enter your subscription key where you see the text “subscription key here”.) We will then read the response from the POST call as a string and return it so that onPostExecute can use it. Note that the parameter passed to request.setEntity() is the byte array returned by toBase64(), since setEntity() sets the body of the request and our request body is the image data:

protected String doInBackground(Void... params) {
     HttpClient httpclient = HttpClients.createDefault();
     StrictMode.ThreadPolicy policy = new StrictMode.ThreadPolicy.Builder().permitAll().build();
     StrictMode.setThreadPolicy(policy);

     try {
         URIBuilder builder = new URIBuilder("https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize");

         URI uri = builder.build();
         HttpPost request = new HttpPost(uri);
         request.setHeader("Content-Type", "application/octet-stream");
         request.setHeader("Ocp-Apim-Subscription-Key", "subscription key here");

         // Request body. The parameter of setEntity converts the image to base64
         request.setEntity(new ByteArrayEntity(toBase64(img)));

         // getting a response and assigning it to the string res
         HttpResponse response = httpclient.execute(request);
         HttpEntity entity = response.getEntity();
         String res = EntityUtils.toString(entity);

         return res;
     }
     catch (Exception e){
         return "null";
     }
}

10. Now that we have the result from the Emotion API we will parse it to get the emotion with the highest probability and then display it. We will use the JSON library to parse the JSON file.

protected void onPostExecute(String result) {
     JSONArray jsonArray = null;
     try {
         // convert the string to JSONArray
         jsonArray = new JSONArray(result);
         String emotions = "";
         // get the scores object from the results
         for(int i = 0;i<jsonArray.length();i++) {
             JSONObject jsonObject = new JSONObject(jsonArray.get(i).toString());
             JSONObject scores = jsonObject.getJSONObject("scores");
             double max = 0;
             String emotion = "";
             for (int j = 0; j < scores.names().length(); j++) {
                 if (scores.getDouble(scores.names().getString(j)) > max) {
                     max = scores.getDouble(scores.names().getString(j));
                     emotion = scores.names().getString(j);
                 }
             }
             emotions += emotion + "\n";
         }
         resultText.setText(emotions);

     } catch (JSONException e) {
         resultText.setText("No emotion detected. Try again later");
     }
}

The “result” variable contains the JSON response from the API call in string format. Since it is a JSON array, we convert the string to a JSONArray object. We then have a string called emotions which stores all the emotions of the people in the image. After that we have two for loops. The outer loop goes through the people detected in the image; there may be two people in the image, so we need the emotion of each person. The inner loop goes through the emotion scores for each person and finds the emotion with the highest probability; once we have it, we append that emotion to the “emotions” variable. Once both loops have finished, we display the “emotions” string in our results text view.

And that’s it. The full code of MainActivity.java:

public class MainActivity extends AppCompatActivity {

     private ImageView imageView; // variable to hold the image view in our activity_main.xml
     private TextView resultText; // variable to hold the text view in our activity_main.xml
     private static final int RESULT_LOAD_IMAGE  = 100;
     private static final int REQUEST_PERMISSION_CODE = 200;


     @Override
     protected void onCreate(Bundle savedInstanceState) {
         super.onCreate(savedInstanceState);
         setContentView(R.layout.activity_main);

         // initiate our image view and text view
         imageView = (ImageView) findViewById(R.id.imageView);
         resultText = (TextView) findViewById(R.id.resultText);
     }

     // when the "GET EMOTION" Button is clicked this function is called
     public void getEmotion(View view) {
         // run the GetEmotionCall class in the background
         GetEmotionCall emotionCall = new GetEmotionCall(imageView);
         emotionCall.execute();
     }

     // when the "GET IMAGE" Button is clicked this function is called
     public void getImage(View view) {
             // check if user has given us permission to access the gallery
             if(checkPermission()) {
                 Intent choosePhotoIntent = new Intent(Intent.ACTION_PICK, android.provider.MediaStore.Images.Media.EXTERNAL_CONTENT_URI);
                 startActivityForResult(choosePhotoIntent, RESULT_LOAD_IMAGE);
             }
             else {
                 requestPermission();
             }
     }

     // This function gets the selected picture from the gallery and shows it on the image view
     protected void onActivityResult(int requestCode, int resultCode, Intent data) {

         // get the photo URI from the gallery, find the file path from URI and send the file path to ConfirmPhoto
         if (requestCode == RESULT_LOAD_IMAGE && resultCode == RESULT_OK && null != data) {

             Uri selectedImage = data.getData();
             String[] filePathColumn = { MediaStore.Images.Media.DATA };
             Cursor cursor = getContentResolver().query(selectedImage,filePathColumn, null, null, null);
             cursor.moveToFirst();
             int columnIndex = cursor.getColumnIndex(filePathColumn[0]);
             // a string variable which will store the path to the image in the gallery
             String picturePath= cursor.getString(columnIndex);
             cursor.close();
             Bitmap bitmap = BitmapFactory.decodeFile(picturePath);
             imageView.setImageBitmap(bitmap);
         }
     }

     // convert image to base 64 so that we can send the image to Emotion API
     public byte[] toBase64(ImageView imgPreview) {
         Bitmap bm = ((BitmapDrawable) imgPreview.getDrawable()).getBitmap();
         ByteArrayOutputStream baos = new ByteArrayOutputStream();
         bm.compress(Bitmap.CompressFormat.JPEG, 100, baos); //bm is the bitmap object
         return baos.toByteArray();
     }


     // if permission is not given we get permission
     private void requestPermission() {
         ActivityCompat.requestPermissions(MainActivity.this,new String[]{READ_EXTERNAL_STORAGE}, REQUEST_PERMISSION_CODE);
     }


     public boolean checkPermission() {
         int result = ContextCompat.checkSelfPermission(getApplicationContext(),READ_EXTERNAL_STORAGE);
         return result == PackageManager.PERMISSION_GRANTED;
     }

     // asynchronous class which makes the API call in the background
     private class GetEmotionCall extends AsyncTask<Void, Void, String> {

         private final ImageView img;

         GetEmotionCall(ImageView img) {
             this.img = img;
         }

         // this function is called before the API call is made
         @Override
         protected void onPreExecute() {
             super.onPreExecute();
             resultText.setText("Getting results...");
         }

         // this function is called when the API call is made
         @Override
         protected String doInBackground(Void... params) {
             HttpClient httpclient = HttpClients.createDefault();
             StrictMode.ThreadPolicy policy = new StrictMode.ThreadPolicy.Builder().permitAll().build();
             StrictMode.setThreadPolicy(policy);

             try {
                 URIBuilder builder = new URIBuilder("https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize");

                 URI uri = builder.build();
                 HttpPost request = new HttpPost(uri);
                 request.setHeader("Content-Type", "application/octet-stream");
                 request.setHeader("Ocp-Apim-Subscription-Key", "d2445b75d6d54c07970a7f834c92ff3c");

                  // Request body: setEntity sends the raw image bytes returned by toBase64()
                  request.setEntity(new ByteArrayEntity(toBase64(img)));

                 // getting a response and assigning it to the string res
                 HttpResponse response = httpclient.execute(request);
                 HttpEntity entity = response.getEntity();
                 String res = EntityUtils.toString(entity);

                 return res;

             }
             catch (Exception e){
                 return "null";
             }

         }

         // this function is called when we get a result from the API call
         @Override
         protected void onPostExecute(String result) {
             JSONArray jsonArray = null;
             try {
                 // convert the string to JSONArray
                 jsonArray = new JSONArray(result);
                 String emotions = "";
                 // get the scores object from the results
                 for(int i = 0;i<jsonArray.length();i++) {
                     JSONObject jsonObject = new JSONObject(jsonArray.get(i).toString());
                     JSONObject scores = jsonObject.getJSONObject("scores");
                     double max = 0;
                     String emotion = "";
                     for (int j = 0; j < scores.names().length(); j++) {
                         if (scores.getDouble(scores.names().getString(j)) > max) {
                             max = scores.getDouble(scores.names().getString(j));
                             emotion = scores.names().getString(j);
                         }
                     }
                      emotions += emotion + "\n";
                 }
                 resultText.setText(emotions);

             } catch (JSONException e) {
                 resultText.setText("No emotion detected. Try again later");
             }
         }
     }
}

Uploading the application to your device

Now that our application is ready it’s time to use it. To upload the application to your device, click on the green “Play” button and then select the device you want to upload the application to.

clip_image032

clip_image034

Congratulations. You can now use this application to see the emotion of the images in your library. Here is a video link to see the application in action:


Resources

You can find this project on GitHub – https://github.com/skabra07/MSP-Emotion-API

For more information about the Emotion API you can read its documentation here - https://docs.microsoft.com/en-us/azure/cognitive-services/emotion/home

For other cognitive services like the Face API check out the Azure Cognitive Services website here - https://azure.microsoft.com/en-gb/services/cognitive-services/


Git and Visual Studio 2017 Part 4: Branches

October 15, 2017, 2:02 am
≫ Next: Git and Visual Studio 2017 Part 5: Applying commits from another branch with merge
≪ Previous: Using Microsoft Cognitive Emotion API with Android App Studio

In the previous article I covered resetting commits. In this article we look at branches.

Branches: Git

You create (cut) a branch when you want to make changes without touching the current code, and the mental image many people have is that a branch is a complete copy of the repository. With a copy you can do anything without worry, but for a large project copying the entire repository is impractical, and Git handles this point very well. Let's see how it actually works.

1. Before creating a branch, check the size and the number of files/folders in the properties of the current .git directory.

image

2. Run 'git branch dev' to create the dev branch.

image

3. Check the properties of the .git directory again. The file count has increased by two, because a dev file was created in both the .git\refs\heads and .git\logs\refs\heads directories.

image

4. Display the newly created file with the type command.

image

5. Looking at .git\refs\heads\master, it contains the same SHA-1 hash. In other words, when you create a branch Git does not copy the repository; it simply creates a file named after the new branch and copies the commit hash into it, which is why branch creation completes so quickly.

image

6. Since we are currently on the master branch, move to (check out) the dev branch with 'git checkout dev'. Looking at HEAD, it now points to the dev file, so switching branches is equivalent to rewriting HEAD. (See the sketch below for a quick way to inspect these files.)

image
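If you want to verify this yourself from a PowerShell prompt, a minimal sketch looks like this (the file contents are described in the comments; the actual hash values will differ in your repository):

# A branch is just a file containing the hash of the commit it points to
type .git\refs\heads\dev

# Right after 'git branch dev', master contains exactly the same hash
type .git\refs\heads\master

# After 'git checkout dev', HEAD contains a single line: ref: refs/heads/dev
type .git\HEAD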

Committing on a branch

Now that we have created the dev branch, let's commit to it and see what happens.

1. Run 'git branch' to confirm that we are on the dev branch.

image

2. Add a "Class4.cs" file to the solution and commit it. Because Class4.cs is newly added and not yet tracked, 'git commit -a' will not include it, so add it explicitly before committing. Also, be careful: if you forget to save all files in Visual Studio, the change to the project file will not be picked up. The project file is already tracked, so it is included by 'commit -a' without an explicit add.

image

3. Run 'git log --oneline --graph' to show the history. HEAD points to the dev branch, and the new commit has been added on top of the existing history.

image

4. Switch to the master branch and check the history again. This time the latest commit on the dev branch is not shown.

image

Deleting a branch

Let's see what happens if we delete the branch at this point.

1. Run 'git branch -d dev' to delete the branch. Git blocks the deletion with a warning that the changes have not been merged yet, because deleting the branch now would lose those changes.

image

2. Next run 'git branch -D dev'. This time the branch is forcibly deleted, but Git prints the commit ID that the dev branch was last pointing to.

image

3. Looking in Explorer, the dev file is indeed gone. Does that mean the last change, the Class4.cs file, has been deleted as well?

image

4. As explained in the previous article, Git keeps Git objects for 90 days by default, so the commit is not actually gone. There are several ways to recover it; in this case, running 'git reset --hard c79adb2' restores it. Check the result with 'git log --oneline'. Because the change existed only on the dev branch this time, it could be restored without conflicts; I will cover this area in future articles.

image

Because the commit still exists as a Git object, the reset command was able to change the commit that master points to. The parent commit ID held by this commit object was 3fd99cb, so the history is consistent as well. Nothing changed in the Git objects themselves; only the value of refs\heads\master was rewritten.

Branches: VS

Next, let's look at branches in Visual Studio 2017.

1. Check the Git information in the status bar. The repository name is on the left and the current branch name is on the right.

image

2. Click master and then click "New Branch" to go to the Branches menu.

image

3. Specify the branch name. In the dropdown just below it, choose the branch the new branch should point to; since master is the only branch this time, leave the default. If you check the "Checkout branch" checkbox, the branch is also checked out when it is created. Git can do the same thing with 'git checkout -b dev'.

image

4. Once the branch is created you can see it in the GUI. The branch shown in bold is the one currently checked out. You can right-click any branch to check it out, but personally I find the status bar more convenient both for checking the current branch and for switching.

image

image

5. Confirm you are on the dev branch and add a Class5.cs file. Select Commit from the right-click menu.

6. Enter a commit comment and click "Commit All". If there are any unsaved files at this point, a save dialog is shown, which is very handy.

image

image

7. Check the local history via Actions | View History. Confirm that the dev label points to the latest commit and the master label points to the one before it.

image

8. Next, let's test deletion. Switch to the master branch.

image

9. Right-click the dev branch and delete it. As with Git, a message appears saying the branch has not been merged. Clicking OK here has the same effect as the -D option.

image

image

Visual Studio supports almost the same operations, but I could not recover the commits tied to the deleted branch from VS.

Recovering a branch

Finally, let's look at how to recover a branch.

1. For this test, create a dev branch and commit Class5.cs and Class6.cs as separate commits.

image

2. At this point, delete the dev branch again.

image

3. Running 'git reset --hard 5df97d2' recovers the data, but it is recovered onto the master branch, so the history differs from before.

image

4. To return to the state right after the dev branch was deleted, run 'git reset --hard c79adb2'.

5. Now run 'git checkout 5df97d2' to move to the state of commit 5df97d2. Git warns about a "detached HEAD", but that simply means HEAD does not point to any branch, and it is not a problem.

image

6. Running 'git branch' shows 5df97d2 as a detached HEAD in addition to master. The contents of HEAD no longer point to a branch, but directly to the commit.

image

image

7. Run 'git checkout -b dev'. This creates a new dev branch pointing at the current commit. Running 'git branch' shows that the detached HEAD is gone and the dev branch exists. The history is also restored to the state before the branch was deleted. (The whole recovery flow is summarized in the sketch below.)

image
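To recap, this recovery works only because the commit objects still exist and a branch is nothing more than a label. A minimal command sketch of the flow above (the hashes are the ones from this walkthrough and will differ in your repository):

git branch -D dev        # force-delete; Git prints the commit the branch pointed to (5df97d2 here)
git reflog               # another way to find the lost commit ID later
git checkout 5df97d2     # detached HEAD at the lost commit
git checkout -b dev      # recreate the dev branch pointing at that commit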

Summary

As we have seen, branch support in Visual Studio 2017 does not cover every Git feature, but it does support the important ones. And even if you delete a branch by mistake, once you understand that a branch is just a label pointing at a commit, you can recover it without panicking. Next time we will look at merging. To the next article

中村 憲一郎


Git and Visual Studio 2017 Part 5: Applying commits from another branch with merge

October 15, 2017, 2:42 am
≫ Next: Some Tools of a PFE
≪ Previous: Git and Visual Studio 2017 Part 4: Branches

In the previous article I covered branches. This time we look at merging, which brings the changes from one branch into another.

Merge: Git

There are many reasons to create a branch, such as adding a feature or making a fix, but eventually the changes need to be brought back, and in Git you use the merge command for that.

1. Run 'git branch'. There are master and dev branches, and dev is currently checked out.

image

2. Run 'git log --oneline --graph'. Confirm that there are commits newer than master.

image

3. When merging, you work on the branch that receives the merge, so switch to the master branch. Then run 'git merge dev'. Confirm that the output says "fast-forward".

image

4. Run 'git log --oneline --graph'. master now points to the same commit as dev. In other words, this merge did not create a new commit; only the commit that the refs\heads\master file points to has changed. In this case, information such as where the dev branch was created from is lost.

image

5. To redo the merge, run 'git reset --hard c79adb2', then run 'git log --oneline --graph --all'. Adding the --all option lets you see the information for all branches.

image

6. Run 'git merge --no-ff dev'. The --no-ff option prevents a fast-forward. This time the output shows "recursive strategy".

image

7. Run 'git log --oneline --graph'. Unlike before, a new commit has been created, and you can also see at which point the dev branch was created. Incidentally, the backslash mark is ASCII art for an arrow pointing from lower left to upper right.

image

8. Run 'git reset --hard c79adb2' once more so we can try the same thing in Visual Studio. (The two merge styles are summarized in the sketch below.)
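For reference, here is a minimal command sketch contrasting the two merges performed above (the commit ID is the one used in this walkthrough):

git checkout master
git merge dev                     # "Fast-forward": master simply moves to dev's commit, no merge commit is created
git reset --hard c79adb2          # undo the merge by moving master back
git merge --no-ff dev             # force a merge commit, so the point where dev branched off is preserved
git log --oneline --graph --all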

Merge: VS

1. Check the current state. Switch to the dev branch, then check the state via Team Explorer | Changes | Actions | View History.

image

2. Go to Team Explorer | Branches, check out master, and then click the Merge link.

image

3. Select dev under "Merge from branch" and click Merge. Leave the "Commit changes after merging" checkbox checked.

image

4. Refresh the history after the merge completes. dev and master point to the same commit, so this was a fast-forward merge.

image

5. Right-click c79adb2b and perform a hard reset.

image

6. Merge again from the Branches page. This time, uncheck "Commit changes after merging" before running it.

image

7. The following message is displayed. It tells you to commit the changes, but only an "Abort" link is offered.

image

8. From the Team Explorer home page, go to the Changes menu. On the Changes page, confirm that the changes from the dev branch are listed as staged items. Enter a comment and commit.

image

9. Show the history of the master branch. A new commit has been created, and the history of the dev branch is preserved.

image

The default for "Commit changes after merging"

If you want to make disabling fast-forward the default, you can change the setting from Visual Studio 2017.

1. Go to Settings from Team Explorer.

image

2. Select "Global Settings".

3. Change the "Commit changes after merge by default" checkbox and click "Update".

image

Summary

Personally, I used to find the merge feature in Visual Studio 2017 quite confusing, because it uses different terminology from Git and because, for non-fast-forward merges, you have to navigate between pages yourself, so I hope this article makes it a little clearer. Next time we will look at rebasing. To the next article

中村 憲一郎


Some Tools of a PFE

October 15, 2017, 11:17 am
≫ Next: Git and Visual Studio 2017 Part 6: Applying commits from another branch with rebase
≪ Previous: Git and Visual Studio 2017 Part 5: Applying commits from another branch with merge

Hi all,

I hope you are all well! Today I will give you a brief overview of the tools I need to use on a regular basis.

Chrissy LeMaire, one of the best SQL MVPs in the world, asked me directly via Twitter and also publicly via a tweet to write down some of the tools a PFE uses, and I certainly couldn't say no:

Anyone have a list of apps, modules, tools, etc that Microsoft PFE's use? If not, can a PFE write a blog post? 😁 (Partic. interestd in SQL)

— Chrissy LeMaire (@cl) October 9, 2017

David Peter Hansen started with a fantastic list of tools regarding SQL, which can be found as follows:

SQL Server Performance Troubleshooting Free Scripts and Tools List

My technological specialties are a little different though, because I am mainly focused on Windows Client, PowerShell and Security.
I hope that this list will be of help to some of you, and I wish you all a lot of fun testing and using the tools!


Client & Debugging:

First of all, I'll start with the typical troubleshooting tools, in no particular order. This is only a small subset of all the tools I sometimes need to use, but you really should be aware of these!


DefragTools and Lightsaber

Some of the best material on debugging is the DefragTools series - Channel 9 video sessions by Andrew Richards, Chad Beeder and Larry Larsen showing deep-dive troubleshooting tools and techniques.

In these sessions a so-called Lightsaber is explained, which is a dedicated USB stick / OneNote folder containing the most important debugging tools (the holy grail for every troubleshooter):

Session 131 Lightsaber Windows 10


WinDBG

WinDBG is one of the most important tools for debugging memory dumps and much more:

A good way to start here is to take a look at the videos from the DefragTools and to use cheat sheets such as the following one: here


WinDBG Preview

This year the new WinDBG Preview was announced.

You can see the videos in the DefragTools: here and here


WinDBG - Time Travel Debugging

A cool feature inside the new WinDBG Preview is Time Travel Debugging.

"Time Travel Debugging (TTD) is a reverse debugging solution that allows you to record the execution of an app or process, replay it both forwards and backwards and use queries to search through the entire trace. Today’s debuggers typically allow you to start at a specific point in time and only go forward. TTD improves debugging since you can go back in time to better understand the conditions that lead up to the bug. You can also replay it multiple times to learn how best to fix the problem."

Find further information here:

https://channel9.msdn.com/Shows/Defrag-Tools/Defrag-Tools-185-Time-Travel-Debugging-Introduction

https://blogs.windows.com/buildingapps/2017/09/27/time-travel-debugging-now-available-windbg-preview/


Wireshark

"Wireshark is the world’s foremost and widely-used network protocol analyzer. It lets you see what’s happening on your network at a microscopic level and is one of the standard across many commercial and non-profit enterprises, government agencies, and educational institutions."


Telerik Fiddler

"The free web debugging proxy for any browser, system or platform" - Fiddler is great for website performance analysis and troubleshooting of encrypted traffic.


CMTrace

CMTrace is a real time log file viewer for System Center Configuration Manager.

Important features:

  • Real-time logging
  • Merging multiple log files together at once.
  • Highlighting -  error messages in red; warning messages in yellow.
  • Error Lookups
  • Standard format for many log files

Error lookup:


Windows System Control Center - WSCC

"WSCC allows you to install, update, execute and organize the utilities from various system utility suites. WSCC can install and update the supported utilities automatically. Alternatively, WSCC can use the http protocol to download and run the programs. The portable edition doesn't require installation and can be run directly from a USB drive."

WSCC supports the following utility suites:

  • Sysinternals Suite
  • NirSoft Utilities


Sysinternals

"The Sysinternals web site was created in 1996 by Mark Russinovich to host his advanced system utilities and technical information. Whether you’re an IT Pro or a developer, you’ll find Sysinternals utilities to help you manage, troubleshoot and diagnose your Windows systems and applications."

You really should know about the Sysinternals tools! Most of the tools are discussed and explained in the mentioned DefragTools. Start here.


Procmon

"Process Monitor is an advanced monitoring tool for Windows that shows real-time file system, Registry and process/thread activity. It combines the features of two legacy Sysinternals utilities, Filemon and Regmon, and adds an extensive list of enhancements including rich and non-destructive filtering, comprehensive event properties such as session IDs and user names, reliable process information, full thread stacks with integrated symbol support for each operation, simultaneous logging to a file, and much more. Its uniquely powerful features will make Process Monitor a core utility in your system troubleshooting and malware hunting toolkit."


Procexp

"The Process Explorer display consists of two sub-windows. The top window always shows a list of the currently active processes, including the names of their owning accounts, whereas the information displayed in the bottom window depends on the mode that Process Explorer is in: if it is in handle mode you'll see the handles that the process selected in the top window has opened; if Process Explorer is in DLL mode you'll see the DLLs and memory-mapped files that the process has loaded. Process Explorer also has a powerful search capability that will quickly show you which processes have particular handles opened or DLLs loaded."


ProcDump

"ProcDump is a command-line utility whose primary purpose is monitoring an application for CPU spikes and generating crash dumps during a spike that an administrator or developer can use to determine the cause of the spike. ProcDump also includes hung window monitoring (using the same definition of a window hang that Windows and Task Manager use), unhandled exception monitoring and can generate dumps based on the values of system performance counters. It also can serve as a general process dump utility that you can embed in other scripts."

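A couple of typical ProcDump invocations as a sketch (the process name is only a placeholder):

# Write up to 3 full memory dumps when myapp.exe exceeds 80% CPU for 5 seconds
procdump -ma -c 80 -s 5 -n 3 myapp.exe

# Write a full dump when the process throws an unhandled exception
procdump -ma -e myapp.exe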

Autoruns

"Autoruns has the most comprehensive knowledge of auto-starting locations of any startup monitor, shows you what programs are configured to run during system bootup or login, and when you start various built-in Windows applications like Internet Explorer, Explorer and media players. These programs and drivers include ones in your startup folder, Run, RunOnce, and other Registry keys. Autoruns reports Explorer shell extensions, toolbars, browser helper objects, Winlogon notifications, auto-start services, and much more. Autoruns goes way beyond other autostart utilities."


PSExec

"PsExec is a light-weight telnet-replacement that lets you execute processes on other systems, complete with full interactivity for console applications, without having to manually install client software. PsExec's most powerful uses include launching interactive command-prompts on remote systems and remote-enabling tools like IpConfig that otherwise do not have the ability to show information about remote systems."

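Two typical PsExec examples (the computer name is only a placeholder):

# Run a console tool on a remote machine and see its output locally
psexec \\server01 ipconfig /all

# Open an interactive command prompt on the remote machine, running as SYSTEM
psexec \\server01 -s cmd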

Nirsoft Tools

"Unique collection of freeware desktop utilities, system utilities, password recovery tools, components, and free source code examples." The NirSoft Tools include some really nice tools, such as the following: RegistryChangesView

"NirLauncher is a package of more than 200 portable freeware utilities for Windows, all of them developed for NirSoft Web site during the last few years."


PPing

"PPing is designed to give you the easiest possible solution for discovering ports from a windows console. The design was heavily oriented towards the terminology and behavior of the classic ping tool under windows."

Alternatively you can do it with PowerShell:


Test-NetConnection

Further examples can be found here.
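For example, a quick reachability and port check from PowerShell looks like this (host name and port are placeholders):

# Basic reachability check (ping-style)
Test-NetConnection -ComputerName server01.contoso.com

# Test a specific TCP port, e.g. HTTPS
Test-NetConnection -ComputerName server01.contoso.com -Port 443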


PuTTY

"PuTTY is an SSH and telnet client, developed originally by Simon Tatham for the Windows platform. PuTTY is open source software that is available with source code and is developed and supported by a group of volunteers."


Posh-SSH

Windows Powershell module that leverages a custom version of the SSH.NET Library https://github.com/sshnet/SSH.NET to provide basic SSH functionality in Powershell. The main purpose of the module is to facilitate automating actions against one or multiple SSH enabled servers
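A minimal usage sketch (the server name is a placeholder, and parameter names can vary slightly between module versions):

# Open an SSH session, run a command on the remote host and clean up
$cred = Get-Credential
$session = New-SSHSession -ComputerName 'linuxserver01' -Credential $cred
Invoke-SSHCommand -SSHSession $session -Command 'uname -a'
Remove-SSHSession -SSHSession $session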


LogLauncher

The LogLauncher gathers all important logs from one or many machines and is really awesome! It can be downloaded here.


IE / Edge - F12 Developer Tools

The Microsoft Edge F12 DevTools are built with TypeScript, powered by open source, and optimized for modern front-end workflows.

Use the Debugger to step through code, set watches and breakpoints, live edit your code and inspect your caches. Test and troubleshoot your code with the Microsoft Edge F12 DevTools Debugger.

The Performance panel offers tools for profiling and analyzing the responsiveness of your UI during the course of user interaction: F12 DevTools Performance panel

Take a look through the docs and additionally here:

https://channel9.msdn.com/Shows/Defrag-Tools/Defrag-Tools-126-Internet-Explorer-F12-Developer-Tools-Part-1

https://channel9.msdn.com/Shows/Defrag-Tools/Defrag-Tools-127-Internet-Explorer-F12-Developer-Tools-Part-2


Microsoft Security Compliance Toolkit

"This set of tools allows enterprise security administrators to download, analyze, test, edit and store Microsoft-recommended security configuration baselines for Windows and other Microsoft products, while comparing them against other security configurations.
The Microsoft Security Configuration Toolkit enables enterprise security administrators to effectively manage their enterprise’s Group Policy Objects (GPOs).  Using the toolkit, administrators can compare their current GPOs with Microsoft-recommended GPO baselines or other baselines, edit them, store them in GPO backup file format, and apply them via a Domain Controller or inject them directly into testbed hosts to test their effects. "
Here you will find important announcements: https://blogs.technet.microsoft.com/secguide/

And this will give you further guidance: Defrag Tools #174 - Security Baseline, Policy Analyzer and LGPO


PerfView

"PerfView is a performance-analysis tool that helps isolate CPU- and memory-related performance issues."
PerfView Defrag Tools videos: Part8, Part7, Part6, Part5, Part4, Part3, Part2, Part1

Windows Performance Analyzer

"Included in the Windows Assessment and Deployment Kit (Windows ADK), Windows Performance Analyzer (WPA) is a tool that creates graphs and data tables of Event Tracing for Windows (ETW) events that are recorded by Windows Performance Recorder (WPR), Xperf, or an assessment that is run in the Assessment Platform. WPA can open any event trace log (ETL) file for analysis."
This tool is one of the most important ones for a Client PFE.

Windows Performance Recorder

"Included in the Windows Assessment and Deployment Kit (Windows ADK), Windows Performance Recorder (WPR) is a performance recording tool that is based on Event Tracing for Windows (ETW). It records system events that you can then analyze by using Windows Performance Analyzer (WPA)."

This tool is necessary to create the traces for Windows Performance Analyzer.


Notepad++

Last but not least comes the well-known Notepad++. If you don't know this tool, you have definitely missed something! It is especially good when working with very big log files (>50 MB) and/or with XML files.

It includes the following features:

  • Syntax Highlighting and Syntax Folding
  • User Defined Syntax Highlighting and Folding: screenshot 1, screenshot 2, screenshot 3 and screenshot 4
  • PCRE (Perl Compatible Regular Expression) Search/Replace
  • GUI entirely customizable: minimalist, tab with close button, multi-line tab, vertical tab and vertical document list
  • Document Map
  • Auto-completion: Word completion, Function completion and  Function parameters hint
  • Multi-Document (Tab interface)
  • Multi-View
  • WYSIWYG (Printing)
  • Zoom in and zoom out
  • Multi-Language environment supported
  • Bookmark
  • Macro recording and playback
  • Launch with different arguments

PowerShell:

PowerShell is one of my main specialties and also one of my biggest tools. You can achieve practically everything with PowerShell: gather information, automate tasks, and even use techniques that are completely missing from the UI. You can even automate most of the tools described above - and, for example, the new Project Honolulu for Windows Server is completely based on PowerShell and uses PowerShell WMI cmdlets in its backend. But for using PowerShell in your daily work there are also some tools you really need to know.


ISE with ISESteroids

PowerShell.exe and PowerShell_ISE.exe are the best-known tools for using PowerShell on Windows. The ISE is not the best toolset if you are coming from Visual Studio, for example. I am a former .NET software architect, and when I started working with PowerShell this was my first little downside. But there is an add-on called ISESteroids from Tobias Weltner, which brings a bunch of additional functions to the ISE and turns it into a really great toolset - here are some of the added capabilities:

  • Essential Editor Settings - Secondary Toolbar
  • Code Refactoring
  • Advanced Search&Replace
  • Ensuring Code Compatibility
  • Creating Modern User Interfaces
  • Security and Protection
  • Community Tools


VSCode

"Visual Studio Code is a lightweight but powerful source code editor which runs on your desktop and is available for Windows, macOS and Linux. It comes with built-in support for JavaScript, TypeScript and Node.js and has a rich ecosystem of extensions for other languages (such as C++, C#, Python, PHP, Go) and runtimes (such as .NET and Unity). Begin your journey with VS Code with these introductory videos."

VSCode will replace the most used tool - the ISE - over time, so you really should take a look at it. I have gathered the most important articles around this topic, which you really should go through:

How to install Visual Studio Code and configure it as a replacement for the PowerShell ISE

Why I use Visual Studio Code to write PowerShell

Transitioning from PowerShell ISE to VS Code

Here you will find all default keybindings, which will help you a lot.


VSTS / Git / Release Pipeline

Visual Studio Team Services makes it easy to create your complete release pipeline. I will not spend too much time on it here, because it is a topic of its own, but if you are moving towards more professional and sophisticated PowerShell or dev work, you really should take a closer look at it.


PSGUI

When working with XAML-created PowerShell GUIs, I very often reuse my own projects PSGUI and PSGUIManager:


Knowledge Management:

The fact is - as a PFE you are always working hard and always short on time. Also, no one in the world can know everything, but you should know where to find the information. Knowledge management is very often totally underestimated, yet it is one of the most important areas where you can improve your work quality and performance. I will show you some of the tools I use most to manage all that information and my time.


Email Structure

A good email structure is the most important thing nowadays. As a PFE you easily get hundreds or thousands of emails per day. Most of them contain at least some information which may be usable at some point in the future. There are dozens of books out there to assist you with these kinds of tasks. I want to show you one of my favorite books:

How to be a Productivity Ninja: Worry Less, Achieve More and Love What You Do Kindle Edition


OneNote

I capture every piece of information in my OneNote and sort it. The biggest benefit of OneNote is its fast search capability.
It looks like this:

And as you probably would expect, I have dozens of notebooks:

If I find some interesting blog posts, I normally just copy them and add them to my OneNote. I always remember a few phrases or keywords for the topics I am searching for, and this helps a lot!


Teams

Teams is our new communication tool, which lets you add all other services directly into it, as well as meetings similar to Skype.


To-Do

"Microsoft To-Do helps you manage, prioritize, and complete the most important things you need to achieve every day, powered by Intelligent Suggestions and Office 365 integration. Download the To-Do Preview today."

It is important for me to manage my tasks and time - for a long time I used Wunderlist, then To-Do, and now the tool below, Office Tasks (also called Microsoft Planner) from O365. I would say that Microsoft To-Do is the consumer app and Microsoft Planner is the enterprise app.


Office Tasks

"Take the chaos out of teamwork and get more done! Planner makes it easy for your team to create new plans, organize and assign tasks, share files, chat about what you’re working on, and get updates on progress."

Office Tasks is my new tool, which I use with my personal O365 account to manage all upcoming work and personal tasks. The good thing about this one is that you can assign tasks to dedicated users in your O365 account and combine everything with documents from your OneDrive / OneDrive for Business.

 


Social Media:

Social media is important. Networking is important. You really should not ignore this.

Most of the news, such as blog posts, announcements and official discussions, can be caught by being involved in social media. This is one of the most important things today to stay up to date in IT. In addition to this, I use some more tools which bring a huge benefit to my daily work. These aren't all of my tools, but probably the most important ones.


Twitter

Twitter is necessary to stay up to date and gather all the new blog articles from officials and well-known people such as MVPs.


LinkedIn

On LinkedIn you very often find great high-level articles specifically targeting CXOs, which contain good information.
It is also the most important platform for networking. I frequently get asked small technical questions via LinkedIn (and I am totally fine with this!), and in return I also try to get feedback from people about our newest technologies.
One more topic is jobs - in my experience LinkedIn is the most used platform for sharing jobs and the place where headhunters try to fill their sophisticated positions. If you want to take that chance, you really should make sure your profile is filled in completely and correctly. A feature has also been added that lets recruiters know whether you are looking for a job and what direction it should take.


Blogs

I really need to write this down. We are in a time where blogs are important.

As you are reading my blog post, you know that blogs may contain useful information, but even more than that - sometimes official announcements are made via blogs. You need to have a dedicated list of blogs that you check at regular intervals.

Michael Niehaus's blog, for example, is one of the most important ones for me and probably also for you:

https://blogs.technet.microsoft.com/mniehaus/


Hootsuite

"Hootsuite is a social media management platform, created by Ryan Holmes in 2008. The system’s user interface takes the form of a dashboard, and supports social network integrations for Twitter, Facebook, Instagram, LinkedIn, Google+, YouTube, and many more."

I am using Hootsuite a lot - it is very useful for me because I can plan postings to all my social media accounts in advance.

As you can see, it can also be combined with Right Relevance:


Right Relevance

"Discover fresh relevant content to your interests, save interesting articles, follow influential experts, be the first to share soon-to-be viral content and much more."

I really love Right Relevance, because it gives me just the most important blog articles and news on specific topics. Integrated into Hootsuite, I can share the most important information just in time and add it to my "read line".


The Old Reader

The Old Reader is an RSS reader which I like a lot! I have added my favorite blogs here and can easily see which articles I have missed.


Conferences & UserGroups

As an IT pro you really should visit conferences and user groups from time to time. As mentioned before, networking is one of the most important things in the life of an IT pro, and conferences and user groups are the best places to do it!


MeetUp

This one is my main tool for finding user groups in my area, and I manage the German PowerShell UserGroup - and more specifically the Munich one - via MeetUp. We have around 30-50 attendees every time, and you really should use it to connect!


PaperCall

If you speak a lot at conferences, you will have seen that many conferences are moving their CFP to PaperCall. Take a look - there may be a conference you want to speak at.


The End

Thank you all for reading the whole list - I hope that some of the mentioned ideas, tools and techniques will help you in the future. If you find anything important missing or want to discuss any part of it, feel free to comment. I am happy to hear your feedback and opinions!

 

All the best,


David das Neves

Premier Field Engineer, EMEA, Germany
Windows Client, PowerShell, Security


Git and Visual Studio 2017 Part 6: Applying commits from another branch with rebase

October 15, 2017, 3:42 am
≫ Next: 1 Million predictions/sec with Machine Learning Server web service
≪ Previous: Some Tools of a PFE

In the previous article I covered merging between branches. This time we look at rebase, a feature that is similar but quite different.

Rebase: Git

How is rebase different from merge? Consider the case where, after creating dev and adding commits to it, a commit is also created on master for some reason.

  • Merge: if you merge master into dev, the commit from master is applied after the current dev commits.
  • Rebase: the commit from master is inserted before the dev commits, so it looks as if the dev branch had been created after that commit was made on master.

Let's try it.

1. Run 'git log --oneline --graph' to check the current state. Since this is right after the previous article, the merge commit is at the end.

image

2. To go back to before the merge, run 'git reset --hard c79adb2', then run 'git log --oneline --graph --all' again.

image

3. At this point, add Patch1.cs from Visual Studio 2017 and save all files.

image

4. Run 'git add VS_Git\Patch1.cs' and 'git commit -am "Added Patch1.cs"'.

image

5. Run 'git log --oneline --graph --all'. In addition to the dev branch, master now also has a new commit.

image

6. Rebase is run from the dev branch, so run 'git checkout dev' and then 'git rebase master'. Git reports a conflict in the csproj file. This is because the csproj file on the master branch contains the Patch1.cs entry, while the first commit on the dev branch contains Class5.cs.

image

7. Run 'git mergetool' to launch the merge tool. If Visual Studio is configured as the merge tool, VS starts.

image

8. In Source, only Class5.cs is shown and Class6.cs is not, because rebase applies the commits one at a time. This time, click the checkboxes to select both changes and then click "Accept Merge".

image

9. With the conflict resolved, run 'git rebase --continue' to continue the rebase. Another conflict occurs, this time on the second dev commit, for the same reason as before.

image

10. Run 'git mergetool' to launch the merge tool.

image

11. As expected, Class6.cs is present this time. Select both changes and click "Accept Merge".

image

12. Run 'git rebase --continue'. Confirm that no more errors occur.

image

13. Run 'git log --oneline --graph --all'. The graph is now a single line, and the Patch1.cs commit comes before the dev commits. Note, however, that because the parent commit IDs change, Git creates new Git objects, so the commit IDs themselves change as a result.

image

14. Because Git does not delete objects right away, running 'git reset --hard 5df97d2' makes dev point to its previous commit again, resetting everything to the state before the rebase. Check with 'git log --oneline --graph --all'. (The whole command flow is summarized in the sketch below.)

image
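The rebase flow used above, condensed into a minimal command sketch (resolve each conflict in whatever merge tool you have configured):

git checkout dev
git rebase master                 # stops at the first commit that conflicts
git mergetool                     # resolve the conflict (launches Visual Studio if configured)
git rebase --continue             # replays the next commit; repeat mergetool / --continue as needed
git rebase --abort                # at any point: give up and return to the pre-rebase state
git log --oneline --graph --all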

Rebase: VS

Next, let's look at rebase in Visual Studio 2017.

1. Check the history via Team Explorer | Changes | Actions | View History. Unfortunately I could not find anything equivalent to the --all option, and only the history of a specific branch can be shown, so check both master and dev.

2. From the Branches menu, check out dev and click Rebase.

image

3. Select master as the target branch and click "Rebase".

image

4. As expected from the Git test, a conflict message appears. Click the "Conflicts: 1" link.

image

5. You are taken to the conflict resolution page. All conflicts are listed, but this time there is only one, so click VS_Git.csproj and then click the "Merge" button.

image

6. The merge tool starts; resolve the conflict.

image

7. Confirm that there are no conflicts left and click "View Changes".

image

8. The current state is shown, but no commit is needed here, so click the "Continue" link in the "Rebase in Progress" section.

image

9. As expected, the second conflict appears. Click the "Conflicts: 1" link.

image

10. Resolve the conflict in the same way and click the "Continue" link again.

image

11. Once the rebase completes, refresh the history.

image

Summary

Before I understood rebase, I sometimes gave up because I did not understand why conflicts kept appearing or how to proceed after resolving them, but now I can rebase with confidence. And as long as the Git objects have not been deleted, you can easily go back to the state before the rebase.

If you get lost partway through, you can run 'git rebase --abort' or abort from Visual Studio 2017 and start over from the beginning. Next time we will look at the cherry-pick feature. To the next article

中村 憲一郎


1 Million predictions/sec with Machine Learning Server web service

October 15, 2017, 10:54 pm
≫ Next: Detecting drift between ARM templates and Azure resource groups
≪ Previous: Git and Visual Studio 2017 Part 6: Applying commits from another branch with rebase

Microsoft Machine Learning Server Operationalization allows users to publish their R/Python based models and scripts as "Web-Services" and consume them from a variety of client applications in a scalable, fast, secure and reliable way. In this blog, I want to demonstrate how you can score > 1 Million predictions per second with the 'Realtime' web-services. Realtime web services offer lower latency to produce results faster and score more models in parallel. The improved performance boost comes from the fact that these web services do not depend on an interpreter at consumption time even though the services use the objects created by the model. Therefore, fewer additional resources and less time is spent spinning up a session for each call. Additionally, the model is only loaded once in the compute node and can be scored multiple times.

 

Step 1: Publish a Realtime web-service:

We create a RevoScaleR::rxDTree model based on the Kyphosis dataset. We then publish the model as a 'Realtime' web-service to get fast performance, and score against it to make sure we get the same prediction results.


library(RevoScaleR)
library(rpart)
library(mrsdeploy)

form <- Kyphosis ~ Number + Start
method <- 'class'
parms <- list(prior = c(0.8, 0.2), loss = c(0, 2, 3, 0), split = 'gini')
control <- rpart.control(minsplit = 5, minbucket = 2, cp = 0.01, maxdepth = 10,
    maxcompete = 4, maxsurrogate = 5, usesurrogate = 2, surrogatestyle = 0, xval = 0)
cost <- 1 + seq(length(attr(terms(form), 'term.labels')))

myModel <- rxDTree(formula = form, data = kyphosis, pweights = 'Age', method = method, parms = parms,
    control = control, cost = cost, maxNumBins = 100,
    maxRowsInMemory = if (exists('.maxRowsInMemory')) .maxRowsInMemory else -1)

myData <- data.frame(Number = c(70), Start = c(3))
op1 <- rxPredict(myModel, data = myData)

print(op1)
# absent_Pred present_Pred
# 1 0.925389 0.07461104


# Let's publish the model as a 'Realtime' web-service

remoteLogin("http://[Your-Server-IP]:[Port]", username="[username]", password="[Password]", session=FALSE)
name <- 'rtService'
ver <- '1.0'

svc <- publishService(name, v=ver, code=NULL, serviceType='Realtime', model=myModel)
op2 <- svc$consume(myData)
print(op2$outputParameters$outputData)

# $absent_Pred
# $absent_Pred[[1]]
# [1] 0.925389
# $present_Pred
# $present_Pred[[1]]
# [1] 0.07461104

 

Step 2: Create Load-Test plan:

For our load testing, we will use the Apache JMeter application, which is open-source software - a 100% pure Java application designed to load-test functional behavior and measure performance. Follow the steps below to create a JMeter test plan that works with the ML Server web-service. As always, you can find the test-plan JMX file here on GitHub: MLServer JMeter Test Plan

  • For load testing we will create 1000 threads (users), with each thread sending an input data frame with 1000 rows ('nrows' below) for scoring.
  • We will create 2 thread groups, "Authentication" and "ServiceConsumption", as shown in the screenshot. The first thread group authenticates with the server and sets the token property. This token is then used by the second thread group for consuming our web-service with 1000 threads. Without the token, we will not be able to consume the service. For this reason, make sure to check the "Run Thread Groups consecutively" checkbox.
  • Create an HTTP Request sampler and add the body data as shown. For headers, you just need the Content-Type field.
  • Under the HTTP Request, we add a 'Regular Expression Extractor'. In it, we parse the response from the server and extract the bearer token that is returned.
  • In order for subsequent HTTP requests to use the bearer token, let's add a "BeanShell post-processor", which will set the token property. Additionally, here we will also create the "Body" JSON request with 'nrows' rows for each column. For example, if nrows = 5, then our JSON body will look like this: "Number": [70,70,70,70,70], "Start": [3,3,3,3,3]
  • Let's look at the second thread group, "ServiceConsumption", now. Add an "HTTP Request" sampler under this thread group. Set the parameter values shown below, which make sure we have 1000 threads that keep running forever. We will also stop the test as soon as we encounter any HTTP errors with our requests.
  • Here is what the Body of the requests will look like. Notice that we are using the "body" property we created in the first thread group. Ensure you have the correct server, port and path values.
  • Let's add an "HTTP Header Manager" under the HTTP Request sampler. We will add the 'Content-Type' and 'Authorization' header values as shown. Notice that we are using the "Bearer <token>" format for the Authorization header value, and the token in our case is the property we set in the first thread group.
  • An assertion helps JMeter decide whether the server response is successful or not. In our case, we will look for an HTTP 200 (OK) status response to assert this.
  • At this stage, we should have everything set up! Let's try it out... Add a "View Results Tree" listener under the ServiceConsumption thread group. Then click the run button. For debugging, you might want to temporarily change the number of threads to 1 and disable the 'forever' checkbox in the 'ServiceConsumption' thread group. Once the run is complete, you should see successful HTTP requests as shown.
  • Save your test plan as a "Realtime_Load_Test.jmx" file on your machine.

 

3. Run the Load Test:

Now that we have the test plan ready, we will use JMeter's command line to run the test for us. In the previous step, if you debugged the test plan in the GUI, remember to change the number of threads back to 1000, set the 'Loop Count' back to 'forever', and save the file.

Run the command now:


cd [your-jmeter-location]/bin
.\jmeter.bat -n -t [your-jmx-directory]/Realtime_Load_Test.jmx -l results.csv   (On Windows)
./jmeter.sh -n -t [your-jmx-directory]/Realtime_Load_Test.jmx -l results.csv    (On Linux)

 

For my testing, I used an Azure GS5 instance (32 cores, 448 GB RAM) and ran the JMeter test plan on the same machine against localhost, to reduce network latency. This way, the numbers are as close to the actual server performance as possible. Here is what my output looked like:

The output shows that within 43 seconds, the server completed around 64,700+ requests at roughly 1505 HTTP requests per second. Each of our requests carried 1000 rows of input, which means that on average the server delivered more than 1.5 million predictions per second!

 

Feel free to play around with the number of threads (users) and the number of input rows per request (nrows) and see how the server behaves. You will need a good balance of these two numbers for your testing. If the number of threads is too large, your threads might end up waiting for the CPU more than the server does. If each request carries too many rows of input, your network stack's performance will impact your numbers.

 

I hope this exercise will help you load-test your web-services and prepare you to handle your performance requirements using Machine Learning Server Operationalization.


Detecting drift between ARM templates and Azure resource groups

October 15, 2017, 6:53 pm
≫ Next: Planning made easy with OneNote
≪ Previous: 1 Million predictions/sec with Machine Learning Server web service

In DevOps Utopia, all of your Azure resources are deployed from ARM templates using a Continuous Deployment tool. The ARM templates and parameters files are all stored in source control, so you can go back through the version history to determine what was changed and what was deployed at any given time. And since only your CD service principal has permission to modify resources, you can be sure that nobody made any "rogue" changes outside of the templates and release process.

While we should all aspire to live in DevOps Utopia, chances are you're not quite there yet. Getting to this requires an investment in processes, tools and culture, and different organisations will have different priorities and rates of progress. So a lot of teams are in a situation where they are investing in automation and continuous deployment, but they still have situations where changes are being made manually, for example using the Azure portal.

For teams in this semi-automated state, one common question I hear is how to tell whether their Azure resources that were originally deployed from a template have "drifted" from their original configuration via manual changes. One approach is to check the Azure Activity Log, but this is far from ideal. While the Activity Log does show changes to resources, it's often hard to figure out exactly what was changed and whether the change was made via a template or some other means. Also the logs are only stored for 90 days unless you've explicitly chosen to retain them in another location.

So I decided to build my own script to make this a little easier. The ArmConfigurationDrift script takes an ARM template and parameters file and compares it to the resources deployed in an existing Azure Resource Group. It will then let you know if any resources are deployed in one location but not the other, or if any parameters differ between the two. The following diagram shows the approach.

The "expand template" script is particularly interesting. An ARM template can't be directly compared with deployed resources, as it will include parameters, variables and functions that need to be expanded before the resource list can be used. I've never seen this documented, but it turns out that you can use the Test-AzureRmResourceGroupDeployment cmdlet and capture the "Debug" stream to get an expanded version of the template, as shown in this sample.
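A minimal sketch of that trick might look like the following (the resource group name, file names and the way the debug records are filtered are illustrative; see the linked sample for the full version):

# Capture the debug stream of Test-AzureRmResourceGroupDeployment; the expanded template
# (with parameters, variables and template functions resolved) is embedded in the debug output.
$DebugPreference = 'Continue'
$debugOutput = Test-AzureRmResourceGroupDeployment `
    -ResourceGroupName 'my-resource-group' `
    -TemplateFile '.\azuredeploy.json' `
    -TemplateParameterFile '.\azuredeploy.parameters.json' 5>&1
$DebugPreference = 'SilentlyContinue'

# Keep only the debug records and join their messages; the expanded template JSON can then be
# pulled out of the captured text and compared against the deployed resources.
$debugText = ($debugOutput |
    Where-Object { $_ -is [System.Management.Automation.DebugRecord] } |
    ForEach-Object { $_.Message }) -join "`n"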

Once you have the expanded template, it's relatively easy to pull the metadata about the resources deployed to a resource group and compare the two. Unfortunately I found that there are a number of cases where the properties in the two collections are not identical even if the resources are the same. To minimise false positives, I only report on differences where the exact same property exists in each collection and has different values. I also normalise the different representations of locations so, for example, "Australia East" and "australiaeast" are treated as the same. Even so, there may still be situations where the tool reports false positives or doesn't detect legitimate differences. If you find any such cases or feel like improving the script, please log an issue or make a pull request on the GitHub site.


Planning made easy with OneNote

October 16, 2017, 12:00 am
≫ Next: Dissecting the pattern matching in C# 7
≪ Previous: Detecting drift between ARM templates and Azure resource groups

Planning. Just the word alone instils fear in an educator. We all start with good intentions to keep ourselves afloat and ahead, but in reality it's a nightmare. Research with a group of head teachers revealed that a considerable amount of the creative planning process was wasted on searching through SharePoint folders, USB disks and checklists. As a result, this lost time suffocated the creative conversations and brainstorming that educators so desperately require. Then along came OneNote.

Using OneNote has, without doubt, changed the way we plan, think and organise. By bringing all documents for teaching and learning into one organised and dynamic workspace, educators are able to collaborate in real time and invest more time in delivering high-quality learning opportunities to students.

Here are the Top Tips to get you on your way to Planning with OneNote

  1. Collaboration: As you work through the planning process, create conversations with your co-teachers. Inviting contributions to the planning process leads to better learning outcomes for students, and you eliminate that email train and focus on the sole purpose of planning.
  2. Feedback options: It is often much easier to offer ideas verbally rather than in writing. OneNote offers the option to include verbal feedback (from your Senior Leadership Team) using the Insert Audio tool.
  3. Sharing resources with students: If the resources for a unit of work are listed on a separate page within that unit, they can be quickly and efficiently shared with students by simply copying the resources page to the Content Library or collaborative section of a Class Notebook.

Click the badge below to complete the CPD course and get planning the smart way!


Dissecting the pattern matching in C# 7

October 15, 2017, 8:43 pm
≫ Next: Git and Visual Studio 2017 Part 7: Applying specific commits from another branch with cherry-pick
≪ Previous: Planning made easy with OneNote

C# 7 finally introduced a long-awaited feature called "pattern matching". If you're familiar with functional languages like F# you may be slightly disappointed with this feature in its current state, but even today it can simplify your code in a variety of different scenarios.

Every new feature is fraught with danger for a developer working on a performance critical application. New levels of abstractions are good but in order to use them effectively, you should know what is happening under the hood. Today we're going to explore pattern matching and look under the covers to understand how it is implemented.

The C# language introduced the notion of a pattern that can be used in is-expression and inside a case block of a switch statement.

There are 3 types of patterns:

  • The const pattern
  • The type pattern
  • The var pattern

Pattern matching in is-expressions

public void IsExpressions(object o)
{
    // Alternative way checking for null
    if (o is null) Console.WriteLine("o is null");

    // Const pattern can refer to a constant value
    const double value = double.NaN;
    if (o is value) Console.WriteLine("o is value");

    // Const pattern can use a string literal
    if (o is "o") Console.WriteLine("o is \"o\"");

    // Type pattern
    if (o is int n) Console.WriteLine(n);

    // Type pattern and compound expressions
    if (o is string s && s.Trim() != string.Empty)
        Console.WriteLine("o is not blank");
}

An is-expression can check whether the value is equal to a constant, and a type check can optionally introduce a pattern variable.

I've found a few interesting aspects related to pattern matching in is-expressions:

  • Variable introduced in an if statement is lifted to the outer scope.
  • Variable introduced in an if statement is definitely assigned only when the pattern is matched.
  • Current implementation of the const pattern matching in is-expressions is not very efficient.

Let's check the first two cases first:

public void ScopeAndDefiniteAssigning(object o)
{
    if (o is string s && s.Length != 0)
    {
        Console.WriteLine("o is not empty string");
    }

    // Can't use 's' any more. 's' is already declared in the current scope.
    if (o is int n || (o is string s2 && int.TryParse(s2, out n)))
    {
        Console.WriteLine(n);
    }
}

The first if statement introduces a variable s, and the variable is visible in the whole method. This is reasonable, but it complicates the logic if other if-statements in the same block try to reuse the same name; in that case you have to pick another name to avoid the collision.

The variable introduced in the is-expression is definitely assigned only when the predicate is true. It means that the n variable in the second if-statement is not definitely assigned by the left operand, but because the variable is already declared we can use it as the out variable in the int.TryParse call.

The third aspect mentioned above is the most concerning one. Consider the following code:

public void BoxTwice(int n)
{
    if (n is 42) Console.WriteLine("n is 42");
}

In most cases the is-expression is translated to the object.Equals(constValue, variable) (even though the spec says that operator== should be used for primitive types):

public void BoxTwice(int n)
{
    if (object.Equals(42, n))
    {
        Console.WriteLine("n is 42");
    }
}

This code causes two boxing allocations that can measurably affect performance if used on the application's critical path. It used to be the case that o is null was causing a boxing allocation if o is a nullable value type (see Suboptimal code for e is null), so I really hope that this behavior will be fixed (here is an issue on github).

If the n variable is of type object, then o is 42 will cause one boxing allocation (for the literal 42), even though the similar switch-based code would not cause any allocations.
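For comparison, here is a minimal sketch of the switch-based form that paragraph refers to - the same constant check written as a case clause (the method name is only illustrative):

public void SwitchOnConstant(object o)
{
    switch (o)
    {
        // Constant pattern in a case clause: matches when o is an int with the value 42
        case 42:
            Console.WriteLine("o is 42");
            break;
    }
}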

The var patterns in is-expressions

The var pattern is a special case of the type pattern with one major distinction: the pattern will match any value, even if the value is null.

public void IsVar(object o)
{
    if (o is var x) Console.WriteLine($"x: {x}");
}

o is object is true when o is not null, but o is var x is always true. The compiler knows about that, and in Release mode (*) it removes the if-clause altogether and just leaves the Console method call. Unfortunately, the compiler does not warn you that the code is unreachable in the following case: if (!(o is var x)) Console.WriteLine("Unreachable"). Hopefully, this will be fixed as well.

(*) It is not clear why the behavior is different in Release mode only. But I think all these issues fall into the same bucket: the initial implementation of the feature is suboptimal. Based on this comment by Neal Gafter, this is going to change: "The pattern-matching lowering code is being rewritten from scratch (to support recursive patterns, too). I expect most of the improvements you seek here will come for "free" in the new code. But it will be some time before that rewrite is ready for prime time.".

The lack of a null check makes this case very special and potentially dangerous. But if you know exactly what is going on, you may find this pattern useful. It can be used to introduce a temporary variable inside an expression:

public void VarPattern(IEnumerable<string> s)
{
    if (s.FirstOrDefault(o => o != null) is var v
        && int.TryParse(v, out var n))
    {
        Console.WriteLine(n);
    }
}

Is-expression meets "Elvis" operator

There is another use case that I've found very useful. The type pattern matches the value only when the value is not null. We can use this "filtering" logic together with the null-propagating operator to make the code easier to read:

public void WithNullPropagation(IEnumerable<string> s)
{
    if (s?.FirstOrDefault(str => str.Length > 10)?.Length is int length)
    {
        Console.WriteLine(length);
    }

    // Similar to
    if (s?.FirstOrDefault(str => str.Length > 10)?.Length is var length2 && length2 != null)
    {
        Console.WriteLine(length2);
    }

    // And similar to
    var length3 = s?.FirstOrDefault(str => str.Length > 10)?.Length;
    if (length3 != null)
    {
        Console.WriteLine(length3);
    }
}

Note that the same pattern can be used for both value types and reference types.

Pattern matching in the case blocks

C# 7 extends the switch statement to use patterns in the case clauses:

public static int Count<T>(this IEnumerable<T> e)
{
    switch (e)
    {
        case ICollection<T> c: return c.Count;
        case IReadOnlyCollection<T> c: return c.Count;
        // Matches concurrent collections
        case IProducerConsumerCollection<T> pc: return pc.Count;
        // Matches if e is not null
        case IEnumerable<T> _: return e.Count();
        // Default case is handled when e is null
        default: return 0;
    }
}

The example shows the first set of changes to the switch statement.

  1. A variable of any type may be used in a switch statement.
  2. A case clause can specify a pattern.
  3. The order of the case clauses matters. The compiler emits an error if the previous clause matches a base type and the next clause matches a derived type.
  4. Non-default clauses have an implicit null check (**). In the earlier example, the very last non-default case clause is valid because it matches only when the argument is not null.

(**) The very last case clause shows another feature added to C# 7, called the "discard" pattern. The name _ is special and tells the compiler that the variable is not needed. The type pattern in a case clause requires an identifier, and if you don't need it you can discard it using _.

The next snippet shows another feature of switch-based pattern matching - the ability to use predicates in when clauses:

public static void FizzBuzz(object o)
{
    switch (o)
    {
        case string s when s.Contains("Fizz") || s.Contains("Buzz"):
            Console.WriteLine(s);
            break;
        case int n when n % 5 == 0 && n % 3 == 0:
            Console.WriteLine("FizzBuzz");
            break;
        case int n when n % 5 == 0:
            Console.WriteLine("Fizz");
            break;
        case int n when n % 3 == 0:
            Console.WriteLine("Buzz");
            break;
        case int n:
            Console.WriteLine(n);
            break;
    }
}

This is a weird version of the FizzBuzz problem that processes an object instead of just a number.

A switch can have more than one case clause with the same type. If this happens, the compiler groups the type checks together to avoid redundant computations:

public static void FizzBuzz(object o)
{
    // All cases can match only if the value is not null
    if (o != null)
    {
        if (o is string s &&
            (s.Contains("Fizz") || s.Contains("Buzz")))
        {
            Console.WriteLine(s);
            return;
        }

        bool isInt = o is int;
        int num = isInt ? ((int)o) : 0;
        if (isInt)
        {
            // The type check and unboxing happens only once per group
            if (num % 5 == 0 && num % 3 == 0)
            {
                Console.WriteLine("FizzBuzz");
                return;
            }
            if (num % 5 == 0)
            {
                Console.WriteLine("Fizz");
                return;
            }
            if (num % 3 == 0)
            {
                Console.WriteLine("Buzz");
                return;
            }

            Console.WriteLine(num);
        }
    }
}

But there are two things to keep in mind:

  1. The compiler will group together only consecutive type checks, and if you intermix cases for different types the compiler will generate less optimal code:
switch (o)
{
    // The generated code is less optimal:
    // If o is int, then more than one type check and unboxing operation
    // may happen.
    case int n when n == 1: return 1;
    case string s when s == "": return 2;
    case int n when n == 2: return 3;
    default: return -1;
}

The compiler effectively translates it to the following:

if (o is int n && n == 1) return 1;
if (o is string s && s == "") return 2;
if (o is int n2 && n2 == 2) return 3;
return -1;
  2. The compiler tries its best to prevent common ordering issues.
switch (o)
{
    case int n: return 1;
    // Error: The switch case has already been handled by a previous case.
    case int n when n == 1: return 2;
}

But the compiler doesn't know when one predicate is stronger than another and effectively supersedes the following cases:

switch (o)
{
    case int n when n > 0: return 1;
    // Will never match, but the compiler won't warn you about it
    case int n when n > 1: return 2;
}
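The practical fix is to order such cases yourself so that the stronger predicate comes first. A minimal sketch (not from the original post; Classify is a made-up helper):

public static int Classify(object o)
{
    switch (o)
    {
        // The more specific predicate comes first, so it can actually match.
        case int n when n > 1: return 2;
        case int n when n > 0: return 1;
        default: return -1;
    }
}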

Pattern matching 101

  • C# 7 introduced the following patterns: the const pattern, the type pattern, the var pattern and the discard pattern.
  • Patterns can be used in is-expressions and in case blocks.
  • The implementation of the const pattern in is-expressions for value types is far from perfect from a performance point of view.
  • The var pattern always matches, so be careful with it.
  • A switch statement can be used for a set of type checks with additional predicates in when clauses.

 

Discussions on reddit and hacker news.


Git and Visual Studio 2017 Part 7: Applying specific commits from another branch with cherry-pick

October 15, 2017, 8:30 pm

In the previous article I covered rebasing. This time we look at cherry-picking: a way to apply only selected commits from another branch.

Cherry-pick: Git

First, let's see how cherry-pick works in Git.

1. Run 'git log --oneline --graph --all' to check the current state. The repository is at the point where the rebase from the previous article completed.

image

2. First, roll the dev branch back to its pre-rebase state. Use 'git reflog' to find the commit ID from before the rebase.

image

3. Run 'git reset --hard 5df97d2', then confirm with 'git log --oneline --graph --all' that the branch is back to its pre-rebase state.

image

4. This time, instead of merging or rebasing, apply the Patch1.cs commit from master to dev by running 'git cherry-pick ac2b093' on the dev branch. A conflict occurs.

image

5. Resolve the conflict with 'git mergetool'.

image

6. After resolving the conflict, commit with 'git commit -am "Cherry-pick Patch1.cs"'.

image

7. Check the history with 'git log --oneline --graph --all'. The Patch1.cs commit now sits at the tip of the dev branch, and the Patch1.cs commit has not disappeared from master.

image

8. If, for example, the Patch1.cs commit had been made on the master branch by mistake, it could be fixed with 'git checkout master' followed by 'git reset --hard c79adb2'.

image

9. To prepare for the next test, restore the original state by running 'git reset --hard ac2b093' on the master branch and 'git reset --hard 5df97d2' on the dev branch.

image

Cherry-picking multiple commits

Cherry-pick can also take multiple commits. Let's go in the opposite direction this time and cherry-pick from dev into master (a consolidated command sketch follows the steps).

1. Run 'git checkout master' and then 'git cherry-pick f9b4da3 5df97d2' to cherry-pick the two commits. Because these are the two most recent commits on dev, 'git cherry-pick ..dev' does the same thing.

2. A conflict message appears, so resolve each conflict with 'git mergetool'. After each resolution, run 'git cherry-pick --continue' to continue the cherry-pick. On continuing, an editor opens for the commit message; there is no need to change it, so exit the editor with ':q'.

image

3. A conflict also occurs on the second commit; resolve it the same way.

image

4. After the cherry-pick completes, check the state with 'git log --oneline --graph --all'.

image

5. To prepare for the next test, restore the original state by running 'git reset --hard ac2b093' on the master branch and 'git reset --hard 5df97d2' on the dev branch.
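Put together, the multi-commit flow above boils down to the following command sketch (the commit IDs are the ones from this walkthrough; 'git cherry-pick --abort' is the escape hatch if you decide to back out instead of continuing):

git checkout master
git cherry-pick f9b4da3 5df97d2    # or: git cherry-pick ..dev
# repeat for each commit that conflicts:
git mergetool
git cherry-pick --continue         # keep the proposed message; :q closes the editor
# to give up and restore the pre-cherry-pick state instead:
# git cherry-pick --abort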

Cherry-pick: Visual Studio

Let's perform the same operations in Visual Studio 2017.

1. The cherry-pick will be executed from the dev branch, so check out dev. Then right-click the master branch and select "Cherry-Pick".

image

2. As expected, a conflict occurs, so resolve it with the merge editor.

image

3. After resolving the conflict, click "View Changes". In the in-progress cherry-pick section, click "Continue".

image

4. At this point a message appears saying the cherry-pick has completed and been committed.

image

5. Check the result via "View History".

image

Cherry-picking multiple commits

I could not find a way to cherry-pick multiple commits at once in Visual Studio 2017, but since any single commit can be cherry-picked, you can get the same result by repeating the operation.

1. Check out master, because the cherry-pick will be done on the master branch.

2. To cherry-pick specific commits from the dev branch, right-click the dev branch and choose "View History".

image

3. The history of the dev branch is displayed; right-click f9b4da38 and select "Cherry-Pick".

image

4. The rest is the same flow: resolve the conflict and complete the cherry-pick, then cherry-pick 5df97d22 as well. When finished, open the history of master and confirm that the two commits have been added.

image

Summary

As with merge and rebase, the terminology differs a little between Git and Visual Studio, but once you understand the underlying behavior you can cherry-pick with confidence. Next time we will look at temporarily saving work in progress. Go to the next article.

中村 憲一郎


Git and Visual Studio 2017 Part 8: Temporarily saving work in progress

October 15, 2017, 10:55 pm

In the previous article I explained how to apply selected commits from another branch with cherry-pick. This time we look at how to temporarily save work-in-progress files when switching branches.

How Git behaves when switching branches

First, let's see what Git does when you switch branches.

1. Run 'git log --oneline --graph --all' to check the current state. The repository is in the state where the cherry-pick from the previous article finished.

image

2. To keep things simple, reset to the state before the cherry-pick.

image

3. Next, do some work: modify the Class1.cs file and add it to the staging area.

image

4. Make a further change to the Class1.cs file and save it.

image

5. Run 'git status' and confirm that Class1.cs appears both in the staging area and in the working directory. This happens because git add captures the file as it was when the command ran, so changes made afterwards are not part of the staged snapshot.

image

6. Switch to the master branch with 'git checkout master'. Git reports that Class1.cs has changes.

image

7. Run 'git status' again. Even though the branch is now master, both the staged and working-directory changes are still there.

image

Saving work-in-progress files: Git

The behavior above is welcome when you have been working on the wrong branch, but when you urgently need to switch to another branch you usually do not want to carry your in-progress changes over to it. In that case, the git stash command lets you save the in-progress items temporarily.

1. Switch back with 'git checkout dev', then run 'git stash'. Stash saves the currently tracked items. To also save untracked items, run 'git stash --include-untracked'. Conversely, to save the working directory but carry the staging area over to the branch you are switching to, run 'git stash --keep-index'. Because no name was given, the stash is shown as WIP (work in progress) on commit 5df97d2.

image

2. Check the current state with 'git status'. There are no file changes.

image

image

3. Run 'git stash list' to see the saved stashes.

image

4. To re-apply the saved changes, run 'git stash apply'. Running 'git status' shows that the staging-area state has been lost.

image

image

5. To restore the staging-area state as well, run 'git stash apply --index'. To get back to the state before the previous apply, run 'git reset --hard HEAD' first and then try it. HEAD is the latest commit, which at this point is the same as 5df97d2. The staging area is restored as well.

image

6. Running 'git stash list' again shows that the previous entry is still there; delete it with 'git stash drop'. 'git stash pop' performs apply and drop in one step. While you are getting used to stash, apply is the safer choice because the stash entry is not lost immediately.

image

Stash can also hold several saved states at once. See here for details.
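For example, a quick sketch (not part of the original walkthrough; 'git stash push -m' requires Git 2.13 or later) of keeping several saved states apart by naming them:

git stash push -m "wip: Class1 refactoring"   # save with a description
git stash push -m "wip: experiment"           # a second, independent stash
git stash list                                # shows stash@{0}, stash@{1}, ...
git stash apply "stash@{1}"                   # re-apply a specific entry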

Saving work-in-progress files: Visual Studio

Visual Studio 2017 does not appear to support stash at the moment. A pity.

Creating a branch from a stash

Being able to save work temporarily is handy, but sometimes you miss the right moment to apply it and the code has changed in the meantime. In that case, instead of applying the saved content directly onto the current code, you can restore it as a branch.

1. Run 'git stash' again to save the current work, then run 'git stash branch temp'. This command creates a branch, applies the saved content to it, and deletes the stash.

image

2. Run 'git branch' to confirm that the branch was created.

image

3. From here, merge or rebase as needed.

4. Run 'git checkout master' and 'git branch -D temp' to delete the temporary branch. The -D option is required because the contents of the temp branch have not been merged anywhere.

image

Summary

Stash is not yet supported in Visual Studio 2017, but it is a very handy feature when you urgently need to move to another branch. Personally I tend to commit and tidy up afterwards, but that is a topic for another time. Next time we will finally look at remotes.

中村 憲一郎


Microsoft Student Partner Team wins Agorize AI Challenge in Paris

October 16, 2017, 1:22 am

The Agorize AI Challenge is an international event all about AI. Artificial intelligence is a constantly growing topic, and the challenge invited students to submit innovative ideas and projects around it. Our Microsoft Student Partner Martin Wepner and his team took first place in Paris. In total, 365 teams with 695 participants from 338 schools and universities took part.

Information about the challenge

  • Website: Agorize AI Challenge [Jury]
  • Sponsors: Volkswagen Financial Services, Deutsche Telekom, and Onepoint

What was expected?

The challenge set out a few guidelines and guiding questions that ideas and concepts were expected to address. These were as follows:

    • Everyday life: What solutions, changes and impacts can AI-technologies bring us in our everyday life?
    • Corporate life: How can AI-technologies add value to the quality of work in companies? What solutions to help the tedious tasks of employees?
    • Mobility: How could AI-technologies support customers in their mobility demands?
    • Machine Learning: How will AI-technologies improve the client experience? What solution will change the communication with companies in the future?

The Student Partner team, consisting of

  • Martin Wepner,
  • Jasmin Rimmele,
  • Sophie Pellas,

secured first place with its project. The project, MakeUsGrow, was about developing an app that playfully supports you in growing things such as vegetables. The whole idea is based on the fact that climate change is becoming a topic that more and more people cannot ignore, and that everyone can contribute a little to making the world a better place.

The idea was realized as a chatbot that uses the user's current location to suggest where and what to plant, and above all how to plant it. The technologies used are LUIS (language understanding) and the Microsoft Bot Framework, so that the chatbot can be integrated into as many platforms as possible.

 

 

 
