
Monitoring Azure Resources – Tools and Technology


As you think about building applications in Azure, or migrating VMs to Azure, one of the common first thoughts you might have is ‘How do my monitoring activities change once I’ve moved to Azure?’.  I get this question (or a variation thereof) quite often when meeting with my customers, so I thought it would be a good idea to dig a bit deeper and provide some details around what tools are available to you.

Monitoring encompasses several different but related topics.  Have you ever had any of the following questions about your infrastructure?

  • How is my application performing?
  • Is my application online and running now?
  • Will I get alerts when things stop working?
  • Who made that change to my service?
  • Has someone been attempting an attack on my infrastructure?
  • What is causing my application to slow down?
  • Are those packets arriving or being dropped?
  • Am I reaching my networking limits in my subscription?
  • …and many more

As you’ll see, Azure provides tools to answer all of these questions, and depending on what you are deploying in Azure, you can also bring many of your own tools.  Let’s take a look at what’s available.  Note: this post is rather long, but in this case, that’s a good thing!

In this post, I’ll introduce and provide links for the following monitoring tools:

  1. Azure Monitor
  2. Network Watcher
  3. Application Insights
  4. Log Analytics
  5. Security Center
  6. Azure Log Integration

 

Azure Monitor

 

 

Perhaps the best place to start is with Azure Monitor.  This tool can be thought of as a PaaS monitoring tool that is available against all services in Azure through the Azure management portal.  The graphic below depicts the sources of monitoring data on the left in the purple boxes, and what can be done with this data on the right in the blue boxes.  There are different levels of monitoring depending on whether you are monitoring IaaS or PaaS services.  Note that collected data can be routed to other Azure services, stored in Azure blob storage, queried, visualized, or used to kick off some type of automation using webhooks.

 

 

Activity Logs

A fundamental collection source is activity logs, which are logs specific to Azure platform-level objects and come from the control plane.  For example, an activity log captures activities to answer questions such as ‘Who stopped that virtual machine?’, ‘Is there an outage in US West?’, or ‘When did my App Service scale up and scale down automatically?’.  The activity log will contain a significant number of entries over time and is therefore searchable by subscription, resource group, resource type, operation type, and timespan (among others).  Activity logs are turned on by default and retain data for 90 days.  Learn more about activity logs here.
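
The portal search shown here can also be scripted; below is a minimal sketch using the AzureRM PowerShell module (the resource group name is a placeholder):

    # Pull the last 7 days of control-plane events for a resource group and
    # show who performed which operation, and when.
    Get-AzureRmLog -ResourceGroup 'MyResourceGroup' -StartTime (Get-Date).AddDays(-7) |
        Select-Object EventTimestamp, Caller, OperationName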

Diagnostics Logs

In addition to activity logs, Azure Monitor provides us with diagnostic logs.  Diagnostic logs operate at both the resource level and the guest OS level.  Resource-level diagnostics capture resource-specific data from the Azure platform (PaaS), while OS-level diagnostics capture data from the VM, the operating system, and the applications running within it (IaaS).  Think of resource-level diagnostics as logs that help you understand what operations occurred within a resource at the platform level.  The available options differ based on the resource you have chosen.  For example, with a Recovery Services vault, you can choose to get diagnostic logs for Backup reports, Site Recovery jobs, or Site Recovery events, to name a few.  With OS-level diagnostics you can be very specific about what you capture per VM.  For example, you can choose basic performance counters, or get more specific by perusing the countless counters available in the custom settings.  You can also choose which logs you want to capture, whether application logs, IIS logs, event tracing logs, etc.; you get to choose how much you want to capture.  You can also capture crash dumps or send data to Application Insights (discussed below).

Diagnostic logs are not turned on by default; you choose per PaaS object or IaaS VM what you want to monitor.  Of course, you can always use templates or scripts to define these settings for all future deployments.  Learn more about diagnostic logs here.  Keep in mind that with VMs, you can also continue monitoring guest-level events using the tools you’ve used on-premises.
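
As an example of scripting this, here is a hedged sketch that enables resource-level diagnostics on a web app and archives them to a storage account (both resource IDs are placeholders):

    # Placeholder resource IDs; substitute your own.
    $webAppId  = '/subscriptions/<sub-id>/resourceGroups/MyRG/providers/Microsoft.Web/sites/MyWebApp'
    $storageId = '/subscriptions/<sub-id>/resourceGroups/MyRG/providers/Microsoft.Storage/storageAccounts/mydiaglogs'

    # Enable diagnostic logs/metrics for the resource with 90-day retention.
    Set-AzureRmDiagnosticSetting -ResourceId $webAppId -StorageAccountId $storageId `
        -Enabled $true -RetentionEnabled $true -RetentionInDays 90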

Metrics

Azure Monitor provides metrics to give you telemetry and visibility into the performance and health of your workloads on Azure.  These metrics are available for nearly all Azure resources, including both PaaS and IaaS offerings.  Metrics are on by default at the resource level, which includes virtual machines; for virtual machines, this means host-level telemetry is available by default.  If you want to dig deeper into the metrics of the operating system and applications, turning on diagnostics for a VM exposes those metrics to Azure Monitor for building dashboards, getting alerts, and taking action.  In the graphic below, I have used Azure Monitor Metrics to define a metrics chart for a web app, which I later pinned to my dashboard.  Read more about metrics here.
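
To poke at the same metrics from PowerShell, a sketch like the following works against most resources (parameter names have shifted between AzureRM.Insights releases, so treat the exact spellings as an assumption to verify with Get-Help):

    # Placeholder VM resource ID.
    $vmId = '/subscriptions/<sub-id>/resourceGroups/MyRG/providers/Microsoft.Compute/virtualMachines/MyVM'

    # List the metric definitions the resource exposes, then pull an hour of host CPU data.
    Get-AzureRmMetricDefinition -ResourceId $vmId | Select-Object Name
    Get-AzureRmMetric -ResourceId $vmId -TimeGrain 00:01:00 `
        -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date) -MetricNames 'Percentage CPU'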

 

 

Alerts

You can use Azure Monitor Alerts to take trigger-based actions on events that occur in activity logs or through metrics.  For example, if I want to be alerted when a VM’s CPU sustains greater than 90% usage (guest metrics) for 15 minutes, I can easily configure an alert.  If I want to know when a VM is turned on (activity log), I can be alerted to that as well.  There are numerous scenarios you’ll want to investigate.  Once an alert is triggered, I can send emails or text messages, trigger webhooks, or even run Azure Automation runbooks.  Learn more about alerts here.
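
The 90% CPU example can be scripted too; a hedged sketch with the classic alert cmdlets, re-using $vmId from the metrics sketch above (the name and location are placeholders, this uses the host-level 'Percentage CPU' metric, and the exact parameter names are worth verifying with Get-Help):

    # Email the subscription's service owners when average CPU > 90% over a 15-minute window.
    $email = New-AzureRmAlertRuleEmail -SendToServiceOwners
    Add-AzureRmMetricAlertRule -Name 'HighCpu' -Location 'westus' -ResourceGroup 'MyResourceGroup' `
        -TargetResourceId $vmId -MetricName 'Percentage CPU' -Operator GreaterThan -Threshold 90 `
        -WindowSize 00:15:00 -TimeAggregationOperator Average -Actions $email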

 

Azure Network Watcher

Azure Network Watcher provides many capabilities that will benefit you as you deploy and manage your Azure infrastructure.  As you can see in the graphic below, Network Watcher, just like Azure Monitor, is provided without you having to deploy anything; you simply have to enable it.  Think of this tool as network monitoring technology provided as a service.

 

 

Network Watcher provides several important capabilities including:

Topology

As you deploy networking components and VMs in Azure, you can keep a mental picture of how everything is connected, or spend time building out a Visio diagram, but there is a better way.  Network Watcher lets you choose a VNet from any of your subscriptions and builds a diagram of the topology for you.  The image below shows a diagram (rendered by Network Watcher) of an ADFS infrastructure deployed as part of my ADFS deployment series from a while back.  Note the details included for NICs, NSGs, subnets, load balancers, etc.  The tool also lets you download a copy for your own documentation and internal sharing needs.
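
The same diagram data is available programmatically; a small sketch (the Network Watcher instance and resource group names are placeholders):

    # Get the regional Network Watcher instance, then dump the topology of a resource group.
    $nw = Get-AzureRmNetworkWatcher -Name 'NetworkWatcher_westus' -ResourceGroupName 'NetworkWatcherRG'
    Get-AzureRmNetworkWatcherTopology -NetworkWatcher $nw -TargetResourceGroupName 'MyAdfsRG'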

 

 

IP Flow Verify

Various layers of security across VNets, subnets, and network interface cards can make it very challenging to determine whether packets sent between a source and a destination are allowed.  IP Flow Verify simplifies this process by letting you test the flow of traffic using the protocol, the source and destination IPs, and the source and destination ports.  If the connection is blocked, you’ll be able to see the NSG whose rules block it.
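
A hedged sketch of the same check from PowerShell, re-using $nw and $vmId from the sketches above (addresses and ports are placeholders):

    # Would outbound TCP 443 from this VM's private IP be allowed?
    Test-AzureRmNetworkWatcherIPFlow -NetworkWatcher $nw -TargetVirtualMachineId $vmId `
        -Direction Outbound -Protocol TCP `
        -LocalIPAddress '10.0.0.4' -LocalPort '60000' `
        -RemoteIPAddress '13.107.21.200' -RemotePort '443'
    # The result reports Allow/Deny and, on Deny, the NSG rule responsible.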

Next hop

Next hop allows you to choose a virtual machine, a network interface from that virtual machine, and finally a source and destination address.  With these items chosen, click the ‘Next Hop’ button to see which route will be chosen for the destination IP address.  This can be helpful if you are defining custom routes and need to make sure the routing is working as you expected.
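
The equivalent PowerShell, again re-using $nw and $vmId (the IP addresses are placeholders):

    # Which route applies for traffic from the VM to this destination?
    Get-AzureRmNetworkWatcherNextHop -NetworkWatcher $nw -TargetVirtualMachineId $vmId `
        -SourceIPAddress '10.0.0.4' -DestinationIPAddress '8.8.8.8'
    # Output includes the next hop type (Internet, VirtualAppliance, VnetLocal, ...) and the route used.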

Security Group View

This tool is very helpful for understanding the ‘effective’ result of the various network security groups that affect a particular VM.  You simply choose a VM and let the tool output the effective rules, as seen in the graphic below.  You can then find the scope where a particular rule is set by clicking the tabs for subnet, NIC, and default.  Note: if you have started using Security Center just-in-time access, this is an interesting place to see how those rules are applied (rules 1000 and 1001 below were automated by Security Center JIT).

 

 

Virtual Network Gateway Troubleshooting

Have a VPN or ExpressRoute connection that you would like to troubleshoot?  Network Watcher provides the capability to troubleshoot both the gateway and the connections that may be experiencing issues.  You will be able to see error messages and capture log data for review.

Packet Capture

This powerful feature allows you to capture packets as they enter or leave a virtual machine. This is useful for analysis, intrusion detection, performance monitoring, and more. You can selectively capture packets that meet certain criteria for a limited amount of time.  You can also use alerts from Azure Monitor to programmatically start a packet capture.  Captured data can be placed in blob storage and downloaded as a standard .cap file.
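
A sketch of starting a capture from PowerShell, re-using $nw, $vmId, and $storageId from the earlier sketches (the VM needs the Network Watcher agent extension installed, and the capture name is a placeholder):

    # Capture up to five minutes of traffic on the VM and land the .cap file in blob storage.
    New-AzureRmNetworkWatcherPacketCapture -NetworkWatcher $nw -PacketCaptureName 'WebRepro' `
        -TargetVirtualMachineId $vmId -StorageAccountId $storageId -TimeLimitInSeconds 300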

Connectivity Check

At the time of this post, Connectivity Check is still in preview; however, you can use it if you have preview features turned on in your subscription.  The essence of this tool is the ability to actually test connectivity between two VMs in Azure, or between a VM in Azure and some FQDN, URI, or IPv4 address.  The results include a connection status of ‘reachable’ or ‘unreachable’.

Network Subscription Limits

This tool lets you see how much of each network resource you are using against the related limits in your subscription.  This can be very useful for understanding your networking footprint and what changes you may need to make as you grow in Azure.  Simply choose your subscription and region, and let the tool calculate the results.
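
If your AzureRM.Network version includes the usage cmdlet, the same numbers can be pulled with a one-liner (the property names shown are an assumption worth verifying):

    # Current usage vs. limit for each network resource type in a region.
    Get-AzureRmNetworkUsage -Location 'westus' |
        Select-Object @{ n = 'Resource'; e = { $_.Name.LocalizedValue } }, CurrentValue, Limit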

 

NSG Flows

Ever wonder what traffic is flowing through a network security group?  NSG flow logging lets you turn on flow logs per NSG, choose a retention period, and then capture 5-tuple information, in JSON format, about the traffic passing through.  The output shows both allowed and denied flows, which can be helpful for troubleshooting connectivity issues, particularly if you aren’t certain of the requirements of such connections.
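
Enabling flow logs can also be scripted; a hedged sketch re-using $nw and $storageId from above (the NSG resource ID is a placeholder, and the cmdlet name is worth confirming in your AzureRM.Network version):

    # Turn on NSG flow logging with 30-day retention.
    $nsgId = '/subscriptions/<sub-id>/resourceGroups/MyRG/providers/Microsoft.Network/networkSecurityGroups/MyNsg'
    Set-AzureRmNetworkWatcherConfigFlowLog -NetworkWatcher $nw -TargetResourceId $nsgId `
        -EnableFlowLog $true -StorageAccountId $storageId -EnableRetention $true -RetentionInDays 30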

 

Application Insights

To this point, we’ve focused on monitoring Azure control-plane events through activity logs and guest OS-level events through Azure Diagnostics.  How about application-level events?  Application Insights is a 'monitoring as a service' solution, specifically for Azure App Service applications; this includes Web Apps, Mobile Apps, Logic Apps, etc.  With Application Insights you can see page views and unique visitors, as well as failures, latency statistics, and a whole lot more.  You can use the same tools mentioned for Azure Monitor above to export this data for alerting, viewing in Power BI, building dashboards, triggering events, etc.  In the graphic below you can see that I’ve pinned Application Insights to my favorites in the management portal.  To the right of that, you’ll see the App Service applications where I’ve enabled Application Insights, and on the far right, all the tools available for getting deep insights into my applications.  You’ll want to make sure that any developers in your organization are aware of these toolsets, as they provide significant insight into application performance and bottlenecks, along with useful dashboards that surface ongoing telemetry.  You can learn more about Application Insights at the documentation site, but I would also advise watching the two-minute video below for a quick example of how it can be used.

https://applicationanalyticsmedia.azureedge.net/home_page_video.mp4?v=1

 

 

Log Analytics

Most organizations have a number of tools they use to monitor their environment, some for on-premises, some for cloud, and then additional tools for niche solutions.  With Azure Log Analytics, you can collect and correlate data from multiple sources to get a unified dashboard view and gain insights to detect and mitigate IT issues.

Think of Azure Monitor as a way to collect and analyze data for your Azure resources, and Log Analytics as the tool you use to correlate that data with data coming from other sources, such as an on-premises SCOM environment or even VMs running on other cloud platforms.  In the graphic below you can see that Log Analytics can ingest data from Azure VMs (associate them with Log Analytics in just a few clicks), from SCOM agents you already have deployed, and from VMs or physical servers where you install the agents available for download from the Log Analytics portal.  All of this data ends up in a central repository that you can search with a rich query language.  See the query language documentation to see just how powerful these interactive queries can be.
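
Searches can be run from PowerShell as well; a hedged sketch using the legacy search syntax (the workspace names are placeholders, and workspaces upgraded to the newer query language use Kusto-style queries instead):

    # Return the 50 most recent error events collected in the workspace.
    Get-AzureRmOperationalInsightsSearchResults -ResourceGroupName 'MyRG' `
        -WorkspaceName 'MyWorkspace' -Query 'Type=Event EventLevelName=error' -Top 50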

 

 

The capabilities don’t end there.  Log Analytics also provides canned solutions that are configured to look for specific things such as AD Replication Health events, SQL Health, AD Health, and the list goes on.  See a clipping from the solutions gallery below.  Using these solutions will give you immediate insight into the data you are collecting from all sources.

 

 

I would recommend checking out the Azure Friday video on Azure Log Analytics, and then heading over to the official documentation to learn more.

 

Security Center

As organizations move their workloads to the public cloud, security becomes one of the most important areas of interest and concern.  Microsoft takes security very seriously, and Azure Security Center is one result of that investment.

Azure Security Center is a platform that helps organizations with two important tasks.  First, we want to prevent attacks on our platforms by deploying secure and resilient architectures.  Second, if we do experience some type of attack or intrusion, we want to detect it and respond appropriately.  Azure Security Center gives you a set of tools to prevent and detect attacks, all centrally managed in the Azure portal that is familiar to you.  Learn how Azure Security Center detects and responds to threats on your behalf.

The graphic below shows the landing page of Security Center, which provides an overview of current recommendations and events, displays tiles that depict specific information to help you prevent attacks, and details information that helps you understand security alerts and attacked resources so you can respond appropriately.   The tool also lets you grant Just-In-Time (JIT) access to VMs in the cloud by automatically updating network security groups to allow connections and then removing that access after a specified period of time.  In addition, you can create application whitelists so the tool can prevent any application not on the list from executing on your VMs.  To get started with Azure Security Center and learn how you can use the tool to protect, detect, and alert you to events, see the landing page.

Important Update!  - Azure Security Center can now monitor your on-premises and 3rd party cloud applications!

 

Azure Log Integration

The Azure Log Integration service is a tool you can download and install on a server running Windows Server 2008 R2 or later, which enables you to gather raw logs from Azure resources and share them with third-party SIEM systems.  You can choose to install it on a server running in your own datacenter or in Azure.  Please check out the overview of the tool and the detailed installation instructions to learn more.

Once installed, the tool collects Windows events, Azure AD audit logs, Azure activity logs, Azure Security Center alerts, and Azure Diagnostics logs from Azure.  The graphic below depicts how the integration works with third-party SIEM systems including Splunk, HP ArcSight, and IBM QRadar.  You can read about partner integration here.
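
For orientation, the setup revolves around the azlog command-line tool; the sketch below reflects my reading of the install docs, so treat the exact subcommand names as assumptions and confirm them in the installation instructions:

    azlog createazureid               # create the Azure AD service principal the tool runs as
    azlog authorize <subscription-id> # grant it read access to the subscription's logs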

 

 


Microsoft Teams Envisioning Workshop with the Product Team


Together with the product team from the US, we are giving customers and partners the opportunity to meet the Microsoft Teams product group in person in Vienna. On Monday, October 30, a Microsoft Teams Envisioning Workshop will take place at our office, where interested customers and partners can learn about the Microsoft Teams roadmap.

This full-day event is a great opportunity to network in person with technical experts from the Microsoft Teams product group, get to know the extensibility of Microsoft Teams, and learn about Microsoft Teams resources and the roadmap.

Registration page: sign up for the Microsoft Teams Envisioning Workshop

Who should attend?

  • Technical decision makers who define the technology roadmap in their company and set the direction of product development
  • Solution architects who want to take a closer look at the integration options for Teams
  • Development leads who want to get to know Teams and plan their developer resources accordingly

What customers and partners can take away from the event

  • An understanding of Microsoft Teams, how it fits into the Office 365 platform, and the value it offers them and their customers
  • Knowledge of the Microsoft Teams extensibility options and how to develop their own applications for the platform
  • Details on the Microsoft Teams roadmap, particularly in connection with the “Skype for Business announcement”
  • A personal contact to the Microsoft Teams product group and its engineers and developers

What customers and partners can expect from us:

  • In-depth information: we help you recognize the potential of Microsoft Teams, the resources you need to develop your own applications, and the support we can give you along the way
  • There will be no coding at this event (yet); it is a first step toward your own application.
    As a follow-up, we can plan a solution with you in detail and then run a hackfest together with Microsoft software engineers.

What customers and partners should bring

  • Your questions and interests: how are you getting on with the Microsoft Teams extensibility platform?
  • Your experience: let us know your take on Microsoft Teams and the extensibility platform.
  • Honesty: let's talk about everything, what you like and also what you don't.

Agenda (sessions will be held in English):

Mon, Oct 30, 09:00 – 16:00

  • 09:00 - Arrival and introductions, time for individual conversations
  • 10:00 - Overview of the Teams platform
    • Teams background and customer value
    • Teams functionality (demo)
    • Teams/Skype for Business convergence
  • 11:00 – Overview of Teams extensibility options
    • Making your app shine on Teams via Tabs, Bots, Connectors, Compose Extensions
    • Office store submission process
  • 12:00 – Real life cases
  • 13:00 – Lunch break
  • 14:00 – Office 365 Developer Platform and Microsoft Graph
  • 15:00 – Next steps, resources available

In addition to the Microsoft Teams Envisioning Workshop on Monday, October 30 in Vienna, a Microsoft Teams Hackfest will take place in Munich during the week of November 6-9, in cooperation with our German colleagues. At this hackfest, customers and partners can develop their own Microsoft Teams extensions together with Microsoft engineers and publish them to the store.

Showcase School Case Study- Hardenhuish School


Hardenhuish is an 11-18 co-educational comprehensive school set in the magnificent parkland of the former Hardenhuish Manor and Chippenham Grammar School. Hardenhuish School caters to nearly 1500 pupils aged 11-18 and is one of three secondary schools in Chippenham, Wiltshire. They became an Academy Converter in 2010 and have been a Microsoft Showcase School since 2016/2017. They are fully dedicated to maintaining strong ethos and high standards, whilst offering experiences relevant to the 21st century, including the use of modern technologies.

 

Their aim is for all their young people to be ‘inspired to learn’ – to leave school as independent learners, with the necessary skills for lifelong learning. They provide a purposeful approach to learning, ensuring that this incorporates fun and engagement. As a Microsoft Showcase School they are at the leading edge of new technologies and are continually exploring ways to incorporate ICT into lessons in a meaningful way. They were awarded the status following two years of promoting technology to enhance teaching and learning by using Office 365, OneNote Class Notebooks and SharePoint during lessons. The School has invested in tablet computers for all teaching staff, as well as allowing tablets to be accessed by pupils in class, and allowing sixth form students to bring in their own devices. 

Reinier Spruijt, ICT Innovations Manager at Hardenhuish School, said: “Over the next few months, we have a group of lead practitioners who have been tasked with enthusing the school’s wider staff to deliver lessons that will enable pupils to access learning when and where they need it.”

 

They are also invested in using single-sign-on web apps that integrate with Office 365. All teachers have been issued with Surface 3 tablets running Windows 10 and take part in a competitive, tailored technology CPD programme. The School is a Certiport test centre, offering Microsoft certifications, which enriches their STEM offer and staff CPD.

 

Check out their Twitter Feed @HardenhuishSch to see more great things happening at Hardenhuish.


 

Xamarin Madrid: A Renewed Community


Introduction:

Xamarin Madrid is a group for everyone interested in cross-platform development using Xamarin.

Our idea is to hold regular meetups with labs, demos of everything Xamarin can do, and get-togethers where we can share experiences.

Level and prior experience don't matter; all you need is the desire to learn and to enjoy software development.
On our Meetup page you can find the agenda with our upcoming events:
• November 7: Xamarin experience in large real-world projects

For now we have one planned, but it's only the first of many!

Goal

The goal of the group is to share knowledge and experience in cross-platform development with Xamarin and everything related to it, such as iOS, Android, UWP, .NET, patterns, etc.

Come and join us. We'll be waiting for you!

Contact information and links of interest

Email: xamarin-madrid@outlook.com
Twitter: @Xamarin_Madrid
Meetup: https://www.meetup.com/es-ES/Xamarin-Madrid/

The October 2017 security update (Sec Patch) for Skype for Business 2016 has been released


Hello, this is the Japan Lync/Skype support team.
The October 2017 security update (Sec Patch) for Skype for Business 2016 has been released.

Description of the security update for Skype for Business 2016: October 10, 2017
https://support.microsoft.com/en-us/help/4011159/description-of-the-security-update-for-skype-for-business-2016-october

After the update is applied, the file version will be 16.0.4600.1000.

A number of issues fixed by this update have been documented, along with some known issues that still occur after applying it.
These are linked from the KB article above, so please check it for details.
Please apply the latest update and enjoy a comfortable Lync/Skype experience.

The content of this post (including attachments and links) is current as of the date of writing and is subject to change without notice.

The October 2017 security update (Sec Patch) for Skype for Business 2015 (Lync 2013) has been released


Hello, this is the Japan Lync/Skype support team.
The October 2017 security update (Sec Patch) for Skype for Business 2015 (Lync 2013) has been released.

Description of the security update for Skype for Business 2015 (Lync 2013): October 10, 2017
https://support.microsoft.com/en-us/help/4011179/descriptionofthesecurityupdateforskypeforbusiness2015-lync2013-october

After the update is applied, the file version will be 15.0.4971.1000.

A number of issues fixed by this update have been documented, along with some known issues that still occur after applying it.
These are linked from the KB article above, so please check it for details.
Please apply the latest update and enjoy a comfortable Lync/Skype experience.

The content of this post (including attachments and links) is current as of the date of writing and is subject to change without notice.

Git and Visual Studio 2017 part 16: Find out who introduced the issue and when


In the previous article, I explained how to revise local commits after the fact with Git.  In this article, I explain how to find the commit that caused an issue when something goes wrong.

Sometimes I find that a function stops working all of a sudden, but I don’t know when it broke. If I am the only developer, it may be easy to track down the issue, but what if many people are working on the same project and adding commits every day? How can I quickly identify which commit contains the bug? I know: we should implement unit testing, so that we can find the issue as soon as someone commits something new :)

Find who modified the file in Git

In this case, I would like to track down when the README.md file got the “Change 2” line and who added it.

1. Run ‘type README.md’ and ‘git log --oneline --graph --all’ to check the latest situation.


2. Looking at the commit comments, commit f9e666f seems to add the line. Run ‘git diff 1cf7510 f9e666f README.md’ to see the difference, and… YES! Case closed.


3. Well, things are not as simple as this in the real world. So what else can I do? Run ‘git blame README.md’. The blame command shows, for each line, the last commit that changed it. The word “blame” sounds a bit scary to me, though :). Now I see that commit f9e666f5 changed the file, who made the change, and when it happened.


4. Run ‘git blame -L 3,3 README.md’. The -L option filters the result by line number; the first number is ‘from’ and the second is ‘to’. So in this case, I filtered the result to line 3 only (just one line) of README.md.


5. Blame also tracks changes across file renames. Let’s try it now. Run ‘git mv README.md Read_Me.md’ to rename the file.


6. Run ‘echo "Renamed" >> Read_Me.md’ and ‘git commit -am "Rename README.md"’ to commit file name change.


7. Run ‘git blame Read_Me.md’ and it shows the full history of the file, including from before the rename.


8. What happens if I delete lines instead of adding them? Let’s do it. Run ‘echo # VS_Git  > Read_Me.md’ and ‘echo Change 1 >> Read_Me.md’, then run ‘git commit -am “Removed lines from Read_Me.md”’.


9. Run ‘git blame Read_Me.md’ again: the deletions don’t show up.


10. Now what? Well, I can search for the string by using the log command with the -S option. Run ‘git log --oneline -S “Change 2”’. This shows the commits where the string was added or removed.


11. If you run ‘git log -p -S “Change 2”’, it shows you all the detail. From the log, commit 1c79720 deleted the “Change 2” line from Read_Me.md.


Find who created the issue in VS

Next, do the same in Visual Studio.

1. In Solution Explorer, you don’t see Read_Me.md. Why? Because I renamed the file in Git, and the solution didn’t track it. Select the solution and press Shift+Alt+A to add an existing item. Select Read_Me.md and add it.


2. Right click the file and select “Blame (Annotate)”.


3. VS takes the ‘git blame Read_Me.md’ information and displays it in the GUI. If I select a line in the right pane, it shows me which lines were modified by that change.


4. I couldn’t find the -S option for the log command in Visual Studio 2017, but I can still use the compare feature. Right-click Read_Me.md again and click “View History”. Then right-click the latest commit and click “Compare with Previous”.


5. You see the changes. You can also select two entries in the history and compare them.


Find which commit contains bug in Git

By using blame, or log with the -S option, I can find out who modified a particular file and when. But often I don’t even know which file caused the issue. In that case, we can check out commits one by one to see which commit actually introduced the issue. What is the most efficient way to do this? I know the latest commit has the issue, so shall I walk back one commit at a time? Or start from the middle of the range and cut the candidates in half each time? Git provides a command which does this for us.

1. Run ‘git bisect start’ to begin the inspection. Then run ‘git bisect bad’ to mark the current commit as having the issue.


2. Now it’s time to find a commit that doesn’t have the issue and mark it as good. For now, I mark the first commit as good. Run ‘git log --oneline’ to find the first commit, then run ‘git bisect good 728f37d’. Git then checks out the commit in the middle; in this case, commit 3127d2f.


3. Run the application to see if it contains the issue. In this case it doesn’t, so I mark it as good by running ‘git bisect good’. Git now checks out the commit between the latest and commit 3127d2f, which is commit f9e666f.


4. Let’s assume I find the same issue in this commit, so I run ‘git bisect bad’ to mark it as bad. Now the only commit left to inspect is commit 1cf7510, since commit f9e666f is bad and commit 3127d2f is good.


5. Assume again this commit was good; run ‘git bisect good’. Git then tells me that commit f9e666f is the first commit that contains the issue, and it also shows me which items are part of that commit.


6. Run ‘git bisect reset’ to reset everything.


To type fewer commands, I can run ‘git bisect start HEAD 728f37d’ to pass the bad and good commits as bisect parameters.
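
And if you have a script that can detect the problem, bisect can even drive itself. A sketch, assuming a hypothetical test.ps1 that exits 0 for good commits and non-zero for bad ones:

    git bisect start HEAD 728f37d                   # bad = HEAD, good = the first commit
    git bisect run powershell.exe -File .\test.ps1  # Git checks out and tests each commit for you
    git bisect reset                                # back to where you started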

Find which commit contains bug in VS

Unfortunately, I couldn’t find bisect support in Visual Studio. But I believe everyone has unit tests and functional tests, so you can see what’s wrong immediately when someone adds a new commit :)

Summary

I barely use these features so far, as I have tests in place, or perhaps I have just been lucky. But bisect is one of the most powerful tools for tracking down which commit introduced an issue.

By the way, this is the last article of the series for the moment; the next series will be about VSTS/GitHub and Visual Studio 2017. Please feel free to give me feedback in the comments area on what's good, bad, or missing.

Ken

Microsoft Dynamics CRM 2013 SP1 UR5 has been released!


Hello everyone.

Update Rollup 5 for Microsoft Dynamics CRM 2013 (on-premises) Service Pack 1 (SP1 UR5) was released on October 11, 2017, and is now available from the Microsoft Download Center.

Download information 
Title: Update Rollup 5 for Microsoft Dynamics CRM 2013 Service Pack 1 (KB 4018582) 
URL: https://www.microsoft.com/ja-JP/download/details.aspx?id=56108
Build number: 6.1.5.0111

For details about the update rollup, how to apply it, and the list of issues resolved in SP1 UR5, see the following link: 
https://support.microsoft.com/ja-jp/help/4018582/update-rollup-5-for-microsoft-dynamics-crm-2013-service-pack-1

How to check the version
Click the gear icon at the top right of the product screen -> [About] to display the version information.

– Dynamics 365 Support, Hayasaka
* The content of this post (including attachments and links) is current as of the date of writing and is subject to change without notice.


CMake support in Visual Studio – the CMake 3.9 update, Linux targeting, and feedback-driven improvements


[Original post address] CMake support in Visual Studio – CMake 3.9, Linux targeting, feedback

[Original post date] 2017/9/14

Visual Studio 2017 15.4 Preview 2 is now available and includes enhancements to Visual Studio's CMake tools. The latest preview upgrades CMake to version 3.9, includes better support for independent CMakeLists files, and supports targeting Linux directly.

Please check out the preview and try the latest CMake features. If you are just getting started with CMake, follow the link to learn more about CMake support in Visual Studio. We look forward to your feedback.

CMake tools upgraded to CMake 3.9

You can find the full list of enhancements in the CMake 3.9 release notes.

Better support for folders containing multiple independent CMakeLists

The latest preview improves support for folders that contain multiple independent CMake projects. When you open a folder of independent projects, all targets in the CMake projects should be detected.

This feature does have some limitations in the preview. For example, if there is a CMakeLists in the root folder, independent CMakeLists in subfolders may not be detected correctly. Please let us know if this negatively affects your projects. In the meantime, you can work around the limitation by opening the subfolder directly.

CMake support for targeting Linux

Visual Studio now supports targeting Linux directly with CMake.

Target Linux or Windows with Visual Studio and CMake.

This feature allows you to open Linux projects without modification, edit them on Windows with full IntelliSense, and build and debug on a remote Linux target. In addition, Visual Studio takes care of connecting to the remote target, so you don't have to worry about setting up SSH tunnels. This should make cross-platform development a breeze, since you can switch between Windows and Linux just by switching configurations in the dropdown. If you want to learn more, check out targeting Linux directly with CMake.

Bug fixes and improvements

You gave us feedback, and we listened. Visual Studio 2017 15.4 Preview 1 includes several improvements and fixes for bugs reported by the community. The following issues have been fixed in the latest preview:

Send us your feedback

To try out the latest and greatest CMake features and give us some early feedback, please download and install the latest Visual Studio 2017 Preview. As always, we welcome your feedback. Please send comments via email to cmake@microsoft.com, via Twitter to @visualc, or on the Microsoft Visual Cpp Facebook page.

If you run into other problems with Visual Studio 2017, please let us know through Report a Problem, which is available in both the installer and the IDE itself. For suggestions, let us know through UserVoice. We look forward to your feedback!

Visual Studio extensions for C++ developers in Visual Studio 2017


[Original post address] Visual Studio extensions for C++ developers in Visual Studio 2017

[Original post date] 2017/8/29

In this blog post we want to highlight several Visual Studio extensions that can make your life as a C++ developer better if you are using Visual Studio 2017 or considering an upgrade. We have also heard from many of you that certain C++ extensions are not available for Visual Studio 2017, which is blocking you from moving to the latest version. We want to let you know that we are acting on this feedback, and many of the extensions you mentioned are now available for Visual Studio 2017.

We are happy to announce that the following extensions are now available for Visual Studio 2017:

  • C++ Quick Fixes – This extension lets you hover over a squiggle to get a LightBulb, or use the default keyboard shortcut Ctrl+Dot (Ctrl+.), to learn how to quickly fix problems in your code.
  • Macros for Visual Studio – An extension for VS that automates repetitive tasks in the IDE. It can record most commands in Visual Studio, including text editing operations.
  • PdbProject – Creates a .vcxproj directly from a PDB for quick code browsing and IntelliSense.
  • Test Adapter for Boost.Test – Automatically discovers unit tests and enables the IDE tooling to run and manage unit tests based on the Boost.Test framework, view test execution results, and, for Visual Studio Enterprise users, check code coverage.
  • Test Adapter for Google Test – Automatically discovers unit tests and enables the IDE tooling to run and manage unit tests based on the Google Test framework, view test execution results, and, for Visual Studio Enterprise users, check code coverage. Visual Studio Test Explorer and code coverage results can be used to manage unit tests directly in the IDE.
  • Productivity Power Tools – A bundle installer for the individual components of Productivity Power Tools 2017, which include many tools such as Ctrl+Click GoTo Definition, Custom Document Well, and Peek Help.
  • Structure Visualizer – Adds visual cues that outline code blocks, letting you quickly find the scope of classes, methods, and many other code scopes.
  • Whack Whack Terminal – A terminal emulator that lets you run the command prompt, PowerShell, and bash (via the Windows Subsystem for Linux, WSL) inside the Visual Studio IDE.
  • Windows Driver Kit – Create drivers that run on Windows devices, from printers to VR headsets. WDK support for Visual Studio 2017 will ship in the next release of the WDK. (Note: a preview of Visual Studio 2017 support is available to Windows Insiders.)
  • Visual Studio Color Theme Editor – Popular with users who want to change the colors of the Visual Studio environment beyond the standard Light/Dark/Blue themes that ship with the product. It also offers other predefined themes (green, red, purple, and more), and you can create new themes or modify existing ones.

Another extension we are actively working on for Visual Studio 2017 is Image Watch. Image Watch is a watch window for viewing in-memory bitmaps of supported OpenCV types while debugging native C++ code. We know many of you are stuck on VS 2015 in order to use Image Watch, and we are working hard to address this.

Are you using an extension that does not support Visual Studio 2017? Do you have ideas for VS extensions that would make the C++ development experience even better? If so, let us know and share your thoughts.

Compilation Pipeline in the DirectX Compiler


A few months ago I discussed how to invoke the compiler from your code. Today I want to discuss what the invocation does, so you can find your way around if you decide to inspect the code or if you want to learn how your shaders go from text to bytes.

  • DxcCompiler: this class is the implementation for the IDxcCompiler interface, and puts together the rest of the pipeline.
  • Parser: the parser component sets up a tokenizer/preprocessor and pulls tokens out of a stream; the tokenizer and preprocessor work together to handle #include and #define directives and to break up incoming text into lexical units. As it recognizes structures, it pushes them to the next component.
  • Semantic Analyzer: the semantic analyzer (Sema) looks at the parser structures, performs various checks, and generates declarations, statements and expressions, all of which land on the ASTContext that the parser also has access to.
  • Code Generator: once the parser is done processing all input, the code generator turns the constructs in the AST context into LLVM IR in a straightforward fashion. We call this form High-Level Intermediate Representation (sometimes HL-IR in code).
  • DXIL Lowering and Optimization: a number of passes work by transforming LLVM IR in HL-IR form into LLVM IR in DXIL form. The representation is the same, but DXIL has references and invariants that aren't present in HL-IR. Some optimizations are done before this lowering, while some are done after.
  • Container assembly: after all the transformations have been performed, the DXIL program is put into a container that allows other programs easy access to important information, without having to deserialize and process DXIL.
  • Validation: after all the transformations have been performed and the container has been assembled, validation is run to ensure that a number of rules or invariants hold for that program; downstream consumers can then rely on these being present to simplify their work.

If you're familiar with clang and LLVM, you will notice that there is very little that has changed. We mostly do what clang regularly does with sources, then do some post-processing on the IR without targeting a specific backend, and then put things together for DirectX consumption.

Some of these phases have been broken up into other components to enable some additional scenarios, like having the optimizer passes support instrumentation. In particular, the compiler can be made to output HL-IR by using the /fcgl flag. The DxcOptimizer can then perform the work to turn it into DXIL. DxcAssembler will generate the container, and DxcValidator will perform validation on it.
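
To see the split in practice from the command line, you can stop the pipeline early. A hedged sketch (the file names and entry point are placeholders, and the exact flag spellings are worth confirming with dxc -help):

    dxc -T ps_6_0 -E main shader.hlsl -Fo shader.dxil   # full pipeline: parse, codegen, lower to DXIL, assemble, validate
    dxc -T ps_6_0 -E main shader.hlsl -fcgl -Fc hl.ll   # stop after code generation and dump the HL-IR listing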

Enjoy!

Issues with Visual Studio Team Services – 10/13 – Investigating


Initial Update: Friday, October 13th 2017 11:15 UTC

A potentially customer impacting alert is being investigated. Triage is in progress and we will provide an update with more information.

  • Next Update: Before Friday, October 13th 2017 12:30 UTC

Sincerely,
Ariel Jacob

Drive development success with the Ad Monetization Platform


Co-authored by Vikram Bodavula

 

You’ve worked on your app, published it and made it available for downloads. Now what?

While most developers dedicate a significant amount of time to creating their app, they often neglect a crucial aspect: getting it into the hands of users and monetizing that engagement. Great user experience and excellent features are important, of course, but even the best apps need a smart user acquisition and monetization strategy.

That’s what developers and business partners Sam Kaufmann and Jake Poznanski discovered after a few years of independently developing games for the Windows platform. Sam and Jake started off in college and pooled money together from their credit cards to co-found Random Salad Games, an indie development studio, in 2011.

Over the years, Random Salad Games has developed and launched more than 50 titles. It was the first indie studio to reach $1 million in revenue from the Windows Store. Random Salad Games has been leveraging Microsoft's Ad Monetization Platform across the customer cycle: acquisition, monetization, and engagement.

Acquire

A common misconception most developers have is that users simply stumble on new apps on the Microsoft app store. This isn’t the case. Even the best apps need some form of promotion to create a user base.

Starting on a tight budget, Jake and Sam realized the best way to create a user base was to use the Promote Your App (PYA) framework on the platform. PYA is a self-service tool built into the platform that helps get apps out to more potential users. The PYA framework leverages machine learning algorithms to match your app with the right users. Creating a new campaign takes less than three clicks. New campaigns can be either paid or free. Free ad campaigns allow you to barter for exposure on the Microsoft Developer Community. Paid campaigns, meanwhile, can be launched for as little as $10, which is why upstarts like Random Salad Games could easily afford it.

Running an ad campaign is a balance between acquisition costs and estimated revenue per user.  A great way to estimate your return-on-investment from each campaign is to estimate the lifetime value (LTV) of a new user and see if the cost per install (CPI) is less than that amount. For 90% of the ad campaigns on the Microsoft platform the CPI is less than $2.

By keeping costs low and using the PYA platform appropriately, Random Salad Games gradually built up a sizeable user base. The next step was to monetize the user base.

Monetize

The co-founders used different combinations of ads to generate revenue. Ad banners, interstitial ads, and incentivised ads were integrated without compromising the experience of the app.

Effective monetization requires a long-term strategy that’s been built into the core of the platform. Within the Microsoft ecosystem there are multiple ways to monetize an app.

Advertising is the most prominent of these. The platform supports every type of ad, from playable video ads to native advertising. Most developers find that monetizing their app based on ads is the least time consuming and most straightforward way.

Sam discovered that Microsoft has one simple Software Development Kit (SDK) for these ads, so integrating them with an app is easy. For most apps, inserting a native, banner, or interstitial ad requires adding just a few lines of code.

A few simple lines and your chosen ad is quickly integrated with your app. Incentivised and interstitial ads can work very well and earn a lot in revenue. However, like banner ads they must be used sparingly to avoid diluting the user experience.

The platform also supports other forms of monetization including 3rd-party sponsorship and in app purchases.

Nevertheless, monetization is only part of the puzzle. A great monetization strategy and a strong user base may help generate revenue, but the key to success on any platform is engagement.

Engage

An engaged user base is easier to monetize. Driving engagement on your app completes the cycle to ensure a stable and recurring cash flow from your monetization strategy.

Monetizing a game without compromising on fun meant Random Salad Games had to get creative with ad placement.

The Microsoft platform offers a higher degree of control over the type of ads that show up on the app. Customizing these ads and making sure they don’t interfere with the user experience is as easy as drag-and-drop. Random Salad Games took advantage of this customizability to integrate a combination of interstitial and incentivised ads within their game.

Interstitial ads show up when there’s a natural pause in what the user is doing (like the loading screen on a mobile game). Incentivised ads are those that users willingly view to earn some reward. Random Salad Games incentivised users to watch ads so that they could earn more lives in their game ‘Jewel Star’. The co-founders claim this simple dialog box helped them double their revenue from the game.

Clever visual and interactive tricks like these can make your app more engaging and less frustrating for users. Users are more likely to come back to an app with conveniently placed ads that are actually fun to interact with.

Final Thoughts

Creating a great user experience in an app is only part of the puzzle. A solid user acquisition-monetization-engagement strategy will eventually lead to success on the Microsoft platform.

The Microsoft Ad Monetization Platform offers developers more control over the monetization strategy and handy support to drive engagement to acquire and sustain new customers.

Wondering about WCF Proxy usage


Usually, one of the topics in the WCF space that causes the most doubt and discussion is WCF proxy creation and usage.

I can say from my experience that most of the time the doubts and discussions are based on some confusion around how a WCF proxy works, and oddly enough they tend to involve using a WCF proxy in a singleton scenario.

One of the most recurrent questions I get around this topic is something like the following.

If I do not close my singleton proxy object after making a service call, will this have an impact on server load in terms of too many open connections? And (wait for it), if yes, how can this concern be addressed while still keeping the performance gains of not creating a proxy for every request?

That's just great, right? We always want to get away with everything.

So, let's start by pointing to some background references on the web that can help here, so we do not go down the path of rephrasing knowledge that is already out there.

In terms of WCF Client Proxy creation best practices we can always go to the Wenlong post around this topic, you can find it here.

If I were asked for a straight answer on this, I would say that it is not advisable to leave proxies open, as discussed here. Just bear in mind that even if you close the proxy, you still get the benefit of WCF's implicit proxy caching (please refer to ‘Best Practices’ in Wenlong's post referenced above).

However, going through "Best Practices" section in same link, we can see that reusing the same proxy is first in list, so keeping proxy open if client application intend to reuse is being seen as acceptable and even as a good thing.

We need to understand, though, that this needs to be tested, and you always need to consider implementing a throttling behavior on the service (with any other adjustments made accordingly) to avoid server-side bottlenecks.

On the client side there are also other considerations, such as the number of concurrent connections allowed. This is governed by the DefaultConnectionLimit property (defaults to 2), which is relevant to avoid client-side bottlenecks. It can be controlled either programmatically or through configuration with "ConnectionManagement".
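
For illustration, the programmatic knob lives on System.Net's ServicePointManager; a minimal sketch, shown here via PowerShell's .NET interop (the value 12 is arbitrary):

    # Raise the per-host limit on outgoing HTTP connections for this process (default is 2).
    [System.Net.ServicePointManager]::DefaultConnectionLimit = 12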

Here enters another usual source of confusion: what "MaxConcurrentCalls" and "MaxConnections" mean and how to take advantage of them.

Let's start by clarifying: "MaxConcurrentCalls" is the maximum number of calls that can be executing at the same time on the service; "MaxConnections" has to do with the total number of open connections on the service, regardless of whether the service is executing anything for the connection.

However, we need to know that there are indeed two types of "MaxConnections".

  • System.Net MaxConnections is relevant only on the client side and signifies the allowed number of outgoing connections to a specific host.
  • netTcpBinding MaxConnections relates to the number of pooled connections.

So in our WCF proxy scenario, if a client opens a connection to the service, calls a method, and is waiting for the method to return, it counts against "MaxConcurrentCalls".
As soon as the service returns a response to the client’s method call, it no longer counts against "MaxConcurrentCalls", even if you didn’t close the client-side proxy.

Clear, right?

Finally, with this approach of not closing my singleton WCF proxy, we could end up with a lot of connections open on the IIS server. Could this be a serious issue on the IIS side?

The main point here is that the service will be using ephemeral ports, and though they are configurable (with large default ranges on modern OSs), they do have a limit.

Please refer to this post about ephemeral port limits to get a better understanding.

I would say that for throttling in .NET 4 and above, the defaults should be good enough, as they are defined according to the processor count, but you are still free to modify them based on testing and fine tuning. This blog could be a starting point.

For connections, as I pointed out above, ephemeral ports are limited, but depending on OS and configuration you may have a huge range available.

So, concluding: keeping connections open for a long time is OK, and even advisable if we are constantly using them. And if you are using "netTcpBinding", which already has an inherent connection pool, you may be able to achieve the same goal even while closing the proxy.

Hope that helps

 

 

 

Issues with Visual Studio Team Services – 10/13 – Investigating


Initial Update: Friday, October 13th 2017 13:55 UTC

A potentially customer impacting alert is being investigated. Triage is in progress and we will provide an update with more information.

  • Next Update: Before Friday, October 13th 2017 15:00 UTC

Sincerely,
Ariel Jacob


Install Visual Studio Code on Ubuntu 16.04 LTS


Overview

I had trouble getting Visual Studio Code to install by simply clicking on the package from https://code.visualstudio.com.  Whether I tried to download it directly or use Ubuntu Software to install it, I got an error: status code 400: Bad Request.

Solution

Download the .deb package from the Visual Studio Code site, then right click on the desktop and open a terminal:


Go to the Download directory.  Type cd Dow and hit the tab key to complete the directory name:


Type ls to list the files in the Download directory and find the code_ package:


Install the package with this command: sudo dpkg -i code (and hit Tab to complete the name):


Provide your password and Visual Studio Code will install.

NOTE: if you get dependency errors when using dpkg with a package, simply run sudo apt-get install -f, which will download and install the missing dependencies and configure them.

Now, find it and run it:


And you can right click on the icon in the side bar (Launcher) and choose Lock to Launcher so you don’t need to hunt for it again!


 

Conclusion

Just a quick blog to help you out.  If you find it helpful, please let me know!

Note: You can kick off Visual Studio Code from a terminal window, if you are a command-line guy like me, by simply typing code in the terminal.

References

https://help.ubuntu.com/lts/serverguide/dpkg.html

https://www.digitalocean.com/community/tutorials/how-to-manage-packages-in-ubuntu-and-debian-with-apt-get-apt-cache

(WAL) – Workflow Example – Removal of a multivalued reference attribute


Special Thanks to Mr. David Hodge for putting the WAL Workflow Documentation together

Things to keep in mind

• The RemoveValues function requires a “List” to be passed to it; passing the GUID directly into RemoveValues didn’t seem to do it.
• Below is an example PowerShell activity that builds a list of object GUIDs to pass to the Update Resources activity.

Referencing https://social.technet.microsoft.com/Forums/en-US/63213b2d-4f31-416d-8e70-b871f37a7db8/removevaluesstringlist-not-removing-values?forum=Mimwal

Below is how I modified it. We could probably make it more elegant, but it does the job.

function New-GenericObject
{
    <#
    .Synopsis
    Create a new generic object.

    .Description
    Create a new generic object.

    .Example
    New-GenericObject -TypeName System.Collections.Generic.List -TypeParameters Microsoft.MetadirectoryServices.CSEntryChange
    #>
    [CmdletBinding()]
    [OutputType([object])]
    param(
        [parameter(Mandatory = $true)]
        [string]
        $TypeName,

        [parameter(Mandatory = $true)]
        [string[]]
        $TypeParameters,

        [parameter(Mandatory = $false)]
        [object[]]
        $ConstructorParameters
    )

    process
    {
        $genericTypeName = $typeName + '`' + $typeParameters.Count
        $genericType = [Type]$genericTypeName

        if (!$genericType)
        {
            throw "Could not find generic type $genericTypeName"
        }

        # Bind the type arguments to it
        $typedParameters = [type[]] $TypeParameters
        $closedType = $genericType.MakeGenericType($typedParameters)

        if (!$closedType)
        {
            throw "Could not make closed type $genericType"
        }

        # Create the closed version of the generic type; don't forget the comma prefix
        ,[Activator]::CreateInstance($closedType, $constructorParameters)
    }
}

$MembersToAdd    = New-GenericObject System.Collections.Generic.List Microsoft.ResourceManagement.WebServices.UniqueIdentifier
$MembersToRemove = New-GenericObject System.Collections.Generic.List Microsoft.ResourceManagement.WebServices.UniqueIdentifier

#########################################################
# Example of how to add/remove a MIM object by GUID
#########################################################
#$FIMService = New-Object Microsoft.ResourceManagement.WebServices.UniqueIdentifier("e05d1f1b-3d5e-4014-baa6-94dee7d68c89")
#$BuiltInSyncAccount = New-Object Microsoft.ResourceManagement.WebServices.UniqueIdentifier("fb89aefa-5ea1-47f1-8890-abe7797d6497")
#$MembersToAdd.Add($FIMService)
#$MembersToAdd.Add($BuiltInSyncAccount)
#$MembersToRemove.Add($FIMService)
#$MembersToRemove.Add($BuiltInSyncAccount)

# Build the removal list and hand both lists back to the Update Resources activity
$ObjectID = New-Object Microsoft.ResourceManagement.WebServices.UniqueIdentifier("e05d1f1b-3d5e-4014-baa6-999999999999")
$MembersToRemove.Add($ObjectID)

return @{ "MembersToAdd" = $MembersToAdd; "MembersToRemove" = $MembersToRemove }

 

Microsoft EDU Moments


Last week we were delighted to hold our first ever E2 Exchange Event at our Paddington offices and the atmosphere was electric! We even made it as a Twitter Moment! This is a result of the hard work, dedication and commitment we all have in empowering our students to achieve more, so thank you! Here is a snapshot of the excitement from last week:


We also announced the launch of our Microsoft UK Roadshows! 

The Microsoft UK Education Roadshow will help fulfil our mission to empower the students and teachers of today to create the world of tomorrow. With over 100 events taking place across the UK in 2017 and 2018, this is the perfect opportunity for educators to see first-hand how Microsoft technologies can enhance teaching and learning.

Events are completely FREE and perfect for those at the very beginning of their digital transformation journeys. All events will involve hands-on training workshops led by our specialist Microsoft Learning Consultants and/or Microsoft Training Academies, and will focus specifically on how Office 365 and Windows 10 can help transform learning.

 

 


Check out the latest information in our Sway to help get you started with your unique opportunity to host a Microsoft UK Roadshow!


Here are some of our twitter highlights from the week:


That's a wrap! We hope that you will continue to inspire students and give them unique learning opportunities, using technology to enhance their outcomes! Thank you for another great week! 

Securing Azure Functions calls to Dynamics 365 with Azure AD!


As we were developing our custom engagement bot for the MTC, in order to comply with security policies we needed to make sure all calls from our Azure Functions to our Dynamics 365 are secured by Azure AD for team members using the bot on their phones or other devices.

Since we could not use a service account, pass-through auth was the way to go. While implementing the code for the Azure Function I hit a couple of roadblocks, which I ended up filing as an issue on GitHub here, as we couldn't get access to the auth headers.

After some digging and trial and error, I finally managed to get this working, so if you are planning to do something similar with Dynamics 365, here it goes:

  • Make sure to secure your Azure Function with Azure AD using your org subscription
  • Register a custom app within your Azure AD and give it permission to access Dynamics 365
  • Get the client ID and secret for the new app
  • Add the required references:
    • Microsoft.IdentityModel
    • Microsoft.IdentityModel.Clients.ActiveDirectory
  • Next, get access to the right header in your Azure Function code and pass the user assertion value, along with the other parameters, to your runTask function:
    public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
    {
        // Parse query parameter
        string accountName = req.GetQueryNameValuePairs()
            .FirstOrDefault(q => string.Compare(q.Key, "AccountName", true) == 0)
            .Value;

        string data = string.Empty;
        IEnumerable<string> headerValues;
        var tokenId = string.Empty;

        // App Service Authentication exposes the signed-in user's AAD ID token in this header
        if (req.Headers.TryGetValues("X-MS-TOKEN-AAD-ID-TOKEN", out headerValues))
        {
            tokenId = headerValues.FirstOrDefault();
        }

        log.Info("Results:" + tokenId);

        UserAssertion userAssertion = new UserAssertion(tokenId);

        // Get request body
        data = await runTask(accountName, userAssertion);
        return data == null
            ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass an Account Name on the query string or in the request body")
            : req.CreateResponse(HttpStatusCode.OK, "Results:" + data);
    }
  • Next, in your runTask function, use the user assertion value to acquire the auth token:
    public static async Task<String> runTask(string accountName, UserAssertion userAssertion)
    {
        string resource = ConfigurationManager.AppSettings["Resource"];
        string clientId = ConfigurationManager.AppSettings["ClientID"];
        string secret = ConfigurationManager.AppSettings["Secret"];
        string redirectUrl = ConfigurationManager.AppSettings["RedirectURL"];
        try
        {
            // Authentication parameters received from the resource server
            AuthenticationParameters ap = AuthenticationParameters.CreateFromResourceUrlAsync(new Uri(resource + "/api/data/")).Result;

            // Authenticate the registered application with Azure Active Directory,
            // on behalf of the signed-in user (the user assertion)
            ClientCredential credential = new ClientCredential(clientId, secret);
            AuthenticationContext authContext = new AuthenticationContext(ap.Authority, false);
            AuthenticationResult result = await authContext.AcquireTokenAsync(ap.Resource, credential, userAssertion);

            using (HttpClient httpClient = new HttpClient())
            {
                httpClient.Timeout = new TimeSpan(0, 2, 0); // 2 minutes
                httpClient.DefaultRequestHeaders.Authorization =
                    new AuthenticationHeaderValue("Bearer", result.AccessToken);
                httpClient.BaseAddress = new Uri(resource + "/api/data/v8.1/accounts?$select=name&$filter=name eq '" + accountName + "'");
                httpClient.DefaultRequestHeaders.Add("OData-MaxVersion", "4.0");
                httpClient.DefaultRequestHeaders.Add("OData-Version", "4.0");
                httpClient.DefaultRequestHeaders.Accept.Add(
                    new MediaTypeWithQualityHeaderValue("application/json"));

                HttpResponseMessage response = await httpClient.GetAsync(httpClient.BaseAddress);
                Stream rStream = await response.Content.ReadAsStreamAsync();
                StreamReader reader = new StreamReader(rStream);
                return reader.ReadToEnd();
            }
        }
        catch (HttpRequestException e)
        {
            throw new Exception("An HTTP request exception occurred.", e);
        }
    }

Top stories from the VSTS community – 2017.10.13


Here are top stories we found in our streams this week related to DevOps, VSTS, TFS and other interesting topics.

TOP STORIES

VIDEOS

STICKERS

  • Do you like stickers? I’ll personally send a few stickers to the first three (3) users who tweet the date on which the VSTS-Bot was announced to my @wpschaub Twitter account.

TIP: If you want to get your VSTS news in audio form, be sure to subscribe to RadioTFS.

FEEDBACK

What do you think? How could we do this series better?
Here are some ways to connect with us:

  • Add a comment below
  • Use the #VSTS hashtag if you have articles you would like to see included