Channel: MSDN Blogs

How to stand out in a crowd of 35,000+ people at Microsoft Inspire


Guest Post from Luke Debono, Co-founder Incredible Results

With 35,000+ people and 160 sessions, Microsoft Inspire will offer you countless opportunities to connect. The question is – how are you going to make sure that you and your team stand out?

The following process will help you consider and frame your unique value proposition so that when you find yourself in the elevator with the contact you’ve been trying to court for the past year, what you say will stick.

  1. Define your perfect customer. Who are the customers that you’d love to have more of and what is it that makes them perfect for your business? Are they in a specific vertical or industry? Are they of a particular size or do they all face a common business problem?

<read the rest here>


Next on our Menu – Extended Events Targets Overview

How to set ImageView ScaleType to TOPCROP

Supercharging the Git Commit Graph II: File Format


Earlier, we announced the commit-graph feature in Git 2.18 and talked about some of its performance benefits. Today, we'll discuss some of the technical details of how the commit-graph feature works, including some helpful properties of its file format. This file speeds up commit-graph walks so much that we were able to identify other ways to speed up these walks using small optimizations.

If you prefer a video, I talked about the commit-graph format in the second half of a talk at Git Merge 2018.

What makes a commit?

In Git, a commit consists of a few pieces of information:

  1. The hash of the root tree. This stores the state of the working directory at that point in time.
  2. A list of hashes of the commit parents. These are the commits from which this commit is based. There may be any number of parents.
    • No parents means the commit is a "root" or "initial" commit.
    • The commits created by a single developer doing work typically have exactly one parent.
    • Two or more parents means the commit is a "merge" commit, combining the history of previously independent commits; commits with three or more parents are "octopus" merges.
    • Parent order matters! For instance, you can use git log --first-parent to only traverse the relationships between a commit and its first parent.
  3. Name and email information for the author and committer, including the time that the commit was authored and committed. See this StackOverflow answer on the difference between "author" and "committer". It is important to our discussion that these times are stored as seconds since Unix epoch.
  4. A commit message.
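For concreteness, here is roughly what a raw commit object contains, as printed by git cat-file; the hashes, name, and email below are hypothetical placeholders:

$ git cat-file commit HEAD
tree 36b862a23f424c73b03c3e88a2fbd4f0521b0ad2
parent d2935c2d7ea9ab24866ece0e88f7bdf613472dbd
author Jane Doe <jane@example.com> 1530540000 -0400
committer Jane Doe <jane@example.com> 1530540360 -0400

Add a small feature

Note how the author and committer lines end with the time as seconds since Unix epoch, followed by a timezone offset.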

If we think of the commits as vertices and the commit-to-parent relationship as an edge, then the set of commits creates a directed acyclic graph. This graph stores interesting structure about your commit history.

For example, the figure below shows the commit graph for a small Git repository.

An example Git commit graph

Each circle represents a commit and the lines demonstrate a commit-to-parent relationship. Commits are arranged left-to-right based on time, so the commit R is a root commit. Commits A and B represent two "tip" commits (no commits have A or B as parents). Commit A has a single parent while commit B is a merge between two commits.

There are a lot of questions we want to ask about this commit graph that help with developer workflows in Git, such as:

  • What commit is the best merge base between A and B?
  • Which commits are ancestors of A but not ancestors of B? We use this to determine which commits are introduced by a pull request.

To make things easier, we say a commit X is reachable from a commit Y if there is a sequence of commit-to-parent edges that we can follow to "walk" from Y to X. Consider the following:

Reachable sets from commits A and B

In light pink we highlight the commits reachable from A but not reachable from B. In blue we highlight the commits reachable from B but not A. The gray commits P and Q are the commits that are reachable from both A and B but not from any other commit that is also reachable from A and B. These commits P and Q are options for merge-bases between A and B. The other gray nodes are not acceptable merge bases, because they are reachable from P or Q.

Git answers these commit graph questions by using various graph walking algorithms, such as breadth first search (BFS), depth first search (DFS), or "best first" search using commit date as a heuristic.

While walking these commits, Git loads the following information about a commit:

  1. The parent list
  2. The commit date

The parent list is required so we know which commits to visit next in our walk. The commit date is required due to the heuristic that helps select which commit to visit next.
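As an illustration of that heuristic, here is a small Python sketch of a "best first" walk that always visits the newest unvisited commit next. This is a simplified model (commits as opaque keys, with parents_of and date_of supplied by the caller), not Git's actual C implementation:

import heapq

def walk_newest_first(tips, parents_of, date_of):
    """Yield commits reachable from 'tips', newest commit date first."""
    # heapq is a min-heap, so negate the dates to pop the newest commit.
    heap = [(-date_of(c), c) for c in tips]
    heapq.heapify(heap)
    seen = {c for _, c in heap}
    while heap:
        _, commit = heapq.heappop(heap)
        yield commit
        for parent in parents_of(commit):
            if parent not in seen:
                seen.add(parent)
                heapq.heappush(heap, (-date_of(parent), parent))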

The rest of the information in a commit is only required if we need to display that information to a user (such as in git log) or when we want to filter on that information (such as in git log --author=X).

Before Git 2.18, Git had to go looking for where the commit was stored, decompress the commit file, and parse the commit as plain text. This can be expensive, especially when doing it thousands of times.

Basic Structure of a Commit-Graph File

In Git 2.18, we can now store the commit graph information in a compact way in the .git/objects/info/commit-graph file. The file is structured to make parsing as efficient as possible. It also has extension points so we can create more advanced features in the future using this file.

The commit-graph file has four main sections, called "chunks".

  1. Commit IDs: A sorted list of every commit ID. The position of a commit ID in this list is called the graph position of the commit.
  2. Fanout: A fanout table of 256 entries. The fanout value F[i] stores the number of commits whose first byte (or first two hex digits) of their ID is at most i. Thus, F[0xff] stores the total number of commits. This allows us to seed the initial values of the binary search into the commit IDs (see the sketch after this list).
  3. Commit Data: A table containing the important data for each commit, including its root tree ID, the commit date, and the graph position of up to two parents. If a commit has fewer than two parents, we use a special constant to mark the column to be ignored.
  4. Octopus Edges: In the case of an octopus merge, we need more than two parent values. In these cases, we keep a separate list of graph positions and the second parent value of an octopus merge stores a position in this list of octopus edges. We use a null-termination trick to let the list of parents be arbitrarily large, such as when someone commits a Cthulhu merge with 66 parents.
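To make the fanout idea concrete, here is a small Python sketch of how the table narrows the binary search for a commit ID. This is an illustration of the technique, not Git's actual C implementation:

import bisect

def graph_position(commit_ids, fanout, oid):
    """commit_ids: sorted list of binary object IDs (bytes);
    fanout: 256 cumulative counts, where fanout[i] is the number of
    IDs whose first byte is at most i; oid: the ID to look up."""
    first = oid[0]
    lo = fanout[first - 1] if first > 0 else 0  # IDs with a smaller first byte
    hi = fanout[first]                          # IDs whose first byte is <= first
    pos = bisect.bisect_left(commit_ids, oid, lo, hi)
    if pos < hi and commit_ids[pos] == oid:
        return pos  # the commit's graph position
    return None     # commit is not in the commit-graph file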

Thus, the file looks something like this:

The Git commit-graph file format

The full file format describes more of the specifics, including how we version the file, how the file can be flexible to a new hash algorithm, and how we can add new tables of data to the file. See the commit-graph design document for some future plans to extend the file format.

The performance gains we discussed earlier come from the speed of parsing the commit-graph file instead of raw commits, and from avoiding object database lookups. Not only is parsing this tabular data faster than unzipping and parsing the plain-text commit information, we only need to do the binary search once for each of the starting positions of our walk. Our parent references use the graph position, so we can navigate directly to the row storing our parent. This means that walking N commits went from O(N log N) to O(N), and the constant multiple hidden by the big-O notation is smaller.

This parsing speed can give us a 5-10x speedup for certain Git commands.

Small Optimizations

At Microsoft, we care deeply about Git's performance. After we saw performance benefits in the commit-graph file, we took another look at Git's performance. We inspected the stack traces for commands like git status using PerfView. Here are a few of those stack traces with some hot spots identified:

Stack traces for a git status call

On the left, we recognized that the method mark_parents_uninteresting() is spending a lot of time calling has_object_file(). We knew that the commit-graph should allow Git to avoid checking the object database to see these commits, so this must be wasted effort. After digging in, we found that this call could be avoided. Since the call seemed to exist for a historical reason, we first made the safe change that conditionally skipped it. Later, Jeff King removed the call entirely. This was an excellent example of collaboration in the community! Together, these changes can save up to 7% of end-to-end time on a git status call in the Linux repository without the commit-graph feature. The relative benefit is even higher when the commit-graph feature is enabled (over 50% in the stack traces above).

On the right side of the stack traces, we see that our commit parsing is spending 6.7% of the time in the lookup_tree() method. We knew that git status was computing merge bases at the time, so why did it need to load trees? Recall from above that the root tree object ID is an important element of a commit. Since Git frequently needs to walk from a commit to its root tree, the parse_commit() method is expected to load an object pointer for that tree. The tree object is just a placeholder, but Git also has an object cache to avoid duplicate copies of the same object. That's what lookup_tree() is doing: it is checking the in-memory cache for a copy of that tree, even though we never need to look at the tree for our current code path. To improve commit-graph walks that do not need to also walk trees, we made Git load trees lazily. This gave us an additional 28% speedup in git log --graph calls in the Linux repository.

Parsing is not Enough

While the commit-graph file speeds up commit graph walks by improving parsing speed, it still has a problem: if git log --graph took 15 seconds to show a single commit before, it takes at least 1.5 seconds with Git 2.18. This is the case we are in with the Windows repository, as it has over 2 million reachable commits. The algorithm for git log --graph requires reporting the commits in topological order and the algorithm walks every reachable commit before displaying a single commit to the user.

Visual Studio Team Services (VSTS) displays commit history in topological order by default, and can return history queries on the Windows repository in under 250 milliseconds.

commit graph in VSTS

The commit graph is visualized when viewing commit history in VSTS

The reason we can do this is that we can dynamically report a single page of commits in topological order without walking the entire commit history. We use an extra piece of information, called the generation number, to walk a much smaller set of commits.

In the next article, I'll go deep into the definition of generation numbers and how this algorithm works. If you can't wait, then you can see the discussion on the Git mailing list, since we are hard at work bringing this algorithm to every Git user!

App-V Auto Sequencing Part 1, the blueprint


Creating App-V packages sometimes seems like an art form. (I've even heard people call it a Dark Art.) Depending on who you talk to about App-V, or which experiences they have had, responses range from general interest to outright enthusiasm. And as with any art form, you'll always meet somebody doing it better, or somebody asking what it costs.

 

This series of blog posts will help you understand:

  • What automatic package creation is
  • How to build a package
  • Solutions from the field: what we have seen and/or built out with some of our customers

What Sequencing options are there?

When sequencing a package there are two options, both provided with the Windows 10 ADK:

  • Build packages with the Microsoft Autosequencer
  • Create your own customized solution

The default boxed App-V Autosequencer will suit most scenarios and can be automated, whereas the Sequencer application is manual and has the graphical interface needed for more complex applications.

The Shopping list

Before you start there are some prerequisites to fulfill:

The blueprint

So let's sort things out and define the overall plan and the steps needed.

With the following blog posts, we'll guide you through all of the steps needed to prepare and use your new Autosequencer.

 

Johannes Freundorfer and Ingmar Oosterhoff

Graph Equivalence


I want to talk about some interesting graph processing I've done recently. As part of a bigger problem, I needed to collate a few million (not that large) graphs, replacing every set of equivalent graphs with a single graph and a count. I haven't found much on the internets about graph equivalency; all I've found is people asking about it and other people whining that it's a hard problem. Well, yeah, if you brute-force it, it's obviously a factorial-scale problem. But I believe I've found a polynomial solution for it, with not such a high power, and I want to share it. It's kind of simple in hindsight, but it took me four versions of the algorithm and a couple of months of background thinking to get there, so I'm kind of proud of it.

The details are in my other blog:

1: The overall algorithm

2: Comparing the graph signatures

3: Why it works, simplified version

4: Why it works

5: Resolving the hash collisions

6: Optimizations

Debugging through the .NET Core framework using VSCode (e.g. on Linux)


In my blog post 'Debugging Through the .NET Core Framework' I give specific instructions for setting up Visual Studio so that you can debug into the source code for the .NET Core runtime. Since version 2.1 of the .NET Core runtime, it is also possible to do this using the Visual Studio Code editor. Since Visual Studio Code runs on Linux (as well as Windows or macOS), this is what you would be using if you were developing on a non-Windows platform.

Conceptually we are doing the same thing, but the exact details are a bit different, so I document them here. They are mostly a transcription of the VSCode docs on the subject.

Basically, you have to make sure you are running V2.1 of .NET Core or beyond, as well as V1.15 or beyond of the OmniSharp C# VSCode extension.

With those preliminaries out of the way, you basically need to update the '.vscode/launch.json' file that VSCode set up in your project to describe how to launch your app, and add these entries inside the 'configurations' array for the configuration that you use.

 "justMyCode": false,
"symbolOptions": {
"searchMicrosoftSymbolServer": true
},
"suppressJITOptimizations": true,
"env": {
"COMPlus_ZapDisable": "1"
}

The critical piece is turning off 'justMyCode'. This tells the debugger that you want to step into things outside your project. However, in order to do this it needs to find the symbols for code outside the project. This is what 'searchMicrosoftSymbolServer' does. Note that this WILL cause a lot of files to be downloaded from the symbol server THE FIRST time you debug something, but after your local cache has been populated it should be fast.

The last two entries improve the experience of debugging through the code. By default you are debugging the optimized code that was compiled by Microsoft and shipped with the .NET Core framework. This means you don't tend to see local variables or even the arguments of methods. Sometimes this is OK, but sometimes it can be VERY frustrating. Setting the 'COMPlus_ZapDisable' environment variable tells the runtime NOT to use the precompiled code, and 'suppressJITOptimizations' says that when you do recompile it (just in time), don't optimize it. Together these make the code unoptimized, and thus you see all the local variables as you would like. This DOES slow the code down a bit, but for debugging you probably don't care.
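Putting the pieces together, a complete configuration might look something like the following minimal sketch; the project name and target framework in the program path are placeholders for whatever VSCode generated for your app:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": ".NET Core Launch (console)",
            "type": "coreclr",
            "request": "launch",
            "preLaunchTask": "build",
            "program": "${workspaceFolder}/bin/Debug/netcoreapp2.1/myapp.dll",
            "cwd": "${workspaceFolder}",
            "justMyCode": false,
            "symbolOptions": {
                "searchMicrosoftSymbolServer": true
            },
            "suppressJITOptimizations": true,
            "env": {
                "COMPlus_ZapDisable": "1"
            }
        }
    ]
}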

 

Cook Book on Windows Mobile App Performance using Windows Performance Analyzer(WPA)


Windows Performance Analyzer (WPA) is a tool for evaluating the performance of Windows applications.

Here, we will discuss how to use the tool step by step to evaluate the performance of Windows mobile apps. Performance testing of mobile apps can be done through the API layer, but that does not cover the entire process. We also need to analyze the impact of memory leaks, CPU utilization, and so on in our application. This tool will help us evaluate those parameters.

The following file formats are supported by the tool: .etl, .wpa, .xml, .wpapk, and .zip files.

 

To start with, we will go through the following sequence:

1. How to Deploy a Windows Phone App -> 2. How to Install WPA -> 3. How to Extract Logs -> 4. How to Analyze using WPA

1. How to deploy a Windows Phone application?

There are three ways of doing the installation: 1a. Windows Phone Application Deployment Tool, 1b. Visual Studio, or 1c. Command Prompt.

1a. Using Windows Phone Application Deployment tool.

Install the latest version of the Windows mobile application on an emulator or device using the Windows Phone Application Deployment tool, which comes as part of the Windows Phone SDK.

In the tool, set the target as either the device or one of the listed emulators. Browse to the *.appx/*.xap/*.appxbundle file and click Deploy. The application will be deployed.

1b. Deploy using Visual Studio

Open a Windows Phone project. Select the target device in the toolbar and run the project (F5).

1c. Deploy using the command prompt

To deploy a Windows Phone 8/8.1 app to a device, use the steps below:

  • Navigate to the location: C:\Program Files (x86)\Microsoft SDKs\Windows Phone\v8.0\Tools\XAP Deployment.
  • Use the command: XapDeployCmd.exe /installlaunch <file location>\*.xap /targetdevice:xd

To deploy a Windows Phone 10 app to a device, use the steps below:

  • Navigate to the location: C:\Program Files (x86)\Windows Kits\10\bin\x86\WinAppDeployCmd.exe
  • Use the command: WinAppDeployCmd install -file "<file location>\*.appx" -ip <ip address> -pin A1B2C3

       More information about WinAppDeployCmd can be found at https://msdn.microsoft.com/en-us/library/mt203806.aspx

 

2. How to Install WPA

Download Windows Performance Analyzer, which comes as a package with the Windows ADK.

https://msdn.microsoft.com/en-us/windows/hardware/dn913721(v=vs.8.5).aspx

Windows Performance Analyzer will be available under the path: C:\Program Files (x86)\Windows Kits\10\Windows Performance Toolkit\wpa.exe

3. How to Extract Log Files

  • Open 'Windows Phone Developer Power Tools' from the path below; it comes as part of the Windows Phone SDK:

             C:\Program Files (x86)\Microsoft SDKs\Windows Phone\v8.1\Tools\PowerTools\PwTools.exe

  • Choose the target as either device or the emulator.
  • Click Connect.

Make sure connectivity is established between the device/emulator and the Power Tools. The app should be pre-installed on the emulator or device before the log is captured.

  • Navigate to Performance Recorder.

 

  • Select the profiles for performance recording in the bottom panel, and click 'Start'.
  • In the emulator (target device), open the app which needs to be evaluated. Perform a set of actions, then press Stop in the Performance Recorder. A file in *.etl format is saved.

4. How to Analyze using WPA
  • Open WPA from the path: C:\Program Files (x86)\Windows Kits\10\Windows Performance Toolkit\wpa.exe
  • Go to File -> Open and select a file in one of the supported formats (*.etl/*.xml/*.wpa).

The WPA window has two main panes: one on the left known as Graph Explorer, and another on the right known as Analysis.

 

 

Graph Explorer:

Graph Explorer displays a list of thumbnails of all the graphs for the input file (e.g. System Activity, Computation, Storage, Memory, Power, etc.) in the left pane.

Each thumbnail has subdivisions which are initially collapsed. To view the detailed graph, select the thumbnail and drag it to the Analysis pane.

 

Analysis Tab

 

In the Analysis pane, a portion of Computation -> Utilization by CPU has been taken into analysis. The Analysis pane has options for zooming into the selection, as well as options for configuring the data table columns, filtering the rows, and so on.

Analyze using a default catalog

  • Navigate to 'Profiles'.
  • Choose 'Apply'.
  • Click Browse Catalog.
  • Choose a default catalog profile and click Open.
  • Based on the profile chosen (here I chose applaunch.wpaprofile), the graph will be updated accordingly.

Major parameters used to measure the performance of mobile apps:

  • Measuring the app launch time.
  • Checking the app's responsiveness to UI input.
  • Measuring the maximum memory usage of the app.
  • CPU utilization (directly proportional to battery usage).
  • Out-of-memory issues caused by high resource usage.

 


7/2: Errata added for [MS-SMB2]: Server Message Block (SMB) Protocol Versions 2 and 3

7/2: Errata added for [MS-TDS]: Tabular Data Stream Protocol

7/2: Errata added for [MC-NMF]: .NET Message Framing Protocol

7/2: Errata added for [MS-GPSB]: Group Policy: Security Protocol Extension

Experiencing Data Access Issue in Azure Portal for Many Data Types – 07/02 – Resolved

Final Update: Monday, 02 July 2018 19:55 UTC

Application Insights experienced brief data access issues, and we have confirmed that all systems are back to normal as of 07/02, 17:20 UTC. Our logs show the incident started on 07/02, 17:05 UTC, and that during the 15 minutes it took to resolve the issue, ~10% of customers would have experienced data access issues in the Azure Portal and the Application Analytics Portal.

  • Root Cause: The failure was due to issues in one of the backend services.
  • Incident Timeline:  15 minutes - 07/02, 17:05 UTC through 07/02, 17:20 UTC

We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Sapna


Early technical preview of JDBC 6.5.4 for SQL Server released


We have a new early technical preview of the JDBC Driver for SQL Server. Precompiled binaries are available on GitHub and also on Maven Central.

Below is a summary of the new additions to the project, changes made, and issues fixed.

Added

  • Added new connection property "useBulkCopyForBatchInsert" to enable Bulk Copy API support for batch insert operation #686
  • Added implementation for Java 9 introduced Boundary method APIs on Connection interface #708
  • Added support for "Data Classification Specifications" on fetched resultsets #709
  • Added support for UTF-8 feature extension #722

Fixed Issues

  • Fixed issue with escaping catalog name when retrieving from database metadata #718
  • Fixed issue with tests requiring additional dependencies #729

Changed

  • Made driver default compliant to JDBC 4.2 specifications #711
  • Updated ADAL4J dependency version to 1.6.0 #711
  • Cleaned up socket handling implementation to generalize functionality for different JVMs and simplified the logic for single address case #663

Getting the Preview
The latest bits are available on our GitHub repository and Maven Central.

Add the JDBC preview driver as a dependency in your Maven project by adding the following to your POM file.

Java 8:

<dependency>
    <groupId>com.microsoft.sqlserver</groupId>
    <artifactId>mssql-jdbc</artifactId>
    <version>6.5.4.jre8-preview</version>
</dependency>

Java 10:

<dependency>
    <groupId>com.microsoft.sqlserver</groupId>
    <artifactId>mssql-jdbc</artifactId>
    <version>6.5.4.jre10-preview</version>
</dependency>
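Once the dependency resolves, a minimal connection test is a quick way to verify the preview driver. In this sketch the server, database, and credentials are placeholders, and the URL opts in to the new "useBulkCopyForBatchInsert" connection property listed above:

import java.sql.Connection;
import java.sql.DriverManager;

public class PreviewDriverTest {
    public static void main(String[] args) throws Exception {
        // Placeholder server and credentials -- substitute your own.
        String url = "jdbc:sqlserver://yourserver.database.windows.net:1433;"
                + "databaseName=yourdb;user=youruser;password=yourpassword;"
                + "useBulkCopyForBatchInsert=true;";
        try (Connection con = DriverManager.getConnection(url)) {
            System.out.println("Driver version: "
                    + con.getMetaData().getDriverVersion());
        }
    }
}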

We provide limited support while in preview. Should you run into any issues, please file an issue on our GitHub Issues page.

As always, we welcome contributions of any kind. We appreciate everyone who has taken the time to contribute to the project thus far. For feature requests, please file an issue on the GitHub Issues page to help us track and follow-up directly.

We would also appreciate if you could take this survey to help us continue to improve the JDBC Driver.

Please also check out our tutorials to get started with developing apps in your programming language of choice and SQL Server.

David Engel

About the General Availability (GA) of Azure IoT Edge


This is S.M. from the Azure IoT development support team.

 

On June 27, 2018, Azure IoT Edge became generally available (GA). IoT Edge is a product that brings cloud technologies mainly to devices that cannot connect directly to IoT Hub over the internet. The main updates made alongside GA are as follows.

 

● The IoT Edge source code is now available on GitHub.

 

● The IoT Edge runtime is now supported on the Moby open-source container engine.

*Docker CE and Docker EE can still be used, but since Microsoft cannot provide fixes for defects in those products, Microsoft recommends using Moby. Existing Docker-based modules created during the preview can continue to be used as-is.

 

● A certification program for IoT Edge devices has started. Note that obtaining IoT Edge device certification is not mandatory.

 

● Cognitive Services Custom Vision can now be used on IoT Edge.

 

The details of the GA are summarized in the following post:

https://azure.microsoft.com/en-us/blog/azure-iot-edge-generally-available-for-enterprise-grade-scaled-deployments/

 

A list of operating systems on which the use of Azure IoT Edge is officially supported has also been published, so please check that as well:

 

Azure IoT Edge Support

https://docs.microsoft.com/en-us/azure/iot-edge/support


Let's Try Object Detection in the Custom Vision Service


Microsoft Japan Data Platform Tech Sales Team

Kenta Takahashi

On May 7, 2018, an Object Detection capability was added to the Custom Vision Service in Azure Cognitive Services, so here is an introduction to how to use it. (As of 7/3 it is still in preview.)

What is Object Detection in the Custom Vision Service?

First, an overview of the Custom Vision Service. The Custom Vision Service is one of the Azure Cognitive Services; it lets you build your own image recognition model entirely through a GUI, using images you prepare yourself. Until now, the Custom Vision Service could classify images (for example, classifying an image as a "dog picture"), but when an image contained multiple objects it could not identify the location of each object (for example, "this part of the image is a dog" and "this part is a cat"). With the addition of Object Detection, that kind of object identification is now possible.

How to use Object Detection

Object Detection can easily be used by following these steps:

  1. Create a project
  2. Upload images
  3. Tag the objects in the images
  4. Train the model
  5. (Quick) test the model

1. Create a project

Create a new project from New Project. Since we want to do object detection this time, check "Object Detection (preview)" for the Project Type. Incidentally, checking "Classification" for the Project Type gives you the image classification functionality.


2. Upload images

Next, use "Add images" to upload the images for building the model from your local PC. You need at least 15 images per object (per tag) you want to detect. This time we'll use images of dogs and cats.


3. Tag the objects in the images

After uploading the images, tag the objects in them. Even when multiple objects appear in a single image, you can select each region with the mouse and apply multiple tags.


Repeat this for every uploaded photo. This time I tagged 22 dog images and 16 cat images.

4. Train the model

Once the images are tagged, it's finally time to train the model. Press the green "Train" button and wait from tens of seconds to a few minutes (the time depends on the number of images) for training to complete.


Here you can view the model's performance. Roughly speaking, the higher the percentage, the better the model. You can also click a tag under "Performance Per Tag" to drill down into the detailed results.

5. (Quick) test the model

Before publishing the model or using it from an application, you can test it with the "Quick Test" feature. Press the "Quick Test" button and select an image from your local PC, or from online storage such as Azure Blob storage, to run a test.


I tested two images that each contained both a dog and a cat, and in both cases the model detected both the dog and the cat with quite good accuracy.

If you want to use the model via the API, you can get the information from the "Prediction URL" under "PERFORMANCE".

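As a sketch of what an API call might look like, the following Python example posts an image to the Prediction URL copied from the portal. The URL, key, and file name are placeholders, and the fields read from the response follow the Object Detection prediction schema:

import requests

PREDICTION_URL = "<Prediction URL copied from the PERFORMANCE tab>"
PREDICTION_KEY = "<your Prediction-Key>"

# Send the image bytes and print each detected object with its
# probability and bounding box.
with open("dog_and_cat.jpg", "rb") as f:
    response = requests.post(
        PREDICTION_URL,
        headers={
            "Prediction-Key": PREDICTION_KEY,
            "Content-Type": "application/octet-stream",
        },
        data=f.read(),
    )
response.raise_for_status()
for prediction in response.json()["predictions"]:
    print(prediction["tagName"],
          round(prediction["probability"], 3),
          prediction["boundingBox"])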

In closing

How was that? In addition to the Custom Vision Service introduced here, Azure Cognitive Services offers a variety of services, such as video, text, face, and conversation analysis, all of which are very easy to use, so please give them a try.

Reference information

Custom Vision: https://azure.microsoft.com/ja-jp/services/cognitive-services/custom-vision-service/

Azure Cognitive Services: https://azure.microsoft.com/ja-jp/services/cognitive-services/

Protected: Best Practices for Unified Service Desk Deployment and Upgrade


Minecraft Education Edition: Aquatic Update


written by Microsoft Learning Consultant, Liz Wilson

 

Minecraft Education Edition is an amazing, immersive tool for the classroom and it just keeps getting better and better, this time with the Aquatic update.

 

If you have used Minecraft Education Edition before, you will know how immersive this digital world is. Students can lose themselves in carefully curated story settings, engage with mathematical and scientific models and explore key historical and geographic concepts. They can use CodeConnection to code directly into their world and they can even explore chemical elements. What more could you ask for? The answer lies in the Aquatic Update…

 

So what is this new update?

The Aquatic update for Minecraft Education Edition brings with it a whole host of new features to the worlds we use. There are now new ocean and underwater biomes for our students to explore, and the water has a new look, allowing increased visibility. This will make it far easier for our learners to explore the vast range of new fish, coral and underwater plant life now found beneath the surface. If they explore thoroughly, they may even find shipwrecks and buried treasure!

Here is the full list of new mobs, items and biomes added to Minecraft: Education Edition:

  • Fish (salmon, cod, pufferfish, tropical fish)
  • Bucket of fish
  • Coral (coral, coral fans, coral blocks)
  • Kelp, Dried Kelp, Kelp Block
  • Dolphins (follow boats, get a boost swimming next to them)
  • Icebergs
  • Blue Ice
  • Nine Ocean Biomes including frozen ocean
  • Underwater Ravines & caves
  • Sea grass
  • Sea pickle (with illumination!)
  • Shipwrecks
  • Treasure chests (in ruins, shipwrecks)
  • Ruins
  • Swimming
  • Treasure Map
  • Buried Treasure
  • Heart of the Sea in buried treasure chests (non-active)
  • Trident
  • Trident enchantment
  • Stripped Logs
  • Buttons, Trap Doors, Pressure Plates with variation
  • Carved Pumpkins
  • Floating Items
  • Boat Polish (smoother control)
  • Water has a completely new look and increased visibility while underwater
  • Prismarine Stairs and Slabs
  • New swimming animation while sprinting in water

 

How can I use this in my lessons?

The Aquatic update for Minecraft Education Edition gives educators so many new possibilities for learning activities. There are countless new environments and situations for our students to engage with, be it to inspire storytelling, explore ecosystems or work on global issues such as plastic pollution. If you aren’t sure where to start, check out the 17 aquatic activities from education.minecraft.net.

 

 

Are there any pre-made worlds I can use?

YES! Head over to education.minecraft.net where you will find 5 brand new worlds that you can download and use with your students. These are all focused on making the most of the new aquatic update and supporting the curriculum.

 

How do I get it?

All Windows 10 users will receive the update automatically when they next log in. If you use macOS, you will have to reinstall Minecraft: Education Edition to access the updated version. Download the Update Aquatic here.

 

 

 

 

New to Minecraft? Get trained on the Microsoft Educator Community by completing the ‘My Minecraft Journey’ course and earn a new badge!


How to create a new kubernetes service endpoint for AKS?


In this post, I will talk about the steps you should perform to create a Kubernetes service endpoint for an existing AKS cluster.

1. Go to the endpoints UI -> New Service endpoint -> Kubernetes.

2. A dialog opens where you need to fill in the cluster URL, the KubeConfig file, etc.

3. Since the cluster is in AKS, you can use the Azure CLI to get the KubeConfig file; and since the KubeConfig file contains the cluster URL as well, we will get that too.

4. Use the get-credentials command in the az CLI to get the KubeConfig file. The exact syntax is as follows, where the resource group and cluster names are placeholders:

az aks get-credentials --resource-group <resource-group-name> --name <cluster-name>

The above command will output the path to the KubeConfig file.

5. Open the KubeConfig file from this path and get the cluster URL.
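For reference, a KubeConfig file is YAML, and the cluster URL is the server field; the fragment below is illustrative, with placeholder values:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <base64-encoded CA certificate>
    server: https://mycluster-abc123.hcp.eastus.azmk8s.io:443
  name: mycluster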

6. Copy and paste the complete content of the KubeConfig file into the KubeConfig file parameter in the endpoint UI.

5. Enable "Accept Untrusted Certificates", verify connection and press OK. The dialog should look similar to this.

Enjoy !!
