Channel: MSDN Blogs

WordPress Migration: Easy as A-B-C, 1-2-3


WordPress Migration Components
Fig 1. WordPress Migration Components


There are 3 steps to migrating a WordPress website to Azure App Service Web Apps.

  1. Copy WordPress files
  2. Migrate the MySQL Database
  3. Configure WordPress

 

Step 1. Copy WordPress files

Make a backup of the current WordPress website. Commonly used tools for this are FTP clients such as FileZilla or WinSCP.

Sample WordPress Installation Folder:

Again, use FTP software or Kudu (http://<webappname>.scm.azurewebsites.net) to upload the files to your web app.
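If you prefer the command line, Kudu's zip API can push the whole folder in one shot. A hedged sketch — the web app name and deployment credentials below are placeholders for your own values:

```shell
# Zip the local WordPress folder, then push it to wwwroot through Kudu's
# zip API. Replace "webappname" and the deployment password with your own.
zip -r wordpress.zip . -x "*.git*"

curl -X PUT \
     -u '$webappname:your-deployment-password' \
     --data-binary @wordpress.zip \
     "https://webappname.scm.azurewebsites.net/api/zip/site/wwwroot/"
```

Kudu extracts the uploaded zip into the target path, so the files land directly under site/wwwroot.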

Step 2. Migrate the MySQL Database

If the DB is visible externally, you can use tools such as WP Buddy+ to migrate the MySQL contents. Other popular methods are MySQL Workbench, PHPMyAdmin or command-line in Kudu.
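With command-line access on both sides, a mysqldump round trip is one way to do it. A hedged sketch — the host names, user names and database name are placeholders:

```shell
# Export the source database, then import it into the target MySQL server.
# All host/user/database names below are placeholders.
mysqldump -h old-host.example.com -u olduser -p wordpress_db > wordpress_db.sql

mysql -h new-mysql-host.example.com -u newuser -p wordpress_db < wordpress_db.sql
```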

Step 3. Configure WordPress

Now that the content and database have been migrated, the next step is to configure WordPress to talk to the new database. This is done in wp-config.php. Open this file and ensure the appropriate credentials are being used to communicate with the MySQL database. Once the DB is working, use tools such as WordPress Buddy+ to update the HOME and SITE_URL values. More on this here (see WordPress Tools).
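If you prefer raw SQL over a plugin, the HOME and SITE_URL values live in the wp_options table. A hedged sketch, assuming the default "wp_" table prefix and a placeholder URL:

```sql
-- Assumes the default "wp_" table prefix; the URL below is a placeholder.
UPDATE wp_options
SET option_value = 'https://newsite.azurewebsites.net'
WHERE option_name IN ('home', 'siteurl');
```

Note that URLs embedded in post content and serialized data are not covered by this; that is the part migration tools handle for you.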

MySQL In-App Sample


SSL Certificate Disappears After Binding into IIS site


Just the other day I was engaged with an enterprise customer who was experiencing SSL certificate déjà vu: the certificate kept disappearing from the IIS site binding. It was interesting for sure, and hence this blog post!

First things first, what’s the error message here? An event similar to the following is logged in the System event log:

Log Name:      System
Source:          Microsoft-Windows-HttpEvent
Date:              3/25/2017 5:33:23 PM
Event ID:         15300
Task Category: None
Level:             Warning
Keywords:      Classic
User:               N/A
Computer:      IISServer
Description:
SSL Certificate Settings deleted for Port : x.x.x.x:443

The error description matches the symptom, so it’s a good start. The problem occurs because a legacy SSL certificate hash property interferes with the current SSL binding, resulting in the correct binding being deleted.

The fix was rather simple, if you know where to look. We located the following property in the applicationHost.config file and deleted it:

<key path="LM/W3SVC/YourSiteName">
<property id="5506" dataType="Binary" userType="1" attributes="None" value="AJKFOIEURKJEJNOIAUFJDJF=" />
</key>

An iisreset is not strictly necessary after the above change, but it is recommended.

[SQLAudit] How to view more than 1000 records in the Audit Log Viewer


Recently a customer asked whether it was possible to increase the number of records returned by the Audit Log Viewer. By default this screen only returns the 1000 most recent records.

It is possible to increase this number; however, the option is somewhat “hidden” in Management Studio. Also keep in mind that if you set a very high number, it may impact the performance of this screen.

To increase the number of rows returned, go to the menu Tools > Options > “SQL Server Object Explorer” > “Audit Log Viewer Options”:

auditlogvieweroptions

Another approach to retrieving audit data is to use the system function sys.fn_get_audit_file. In the command below, we read all the audit files under 'E:\LogAuditoria', retrieving all events that occurred on 2017-04-26:

SELECT *
FROM sys.fn_get_audit_file ('E:\LogAuditoria\*', default, default)
WHERE event_time BETWEEN '2017-04-26 00:00:00' AND '2017-04-26 23:59:59';
GO

Silas

 

The code and techniques described in this blog are presented to the reader ‘as is’, without any warranty, whether express or implied, of its accuracy, completeness, fitness for a particular purpose, title or non-infringement, and none of the third-party products or information mentioned in the work are authored, recommended, supported or guaranteed by any of the authors or Microsoft. Further, the authors shall not be liable for any damages you may sustain by using this information, whether direct, indirect, special, incidental or consequential, even if it has been advised of the possibility of such damages. Your use of the information contained in these pages, is at your sole risk.

SCOM Printer Monitoring Management Pack Supplement (Updated: Windows Server 2003, 2008, 2012, 2016)


 

Back in 2010 Steve Rachui posted a fantastic management pack which added support for monitoring Windows 2003 and 2008 print servers. The original post is located here so you can read all about the functionality:

https://blogs.msdn.microsoft.com/steverac/2010/12/04/focusing-on-the-printer-printer-monitoring-mp-supplement/

 

In summary:

Printer MP Supplement
“The focus for building out this MP supplement was to offer individualized printer monitoring.  Further, monitoring should include the ability to detect the 12 error conditions available from the printer MP in the catalog and each error condition should be configurable as to whether or not it is enabled and, if so, how long the error condition should persist before considering it to be a problem.  Let’s walk through the MP.” –Steve

 

Introducing the “Tasty Printer Monitoring” management pack. I made a few changes and improvements. I streamlined the discovery scripts for the Printer and Print Server (role and server) classes. There are now only 3 scripts (1 for printers, 1 for print server roles, 1 for print servers)  instead of an individual script for each OS version role and each OS version server. I cleaned up a few little things and renamed the MP so this is not an “update” to the original management pack. This is technically a different pack. You would not want to use both this pack and the original. If this new one meets your needs then you should remove the older pack. All of the Printer Role discoveries are enabled by default (2003, 2008, 2012, 2016).

 

Here are a few screenshots from my lab. I don’t have many printers so I had to manipulate my lab for these screenshots so that even the non-shared printers were discovered. THIS IS ONLY FOR THESE EXAMPLE SCREENSHOTS. Only your shared printers will be discovered.

 

AllPrinterServersState1

 

This is just an example of my lab where I enabled the discovery of the non-shared printers for demonstration purposes only.

Normally your standard printer types would not be discovered (“Microsoft Print to PDF”, “Microsoft XPS Document Writer”, “Fax”, etc.)

AllPrintersState1

 

DOWNLOAD

UWP: Working with Bluetooth devices (part 1)


Code: Chapter26_BluetoothData on https://github.com/sbaidachni/Win10BookSamples

The Universal Windows Platform supports a really great Bluetooth API that allows us to build applications that connect to various kinds of wireless devices. Your application can support Bluetooth LE (Low Energy, the 4.x version) or even previous versions of the protocol, and I am going to cover how to use all of them, but let's start with Bluetooth LE.

Bluetooth Low Energy

In order to build a couple of samples, I would recommend using a BLE development board. The board should contain several sensors and a preinstalled example. In my case I have an evaluation board from STMicroelectronics (STEVAL-IDB007V1) based on the BlueNRG-1 Bluetooth chip. This board contains a 3D digital accelerometer, a gyroscope (the preinstalled example doesn't return data from the gyroscope by design) and a pressure sensor with an embedded temperature sensor. We will be able to use all of these in our application.

clip_image002

In fact, you can use any BLE device, like a smart bulb that you can buy at Home Depot, but prior to buying anything you need to make sure that there is a document that describes all its services and characteristics. Of course, using the UWP API you will be able to read all available characteristics, but in the case of “no name” devices it's really hard to understand how to interpret the incoming data.

The Universal Windows Platform contains five namespaces with the Bluetooth keyword, including Windows.Devices.Bluetooth. This namespace contains the BluetoothLEDevice class that will help us get information about all available services and characteristics. But before creating any instance of this class, we need to implement an interface that allows a user to select our device from a list. Of course, we could list all devices in code and connect to any of them without any interaction with the user, but it's always better to get confirmation from the user.

Therefore, I am going to implement a UI that uses some criteria to list all available devices and allows users to select one from the list. We can do it using the Windows.Devices.Enumeration namespace. In general, the namespace doesn't contain any Bluetooth-related classes. Instead, there are several universal classes that allow us to list anything that is connected to our computer/phone. This namespace even provides the DevicePicker and DevicePickerFilter classes that can simplify our work thanks to embedded dialogs. Using these classes is a trivial task:

DevicePicker picker = new DevicePicker();
picker.Filter.SupportedDeviceSelectors.Add(
    BluetoothLEDevice.GetDeviceSelectorFromPairingState(false));
picker.Filter.SupportedDeviceSelectors.Add(
    BluetoothLEDevice.GetDeviceSelectorFromPairingState(true));
picker.Show(new Rect(x, y, width, height));

The code above shows the following dialog, which displays all paired and non-paired Bluetooth Low Energy devices:

clip_image004

The dialog supports a DeviceSelected event that helps us understand which device is selected. Alternatively, you can use the PickSingleDeviceAsync method to avoid any event handlers. You can see that DevicePicker can display any devices, so in order to display just BLE devices we need to set up a filter using an Advanced Query Syntax (AQS) string. AQS strings can be a little complex to set up; that's why we used the BluetoothLEDevice class to get access to some predefined strings. Using the GetDeviceSelectorFromPairingState method we can return all BLE devices that are paired or non-paired. Adding both strings to the filter, we can select all available BLE devices.

In my examples, I am not going to use the embedded dialogs. Instead, I decided to build my own interface. In order to implement it, I need to create a page that displays a list of all devices in my own way. Using the DeviceInformation class I can find all BLE devices with the FindAllAsync method, but this approach is not the best one, because I have to assume that users can activate new Bluetooth devices on the fly. So, we have to watch for changes all the time rather than take a snapshot, which means we cannot use just the FindAllAsync method of the DeviceInformation class. Instead, we will create a watcher that notifies us about any changes in the list of available devices. To do that, we can use the DeviceWatcher class, which is available in the Windows.Devices.Enumeration namespace. Let's look at the following code:

DeviceWatcher deviceWatcher;
protected async override void OnNavigatedTo(NavigationEventArgs e)
{
    deviceWatcher = DeviceInformation.CreateWatcher(
        "System.ItemNameDisplay:~~\"BlueNRG\"",
        new string[] {
            "System.Devices.Aep.DeviceAddress",
            "System.Devices.Aep.IsConnected" },
        DeviceInformationKind.AssociationEndpoint);
    deviceWatcher.Added += DeviceWatcher_Added;
    deviceWatcher.Removed += DeviceWatcher_Removed;
    deviceWatcher.Start();
    base.OnNavigatedTo(e);
}

You can see that in order to create the watcher, we used the CreateWatcher static method of the DeviceInformation class. This method accepts several parameters. With the first one we can provide a filter that lists just the needed devices. It's the same Advanced Query Syntax string that we used before, but in this case we created it from scratch rather than using a predefined one. It's a good idea to show different approaches, so I used a filter that finds exactly the devices that contain the BlueNRG string in their names. To implement this filter I used the System.ItemNameDisplay property. Because my dev kit advertises itself as BlueNRG by default, my application will show just my device. Of course, we should not hardcode any names, and it's better to use a less restrictive filter like we did in the first block of code. If you want to find more information about possible AQS properties, you can visit this page.

The second parameter is a set of properties that should be available for a device. In this case we requested the DeviceAddress and IsConnected properties, but I used them just for the demo. I assume that you will not be able to find many different devices with the BlueNRG name around, so you can simply remove this parameter.

Finally, we have to pass the DeviceInformationKind flag. In our case it should be AssociationEndpoint that is true for all Bluetooth LE devices that advertise their interface.

Once we create the watcher, we need to assign event handlers to the Added and Removed events and start the watcher. Pay attention that if you forget to assign a Removed event handler, the watcher will not work. I am not sure why, but once I assigned an empty event handler, the watcher started without any problem. Here is my code:

protected override void OnNavigatedFrom(NavigationEventArgs e)
{
    deviceWatcher.Stop();
    base.OnNavigatedFrom(e);
}
private void DeviceWatcher_Removed(DeviceWatcher sender, DeviceInformationUpdate args)
{
    //throw new NotImplementedException();
}
private async void DeviceWatcher_Added(DeviceWatcher sender, DeviceInformation args)
{
    var device = await BluetoothLEDevice.FromIdAsync(args.Id);
    var services=await device.GetGattServicesAsync();
    foreach(var service in services.Services)
    {
        Debug.WriteLine($"Service: {service.Uuid}");
        var characteristics=await service.GetCharacteristicsAsync();
        foreach (var character in characteristics.Characteristics)
        {
            Debug.WriteLine($"Characteristic: {character.Uuid}");
        }
    }
}

Now, you can run the code above and it will start looking for a BlueNRG device and once it’s available, it will print all supported services and characteristics to the Output window. I got the following data for my device:

Service: 00001801-0000-1000-8000-00805f9b34fb
Characteristic: 00002a05-0000-1000-8000-00805f9b34fb
Service: 00001800-0000-1000-8000-00805f9b34fb
Characteristic: 00002a00-0000-1000-8000-00805f9b34fb
Characteristic: 00002a01-0000-1000-8000-00805f9b34fb
Characteristic: 00002a04-0000-1000-8000-00805f9b34fb
Service: 02366e80-cf3a-11e1-9ab4-0002a5d5c51b
Characteristic: e23e78a0-cf4a-11e1-8ffc-0002a5d5c51b
Characteristic: 340a1b80-cf4b-11e1-ac36-0002a5d5c51b
Service: 42821a40-e477-11e2-82d0-0002a5d5c51b
Characteristic: a32e5520-e477-11e2-a9e3-0002a5d5c51b
Characteristic: cd20c480-e48b-11e2-840b-0002a5d5c51b

You can see that the device supports four services, and each of them contains one to three characteristics. In order to understand what they are, we need to visit the STMicroelectronics web-site (st.com) and find a document that describes all these things. The document is called UM2071: BlueNRG-1 development kit. If you type this name in the Search box and open the Resource tab, you will be able to find it. This document contains a lot of information, but we need just the table with characteristics (Figure 20):

clip_image006

Looking at this table, we can note the service UUIDs for Acceleration, Temperature and Pressure. That is exactly what I am going to display in my application. Potentially we could check Free Fall as well, but I have just one board, and I will try to avoid any scary experiments.

Pay attention that the Bluetooth LE standard defines some standard profiles that can help you recognize standard services. Profiles are what the operating system uses to connect a Bluetooth mouse or stream video/audio between different devices (not BLE, but the idea is the same). In some cases you will need to rely on standard profiles, and in some cases you can create your own. Looking at the table above, you can see that our development board implements the standard attribute and generic access profiles. Thanks to them, Windows can read some information about the device, like the device name. All the other services implement custom profiles. That's fine if your device is not going to support any standard feature, but it can be a problem if you want to connect low-cost Bluetooth devices to your application. I found that lots of devices from “no name” companies implement their own custom profiles, and if you want to connect your phone or tablet to these devices you have to download an application. But once you want to create your own application, it's really hard to find out how the device works.

Ok. Now we know where to find information about supported services and characteristics, but we still haven't finished our main page. So, let's do it.

In order to store information about all available devices I will use a collection that will store references to DeviceInformation objects:

ObservableCollection<DeviceInformation> deviceList =
    new ObservableCollection<DeviceInformation>();

The collection should be observable because we are going to bind it to our interface and it should track any changes dynamically.

Now, we can modify our event handlers:

private async void DeviceWatcher_Removed(DeviceWatcher sender, DeviceInformationUpdate args)
{
    var toRemove = (from a in deviceList where a.Id == args.Id select a).FirstOrDefault();
    if (toRemove != null)
        await this.Dispatcher.RunAsync(
            Windows.UI.Core.CoreDispatcherPriority.Normal,
            () => { deviceList.Remove(toRemove); });
}
private async void DeviceWatcher_Added(DeviceWatcher sender, DeviceInformation args)
{
    await this.Dispatcher.RunAsync(
        Windows.UI.Core.CoreDispatcherPriority.Normal,
        () => { deviceList.Add(args); });
}

You can see that we use the handlers to track changes in the list. Pay attention that both handlers run on a non-UI thread, so we have to invoke the dispatcher to modify our collection.

Finally, we can build the UI. It will be primarily a ListView that displays the Id, Name and Pairing properties:

<ListView Grid.Row="2" Name="deviceListView" ItemsSource="{Binding}" IsItemClickEnabled="True" ItemClick="deviceListView_ItemClick" HorizontalAlignment="Center">
    <ListView.ItemTemplate>
        <DataTemplate>
            <Grid Margin="10">
                <Grid.ColumnDefinitions>
                    <ColumnDefinition Width="100"></ColumnDefinition>
                    <ColumnDefinition Width="Auto"></ColumnDefinition>
                </Grid.ColumnDefinitions>
                <Grid.RowDefinitions>
                    <RowDefinition></RowDefinition>
                    <RowDefinition></RowDefinition>
                    <RowDefinition></RowDefinition>
                </Grid.RowDefinitions>
                <Image Source="Assets/stlogo.png" Margin="10" Grid.RowSpan="3"></Image>
                <TextBlock Grid.Column="1" Grid.Row="0" Text="{Binding Id}"></TextBlock>
                <TextBlock Grid.Column="1" Grid.Row="1" Text="{Binding Name}"></TextBlock>
                <StackPanel Grid.Column="1" Grid.Row="2" Orientation="Horizontal">
                    <TextBlock Text="Can be paired: "></TextBlock>
                    <TextBlock Text="{Binding Pairing, Converter={StaticResource pairingConv}, Mode=OneWay}"></TextBlock>
                </StackPanel>
            </Grid>
        </DataTemplate>
    </ListView.ItemTemplate>
</ListView>

To run this code, you need to add an empty event handler for ItemClick. We will implement it later. Additionally, you can use OnNavigatedTo to set DataContext property and activate binding:

this.DataContext = deviceList;

Running this code, we will see the following window:

clip_image008

Pay attention that our device is still not paired, but we do not have to pair it in order to start reading data from it. This feature has been available since the Windows 10 Creators Update. All previous UWP versions required pairing before reading any data.

Ok. Now we can implement the ItemClick event handler; once a user selects the device, we navigate the application to the second page:

private void deviceListView_ItemClick(object sender, ItemClickEventArgs e)
{
    this.Frame.Navigate(typeof(DevicePage), e.ClickedItem);
}

In order to simplify work with all sensors I created a base class that implements all needed features to read values from characteristics:

public class SensorBase:IDisposable
{
    protected GattDeviceService deviceService;
    protected string sensorDataUuid;
    protected byte[] data;
    protected bool isNotificationSupported = false;
    private GattCharacteristic dataCharacteristic;
    public SensorBase(GattDeviceService dataService, string sensorDataUuid)
    {
        this.deviceService = dataService;
        this.sensorDataUuid = sensorDataUuid;
    }
    public virtual async Task EnableNotifications()
    {
        isNotificationSupported = true;
        dataCharacteristic = (await deviceService.GetCharacteristicsForUuidAsync(
            new Guid(sensorDataUuid))).Characteristics[0];
        dataCharacteristic.ValueChanged += dataCharacteristic_ValueChanged;
        GattCommunicationStatus status =
            await dataCharacteristic.WriteClientCharacteristicConfigurationDescriptorAsync(
                GattClientCharacteristicConfigurationDescriptorValue.Notify);
    }
    public virtual async Task DisableNotifications()
    {
        isNotificationSupported = false;
        dataCharacteristic = (await deviceService.GetCharacteristicsForUuidAsync(
            new Guid(sensorDataUuid))).Characteristics[0];
        dataCharacteristic.ValueChanged -= dataCharacteristic_ValueChanged;
        GattCommunicationStatus status =
            await dataCharacteristic.WriteClientCharacteristicConfigurationDescriptorAsync(
            GattClientCharacteristicConfigurationDescriptorValue.None);
    }
    protected async Task<byte[]> ReadValue()
    {
        if (!isNotificationSupported)
        {
            if (dataCharacteristic == null)
                dataCharacteristic = (await deviceService.GetCharacteristicsForUuidAsync(
                    new Guid(sensorDataUuid))).Characteristics[0];
            GattReadResult readResult =
                await dataCharacteristic.ReadValueAsync(BluetoothCacheMode.Uncached);
            data = new byte[readResult.Value.Length]; 
            DataReader.FromBuffer(readResult.Value).ReadBytes(data); 
        }
        return data;
    }
    private void dataCharacteristic_ValueChanged(GattCharacteristic sender,
        GattValueChangedEventArgs args)
    {
        data = new byte[args.CharacteristicValue.Length];
        DataReader.FromBuffer(args.CharacteristicValue).ReadBytes(data);
    }
    public async void Dispose()
    {
        await DisableNotifications();
    }
}

In general, the most important method in this class is ReadValue, but BLE also supports a notification mechanism. From the GATT table, we can see that our board supports at least one sensor that allows us to use notifications to receive updated data: the accelerometer. Of course, I could read data from the accelerometer using just the ReadValueAsync method, but I wanted to show how to use notifications. It's not always true that your device will send updated data several times per second, and in that case notifications allow us to save some time by avoiding unneeded queries. You can see that in order to enable notifications for a characteristic we simply have to write a special descriptor using the WriteClientCharacteristicConfigurationDescriptorAsync method. Once the descriptor is assigned, we will be able to receive messages with new values for the characteristic.

Using our base class, it’s not a problem to create three more classes that describe our sensors. Here is an example for the temperature sensor:

public class TemperatureSensor: SensorBase
{
    public TemperatureSensor(GattDeviceService dataService) :
        base(dataService, SensorUUIDs.UUID_ENV_TEMP) { }
    public async Task<double> GetTemperature()
    {
        byte[] data = await ReadValue();
        return ((double)BitConverter.ToInt16(data,0))/10;
    }
}

Pay attention that by default our sensors don't use the notification mechanism. So, if you want to enable notifications for the accelerometer, you need to call the EnableNotifications method:

protected async void InitializeAccelerationSensor(GattDeviceService service)
{
    accSensor = new AccelerationSensor(service);
    await accSensor.EnableNotifications();
}

Finally, we can create a view model and bind it to our interface. I am not going to discuss all aspects of the view model, but I want to concentrate your attention on a couple of methods. The first one is the method that initializes everything:

public async void StartReceivingData()
{
    leDevice = await BluetoothLEDevice.FromIdAsync(device.Id);
    string selector = "(System.DeviceInterface.Bluetooth.DeviceAddress:=\"" +
         leDevice.BluetoothAddress.ToString("X") + "\")";
    var services = await leDevice.GetGattServicesAsync();
    foreach (var service in services.Services)
    { 
        switch (service.Uuid.ToString())
        {
        case SensorUUIDs.UUID_ENV_SERV:
            InitializeTemperatureSensor(service); 
            InitializePressureSensor(service); 
            break;
        case SensorUUIDs.UUID_ACC_SERV:
            InitializeAccelerationSensor(service);
            break;
        }
    }
    timer.Interval = new TimeSpan(0, 0, 1);
    timer.Tick += Timer_Tick;
    timer.Start();
}

You can see that we simply list all available services, and once we find a needed service we initialize the appropriate sensor. This code should work even if you select a wrong device from the list of all BLE devices; the user interface will simply display nothing. Because two of our characteristics don't support notifications, we will simply use a timer to update the UI once per second.

The second method is the Tick implementation. It simply reads data from all available sensors (characteristics) and uses INotifyPropertyChanged to update the bindings:

public async void UpdateAllData()
{
    if (tempSensor != null)
    {
        temperature = String.Format(
            $"{(await tempSensor.GetTemperature())} Celsius");
        if (PropertyChanged != null)
            PropertyChanged(this,
                new PropertyChangedEventArgs("Temperature"));
    } 
    if (presSensor != null)
    {
        pressure = String.Format(
            $"{(await presSensor.GetPressure()).ToString()} milliBar");
        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs("Pressure"));
    }
    if (accSensor != null)
    {
        var data = await accSensor.GetAcceleration();
        angleX = data[0];
        angleY = data[1];
        angleZ = data[2];
        if (PropertyChanged != null)
        {
            PropertyChanged(this, new PropertyChangedEventArgs("AngleX"));
            PropertyChanged(this, new PropertyChangedEventArgs("AngleY"));
            PropertyChanged(this, new PropertyChangedEventArgs("AngleZ"));
        }
    }
}

Running our code, we will be able to see two tabs. On the first tab (Acceleration), you will be able to see data from the accelerometer:

clip_image010

The second tab provides information about temperature and pressure:

clip_image012

Now you know how to connect your application to any BLE device and get data from it. At the same time, we didn't pair our device. Next, let's see how to use the Bluetooth API to implement pairing and get some benefits from it.

SCOM Operations Manager Shell PowerShell Error When Launched as Administrator


Sometimes when the Operations Manager Shell is opened as an Administrator you will see the error in the screenshot below. I believe this has to do with how User Account Control (UAC) is configured: the “Start in” parameter on the shortcut is ignored, so the shortcut is unable to locate the startup scripts.

shortcut

 

.\OperationsManagerFunctions.ps1 : The term '.\OperationsManagerFunctions.ps1' is not recognized as the name of a
cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify
that the path is correct and try again.
At line:1 char:34
+ … t-Module OperationsManager; .\OperationsManagerFunctions.ps1; .\Oper …
+                                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo          : ObjectNotFound: (.\OperationsManagerFunctions.ps1:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException

.\OperationsManagerStartup.ps1 : The term '.\OperationsManagerStartup.ps1' is not recognized as the name of a
cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify
that the path is correct and try again.
At line:1 char:69
+ … r; .\OperationsManagerFunctions.ps1; .\OperationsManagerStartup.ps1
+                                           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo          : ObjectNotFound: (.\OperationsManagerStartup.ps1:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException

PS C:\Windows\system32>

LaunchErrors

 

Here’s one easy workaround to fix the problem: simply run the line of code below in ANY elevated PowerShell console (as Administrator). This adds a very simple command to your global profile that sets the location of the prompt so that the OpsMan startup scripts can be located by the shortcut. You only need to run this PowerShell code once on each server where the Console is installed.

$CMD = '$SCOMInstallPath = (Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Setup" -Name "InstallDirectory").InstallDirectory; Set-Location "$SCOMInstallPath..\Powershell"'
Add-Content -Path $profile.AllUsersAllHosts -Value $CMD -Verbose -Force

 

Simply copy and paste into an elevated console:
SetProfile

If your Opsman shell is open, close it. Reopen it as Administrator. Problem solved.

Shell_Is_Happy

 

To view your profile paths use this command:

$profile | Select-Object -Property *

  

Azure Billing REST API sample postman request


The Azure Billing REST APIs help you predict and manage Azure costs. The following APIs are available to query billing data.

1) Resource Usage (Preview)

Get consumption data for an Azure subscription- https://msdn.microsoft.com/en-us/library/azure/mt219001.aspx

Sample working query:
https://management.azure.com/subscriptions/your-subscription-id/providers/Microsoft.Commerce/UsageAggregates?api-version=2015-06-01-preview&reportedStartTime=2014-05-01T00%3a00%3a00%2b00%3a00&reportedEndTime=2017-04-01T00%3a00%3a00%2b00%3a00&aggregationGranularity=Daily&showDetails=false

image

2) Resource RateCard (Preview)

Get price and metadata information for resources used in an Azure subscription- https://msdn.microsoft.com/en-us/library/azure/mt219004.aspx

Sample working query:
https://management.azure.com/subscriptions/your-subscription-id/providers/Microsoft.Commerce/RateCard?api-version=2016-08-31-preview&$filter=OfferDurableId eq 'MS-AZR-0063P' and Currency eq 'USD' and Locale eq 'en-US' and RegionInfo eq 'US'

image

 

PS: This assumes you have a valid bearer token for authorization.

https://docs.microsoft.com/en-us/rest/api/index?redirectedfrom=MSDN#send-the-request
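For reference, a query like the Usage Aggregates call above can also be issued from the command line once you have a token. A sketch with placeholder values for the subscription ID and bearer token:

```shell
# Placeholder values: substitute your own subscription ID and bearer token.
TOKEN="your-bearer-token"
SUB="your-subscription-id"

curl -H "Authorization: Bearer $TOKEN" \
     "https://management.azure.com/subscriptions/$SUB/providers/Microsoft.Commerce/UsageAggregates?api-version=2015-06-01-preview&reportedStartTime=2017-03-01T00%3a00%3a00%2b00%3a00&reportedEndTime=2017-04-01T00%3a00%3a00%2b00%3a00&aggregationGranularity=Daily&showDetails=false"
```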

Continuously publish your PowerShell module to PowerShell Gallery by using GitHub and VSTS


PowerShell rocks!

I know that we all write PowerShell modules every day to automate the world, but do you publish them to the PowerShell Gallery so that others can take advantage of your work?
To optimize the workload, I automated the process by using a Visual Studio Team Services (VSTS) extension.

What is PowerShell Gallery?

In case you don’t know about the PowerShell Gallery, it’s a central repository for PowerShell modules, hosted by Microsoft. If you are using PowerShell v5.0 or PowerShellGet, you can simply use Find-Module and Install-Module to find and get modules from the PowerShell window, so you no longer need to open a web browser to find the modules you want.
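For example (the module name here is just an illustration):

```powershell
# Search the Gallery, then install for the current user only
Find-Module -Name Pester
Install-Module -Name Pester -Scope CurrentUser
```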

To publish a module manually, you can use the Publish-Module cmdlet, which requires an API key. So please register yourself first at https://www.powershellgallery.com/

Prerequisites

You need an API key to publish modules.

1. Sign in to https://www.powershellgallery.com/

2. Click your account on right top of the page.

3. Obtain API key in Credentials section.

Manual Publish

Use the Publish-Module cmdlet: https://msdn.microsoft.com/en-us/powershell/reference/5.1/powershellget/publish-module
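A minimal manual publish might look like this; the module path is a placeholder and $apiKey holds the key from your Gallery account:

```powershell
# Publish the module folder .\MyModule with your Gallery API key
Publish-Module -Path .\MyModule -NuGetApiKey $apiKey
```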

Automate the publish

No, I don’t want to publish the module manually every time. So in this article, I explain how you can use GitHub to host your source code and use VSTS to publish the module to the Gallery in DevOps fashion. Oh, you don’t have a VSTS account? Sign up at https://www.visualstudio.com/team-services/

Source management

Create a repository on GitHub, and check in the psm1 and psd1 files as well as any dependencies. If you prefer to use VSTS as your source repo, go ahead and use it <3

Create VSTS Project and Build Definition

1. Login to VSTS, and click [New Project].

2. Enter any name and click [Create].

image

3. Go to [Build & Release]

image

4.Click [New definition].

image

5. As we don’t have appropriate build template, simply select [Empty] and hit [Next].

image

6. Select GitHub as your repo, check [Continuous integration] and click [Create].

image

7. Click [Repository]. It is in red because you don’t have a connection to the GitHub repo yet.

image

8. Click [Manage].

image

9. You will be navigated to manage page. Click [New Service Endpoint] | [GitHub]

image

10. Enter any connection name and click [Authorize]. Sign in to your GitHub account.

image

11. Click [OK] to close the dialog, and go back to previous tab.

12. Click refresh icon next to [Manage] and you can select the connection you just added.

image

13. Select your repository.

image

14. Click Build tab and [Add build step].

image

15. Click [Add] for [Copy and Publish Build Artifacts]. This task simply copies files from repo to temporary place.

image

16. Click more menu (…) in Copy Root and select the file path in GitHub repo.

image

image

17. Specify files in [Contents]. You can use wildcard such as *.psm1, *.psd1.

18. Specify your module name at [Artifact Name].

19. Select [Server] for [Artifact Type]

image

20. Save the definition.

image

Test the build

1. Click [Queue new build].

image

2. Click [OK].

image

3. Verify if the build completed with success.

image

4. Click [Artifact] in build result to get what has been copied over.

image

Create Release Definition

Now, let’s create release definition to publish the module.

1. Click [New definition] in Release tab.

image

2. Select [Empty] and click [Next]

image

3. Select [Build] and Project which you created above. Check [Continuous deployment] and hit [Create].

image

4. Click [Add tasks].

image

5. As we don’t have appropriate task for PowerShell publish, click marketplace link.

image

6. Search extension by [PowerShell] and click [PowerShell Gallery Publisher].

image

7. Click [Install].

image

8. Select your account and click [Continue].

image

9. Go back to the release definition and add the installed task.

image

10. Enter API Key and select Module Folder by clicking more menu (…).

image

image

11. Modify the name by clicking pencil icon next to its name.

image

Test the release

1. Click [Release] | [Create Release].

image

2. Simply click [Create]

3. Select the release definition you just created in left pane. Then you can find Release which you added right now.

image

4. When I tested, I saw the following error, which indicates that I already have the same version of the module published:

2017-04-18T11:52:18.2341643Z ##[error]System.InvalidOperationException: Module ‘Microsoft.PowerBI.PowerShell’ with version ‘1.2’ cannot be published. The version must exceed the current version ‘1.2’ that exists in the repository ‘https://www.powershellgallery.com/api/v2/’.

image

Run through

To test the whole process, simply update your code and check it in to the GitHub repo.

 

Links

VSTS PowerShell Module Publisher Add-in
https://marketplace.visualstudio.com/items?itemName=kenakamu.PSGalleryPublisher

Source: https://github.com/kenakamu/vsts-tasks/tree/master/Tasks/PSGalleryPublisher

Ken


Create a security system with cameras on your Windows IOT Raspberry Pi


Last time I wrote about getting started using a Raspberry Pi and Windows Internet Of Things. This time I played around with adding USB Web cameras to the Raspberry Pi.

The code below dynamically checks for cameras as they get connected to and disconnected from the device. It runs on Windows IoT and, being a Universal Windows app, also works on a PC, Windows Phone, or Xbox.

The code has a timer that checks for cameras and takes a picture every tick, defaulting to 7 seconds. If there are multiple cameras, it will alternate cameras. I’ve had a Pi going with 4 USB cameras and a touchscreen display from a single 5V power supply, continuously taking pictures for weeks.

I noticed after a while that the clock would drift. After many hours of taking pictures, the time might be off by several minutes. A little research showed that the Pi does not have a Real Time Clock (the electronic equivalent of a wrist watch). Because it can easily connect to a network, I added some code I found to look up the current time on the network. This code conditionally runs on the Pi; on other devices, it just uses the host OS time function.

I could use the Windows IoT Remote Client to see the camera images from any machine on the network. So I could place the cameras in an area I want to secure, and view the pictures remotely.

Several different cameras were hooked up and all seemed to work to some degree. I found that the LifeCam Show was very good at lighting and the pictures were pretty good. The LifeCam VX-5000 was good too, but not as good at adjusting to lighting conditions. The LifeCam Studio didn’t seem to work as well (although it worked fine attached to a Windows PC). I found a generic WebCam for which I don’t know the manufacturer, and it seemed to work fine.

Some enhancement ideas:

  • Upload the pictures to a server
  • Use a motion detector to trigger photos, and perhaps even a light, sending email or calling the police 
  • Play around with video and sound
  • This setup can be used to take time lapse pictures of, say, a construction site, weather, or the daily position of the sun in the sky
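The camera cycling used by the timer is a plain round-robin over the attached devices (IncrementCameraInUse in the code below); sketched in Python:

```python
def next_camera(current: int, count: int) -> int:
    """Advance to the next attached camera, wrapping back to 0 at the end."""
    return (current + 1) % count
```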

<code>

using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;
using System.Threading;
using System.Threading.Tasks;
using Windows.Devices.Enumeration;
using Windows.Foundation;
using Windows.Media.Capture;
using Windows.Media.MediaProperties;
using Windows.Networking;
using Windows.Networking.Sockets;
using Windows.Storage.Streams;
using Windows.System.Profile;
using Windows.UI;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Media;
using Windows.UI.Xaml.Media.Imaging;

// Start Visual Studio
// File->New->Project->C#->Windows Universal->Blank app

namespace UWPCamera
{
  public sealed partial class MainPage : Page
  {
    Button _btnSwitchCamera;
    CheckBox _chkCycleCameras;
    Image _img = new Image();
    TextBlock _tbStatus = new TextBlock();
    object _timerLock = new object();

    int _cameratoUse = 0;
    DeviceInformationCollection _cameraDevices = null;
    List<MediaCapture> _lstMedCapture = new List<MediaCapture>();
    bool _fUseNetworkTime = false;

    public MainPage()
    {
      this.InitializeComponent();
      this.Loaded += MainPage_Loaded;
    }
    private void MainPage_Loaded(object sender, RoutedEventArgs e)
    {
      try
      {
        //                this.Background = new SolidColorBrush(Color.FromArgb(255,128, 128, 128));
        if (AnalyticsInfo.VersionInfo.DeviceFamily == "Windows.IoT")
        {
          // "Windows.Desktop"
          // "Windows.Mobile" (phone)
          // "Windows.IoT"
          _fUseNetworkTime = true;
        }
        Action resetCameras = () =>
        {
          lock (_timerLock)
          {
            _cameraDevices = null;// force reload
                  }
        };
        var deviceWatcher = DeviceInformation.CreateWatcher(DeviceClass.VideoCapture);
        deviceWatcher.Added += new TypedEventHandler<DeviceWatcher, DeviceInformation>(
            (wat, info) => { resetCameras(); });
        deviceWatcher.Removed += new TypedEventHandler<DeviceWatcher, DeviceInformationUpdate>(
            (wat, info) => { resetCameras(); });
        deviceWatcher.Updated += new TypedEventHandler<DeviceWatcher, DeviceInformationUpdate>(
            (wat, info) => { resetCameras(); });
        deviceWatcher.Stopped += new TypedEventHandler<DeviceWatcher, object>(
            (wat, obj) => { deviceWatcher.Start(); });
        deviceWatcher.Start();
        var relPanel = new RelativePanel();
        var spCtrls = new StackPanel()
        {
          Orientation = Orientation.Horizontal
        };
        _img.HorizontalAlignment = HorizontalAlignment.Center;
        _img.Stretch = Stretch.UniformToFill;
        _btnSwitchCamera = new Button()
        {
          IsEnabled = _cameraDevices?.Count > 1,
          Width = 260
        };
        SetBtnSwitchLabel();
        ToolTipService.SetToolTip(_btnSwitchCamera, new ToolTip()
        {
          Content = "Click to switch camera if available"
        });
        spCtrls.Children.Add(_btnSwitchCamera);
        _btnSwitchCamera.Click += (oc, ec) =>
        {
          IncrementCameraInUse();
          SetBtnSwitchLabel();
        };
        _chkCycleCameras = new CheckBox()
        {
          Content = "Cycle Cameras",
          IsChecked = false
        };
        ToolTipService.SetToolTip(_chkCycleCameras, new ToolTip()
        {
          Content = "Automatically switch through all attached cameras"
        });
        spCtrls.Children.Add(_chkCycleCameras);
        relPanel.Children.Add(spCtrls);
        var tbInterval = new TextBox()
        {
          Text = "7"
        };
        spCtrls.Children.Add(tbInterval);
        var btnQuit = new Button()
        {
          Content = "Quit"
        };
        spCtrls.Children.Add(btnQuit);
        btnQuit.Click += (oq, eq) =>
        {
          lock (_timerLock)
          {
                    // make sure we're done with cam before exit
                    Application.Current.Exit();
          }
        };
        spCtrls.Children.Add(_tbStatus);
        relPanel.Children.Add(_img);
        RelativePanel.SetBelow(_img, spCtrls);
        var tmr = new DispatcherTimer();
        tmr.Interval = TimeSpan.FromSeconds(4);
        tbInterval.LostFocus += (otb, etb) =>
        {
          double n;
          if (double.TryParse(tbInterval.Text, out n))
          {
            tmr.Interval = TimeSpan.FromSeconds(n);
          }
        };
        bool fIsInTickRoutine = false;
        _tsSinceLastTimeCheck = TimeSpan.FromDays(1); // force time check
        tmr.Tick += async (ot, et) =>
        {
          if (!fIsInTickRoutine)
          {
            fIsInTickRoutine = true;
            if (Monitor.TryEnter(_timerLock))
            {
              try
              {
                if (_fUseNetworkTime)
                {
                  _tsSinceLastTimeCheck += tmr.Interval;
                  if (_tsSinceLastTimeCheck.TotalMinutes >= 1)
                  {
                            // resync the clock
                            try
                    {
                      _dtLastTimeCheck = await NtpClient.GetDateTimeAsync();
                      _tsSinceLastTimeCheck = TimeSpan.Zero;
                    }
                    catch (Exception ex)
                    {
                      _tbStatus.Text = ex.ToString(); // task cancelled exception
                            }
                  }
                }
                await LookForCameraAndTakeAPicture();
              }
              finally
              {
                Monitor.Exit(_timerLock);
              }
            }
            fIsInTickRoutine = false;
          }
        };
        tmr.Start();
        this.Content = relPanel;
      }
      catch (Exception ex)
      {
        this.Content = new TextBlock() { Text = ex.ToString() };
      }
    }

    void IncrementCameraInUse()
    {
      lock (_timerLock)
      {
        if (++_cameratoUse == _cameraDevices.Count)
        {
          _cameratoUse = 0;
        }
      }
    }

    TimeSpan _tsSinceLastTimeCheck;
    DateTime _dtLastTimeCheck;
    DateTime CurrentDateTime { get { return _dtLastTimeCheck + _tsSinceLastTimeCheck; } }
    async Task LookForCameraAndTakeAPicture()
    {
      try
      {
        bool fWasCycling = _chkCycleCameras.IsChecked == true;
        DateTime now;
        if (_fUseNetworkTime)
        {
          now = CurrentDateTime;
        }
        else
        {
          now = DateTime.Now;
        }
        _tbStatus.Text = now.ToString("MM/dd/yy hh:mm:ss tt") + " " + _tsSinceLastTimeCheck.TotalMinutes.ToString("n1");
        // do we need to initialize or reinitialize?
        if (_cameraDevices == null || _cameraDevices.Count == 0)
        {
          _chkCycleCameras.IsChecked = false;
          await initializeCamerasAsync();
        }
        else if (_chkCycleCameras.IsChecked == true)
        {
          IncrementCameraInUse();
        }
        SetBtnSwitchLabel();
        var bmImage = await TakePictureAsync();
        _img.Source = bmImage;
        _img.HorizontalAlignment = HorizontalAlignment.Center;
        if (fWasCycling && _cameraDevices?.Count > 1)
        {
          _chkCycleCameras.IsChecked = true;
        }
      }
      catch (Exception ex)
      {
        _tbStatus.Text += ex.ToString();
        _cameraDevices = null; // will reset looking for camera
        var comex = ex as COMException;
        if (comex != null)
        {
          if (comex.Message.Contains("The video recording device is no longer present"))
          {
            // could be more specific
          }
        }
      }
    }

    async Task initializeCamerasAsync()
    {
      _cameratoUse = 0;
      _lstMedCapture.Clear();
      _chkCycleCameras.IsChecked = false;
      _chkCycleCameras.IsEnabled = false;
      _cameraDevices = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);
      switch (_cameraDevices.Count)
      {
        case 0:
          _btnSwitchCamera.Content = " No camera found";
          _chkCycleCameras.IsChecked = false;
          break;
        case 1:
          _chkCycleCameras.IsChecked = false;
          _btnSwitchCamera.IsEnabled = false;
          break;
        default:
          _btnSwitchCamera.IsEnabled = true;
          break;
      }
      if (_cameraDevices.Count > 0)
      {
        _chkCycleCameras.IsEnabled = _cameraDevices.Count > 1;
        _chkCycleCameras.IsChecked = _cameraDevices.Count > 1;
        int ndx = 0;
        int nFrontCamera = -1;
        foreach (var cam in _cameraDevices)
        { // high priority for front camera
          if (cam.EnclosureLocation?.Panel == Windows.Devices.Enumeration.Panel.Front)
          {
            nFrontCamera = ndx;
          }
          var medCapture = new MediaCapture();
          MediaCaptureInitializationSettings settings = new MediaCaptureInitializationSettings();
          //settings.StreamingCaptureMode = StreamingCaptureMode.AudioAndVideo;
          //settings.PhotoCaptureSource = PhotoCaptureSource.VideoPreview;
          //                    var exposuretime = _medCapture.VideoDeviceController.ExposureControl.Value;
          settings.VideoDeviceId = _cameraDevices[ndx].Id;
          medCapture.Failed += (o, e) =>
          {
                      //                        _tbStatus.Text += e.Message;
                    };
          await medCapture.InitializeAsync(settings);
          _lstMedCapture.Add(medCapture);
          ndx++;
        }
        if (nFrontCamera >= 0)
        {
          _cameratoUse = nFrontCamera;
        }
      }
    }

    void SetBtnSwitchLabel()
    {
      var camName = "No Camera";
      if (_cameraDevices != null)
      {
        var dev = _cameraDevices[_cameratoUse];

        var camLoc = dev.EnclosureLocation?.Panel.ToString();
        camName = $"{_cameratoUse} {dev.Name} {camLoc}".Trim();
      }
      _btnSwitchCamera.Content = camName;
    }

    async Task<BitmapImage> TakePictureAsync()
    {
      // https://docs.microsoft.com/en-us/windows/uwp/audio-video-camera/basic-photo-video-and-audio-capture-with-mediacapture
      var medCapture = _lstMedCapture[_cameratoUse];
      var imgFmt = ImageEncodingProperties.CreateJpeg();
      var llCapture = await medCapture.PrepareLowLagPhotoCaptureAsync(imgFmt);
      var photo = await llCapture.CaptureAsync();
      var bmImage = new BitmapImage();

      await bmImage.SetSourceAsync(photo.Frame);
      await llCapture.FinishAsync();
      return bmImage;

      //var camCapUI = new CameraCaptureUI();
      //camCapUI.PhotoSettings.AllowCropping = true;
      //camCapUI.PhotoSettings.Format = CameraCaptureUIPhotoFormat.Jpeg;
      //var storageFile = await camCapUI.CaptureFileAsync(CameraCaptureUIMode.Photo);
      //var bmImage = new BitmapImage();
      //if (storageFile != null)
      //{
      //    using (var strm = await storageFile.OpenReadAsync())
      //    {
      //        bmImage.SetSource(strm);
      //    }
      //}
    }
  }

  //http://stackoverflow.com/questions/1193955/how-to-query-an-ntp-server-using-c
  /// <summary>
  /// Represents a client which can obtain accurate time via NTP protocol.
  /// </summary>
  public class NtpClient
  {
    private readonly TaskCompletionSource<DateTime> _resultCompletionSource;
    public async static Task<DateTime> GetDateTimeAsync()
    {
      var ntpClient = new NtpClient();
      return await ntpClient.GetNetworkTimeAsync();
    }
    /// <summary>
    /// Creates a new instance of <see cref="NtpClient"/> class.
    /// </summary>
    public NtpClient()
    {
      _resultCompletionSource = new TaskCompletionSource<DateTime>();
    }

    /// <summary>
    /// Gets accurate time using the NTP protocol with default timeout of 45 seconds.
    /// </summary>
    /// <returns>Network accurate <see cref="DateTime"/> value.</returns>
    public async Task<DateTime> GetNetworkTimeAsync()
    {
      var utcNow = await GetNetworkTimeAsync(TimeSpan.FromSeconds(45));
      var tzOffset = System.TimeZoneInfo.Local.GetUtcOffset(utcNow); // -7 hrs for Redmond with DST
      var dtNow = utcNow + tzOffset;
      return dtNow;
    }

    /// <summary>
    /// Gets accurate time using the NTP protocol with default timeout of 45 seconds.
    /// </summary>
    /// <param name="timeout">Operation timeout.</param>
    /// <returns>Network accurate <see cref="DateTime"/> value.</returns>
    public async Task<DateTime> GetNetworkTimeAsync(TimeSpan timeout)
    {
      using (var socket = new DatagramSocket())
      using (var ct = new CancellationTokenSource(timeout))
      {
        ct.Token.Register(() => _resultCompletionSource.TrySetCanceled());

        socket.MessageReceived += OnSocketMessageReceived;
        //The UDP port number assigned to NTP is 123
        await socket.ConnectAsync(new HostName("pool.ntp.org"), "123");
        using (var writer = new DataWriter(socket.OutputStream))
        {
          // The NTP message is 48 bytes (RFC 2030)
          var ntpBuffer = new byte[48];

          // Setting the Leap Indicator, 
          // Version Number and Mode values
          // LI = 0 (no warning)
          // VN = 3 (NTP version 3)
          // Mode = 3 (Client Mode)
          ntpBuffer[0] = 0x1B;

          writer.WriteBytes(ntpBuffer);
          await writer.StoreAsync();
          var result = await _resultCompletionSource.Task;
          return result;
        }
      }
    }

    private void OnSocketMessageReceived(DatagramSocket sender, DatagramSocketMessageReceivedEventArgs args)
    {
      try
      {
        using (var reader = args.GetDataReader())
        {
          byte[] response = new byte[48];
          reader.ReadBytes(response);
          _resultCompletionSource.TrySetResult(ParseNetworkTime(response));
        }
      }
      catch (Exception ex)
      {
        _resultCompletionSource.TrySetException(ex);
      }
    }

    private static DateTime ParseNetworkTime(byte[] rawData)
    {
      //Offset to get to the "Transmit Timestamp" field (time at which the reply 
      //departed the server for the client, in 64-bit timestamp format."
      const byte serverReplyTime = 40;

      //Get the seconds part
      ulong intPart = BitConverter.ToUInt32(rawData, serverReplyTime);

      //Get the seconds fraction
      ulong fractPart = BitConverter.ToUInt32(rawData, serverReplyTime + 4);

      //Convert From big-endian to little-endian
      intPart = SwapEndianness(intPart);
      fractPart = SwapEndianness(fractPart);

      var milliseconds = (intPart * 1000) + ((fractPart * 1000) / 0x100000000L);

      //**UTC** time
      DateTime networkDateTime = (new DateTime(1900, 1, 1, 0, 0, 0, 0, DateTimeKind.Utc)).AddMilliseconds((long)milliseconds);
      return networkDateTime;
    }

    // stackoverflow.com/a/3294698/162671
    private static uint SwapEndianness(ulong x)
    {
      return (uint)(((x & 0x000000ff) << 24) +
                     ((x & 0x0000ff00) << 8) +
                     ((x & 0x00ff0000) >> 8) +
                     ((x & 0xff000000) >> 24));
    }
  }
}

</code>
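For clarity, the timestamp math in ParseNetworkTime/SwapEndianness above amounts to the following Python sketch; struct’s big-endian (">") format makes the explicit endianness swap unnecessary:

```python
import struct
from datetime import datetime, timedelta, timezone

NTP_EPOCH = datetime(1900, 1, 1, tzinfo=timezone.utc)

def parse_network_time(raw: bytes) -> datetime:
    # The Transmit Timestamp starts at byte 40 of the 48-byte NTP packet:
    # 32 bits of seconds and 32 bits of fractional seconds, big-endian.
    seconds, fraction = struct.unpack_from(">II", raw, 40)
    millis = seconds * 1000 + (fraction * 1000) // 0x100000000
    return NTP_EPOCH + timedelta(milliseconds=millis)
```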

Deploy TensorFlow, CNTK, or Python prediction models to Azure with the AML CLI


This article uses a few simple commands to quickly deploy prediction models built with TensorFlow, CNTK, or Python to Azure Container Service (ACS) or HDInsight Spark on Azure, to meet high-scalability requirements.

The AML CLI (Azure Machine Learning Command Line Interface) is a brand-new set of Azure machine learning commands. It is currently a preview version, updated continuously on GitHub, and can already be used on the Linux edition of the Data Science Virtual Machine (DSVM).

Preparing the environment

First you need an Azure subscription (get a subscription), then follow the documentation to create a DSVM (to complete the exercise below, choosing the cheapest machine is enough).

After connecting to the DSVM over SSH (you can use X2Go or PuTTY), run the following commands:

$ wget -q http://amlsamples.blob.core.windows.net/scripts/amlupdate.sh -O - | sudo bash -

$ sudo /opt/microsoft/azureml/initial_setup.sh

Note: log out and log back in for the changes to take effect.

Next, run the following commands to set up the AML CLI environment:

$ az login

$ aml env setup

The commands above create the following services for this DSVM:

  • A resource group
  • A storage account
  • An Azure Container Registry (ACR)
  • An Azure Container Service (ACS)
  • Application insights

Notes:

  1. You will first be asked to go to https://aka.ms/devicelogin, enter a code, and sign in with your Azure account, to confirm that you have sufficient permissions to create these services.
  2. The environment name must be fewer than 20 characters and can contain only lowercase letters and numbers.

Afterwards, you can run the following command at any time to review all the environment settings:

$ aml env show

amlenv1

Using Jupyter

As a data scientist, you can write in your favorite IDE. If you use Jupyter, on the DSVM it runs at https://<machine-ip-address>:8000; open it directly in a browser and sign in with the same credentials you use to log in to the DSVM.

You can find real-time and batch service samples in the following folders:

Real-time: azureml/realtime/realtimewebservices.ipynb notebook

Batch: azureml/batch/batchwebservices.ipynb notebook

jupyter1-1

Follow the steps to train a prediction model:

jupyter2-1

CNTK, TensorFlow, and Python samples

 

Deploying with AML

Real-time services are deployed to Azure Container Service (ACS), and batch services are deployed to HDInsight Spark, to achieve high scalability and high availability, no matter whether your prediction model was built with TensorFlow, CNTK, or Python.

Take the real-time service sample. After SSHing to the DSVM, switch to the folder containing the trained prediction model:

$ cd ~/notebooks/azureml/realtime

Then run the following commands to deploy the real-time service to the DSVM itself:

$ aml env local

$ aml service create realtime -f testing.py -m housing.model -s webserviceschema.json -n mytestapp

You can also run the following commands to deploy to an Azure Container Service (ACS) cluster:

$ aml env cluster

$ aml service create realtime -f testing.py -m housing.model -s webserviceschema.json -n mytestapp

Learn more AML CLI commands

At https://github.com/Azure/Machine-Learning-Operationalization/blob/master/aml-cli-reference.md you can see all the commands. For example, running

$ aml service list

shows all deployed real-time and batch services.

 

Binding Multiple Sites With Single SSL Certificate


One of our Premier customers called me the other day needing assistance with running multiple web sites under a single SSL certificate on the same port. The idea was clever, especially when you have many sites but only a handful of SSL certificates. But how do you solve this one-cert-fits-all issue?

There are two ways to solve this puzzle, depending on your situation:
• Wildcard certificate: when the sites belong to the same domain
• Unified Communications Certificate (UCC): when the sites belong to different domains

Wildcard Certs are more common than UCC. Wildcard Certificates use Subject Alternative Names (SANs) to secure a domain and all of its first-level subdomains.

wildcard

My customer chose the wildcard route, so we asked his certificate vendor to issue a wildcard certificate with a friendly name that matches his domain suffix.

For example:
You have 2 sites: mysite1.mysite.com and mysite2.mysite.com.
So ask your certificate vendor to issue a wildcard certificate with this friendly name: *.mysite.com
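A wildcard name covers exactly one label, which is worth spelling out; a quick Python sketch of the matching rule:

```python
def matches_wildcard(hostname: str, pattern: str) -> bool:
    """True if hostname is covered by a certificate name like *.mysite.com.

    A wildcard covers exactly one label, so a.b.mysite.com and the bare
    mysite.com are NOT covered by *.mysite.com.
    """
    if not pattern.startswith("*."):
        return hostname == pattern
    suffix = pattern[1:]                  # ".mysite.com"
    if not hostname.endswith(suffix):
        return False
    label = hostname[: -len(suffix)]      # the part the wildcard stands for
    return bool(label) and "." not in label
```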

…And it solved the problem!

Build your credibility in Configuring Windows Devices


Are you an IT professional or consultant helping customers build solid identities, protect content (data loss protection), define mobile device management policy, virtualize with Hyper-V, and manage applications using the Company Portal and the Windows Store, all using Windows 10 security and integrated Azure features?

It might help for you to be familiar with the Forrester Total Economic Impact of Microsoft Windows 10, a Microsoft-commissioned report that demonstrates how deploying Windows 10 can reduce costs and provide significant benefits to organizations. The report found an ROI of 233% with a payback period of only 14 months. Also, here is some more information around the Windows 10 business you might find useful to know for conversations with your stakeholders.

Being in this space of configuring Windows Devices, it might help for you to validate your skills by getting certified. Take the exam 70-697: Configuring Windows Devices.

To help you prepare for the exam, you could go through the XSeries on edX, Windows 10 Features for a Mobile Workforce.

The courses in the XSeries are:

  • Configuring Additional Resources
  • Identity Management and Data Access
  • Managing and Maintaining Devices in the Enterprise
  • Windows & Devices in the Enterprise

What you’ll Learn:

  • Plan and design the implementation and support of a mobile workforce across an enterprise.
  • Migrate from an earlier version of the Windows operating system to Windows 10.
  • Manage Windows 10 desktop and application deployments.
  • Discover the why and when behind Windows 10 deployment tasks.

Lab Work:

The XSeries also provides access to the relevant ‘Microsoft Labs Online’ for free. To go through some of the labs, the student needs access to Azure and one way is to sign up for Free Azure 30-Day Trial.

Course Completion:

Each course in the series has a student assessment. A score of 70% is required to pass the assessment.

In addition, optionally for a fee, you can get a verified certificate from edX along with a 50% discount Microsoft certification voucher. For the most up-to-date information, check out the details on edX.

Happy Learning 🙂

How to list Packages available when creating a Nano server


I have written numerous articles on Nano server, check them out.

Specifically, in the article “IIS on Nano server” I included the “-Package Microsoft-NanoServer-IIS-Package” parameter while building my Nano server VHD.  However, I wanted to get a list of all the possible packages that can be used when creating a Nano server.  The following command is how I achieved it; also see the results in Figure 1.

Import-Module .\NanoServerImageGenerator -Verbose
Get-NanoServerPackage -MediaPath D:\

image

Figure 1, what package available for building a Nano server

And here is the list.

Microsoft-NanoServer-Compute-Package
Microsoft-NanoServer-Containers-Package
Microsoft-NanoServer-DCB-Package
Microsoft-NanoServer-Defender-Package
Microsoft-NanoServer-DNS-Package
Microsoft-NanoServer-DSC-Package
Microsoft-NanoServer-FailoverCluster-Package
Microsoft-NanoServer-Guest-Package
Microsoft-NanoServer-Host-Package
Microsoft-NanoServer-IIS-Package
Microsoft-NanoServer-OEM-Drivers-Package
Microsoft-NanoServer-SCVMM-Compute-Package
Microsoft-NanoServer-SCVMM-Package
Microsoft-NanoServer-SecureStartup-Package
Microsoft-NanoServer-ShieldVM-Package
Microsoft-NanoServer-SoftwareInventoryLogging-Package
Microsoft-NanoServer-Storage-Package

I will probably build a Nano server that includes all of these and then run the dism /online /get-features command to learn more about what features are contained within each of the packages.

dism /online /get-features

Until later.

How to install IIS and Tracing on a Nano for Windows Server 2016


If you have not already reviewed my other articles on using IIS on Nano, then you might want to take a look at them.  Just so you know the perspective I am coming from and the context in which I work within.

I followed these instructions to set up a Nano server using Hyper-V, “Quickly deploy Nano Server in a virtual machine”.  As you see in Figure 1, you need an ISO of Windows Server 2016, and then copy the .\NanoServer\NanoServerImageGenerator folder to a local location for running the PowerShell cmdlet discussed later and in the article I worked with, here.

image

Figure 1, Nano Server Image Generator

Include IIS when you build the Nano Server Image

I wanted to include/install IIS, so I added the “-Package Microsoft-NanoServer-IIS-Package” parameter and value to the New-NanoServerImage cmdlet.  Specifically, I ran the cmdlet below, also shown in Figure 2.  Therefore, NOTE that the IIS package is added while the Nano Server VHD is being created.  I have not tried it, but I would not expect to be able to add this feature while remotely connected to the machine running the Nano server, so include it while building the VHD.

New-NanoServerImage -Edition Standard -DeploymentType Guest -MediaPath D:\ -BasePath .\Base -TargetPath .\NanoServerVm\NanoServerVM.vhd -ComputerName Nano2016IIS10 -Package Microsoft-NanoServer-IIS-Package

image

Figure 2, How to install IIS on Nano using Nano Server Image Generator

Configure Failed Request Tracing in IIS running Nano

At the beginning of this article I linked to another article named “How to connect and configure IIS running on Nano” here.  After running these commands in PowerShell and looking around, I was not able to find a way to install Failed Request Tracing, so I simply looked in the c:\windows\system32\inetsrv\config\applicationHost.config file and found the <tracing> section; apparently it is installed by default.

Import-Module IISAdministration
Get-Command -Module IISAdministration
dism /online /get-features
Get-Content .\applicationHost.config
Get-IISConfigSection -SectionPath "system.webServer/tracing/traceFailedRequests"
psedit c:\windows\system32\inetsrv\config\applicationHost.config

Then I ran the following PowerShell script, which configured Failed Request Tracing in IIS on Nano.  The configuration captures all requests that result in a 500 HTTP status code.

$serverManager = Get-IISServerManager
$config = $serverManager.GetWebConfiguration("/")
$traceFailedRequestSection = $config.GetSection("system.webServer/tracing/traceFailedRequests")
$traceFailedRequestsCollection = $traceFailedRequestSection.GetCollection()
$addElement = $traceFailedRequestsCollection.CreateElement("add")
$addElement["path"] = "*"
$traceAreasCollection = $addElement.GetCollection("traceAreas")
$addElementArea = $traceAreasCollection.CreateElement("add")
$addElementArea["provider"] = "WWW Server"
$addElementArea["areas"] = "Authentication,Security,WebSocket,StaticFile"
$addElementArea["verbosity"] = "Verbose"
$traceAreasCollection.Add($addElementArea)
$failureDefinitionsElement = $addElement.GetChildElement("failureDefinitions")
$failureDefinitionsElement["statusCodes"] = "500"
$traceFailedRequestsCollection.Add($addElement)
$serverManager.CommitChanges()

Then I enabled Failed Request Tracing on the IIS Nano server for the given web site.

As I have not yet published a site to my IIS Nano server (there is no web.config) and enabling FREB happens at the web site level, I haven’t done this yet.  But it is simply a matter of figuring out which commands to run.  I’ll come back to this at some point in the future if possible.
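For reference, once a site does exist, enabling FREB for it through the same ServerManager object should only take a few lines. This is an untested sketch; “Default Web Site” and the log directory are placeholder values, not something configured on my Nano server.

```powershell
# Sketch only: assumes a site named "Default Web Site" already exists on the server.
Import-Module IISAdministration
$serverManager = Get-IISServerManager
$site = $serverManager.Sites["Default Web Site"]

# Turn on per-site Failed Request Tracing logging
$site.TraceFailedRequestsLogging.Enabled = $true
$site.TraceFailedRequestsLogging.Directory = "C:\inetpub\logs\FailedReqLogFiles"
$site.TraceFailedRequestsLogging.MaxLogFiles = 50
$serverManager.CommitChanges()
```

Combined with the failure definitions configured above, this should produce FREB log files for matching requests under the configured directory.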

How to copy files or deploy code to a Nano Server


I am writing numerous posts about IIS on Nano with my intent to get an ASP.NET Core application up and running on it.  There are already instructions for this shown directly below, but I like to do the exercise myself and document my experiences and lessons in addition. #ASPNETCORE #Nano #IIS

The first thing I struggled with was how to make a deployment once I get my test ASP.NET Core application built, and also how to configure and install the ASP.NET Core binaries.  Because up to now all I have seen is the ability to remotely manage my Nano IIS server with PowerShell.  I am sure I will learn and progress further.  Regardless, I found this trick neat for copying or putting files on the Nano IIS server.  I did try to get FTP to work, and obviously Web Deploy wouldn’t work (out of the box) because both require some server-side configuration.  I ultimately found that I can create a share and just move files like that.

Here are the steps to create the file share to copy files to the Nano server

  • Create the folder to share
  • Enable the Firewall to allow a share to be created
  • Create the share
  • Access the share

Create the folder to share

As there is no interface to access the Nano server, everything must be done remotely.  This is different from Server Core, where you have a console to run some configuration while RDPed to the VM; not so with Nano.  Completely remote only, which is kind of cool IMO.  Execute the following script after remotely connecting to your Nano IIS server.  I describe how to make the remote connection here.

New-Item C:\DeploymentFileShare\MyASPNETCoreTestApp -type directory

The result of the PowerShell command is shown in Figure 1.

image

Figure 1, how to create a new folder on a Nano server

Enable the Firewall to allow a share to be created

Next, you need to allow or create a firewall rule for “File and Printer Sharing”.  This is done by executing the following PowerShell command.

netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=yes

The output of the PowerShell command is shown in Figure 2.

image

Figure 2, how to create or enable a firewall rule in Nano Server, Nano IIS

Create the share

Then, create the share using the following PowerShell command.

net share MyASPNETCoreTestApp=C:\DeploymentFileShare\MyASPNETCoreTestApp /GRANT:EVERYONE`,FULL

This results in the output shown in Figure 3.

image

Figure 3, how to create a file share on Nano server, Nano IIS

Access the share

Finally, you can access the share the same as any other share, either using WIN+R and entering the path or opening Windows Explorer and doing the same.  See Figure 4 and Figure 5.

image

Figure 4, access a file share on a Nano Server using WIN+R

 

image

Figure 5, access a file share on a Nano Server using Windows Explorer
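With the share in place, a deployment becomes a simple copy from the management machine. A minimal sketch, where the IP address and the local publish folder are placeholders for your own environment:

```powershell
# Copy build output from the management machine to the share on the Nano server.
# 10.0.0.100 is a placeholder for your Nano server's IP address, and .\publish
# is a placeholder for the folder containing the files you want to deploy.
Copy-Item -Path .\publish\* -Destination \\10.0.0.100\MyASPNETCoreTestApp -Recurse -Force
```

Because the share was granted to EVERYONE with full control, no extra credentials should be needed beyond being able to reach the server on the network.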

I hope you find this helpful.


How to connect and configure IIS running on Nano


If you have not already reviewed my other articles on using IIS on Nano, then you might want to take a look at them, just so you know the perspective I am coming from and the context in which I work.

After you have created the Nano server and booted it up, you will get to a console that looks something like the one shown in Figure 1.

image

Figure 1, how to configure IIS on a Nano server

You use the credentials which were used to create the VHD to log in.  This approach only allows you to set some Network, Firewall, WinRM, etc. configurations, see Figure 2.  Nano does not have any console or GUI to perform any configuration.  All of it must be done remotely using, for example, PowerShell.  You do need to log in and get the IP address because it is needed to configure any remote management tool.

image

Figure 2, how to configure and access the Nano server, Nano Server Recovery Console

Initially, when I tried to connect with PowerShell using the Enter-PSSession cmdlet I received an error stating:

Enter-PSSession : Connecting to remote server 10.###.###.202 failed with the following error message : The WinRM client cannot process the request. Default
authentication may be used with an IP address under the following conditions: the transport is HTTPS or the destination is in the TrustedHosts list, and explicit
credentials are provided. Use winrm.cmd to configure TrustedHosts. Note that computers in the TrustedHosts list might not be authenticated. For more information
on how to set TrustedHosts run the following command: winrm help config. For more information, see the about_Remote_Troubleshooting Help topic.

To resolve the issue I executed the following command, also shown in Figure 3.

winrm set winrm/config/client '@{TrustedHosts="*******"}'

image

Figure 3, The WinRM client cannot process the request

Once I ran the above command, I executed the Enter-PSSession cmdlet again and, as shown in Figure 4, I was prompted and able to log in.

Enter-PSSession -ComputerName 10.###.###.202 -Credential ######

image

Figure 4, Remote manage IIS on Nano, remote connect to Nano

Once you have made the remote connection, you can execute some IISAdministration PowerShell cmdlets and install/view some additional IIS features.  See Figure 5.

image

Figure 5, configure IIS running on Nano server

You can use any of the IISAdministration cmdlets to administer IIS on Nano, as shown in Figure 6.

image

Figure 6, configure IIS running on Nano server using IISAdministration

Examples and more information about the IISAdministration module can be found here.

Using Wildcard Certificates for Multi-Server Farm


In terms of scalability, a Workflow Manager farm sits on top of a Service Bus farm and can be either a single server or three servers.

The most important factor is that there are no other supported topologies; also, most importantly, Service Bus reaches a quorum with three or five nodes.

So, if you want to deploy a highly available Workflow Manager farm, you will of course need three or five machines in your farm.

Because, as previously mentioned, Workflow Manager sits on top of Service Bus, you “get high availability for free” via its scale-out model.

This means that Service Bus takes care of business behind the scenes for us and the correct node will handle tasks. This is automatic and also provides the performance and throughput benefits of a farm deployment.

There is, however, one thing that you need to consider about this out-of-the-box load balancing given by Service Bus: this load-balancing mechanism is internal to the WFM farm, not external.

For example, when we create a connection to a Workflow Manager farm from a SharePoint farm with the Register-SPWorkflowService cmdlet, we pass in a WorkflowHostUri parameter.

This is typically the host name of a Workflow Manager host. If we have three/five Workflow Manager hosts, you might get confused about which host you should use.

In reality, you can use any one of the hosts; this will always work.

The problem is that if that particular host is down for whatever reason, your workflows will be broken and you cannot configure or execute any SharePoint 2013 workflows.

In this case, the solution is not complicated: you will need to implement the usual process of load balancing for the Workflow Manager farm, so that the SharePoint farm consuming Workflow Manager can communicate with its virtual name. This should be pretty straightforward (you can use Application Request Routing or any other approach to Network Load Balancing).

All of this is explained in detail in our article:

Scaling out Workflow Manager 1.0: https://msdn.microsoft.com/library/azure/jj250902(v=azure.10).aspx

In there you can confirm these statements:

“You don’t need to change your workflows to take advantage of the Workflow Manager 1.0 scale-out topology. You also don’t need to worry about how the execution of workflows is distributed between servers in the farm. All Workflow Manager 1.0 servers that joined the farm use efficient algorithms to process workflow instances with pending work.”

If you also want to implement scalability when connecting to WFM, you can use the article below, which explains in great detail all the steps to implement this:

http://www.harbar.net/articles/wfm1.aspx

You just need to be careful when you create your multi-node farm certificate: you must make sure that the certificate is a domain certificate.

A domain-validated SSL certificate is a digital certificate in which the validated identifying information is limited to the domain name, and it works across any machine in the domain.

For example, the subject name of the certificate has a value of *.domain.
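For testing in a lab, a wildcard certificate with such a subject name can be generated locally. This is a sketch with a hypothetical contoso.com domain; for a production farm you would request the wildcard certificate from your domain certificate authority instead of self-signing.

```powershell
# Test-only sketch: *.contoso.com is a placeholder wildcard subject.
# Production farms should use a certificate issued by the domain CA.
New-SelfSignedCertificate -DnsName "*.contoso.com" `
    -CertStoreLocation "Cert:\LocalMachine\My"
```

The resulting certificate lands in the local machine Personal store, from where the Workflow Manager configuration wizard can pick it up on each node.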

Hope that helps.

 

Lync Phone Edition Cumulative Update 17 has been released


Hello, this is the Japan Skype/Lync Support Team.

Lync Phone Edition Cumulative Update 17 has been released.

April 2017 cumulative update for Lync Phone Edition for Aastra 6721ip and Aastra 6725ip telephones (KB4019527)
https://support.microsoft.com/en-us/help/4019527/april-2017-cumulative-update-for-lync-phone-edition-for-aastra-6721ip-

April 2017 cumulative update for Microsoft Lync Phone Edition for HP 4110 and HP 4120 telephones (KB4019528)
https://support.microsoft.com/en-us/help/4019528/april-2017-cumulative-update-for-microsoft-lync-phone-edition-for-hp-4110-and-hp-4120-telephones-kb4019528

April 2017 cumulative update for Microsoft Lync Phone Edition for Polycom CX500, Polycom CX600, and Polycom CX3000 telephones (KB4019529)
https://support.microsoft.com/en-us/help/4019529/april-2017-cumulative-update-for-microsoft-lync-phone-edition-for-polycom-cx500-polycom-cx600-and-polycom-cx3000-telephones-kb4019529

Please apply the latest updates and enjoy a comfortable LPE life.

The content of this information (including attachments, links, etc.) is current as of the date of writing and is subject to change without notice.

Meetups in May 2017 – an overview


Hello everyone,

We would also like to keep you regularly informed with an overview of upcoming interesting meetups. If a meetup is missing that absolutely should be listed here, please let us know in the comments.

A selection of meetups with Microsoft topics:

Date Location Topic Meetup
02.05.2017 München Observables — Introduction by Brian Terlson JavaScript Coding Nights Munich
03.05.2017 Hamburg Xamarin Mobile Apps mit Azure Backend Hamburg C# and .NET User Group Meetup
04.05.2017 München Structuring your Frontend gutefrage IT Drinkup
09.05.2017 Denzlingen KickOff – Azure und .NET Meetup Freiburg Azure & .NET Freiburg
09.05.2017 Frankfurt/Main Office 365 – Deployment Office 365 Meetup Raum Frankfurt
09.05.2017 München Munich AR Regulars Table #ARMUC
10.05.2017 Hamburg Angular Talk & Code Hamburg AngularJS Meetup
11.05.2017 Essen Microsoft Bot Framework und Cognitive Services PottNet
11.05.2017 München Erik Monchen – BI rebranded PASS Regionalgruppe Bayern
11.05.2017 Aachen C++ User Meeting Aachen C++ User Gruppe Aachen
16.05.2017 Leipzig Azure Stack – Azure für das eigene Datacenter Azure Meetup Saxony
16.05.2017 München Riverbed Azure Meetup & Review Build 2017 Azure Munich Meetup
16.05.2017 Hamburg Xamarin <3 Azure! Azure Meetup Hamburg
16.05.2017 Duisburg Azure Ruhrgebiet Meetup Mai Azure Ruhrgebiet
17.05.2017 Nürnberg Developer Camp 2017 DeveloperCampGermany
18.05.2017 Dortmund Microsoft loves Docker Docker Dortmund
18.05.2017 Köln PowerBI User Group Köln PowerBI User Group Köln
19.05.2017 Hannover PowerShell Usergroup #region Hannover German PowerShell Usergroup
22.05.2017 Koblenz Docker und .NET mit Frank Pommerening Koblenz .NET Meetup
26.05.2017 Nürnberg Summer Warm Up with Christian Heilmann FrankenJS
31.05.2017 Friedrichshafen Debugging .NET mit Visual Studio 2017 und OzCode .NET User Group Friedrichshafen

On the linked page you will find an overview of the meetups that focus specifically on Microsoft Azure.

Gain hands-on skills for tomorrow at the “de:code Hackathon!” @ Hack days


Hello everyone.

This time, we would like to talk about the Hackathon.

We previously announced de:code 2017 “Hack days” (Day 3, 4), and since we received requests for more details about the “Hackathon” program within “Hack days”, we would like to provide them here.

First of all, what is a hackathon? Wikipedia offers the following explanation:

—————————————-

Hackathon (Wikipedia): A hackathon (also known as a hack day, hackfest, or codefest) is an event in which programmers, graphic designers, user interface designers, project managers, and others involved in software development work intensively on software projects. Work may be done individually, in teams, or with everyone working toward a single goal. Sometimes hardware components are also involved.

—————————————-

The reason we planned a hackathon for Day 3 and 4 of de:code is so that the skills you learn during the two days of de:code do not end as mere knowledge but are put into practice in your business; that is why we planned the subsequent Hack days.

The significance of a hackathon lies not in the hacking itself but in the benefits of working together: discussing ideas within each group, building applications while brainstorming, making interesting discoveries about the theme technologies, and resolving problems you had been struggling with.

Ideas developed together like this can also lead to the birth of new projects for building innovative services and applications.

This is not merely a place for output: by practicing in settings like this outside of everyday work, each engineer cultivates creativity and problem-solving skills, and your growth is extremely important for the companies and society driving digital transformation.

During this de:code week, please take full advantage of everything from absorbing knowledge to building the ability to produce output.

stb13_jason_12

stb13_ken_08

 

 

 

 

 

 

————————————————

Hackathon details @ de:code 2017 “Hack days” (Day 3, 4)

Purpose and overview

The goal is to complete, over these two days, a demonstration that you can take back to your company. Because you will be doing the development yourselves, this is intended for people with application development experience. The de:code speakers will also provide support. When you register, please choose the one of the three themes below that interests you, and think in advance about an idea you would like to pursue within that theme. On the day, everyone will vote on the ideas they want to work on, the top ideas will remain, and people who chose the same idea will freely form teams; of course, you can also participate on your own.

Themes

  • Bot development combining Microsoft AI APIs and the Bot Framework
  • Mobile app development using Xamarin and Azure
  • IoT Development with Azure Services

How it works

Once teams are formed, you will immerse yourselves in development for the two days, aiming to complete a demonstration.

At this stage we anticipate up to 20 teams (up to 5 people per team = up to 100 participants). The three themes above are planned, but as long as you stay within the broad framework, your team may take on any new technology. By actually working with each technology over the two days of the hackathon, running into and solving real problems, you can expect to acquire deeper technical skills than in a hands-on session, and by developing jointly with your teammates and exchanging information as you go, you can build those skills efficiently. On both day 1 and day 2, each team will have time to present, with the aim of sharing what was learned. On the evening of day 1, we have planned a social gathering exclusively for hackathon participants. The evangelists in charge will attend as well, so please use it as an opportunity to exchange the latest technical information and to network with fellow engineers.

Some fun, too!

A social gathering is planned at the end of the Hackathon program.

Although this hackathon aims to build deep technical skills that lead to real projects, the team that turns a groundbreaking idea into an outstanding demo will be recognized at the end of day 2, based on the audience's vote, with a small prize.

※ The social gathering planned as part of the Hackathon is limited to Hackathon participants; Hands-on participants cannot attend.

Date / Venue

  • May 25, 10:00 – 20:15 (including the social gathering) [reception opens at 9:30 (planned)]
  • May 26, 10:00 – 16:30 [reception opens at 9:30 (planned)]

Microsoft Japan Shinagawa Headquarters
Shinagawa Grand Central Tower 31F, 2-16-3 Konan, Minato-ku, Tokyo 108-0075

————————————————

If you want to acquire solid, experience-based skills, not just having touched a technology or run a service, please join this Hackathon.

 

Only 20 days until de:code!

We look forward to seeing you at the venue!

————————————————————————–——–———–———-

Registration is now open! Deadline: Friday, May 12, 2017

entry

————————————————————————–——–———–———

Follow us !

————————————————————————–——–———–———-

※ The content of this information (including attachments, links, etc.) is current as of the date of writing and may change without notice.
