
Make a VS2012 MVC4 VSIX template work for the Visual Studio 2013 Preview


Visual Studio 2012.2 supports customized MVC4 templates; you can find a walkthrough here: http://blogs.msdn.com/b/yjhong/archive/2012/12/13/custom-mvc-4-template-walkthrough.aspx. At the time, however, the generated template VSIX file could not be uploaded to the VS Gallery. As of 6/26/2013, you can now upload MVC4 template VSIX files to the VS Gallery.

However, the VSIX created with VS2012.2 may not install or work with the Visual Studio 2013 Preview. We need to fix a few things, described in the following steps.

1. Change the InstallationTarget version in the source.extension.vsixmanifest file from "11.0" to "[11.0,12.0]" in order to support both VS2012 and the VS2013 Preview, for example:

<Installation AllUsers="true">

<InstallationTarget Version="[11.0,12.0]" Id="Microsoft.VisualStudio.Pro" />

<InstallationTarget Version="[11.0,12.0]" Id="Microsoft.VisualStudio.VWDExpress" />

</Installation>

According to the Visual Studio 2012 spec, Version="11.0" meant VS2012 and all future VS releases. In the Visual Studio 2013 Preview, this has changed to mean that version of VS only. Change it to "[11.0,]" to support VS2012 and all future VS releases, to "[11.0,12.0]" to support both VS2012 and VS2013, or to "12.0" to support VS2013 only.

2. In your vstemplate file, you may have a field like the following specified to reduce the VSIX size and to allow the VSIX to use the packages installed locally with VS2012.2:

<packages repository="registry" keyName="AspNetMvc4VS11" isPreunzipped="true">

<package id="xxxxx" version="x.x.x" skipAssemblyReferences="true" />
...

</packages>

You will need to change the keyName value to "AspNetMvc4VS12" to support machines that have the Visual Studio 2013 Preview but do not have Visual Studio 2012 installed. This is because the registry value name under HKLM\Software\NuGet\Repository changed from "AspNetMvc4VS11" to "AspNetMvc4VS12". If you go this way, you will have to create two versions of the VSIX, one for VS2012 and one for the VS2013 Preview.
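If you want to check which repository value is present on a given machine, you can look under the registry key mentioned above. The following is a minimal C# sketch of that check (not part of the original walkthrough); note that on 64-bit machines a 32-bit process may be redirected to the Wow6432Node view of the registry.

using System;
using Microsoft.Win32;

class NuGetRepositoryCheck
{
    static void Main()
    {
        // Key where VS registers the locally installed MVC4 package repositories.
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(@"SOFTWARE\NuGet\Repository"))
        {
            if (key == null)
            {
                Console.WriteLine("No NuGet repository key found.");
                return;
            }

            // AspNetMvc4VS11 -> packages installed with VS2012.2,
            // AspNetMvc4VS12 -> packages installed with the VS2013 Preview.
            foreach (string valueName in key.GetValueNames())
            {
                Console.WriteLine("{0} = {1}", valueName, key.GetValue(valueName));
            }
        }
    }
}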

If you still want to use one VSIX to support both VS2012.2 and the VS2013 Preview, you will need to include all the packages in the VSIX file and not depend on the different package versions installed along with each version of Visual Studio.

Now you should be able to create a project from the template successfully in the VS2013 Preview by going through "Web -> Visual Studio 2012 -> ASP.NET MVC 4 Web Application".

Please test your VSIX file with Visual Studio 2013 preview and make changes if needed.

Thank you for your support.


Error 41342 when creating In-Memory OLTP (‘Hekaton’) filegroup


A Twitter conversation unearthed a specific requirement for using the In-Memory OLTP features, namely the processor instruction set requirements. The user was trying to create a 'Hekaton' database using the sample script from here, and was getting error message 41342:

The model of the processor on the system does not support creating filegroups with MEMORY_OPTIMIZED_DATA. This error typically occurs with older processors.

This error is documented on our (brand new) SQL 2014 Books Online page. According to that page, memory-optimized tables require a processor model that supports atomic compare-and-exchange operations on 128-bit values. In the Intel x86-64 instruction set, such 128-bit atomic operations are accomplished by the CMPXCHG16B instruction (assembly language, in case you were wondering).

With this background, and knowing that the user was running VirtualBox, I thought this was a problem with the CPU emulation code in VirtualBox. Personally, I am fortunate to be running Windows 8 Hyper-V, so I would never see this issue!

For VirtualBox, it turns out that we need to adjust the virtualization software settings as described in this thread. More information is also available at this link. From my understanding, the VirtualBox setting is only effective if the physical hardware allows it as well. In other cases, if you get this error on physical hardware, check whether the processor supports the CMPXCHG16B instruction (older AMD processors might not support it, for example).
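If you prefer to verify this from code rather than from a CPU spec sheet, Windows exposes a processor feature check that covers the 128-bit compare-and-exchange instruction. This is a minimal C# sketch (not from the original post) that P/Invokes IsProcessorFeaturePresent with the PF_COMPARE_EXCHANGE128 constant; it assumes Windows 7 / Windows Server 2008 R2 or later, where that constant is defined.

using System;
using System.Runtime.InteropServices;

class CpuCheck
{
    // PF_COMPARE_EXCHANGE128 (value 14) reports whether the 128-bit
    // atomic compare-and-exchange instruction (CMPXCHG16B) is available.
    const uint PF_COMPARE_EXCHANGE128 = 14;

    [DllImport("kernel32.dll")]
    static extern bool IsProcessorFeaturePresent(uint processorFeature);

    static void Main()
    {
        bool supported = IsProcessorFeaturePresent(PF_COMPARE_EXCHANGE128);
        Console.WriteLine(supported
            ? "CMPXCHG16B is available - memory-optimized filegroups should be creatable."
            : "CMPXCHG16B is NOT available - expect error 41342.");
    }
}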

Hope this helps!

WWSB: What Would Scoble Blog?: Total immersion with WorldWide Telescope and Oculus Rift


When Robert Scoble saw a demo of WorldWide Telescope under embargo, he was so moved that he posted his now famous "Microsoft Researchers make me cry" blog post. If he saw what we have been doing lately, hooking WorldWide Telescope's Eclipse release up to the Oculus Rift, he might post "Microsoft Researchers make me sick". Robert did not do a lot of crying during the original WWT demo, and he probably would not be doing any real heaving now if he saw it, but many astronauts get at least a little queasy their first time in space, and I am sure he would have fun with the headline.

Over the last couple of weeks the folks in my office have been getting mind-blowing demos of the total immersion that comes with putting on the Oculus Rift head-mounted VR goggles and being transported into space. I usually start people off orbiting with the International Space Station. They look around and see all the modules, solar panels and radiator panels surrounding them, then look down at their feet and see the Earth slowly moving below them as they zoom along at 17,200 miles per hour, orbiting the Earth once every 95 minutes. Grab an Xbox 360 controller and now you are flying around the space station. Then we fly down and soar through Yosemite Valley, speed time up and watch the sun move across the sky; the light dims as night falls, the sun sets into a brilliant red atmosphere and the stars come out. Turning your head north you see the Big Dipper, and pointing from the end of the cup we follow the line to Polaris, the North Star. I have videos I captured of colleagues giggling, saying "Wow!" or "Oh my gosh!" over and over again. Most agree it was the single most amazing thing they have ever experienced.

By this time you may have completely forgotten that you are not actually in Yosemite Valley or orbiting the ISS, but wearing a 1280x800 HDMI 7" LCD panel with a combination of gyroscope, accelerometer and compass mounted on your head. The technology does such a good job of fooling you because the head tracking is so fluid. To make this illusion successful, the sensors are read about 500 times a second and the orientation of your head is tracked quickly and accurately, while WorldWide Telescope's new Eclipse DirectX 11 engine renders frames at the LCD limit of 60 updates a second. The combination of low-latency sensor reading and high-performance rendering means the feedback to your brain is near instantaneous and you are fooled into thinking you are really there.

While 3D and stereo environments have been considered "immersive", they come nowhere close to this experience. The WWT/Rift integration does use stereo, but with a twist. Many of the pixels are used to fill your peripheral vision to the point where you can't see them all, and the lenses give you a very high perceived field of view. While this sacrifices some of the resolution in the center of the field, the large field of view allows your eyes to gaze left and right and see more of the scene, even before your head moves. When you add the head tracking, you can't ever "get out" of the image. An IMAX screen has an impressive field of view, a full-dome planetarium even more, but the moment you turn you see walls and seats and the projectors behind you. In the WWT/Oculus experience you can't ever turn far enough around to get out of the scene: it becomes your reality. People have talked about virtual reality for years, but no product has ever come this close to achieving it.

The Oculus may have been designed for gamers, but with WWT it is already showing its chops for research and science visualization. I loaded data from years of earthquakes and flew under the Earth to analyze the patterns, seeing the distribution of quakes along the tectonic plates. Then I brought in 30 years of rainfall data for the entire US and flew over the "terrain" of rainfall stacked over the ground in a grid of half a million 3D bar charts. You get a real understanding of data with this kind of visualization, and with the Oculus you can really focus on the data without distraction. The scenarios are endless, and you get more out of every data set you view.

Just when you think it could not get any cooler, I added Kinect to the mix. Now I am flying around using my hands and arms, manipulating objects and giving voice commands, all without losing the illusion of being there. If you have the space to work with, the Kinect really lets you interact naturally with the virtual environment. Over the summer we are going to do more development of this combination, both for standing and sitting scenarios.

The impact on education could also be amazing. Imagine kids with the closest planetarium several hundred miles away, sitting down in a classroom with a headset on, immersing themselves in a WorldWide Telescope tour teaching them the astronomy (or science) lesson of the day. Exploring the Earth, Mars, the moon or the sky and getting contextual answers to their questions. All at a price that almost any school can afford. You could even have virtual planetariums where a whole group of students put headsets on while their instructor virtually flies around the "dome" showing them the sky.

For researchers, educators and science enthusiasts, the combination of WWT and the Oculus Rift is a real game changer, and it goes far beyond the gaming the Rift was designed for.

About now you are probably wondering how to get your hands on all of this. Around July 1st the WWT Eclipse beta will be on our website with built-in Oculus Rift support, so if you were lucky enough to get a Rift through the Kickstarter you can be up and running right away. Getting a Rift device will be more of a challenge. They are selling $300 development kits right now, and delivery is quoted as August. We are still developing the integration of Kinect in WWT, so that part is purely a demo for now, but we hope to make it public later this year. There are so many new scenarios that this will enable; what we are releasing now is just the beginning. The sky is not the limit!

Jonathan Fay

PS. Thanks so much to George Djorgovski from Caltech, who loaned us his Rift to make this work possible.


Implementing custom load balancing with “sticky” connections on Windows Azure


Authors: Jonathan Doyon (Genetec), Konstantin Dotchkoff (Microsoft)

 

Running an app in a cloud environment is all about scalability and taking advantage of the economics of the cloud. Scaling out (or scaling back) provides great flexibility to adapt to workload changes. Windows Azure provides load-balancing capabilities that can transparently distribute workload across multiple instances of an app. However, what should you do if your application is not completely stateless and requires session affinity (a.k.a. "sticky" sessions)? Well, there are several solutions to this problem. In this blog post we describe a design pattern used to combine the requirement for load balancing of a client workload with the need for sticky connections.

This pattern was implemented in the Stratocast solution by Genetec based on the specific application requirements. However, the pattern itself is a common one and applicable to a broader range of scenarios that call for sticky sessions.

Stratocast is a security solution that lets users view live and recorded video, safely stored in Windows Azure, from a laptop, tablet or smartphone. The solution allows the user to monitor multiple live or recorded video streams. Once a camera is connected to the cloud service, it keeps streaming video. If for some reason a camera gets disconnected, it should reconnect to the same server component.

Even if we don't think about cameras and video streaming, you will find the same type of requirements in a lot of different scenarios. For the remainder of this blog we will use "client" to refer to a client application or device (such as a camera) that accesses a server-side component running on multiple compute instances in Windows Azure.

So let's drill down into the details. Initially, a client needs to establish a connection to a server component running in Azure (in the case of Stratocast, to the Azure-based video recorder). It initiates an HTTPS call to the public endpoint of the Azure cloud service. The endpoint is load balanced by the Azure load balancer and the call will reach one of the running instances.
The server component that receives the request performs custom load balancing based on application-specific business logic. In the case of Stratocast, for example, there are three important requirements:

  • Each camera is scored on its expected workload (i.e. "weight" associated with the client workload). Based on the expected incremental workload and the current utilization of the server pool, the custom load balancer should direct the video stream to the server that has enough capacity to accept it.
    Note that this is a very important requirement, demonstrating that if different clients generate unequal workloads, a simple round-robin distribution of the load might not be the best choice.
  • The second requirement for custom load balancing in this scenario is to reconnect dropped connections to the same server instance. For that reason the app maintains a simple Azure storage table that records which client is (or was) connected to which server instance. A fast lookup identifies whether an incoming request needs to be routed to a particular server instance that already served that client.
  • And lastly, if the client opens multiple connections, those need to go to the same server instance. (Typically a camera opens one connection for video streaming and one for control purposes.)

While the first requirement is about load distribution, the second and third ones are related to the stateful nature of the communication.

After performing the custom routing logic, the server instance redirects (using an HTTP redirect) the client to the selected server instance via its direct port endpoint. Once redirected, the client establishes a connection that is stateful in nature (hence all the requirements around session affinity). In the case of Stratocast, the connection stays open for video streaming.
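To make the redirect step concrete, here is a minimal ASP.NET MVC-style sketch; it is an illustration, not the Stratocast code, and the InstanceSelector helper, the endpoint address and the URL format are all hypothetical placeholders.

using System.Web.Mvc;

public static class InstanceSelector
{
    // Placeholder for the custom load-balancing logic described above:
    // score the client workload, check the connection table, pick an instance.
    public static string SelectInstance(string clientId)
    {
        return "myservice.cloudapp.net:10101"; // hypothetical instance input endpoint
    }
}

public class ConnectController : Controller
{
    // Entry point reached through the load-balanced public endpoint.
    public ActionResult Index(string clientId)
    {
        string instanceAddress = InstanceSelector.SelectInstance(clientId);

        // HTTP-redirect the client to the selected instance's direct port endpoint.
        return Redirect("https://" + instanceAddress + "/stream?clientId=" + clientId);
    }
}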

Let us expand a bit more on the details behind this solution. The Azure compute role hosting the server (in the case of Stratocast, the role responsible for recording video) has two public endpoints: an input endpoint and an instance input endpoint.

The input endpoint is a regular public endpoint automatically load balanced by Azure. Cameras always initiate communication on this port and then the system will issue an HTTP redirect to the appropriate instance input endpoint as determined by the custom load balancing logic explained earlier.

The instance input endpoint (a.k.a. direct port endpoint) is used to communicate with load-balanced role instances. It requires a port range in the configuration file instead of just one public port (see Define Direct Port Endpoints for a Role for more details). Azure automatically assigns one port from the range to each instance. This endpoint can be used for direct communication with the instance - Azure does not load balance requests coming in on the instance input port. When a role instance starts up, code in the role reads the public port of the instance using the Azure API and records it in an Azure storage table. This information is used later by the custom load balancing algorithm to redirect calls to a specific server instance. The solution also monitors the health of each instance and keeps the list of server instances in the table storage up to date.
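That startup registration can be done with the Azure service runtime and storage client libraries. The following is a minimal sketch of the idea, not the Stratocast code; the endpoint name "InstanceInput", the table name "ServerInstances", the cloud service DNS name and the entity shape are assumptions for illustration.

using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public class ServerInstanceEntity : TableEntity
{
    public ServerInstanceEntity() { }

    public ServerInstanceEntity(string roleInstanceId, string address)
    {
        PartitionKey = "servers";
        RowKey = roleInstanceId;
        Address = address;
    }

    // host:port of this instance's direct (instance input) endpoint.
    public string Address { get; set; }
}

public static class InstanceRegistrar
{
    public static void RegisterCurrentInstance(string storageConnectionString)
    {
        RoleInstance instance = RoleEnvironment.CurrentRoleInstance;

        // Read the port Azure assigned to this instance from the configured port range.
        // Recent SDK versions expose the publicly reachable endpoint via PublicIPEndpoint.
        RoleInstanceEndpoint endpoint = instance.InstanceEndpoints["InstanceInput"];
        int publicPort = endpoint.PublicIPEndpoint != null
            ? endpoint.PublicIPEndpoint.Port
            : endpoint.IPEndpoint.Port;
        string address = "myservice.cloudapp.net:" + publicPort; // service DNS name is an assumption

        // Record it in a table so the custom load balancer can redirect clients here.
        CloudTableClient tableClient =
            CloudStorageAccount.Parse(storageConnectionString).CreateCloudTableClient();
        CloudTable table = tableClient.GetTableReference("ServerInstances");
        table.CreateIfNotExists();
        table.Execute(TableOperation.InsertOrReplace(
            new ServerInstanceEntity(instance.Id, address)));
    }
}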

The following graphic shows the conceptual flow of the pattern:

Figure 1 – Custom load balancing pattern overview

For sure, this is not the only way to solve this challenge, but after reviewing and trying out some different approaches this pattern was found to be very useful for the described requirements. Using IIS Application Request Routing (ARR) is one alternative that can be considered. In comparison, ARR would provide the following:

  • Proxy-based routing with HTTP forwarding instead of HTTP redirect
    The internal forwarding of HTTP requests is extremely efficient and much faster than an HTTP redirect. For apps with a high rate of requests and no need to keep the connection open, ARR will provide a faster experience. In the case of Stratocast the HTTP redirect overhead is negligible since it happens only at the beginning; once the connection is established it stays open and there is no need to route additional requests for that connection.
    On the other hand, running IIS ARR introduces some extra cost in terms of resources on the server (which was avoided in the case of Stratocast).
  • Health monitoring
    ARR provides predefined ways to monitor the health of the servers. In the case of Stratocast, the app monitors the health of servers in a custom way and manages the list of available servers using a custom Azure storage table.
  • URL rewrite
    URL rewrite makes it possible to hide complexity and internal details from the client; the requesting client never sees the rewritten URLs. This aspect was not relevant for Stratocast, but it might be a helpful feature for your application.

For a detailed list of ARR features you can take a look at Using the Application Request Routing Module.

In the case of Stratocast, due to the very dynamic nature of the routing information, an ARR-based solution seemed to be more complex with respect to the deployment automation, configuration and management.

In summary, the described pattern includes the following elements:

  • Configure the compute role to use a public load balanced endpoint as well as direct port endpoints that will be used to directly communicate with the role instances.
  • Allow initial requests to be load balanced through the Azure load-balancer.
  • One of the instances will perform the custom load balancing to ensure appropriate distribution of the load and "sticky" connections. We provided a good example of requirements for this; however, the routing logic will be specific to each application.
  • After the "right" server component is identified, the client will be redirected to that instance of the service using the instance input port.
  • The client will establish a session that will stay open for a longer period of time until it is closed.

In this blog post, using Stratocast as an example, we described a pattern for implementing custom load balancing in Windows Azure. We also discussed which requirements led to this design and what should be considered when evaluating potential alternatives such as IIS ARR.

Serving Users from a Globally Distributed Multi-tenant Azure Solution – The Stratocast Example


Authors: Jonathan Doyon (Genetec), Konstantin Dotchkoff (Microsoft)

 

Introduction

Running an app in Windows Azure allows you to address global markets. Once you have built an app for Windows Azure you can deploy it in a datacenter (DC) of your choice, and you can easily expand your presence to new regions by deploying the app in multiple DCs. You can install an isolated instance of the app in each region, or you can have a cross-datacenter deployment. Here are a few reasons why you might want a cross-datacenter deployment:

  • Serving customers across geographies from a single logical system - ease of use by exposing a single URL
  • Providing cross datacenter availability (in a case of a major site disaster)
  • Spreading load across multiple DCs for high performance and resiliency


The Stratocast solution, developed by Genetec, is an example of a multi-tenant solution distributed globally across multiple datacenters. Stratocast is a software security solution that lets users view live and recorded video, safely stored in Windows Azure, from a laptop, tablet or smartphone. The solution provides the ability to monitor multiple cameras and to play back recorded video.

In this article we describe how the solution addresses some specific requirements:

  • Minimize latency for video streaming
  • Data needs to be stored in a specific location (ideally close to the source, or at a specific location for data governance and compliance reasons)
  • All data for a tenant needs to be kept at the same location
  • Gradually upgrade subparts of the system

High-Level Concept

Although the Stratocast solution is deployed across multiple Windows Azure datacenters, it appears as a single logical instance to the end customer. The user interface (UI) of the system is completely stateless (from a server perspective) and can serve end users from any Azure datacenter. The middle-tier components of the solution require affinity based on the location of the tenant's data. Because of that, the two layers (UI and middle tier) are handled differently.

The end-user access to the web portal is distributed by Windows Azure Traffic Manager. In order to serve a client request from the "closest" Azure DC, the Performance load balancing method is used. In addition, Azure CDN can be used for the static content of the UI to further improve the experience.
Once the user has accessed the web portal, the location of the corresponding tenant is determined by performing a look-up against a central Azure SQL Database. This database has information about all tenants in the system and contains the URL of the cloud service responsible for serving the tenant. The location of the tenant is configured during initial provisioning - the customer is required to provide a valid country and state, which is used for the selection of the closest Azure datacenter.
After determining the location of the tenant (using the look-up), the web portal connects to the appropriate cloud service in a specific Azure datacenter. All persisted tenant configuration and data is co-located with the cloud service in the same DC. This ensures that end users are served from the closest geo-location, while in the background the data for one customer/tenant is kept together (potentially in a different location).
The solution is multi-tenant by design and each middle-tier cloud service deployment handles multiple tenants. For manageability, the system is also partitioned based on the size of a deployment (e.g. the number of connected cameras for each deployment) and there are multiple middle-tier deployments in each datacenter. This simplifies tenant management and allows for partial upgrades.
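The tenant look-up described above boils down to a single query against the central database. The following C# sketch is only an illustration, not the Stratocast code; the Tenants table, the TenantId and ServiceUrl columns, and the connection string are hypothetical.

using System.Data.SqlClient;

public static class TenantDirectory
{
    // Returns the URL of the middle-tier cloud service that owns the tenant,
    // or null if the tenant is unknown.
    public static string GetTenantServiceUrl(string connectionString, string tenantId)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT ServiceUrl FROM Tenants WHERE TenantId = @tenantId", connection))
        {
            command.Parameters.AddWithValue("@tenantId", tenantId);
            connection.Open();
            return command.ExecuteScalar() as string;
        }
    }
}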

With this topology, a tenant can be located in the West US Azure datacenter while a user accesses the solution from the east coast of the US and is connected to the web portal deployed in the East US datacenter. The web portal determines the "responsible" middle tier in West US and calls its services as necessary. The UI communicates with the middle tier in a loosely coupled manner. This is not only good practice because the communication potentially goes across datacenters; it is a recommended key pattern for communication between components of a cloud app in general (there is a great blog post on this pattern that you might want to take a look at).

Solution Architecture Overview

The following graphic provides an example of the topology with, for simplicity, two Azure datacenters and one middle-tier cloud service deployment in each:

Figure 1 – Deployment overview with two Azure datacenters


Stratocast uses multiple queues to decouple the components and layers of the solution.
The web portal consists of a web role responsible for the UI, a worker role that implements business logic, as well as a queue and Service Bus for asynchronous communication. The communication from the web role to the worker role goes through a queue (shown as WR Queue in figure 1); response communication from the worker role to the web role is implemented using Service Bus Topics.
Each middle-tier cloud service (i.e. server) has a queue for incoming requests (Server Queue in figure 1). The server picks up a request from the queue, processes it, and places the response on a "response" queue, which in turn is consumed by the web portal. Each web portal has its own queue for receiving messages from the middle tier (Portal Queue in figure 1).

Let's expand a bit on the flow of communication between the components. Based on a user interaction, the web portal sends a request for an operation to its worker role using the WR Queue, including a generated unique transaction ID in the message. The web portal UI displays a message that the operation is in progress (e.g. updating the camera configuration) and at the same time subscribes for a notification on a Service Bus Topic with that transaction ID. The web portal worker role dispatches the request to the "right" middle-tier cloud service, which could be in the same or in a different DC. It places the request on the appropriate Server Queue and includes the transaction ID and the address of its own Portal Queue to which the response should be sent.
After processing the request, the server sends a response back to the Portal Queue. Once the message is received, the web portal worker role performs business logic, persists information as required and posts a notification for the UI through the Service Bus Topic. The web server that is listening on the Service Bus Topic for that notification updates the UI with the outcome of the operation. Using Service Bus Topics for the UI notifications allows for long-running transactions (e.g. some operations, such as camera plug-in, may take a long time) and out-of-order processing.
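To make the notification part of that flow concrete, here is a small, hedged C# sketch using the Service Bus client library. It correlates the notification to the waiting UI via the transaction ID carried in the message's CorrelationId; the topic name "ui-notifications" and the per-transaction subscription are assumptions for illustration, not details of the Stratocast implementation (a real system would more likely reuse one subscription per web role instance).

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

public static class UiNotifications
{
    // Called by the web portal worker role after the middle-tier response arrives.
    public static void PublishOutcome(string connectionString, string transactionId, string outcome)
    {
        TopicClient topic = TopicClient.CreateFromConnectionString(connectionString, "ui-notifications");
        var message = new BrokeredMessage(outcome) { CorrelationId = transactionId };
        topic.Send(message);
    }

    // Called by the web role when the UI starts waiting for a specific transaction.
    // The subscription must exist before the worker role publishes the outcome,
    // otherwise the notification could be missed.
    public static string WaitForOutcome(string connectionString, string transactionId)
    {
        var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

        // One subscription per transaction, filtered on the correlation ID.
        string subscriptionName = "txn-" + transactionId;
        if (!namespaceManager.SubscriptionExists("ui-notifications", subscriptionName))
        {
            namespaceManager.CreateSubscription("ui-notifications", subscriptionName,
                new CorrelationFilter(transactionId));
        }

        SubscriptionClient subscription = SubscriptionClient.CreateFromConnectionString(
            connectionString, "ui-notifications", subscriptionName);
        BrokeredMessage notification = subscription.Receive();
        if (notification == null)
        {
            return null; // timed out waiting for the outcome
        }
        notification.Complete();
        return notification.GetBody<string>();
    }
}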

For the sake of completeness we need to mention that not all Stratocast communication goes through the queues. There are solution-specific requirements that demand "live" service calls. For example, video streaming is served back to the client directly from the server to minimize latency. There are also other operations that require "live" information and are implemented through web services hosted on the server worker role. With the exception of those "special" operations, all transactional operations are implemented using the asynchronous communication pattern described above, which is fundamental when building distributed systems.

Summary

The presented example demonstrates important considerations for cross-datacenter deployments. Apart from the main goal of serving customers from the closest datacenter, having the web portal separated from the middle-tier services, combined with the loosely coupled communication through queues, minimizes the impact of failures of individual components. Software maintenance of the web portal can also be performed more easily by redirecting traffic to another datacenter for the duration of the maintenance. The partitioning of the middle-tier components allows for gradual upgrades. The failover load balancing method of Azure Traffic Manager can be used to redirect traffic to another datacenter in the case of a disaster.
In general, the described techniques, such as distributing an installation across multiple datacenters, partitioning and decoupling the components of the solution, can improve the overall availability of the system.

Creating VM VHDs in Windows Azure


 

There are several ways to create virtual machines (VMs) in Windows Azure. One option is to create a new VM from the Azure management portal using a platform image from the Image Gallery. You can also create an image in an on-premises environment, upload the .vhd file to Windows Azure and then use it to create a virtual machine. The VHD format used in Windows Azure is the same as in an on-premises environment, which allows you to move .vhd disks between on-premises environments and Windows Azure.

Note: There are two important terms for .vhd disks: image and disk. An image is a generalized template you can use to create a new VM; a disk is a VHD used by a specific, configured VM and can be either an OS disk or a data disk. For more details, see Virtual Machine building blocks.

Once a VM is created (from image or booted from OS disk) you can attach additional data disks; thus, multiple disks can be associated with a single VM. (For details on creating a VM in Windows Azure, see Create a Virtual Machine Running Windows Server.)

The VHDs for VMs in Windows Azure reside in Azure Storage as page blobs. Hence, virtual machines benefit from the scalability and durability of the underlying storage service (e.g. with up to 6 replicas of a VHD). But there is another interesting aspect to this: the Windows Azure Storage service provides a great API that allows you to work directly with blobs. For page blobs you can also write to a range of bytes in a blob. Thus, if you follow the VHD Image Format Specification, you can create a valid VHD directly through the Azure API.

We conducted a POC to validate the feasibility of this approach. The POC scenario was defined and implemented as follows:

  • Create a VHD on-premises
  • Mount it as a drive on-premises and create folder and files
  • Un-mount the VHD and replicate it to an Azure page blob (initial sync)
  • After the initial sync mount the VHD on-premises again and change files
  • Send page updates for offset changes to the page blob in Azure (incremental sync)
  • Stop writing changes to the VHD
  • Create a VM in Windows Azure and add the VHD from the page blob as a data disk

After implementing and testing this scenario, we successfully verified that a) we were able to use the VHD in Azure as a data disk and b) all file changes were properly synchronized. While this scenario is fairly artificial, it demonstrates that you can directly modify VHDs in page blobs and, assuming you keep or produce a valid VHD format, you can use the page blob as a .vhd disk for an Azure VM.

In this POC we used the REST API Put Blob operation to create and initialize the page blob with the size of the VHD. To write content to a page blob we used Put Page, specifying the range of bytes at a specific offset to be written.
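The same two operations are also surfaced by the .NET storage client library, which may be more convenient than raw REST calls. The following is a minimal sketch, not the POC code itself; the container and blob names are placeholders, and it assumes the data being written is already aligned to the 512-byte page boundaries that the page blob API requires.

using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class PageBlobSync
{
    // Creates the page blob that will hold the VHD content.
    // Corresponds to the Put Blob REST operation for page blobs.
    public static CloudPageBlob CreateVhdBlob(string connectionString, long vhdSizeInBytes)
    {
        CloudBlobClient client = CloudStorageAccount.Parse(connectionString).CreateCloudBlobClient();
        CloudBlobContainer container = client.GetContainerReference("vhds");
        container.CreateIfNotExists();

        CloudPageBlob blob = container.GetPageBlobReference("synced-disk.vhd");
        blob.Create(vhdSizeInBytes); // size must be a multiple of 512 bytes
        return blob;
    }

    // Writes one changed range of the VHD at the given offset.
    // Corresponds to the Put Page REST operation.
    public static void WriteChangedRange(CloudPageBlob blob, byte[] data, long startOffset)
    {
        // Both startOffset and data.Length must be multiples of 512 (the page size).
        using (var stream = new MemoryStream(data))
        {
            blob.WritePages(stream, startOffset);
        }
    }
}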

A single-threaded replication of changes to a page blob is fairly simple to implement. Depending on throughput requirements, it might be necessary to spin up multiple threads for replication (as part of the POC we demonstrated increased throughput when using multi-threading). The sequence of writes is an important consideration: if processing is multi-threaded and operations happen in parallel, there will be a need to guarantee the write sequence. The Azure API offers support for that through the "x-ms-blob-sequence-number" property; however, for every write you'll need to make another API call to update the blob sequence number, which might have an impact on the overall throughput.

Another common requirement is compression of data for the transfer. This could be implemented in the following way: the client compresses the data and then writes it to a temporary blob in Azure; a worker role running in Azure takes care of decompressing the data from the temporary blob and then transferring it to the final page blob.
This pattern can also be combined with the previous requirement of guaranteeing the sequence of writes: the worker role can bring the requests into order before finalizing the write to the final blob. The process may sound easier than it is in reality, because it will also involve an additional queue and/or a storage table for control purposes. This pertains to a more general pattern of multi-threaded data transfer with guaranteed request ordering, which goes beyond the scope of this post. It is important to mention here that, while introducing some additional cost for the worker role running in Azure (as well as the queue and/or table, where the additional cost is probably negligible), this pattern is more flexible and allows you to address additional requirements specific to your app. Compression/decompression is a good example of such a requirement that helps you reduce bandwidth consumption and can increase the overall throughput of the transfer.
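For the client-side half of that compression idea, the standard GZipStream from the .NET Framework is sufficient. This is only an illustrative sketch under stated assumptions (a temporary block blob container named "staging", gzip as the compression format, and the offset encoded in the blob name); the worker role would reverse the process with a decompressing GZipStream before issuing the page writes.

using System.IO;
using System.IO.Compression;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class CompressedUpload
{
    // Compresses a changed VHD range and uploads it to a temporary block blob.
    public static void UploadCompressedRange(string connectionString, byte[] data, long startOffset)
    {
        byte[] compressed;
        using (var buffer = new MemoryStream())
        {
            using (var gzip = new GZipStream(buffer, CompressionMode.Compress, leaveOpen: true))
            {
                gzip.Write(data, 0, data.Length);
            }
            compressed = buffer.ToArray();
        }

        CloudBlobClient client = CloudStorageAccount.Parse(connectionString).CreateCloudBlobClient();
        CloudBlobContainer container = client.GetContainerReference("staging");
        container.CreateIfNotExists();

        // Encode the target offset in the blob name so the worker role knows
        // where to write the decompressed pages in the final page blob.
        CloudBlockBlob blob = container.GetBlockBlobReference("range-" + startOffset);
        blob.UploadFromByteArray(compressed, 0, compressed.Length);
    }
}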

In this blog post I described the outcome of a POC that demonstrated the ability to create or modify a VHD directly in a page blob through the Azure API, so that it can be used as a VHD disk for an Azure VM. In addition to the standard ways of creating a new VM directly in Azure, or uploading an existing one from an on-premises environment, this may open up interesting ways of creating disks and VMs in Windows Azure.

Handling the different resolutions in Windows Phone


There is a phrase that has been one of my greatest mantras ever since I read it on Twitter: the magic is in the details. And a Windows Phone feature that involves nothing more than a simple detail is precisely one of the most underused in the OS. That feature is its resolution.

In Windows Phone 8 we have the ability to handle three different resolutions, which I briefly explain below.

WVGA

This is the resolution that has existed since the launch of Windows Phone; there is not much to say about it. Its resolution is 480 x 800, its aspect ratio is 15:9, and it is the only one supported by both versions of the OS, the 7.x versions as well as Windows Phone 8.

WXGA

This is the resolution that is most compatible with today's screens (like the one in your living room). Its resolution is 768 x 1280, its aspect ratio is 16:9, and it is one of the two new resolutions that Windows Phone 8 is able to support.

HD (720p)

This resolution is probably the most familiar to you, given that before high-definition screens arrived it was the undisputed winner among displays; in fact, it is from this level on that a screen starts to be considered high definition. Its resolution is 720 x 1280, its aspect ratio is 15:9, and, like the previous one, it is only supported by Windows Phone 8.

An important detail you should consider is that we commonly assume the resolution is proportional to the size of the device, and that is not the case. For a good example, look at the following image.

image

Now you can compare their screen characteristics.

Model           Lumia 900 (white)    Lumia 920 (yellow)
Resolution      480 x 800            768 x 1280
Screen size     56 x 93 mm           58 x 97 mm

As you can see, the physical size of the screen is not a determining factor for its resolution: despite only a few millimeters of increase in screen size, the resolution is practically doubled on the 920 compared to the 900.

How to determine the resolution in my application

To find out the resolution a device has, and to decide the arrangement, visibility and layout of elements based on it, start by creating a new Windows Phone 8 solution. Inside this solution create a new folder and, in it, a new class called HelperResolucion. Your project should look like this.

image

 

Make the class public and also static.

namespace DiferentesResoluciones
{
    public static class HelperResolucion
    {
       
    }
}

Inside the class, create a public enumeration that covers the three possible resolutions.

public enum Resoluciones
{
    WVGA,
    WXGA,
    HD720P
}

Then create a public static method called ObtenerResolucion();

public static Resoluciones ObtenerResolucion()
{

}

Finally, based on the scale factor property found in the App class, you can determine the type of resolution you are working with, and with a couple of conditions you can handle this operation.

public static Resoluciones ObtenerResolucion()
{
    if (App.Current.Host.Content.ScaleFactor == 100)
    {
        return Resoluciones.WVGA;
    }
    else if (App.Current.Host.Content.ScaleFactor == 160)
    {
        return Resoluciones.WXGA;
    }
    else //App.Current.Host.Content.ScaleFactor == 150
    {
        return Resoluciones.HD720P;
    }
}

Once you have this class finished, you can call it from any page of your application. For this example we will use the overridden OnNavigatedTo() method of MainPage.xaml. One tip I suggest is to use the code snippet for the switch statement.

image

 

If you use the snippet properly, and in addition set the ObtenerResolucion() method as the condition, you will get all the possible options of the enumeration.

switch (HelperResolucion.ObtenerResolucion())
{
    case HelperResolucion.Resoluciones.WVGA:
        break;
    case HelperResolucion.Resoluciones.WXGA:
        break;
    case HelperResolucion.Resoluciones.HD720P:
        break;
}

Finally, we just need to demonstrate this exercise with some element, and the application background is an excellent choice. Set a different background color for each resolution.

protected override void OnNavigatedTo(NavigationEventArgs e)
{
    base.OnNavigatedTo(e);

    switch (HelperResolucion.ObtenerResolucion())
    {
        case HelperResolucion.Resoluciones.WVGA:
            ContentPanel.Background = new SolidColorBrush(Colors.Green);
            break;
        case HelperResolucion.Resoluciones.WXGA:
            ContentPanel.Background = new SolidColorBrush(Colors.Blue);
            break;
        case HelperResolucion.Resoluciones.HD720P:
            ContentPanel.Background = new SolidColorBrush(Colors.Purple);
            break;
    }
}

The result will depend on the emulator you run your application with. Remember that you can change the emulator type before running the application.

image

When you test it, the results should look like the following.

WVGA emulator (it doesn't matter which of the two you choose).

image

WXGA emulator

image

720p emulator

image

This way you can control how your app reacts and how it looks based on the class that determines the resolution of the device, and you will add an extra element that can be a great differentiator. Who knows? You could even make your application look completely different depending on the resolution!

You can download the sample code here.

Cross-Datacenter Disaster Recovery in Windows Azure – Example Solution


 

Introduction

Planning a High Availability (HA) and Disaster Recovery (DR) solution for an on-premises environment involves balancing business continuity requirements against the complexity and cost of implementation. It typically involves hardware and software redundancy as protection against failures. As is well known, HADR can be fairly expensive, and raising the HA targets can lead to exponentially higher implementation cost. Windows Azure provides various built-in platform capabilities for HADR that can help you reduce complexity and cost.

In this blog post I describe an example of a Windows Azure cross-datacenter DR solution based on real-life projects I was involved in. I also discuss additional considerations for implementing DR for applications running in Windows Azure.

Applications running in Azure benefit immediately from the high availability of the underlying services provided within a datacenter. In addition, Azure offers a number of services that help you to deliver a DR solution or cross-datacenter availability if required. The article Business Continuity for Windows Azure describes the fundamental concepts and Azure's built-in capabilities. However, applications will still need to be designed and prepared to take advantage of those capabilities in order to provide high availability to the end user. Instead of using hardware redundancy as in an on-premises world, you need to think about utilizing redundant services of the underlying cloud platform (e.g. PaaS or IaaS) and design for resiliency. You'll need to run multiple instances of compute roles, handle transient faults and potentially use fallback strategies for identified single points of failure. Because cloud apps tend to be composed of multiple services, it is important to consider what is necessary to achieve HADR for the individual services of your app (instead of thinking about one approach for the entire app). The following two white papers provide excellent guidance on strategies for your Windows Azure application to achieve HADR: Disaster Recovery and High Availability for Windows Azure Applications, Failsafe: Guidance for Resilient Cloud Architectures.

Cross-Datacenter DR Solution Architecture – Example Solution

Before deciding on techniques for achieving high availability and disaster recovery, it is important to define the requirements and expected availability for the sub-services/components of your solution. Typically, sub-components will have different requirements for availability, scalability and performance. For example, from a use-case perspective, changing the settings and configuration of an application can have different performance and availability targets than performing a business transaction. Based on the type of service, the requirements and the implementation details, you will apply different techniques.
The following example distinguishes several categories of services and applies different strategies to them. Figure 1 shows a simplified high-level overview of a cross-datacenter deployment of the system.

Figure 1 – Deployment overview with two Azure datacenters

A management portal provides functionality for configuring and managing different aspects of the solution, including provisioning of tenants, managing accounts and users, monitoring, notifications, billing, etc. The functionality exposed through the management portal UI is supported by a number of services, which, for simplicity, are omitted in figure 1. For the purposes of this article only the provisioning service is relevant, and it is illustrated above. The management portal and the underlying services use Azure SQL Database to persist the configuration.
The application itself, shown as Application Services in figure 1, consists of another set of cloud services hosting the application UI (separated from the management portal) and application-specific services. It uses Azure SQL Database as well as Windows Azure Storage (WAS) on the backend for persisting application data.

Each of the described components of the system has different HADR requirements, and I'll discuss those in more detail separately.

Management Portal and Services

The management portal is considered extremely important because it provides critical operations management capabilities that have an impact on all tenants. It is deployed in an 'active/active' configuration using Traffic Manager's performance policy to distribute traffic across the datacenters (see Business Continuity for Windows Azure for details on 'active/active' deployments). The data is replicated between the datacenters using SQL Data Sync. For this deployment option the application has to be designed to operate across datacenters. One major challenge is ensuring data consistency. There are several ways to achieve this, which depend heavily on the app specifics and workload:

  • The application could simultaneously write to multiple sites - directly or using intermediate queue(s)
  • Logical partitioning of data and replication to secondary location(s)
  • Continuous synchronization at the database level (e.g. using SQL Data Sync)

In any case, you should consider potential conflicts and conflict resolution handling based on the application logic. If it is too challenging to guarantee data consistency for your application, you can also consider an 'active/passive' deployment. Be aware that in that case you'll have running instances in the secondary location that are not utilized, and even then, when you perform a failover between DCs, you may still run into data consistency issues that you need to plan for.

The provisioning service, running in both datacenters, is responsible for the deployment of the application itself. This happens in scale units, so the service is used not only for the initial deployment but also for scaling. You can think of this service as an 'active/passive' type of deployment, because it is not used in the secondary datacenter unless there is a need for failover.

Application

Finally, the app itself is deployed in only one datacenter. In the case of a disaster it can be redeployed in the secondary datacenter, which is automated by the provisioning service. The application packages are kept in a blob-storage-based library in the secondary datacenter. The provisioning service can be scaled out to greatly accelerate the provisioning of the application.
The app uses two types of data: relational data and additional data stored in blob storage. Two different approaches are used to handle recovery of that data.
The relational data is more critical for the app and is restored from a database export (bacpac) kept in the secondary datacenter. The frequency of exports from the primary site defines the RPO. The RTO for the application depends on the time necessary to import the bacpac (and on the time necessary to deploy the app services, which is typically faster than the bacpac import). For more information on how to export a bacpac file to blob storage, see Business Continuity in Windows Azure SQL Database. Depending on your requirements, you might consider reducing the RTO by keeping a synchronized copy of the database in the secondary datacenter (e.g. by using SQL Data Sync); you need to estimate the cost differences between the two approaches. Once the new SQL Database is restored and the application services have been spun up, the traffic can be rerouted to the secondary datacenter using Traffic Manager and the app is fully operational.
The data stored in blob storage relies on the built-in geo-replication of Windows Azure Storage. In this specific example, the app is operational even without access to the blob data. Hence you can perform a failover to the secondary datacenter even if that data is not immediately available. Currently (as of writing) a failover of geo-replicated blob storage will be performed by Microsoft in the case of a major disaster or datacenter-wide outage, when the primary site cannot be recovered in a reasonable timeframe. If you want full control to perform a failover to a secondary DC at any given time, and/or a failover for individual sets of blobs, you can keep copies of your blob data in the secondary datacenter. This is expected to have a higher cost, but it gives you much greater flexibility for failover scenarios to a secondary site.
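If you choose to keep your own copies of blobs in a secondary datacenter, the storage client library's server-side copy can do the work without routing the data through your own compute. The sketch below is only an illustration of that option, not part of the described solution; the container name, the assumption that both accounts use matching names and hold block blobs, and the use of StartCopy (called StartCopyFromBlob in older versions of the storage client library) are my assumptions.

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class BlobDrCopy
{
    // Starts a server-side copy of every blob in a container to the
    // corresponding container in the secondary-datacenter storage account.
    public static void CopyContainerToSecondary(
        string primaryConnectionString, string secondaryConnectionString, string containerName)
    {
        CloudBlobContainer source = CloudStorageAccount.Parse(primaryConnectionString)
            .CreateCloudBlobClient().GetContainerReference(containerName);
        CloudBlobContainer target = CloudStorageAccount.Parse(secondaryConnectionString)
            .CreateCloudBlobClient().GetContainerReference(containerName);
        target.CreateIfNotExists();

        foreach (IListBlobItem item in source.ListBlobs(null, useFlatBlobListing: true))
        {
            // Assumes the container holds block blobs.
            var sourceBlob = (CloudBlockBlob)item;
            CloudBlockBlob targetBlob = target.GetBlockBlobReference(sourceBlob.Name);

            // The copy runs asynchronously inside the storage service; poll CopyState
            // to confirm completion. For a private source container, pass a
            // SAS-authenticated URI instead of the plain blob URI.
            targetBlob.StartCopy(sourceBlob.Uri);
        }
    }
}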

Summary

The described example demonstrates several techniques for achieving cross-datacenter HADR for the different components of an application. Typically, in a distributed environment you should consider each component/service individually and decide on the appropriate techniques.
In the example above, the management portal is deployed in an 'active/active' configuration, which provides cross-datacenter availability. The secondary datacenter is prepared for a disaster by having the provisioning service up and running and ready to redeploy the app. In the case of a disaster the application will be redeployed using service packages and data copies stored in blob storage in the secondary datacenter. This scenario is a hybrid of the 'active/passive' and 'redeploy on disaster' strategies described in Business Continuity for Windows Azure. For the given example this is considered the best trade-off between RTO and cost. I have also described possible variations of the example scenario and the considerations that may apply depending on your solution's specific requirements.


Custom Provisioning Service for Windows Azure Applications – An Example Pattern


Introduction

Windows Azure provides a variety of ways to automate the management and deployment of Azure services, for example through scripting from the command line, programmatically calling the Management API, or using tools such as System Center Operations Manager. In this blog post I introduce a pattern for building a scalable custom provisioning service for a Windows Azure application.
Fully automating the deployment procedures for an app is always a good idea in order to have a consistent and predictable process. This is especially important if your app has a complex structure of services and/or you want to automate deployment where you anticipate frequent dynamic changes in the environment (e.g. when serving a large number of tenants and regularly needing to adjust deployed resources to serve additional tenants).

Example Pattern

The following graphic provides a high level overview of the architecture of a provisioning service.

Figure 1 – Provisioning Service Overview

The provisioning service can receive requests from different clients. It can be used for manual provisioning of tenants from a management portal, automated from a monitoring solution for dynamic scaling, or invoked from the application itself. For example, the application might need to proactively create additional resources for planned workloads, based on predefined rules, business logic or a schedule. In addition, the service can be deployed in another datacenter (DC) and used in the case of a disaster to spin up the application on a secondary site. In one of my previous blogs I described a cross-datacenter disaster recovery solution that uses a provisioning service to redeploy the app in a secondary DC.

Once a request is received, the dedicated provisioning service takes care of the deployment and configuration of app services. A good practice is to manage deployment in scale units. Ideally, the scale unit for an application should be defined in relation to the expected workload (e.g. number of users). The scale unit typically defines a number of different resources for a specific workload, e.g. X instances of a web role, Y instances of a worker role, additional storage, etc., necessary to serve a specific number of users. The provisioning service can perform deployments in scale units, i.e. provision the required resources for a specific number of scale units.

The service uses queue(s) to decouple the processing. Optionally, different queues can be used to manage different priorities or potentially to decouple provisioning of independent components of the app.

Worker roles are responsible for processing the queue messages/jobs and use the Azure Management API to programmatically create the necessary Azure resources, as well as to configure them as appropriate. Service packages and reference app data stored in Windows Azure Storage (WAS) are used during provisioning. The worker roles can be fanned out dynamically based on the current workload (i.e. queue size) to speed up deployment. For example, in the case of a disaster recovery failover to a secondary site for a large deployment, all resources need to be provisioned at once. In that case the worker roles can be scaled out in order to accelerate processing and reduce RTO.
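The skeleton of such a worker role is a simple dequeue-process-delete loop over an Azure queue. The following C# sketch shows only that skeleton; the queue name "provisioning-jobs", the job payload format, and the ProvisionScaleUnit step are hypothetical placeholders, and the actual resource creation would go through the Service Management API as described above.

using System;
using System.Threading;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public static class ProvisioningWorker
{
    public static void Run(string storageConnectionString)
    {
        CloudQueueClient queueClient =
            CloudStorageAccount.Parse(storageConnectionString).CreateCloudQueueClient();
        CloudQueue queue = queueClient.GetQueueReference("provisioning-jobs");
        queue.CreateIfNotExists();

        while (true)
        {
            // Hide the message for 5 minutes while the job is being processed.
            CloudQueueMessage message = queue.GetMessage(TimeSpan.FromMinutes(5));
            if (message == null)
            {
                Thread.Sleep(TimeSpan.FromSeconds(5)); // queue empty, back off briefly
                continue;
            }

            // Placeholder: parse the job (e.g. tenant ID and number of scale units),
            // call the management API to create/configure the resources,
            // then record the outcome in the deployment repository.
            ProvisionScaleUnit(message.AsString);

            queue.DeleteMessage(message);
        }
    }

    static void ProvisionScaleUnit(string jobPayload)
    {
        Console.WriteLine("Provisioning job: " + jobPayload);
    }
}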

The outcome of the provisioning jobs is persisted in a repository (e.g. SQL Database). This repository contains the entire "footprint" of the deployment (i.e. all currently provisioned resources). It can be used to list or visualize all resources being utilized, organized by app specific boundaries (e.g. currently running services in scale units per tenant).

Summary

In conclusion, I have provided a short overview of a pattern for automating app deployment. In general, it is beneficial to fully automate deployment, and abstracting it as a service allows you to use it in a consistent and unified way for initial deployment as well as for scaling in different scenarios, such as manual provisioning through a UI, or automation as part of the app logic or from a monitoring solution. For sure, this is not the only option for implementation, but hopefully reading through this article has generated some ideas for your application.

Writing a Windows 8 App with Visual Studio and Team Foundation Service


Summary

A lot of people ask me how they can write a Windows 8 App with Visual Studio and Team Foundation Service (our online, hosted version of Team Foundation Server). In fact, I was talking recently with a friend I used to code with about 10 years ago. It had been a long time since he programmed in C# (nearly 10 years?) as he now spends most of his time writing low-level driver code. He challenged me on how fast a "new" developer could get started writing a Windows 8 App, asking whether a new developer could get started in under one day. Challenge accepted. Challenge accomplished. Here is my journey through documenting how a new developer can get started with Windows 8 application development using Visual Studio & Team Foundation Service.

 

Step 1 - Where to Start?

This MSDN website has a wealth of information on how to get started.

 

Step 2 - Select your Tools (Visual Studio)

I already use Visual Studio Ultimate w/ MSDN because it lets me architect my applications better and debug my applications a lot faster using IntelliTrace: http://www.microsoft.com/visualstudio/eng#products/visual-studio-ultimate-2012

 

Step 3 - Select your Programming Language

One of the things I like most about application development on Windows 8 is the access to a variety of programming languages & frameworks. I prefer C# & XAML, which is described here: http://msdn.microsoft.com/en-US/windows/apps/bg125376 

 

Step 4 - Manage your Project using Team Foundation Service

Team Foundation Service, available at http://tfs.visualstudio.com/en-us/tfs-welcome.aspx, is free and is a great tool to manage your projects. I used TFS for this project to help me:

* Keep track of my user stories

* Create functional requirements

* Storyboard my user experience

* Use source code control & a repository

* Manage my project backlog (i.e. future ideas)

* Manage my test cases

* Solicit feedback from my friends via the UAT Feedback Tool

 

Step 5 - Create User Stories & Tasks

To help myself stay organized, I created user stories and tasks right within TFS.

 

Step 6 - Storyboarding in TFS & PowerPoint

As a visual person, I always find it easier to see applications come alive before I code them, so I used the storyboarding capabilities of TFS & PowerPoint.

Step 7 - Begin Coding with the Sample Apps

I found these sample applications to be incredibly helpful: 

http://code.msdn.microsoft.com/windowsapps and http://code.msdn.microsoft.com/windowsapps/Windows-8-Modern-Style-App-Samples

 

Step 8 - Obtain your Developer License

 

Step 9 - Obtain Feedback with the TFS UAT Feedback Request Tool

Right from your TFS as a Service dashboard, you can solicit feedback from your UAT users.

 

 

Step 10 - Obtain your Free Windows Store Developer Account from your MSDN Subscription

If you are a Premium or Ultimate MSDN subscriber, you have access to a free Windows Store Developer Account as part of your included MSDN benefits.

 

 

Step 11 - Certify your App

Microsoft recommends that you always certify your application before submitting it. This will help you identify blockers to publication before you go through the longer process.

 

 

Step 12 - Publish your App

Most developers, myself included, find this to be the most involved step. I encountered a few surprises along the way that I wasn't prepared for. For example, I had declared that I needed the Internet (Client) capability in my packaging manifest file. As such, I was required to include a privacy statement both in the submission of my application and in the Settings charm. But my application doesn't really need internet capabilities, so I just removed that capability.

There is a great dashboard and step-by-step guide for publishing your applications, here: http://msdn.microsoft.com/library/windows/apps/hh694062.aspx 

 

 

Conclusion

And there you have it! A 12-step guide to publishing a Windows 8 App in under a day. Let me know how your experiences go!

 

 

 

 

SharePoint 2010 Guru - Adding Charts to Standard Webparts and Visual Webparts


Congratulations to our SharePoint Guru winner for May 2013! To find all the competitors for May (and more information about this monthly contest), see the Wiki article: TechNet Guru Awards, May 2013.

 

Matthew took the top spot with this great article:

SharePoint 2010: Adding Charts to Standard Webparts and Visual Webparts

 

Here are our three winners for SharePoint 2010:

 

SharePoint 2010 Technical Guru - May 2013

Gold Award Winner: Matthew Yarlett - Adding Charts to Webparts
  • "I know I'll need this code some day."
  • "The article of Matthew is written in such a way that even if you have almost no experience with SharePoint development, you would be able to build a working proof-of-concept version by following the step by step guide he wrote."
  • "Interesting solution, and it is clear."

Silver Award Winner: Christopher Clement - How to filter a list dynamically
  • "Clever thinking, I like it!"
  • "It is a good workaround."

Bronze Award Winner: Christopher Clement - How to delete crawled property
  • "Handy tip!"
  • "Good tip!"

Congratulations to Matt for leading the SharePoint 2010 category this month. It's a perfectly written article that is exactly what we like to see.

For an excerpt from the article, I'm going to drop you right into the middle and give you steps 9 & 10:

...

9. Now we're nearly ready to deploy... but first we need to add the HTTP handler to the web.config file of the web application you'll be deploying the webpart to. To do this, you'll need to add an entry to both the <handlers> section and the <appSettings> section of the web.config file.

Add the Http Handler for the chart images:

<handlers>
    <add name="ChartImageHandler" verb="*" path="ChartImg.axd" type="System.Web.UI.DataVisualization.Charting.ChartHttpHandler, System.Web.DataVisualization, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
</handlers>

Add a new key to the <appSettings> section to configure (among other things) the location the chart image files are written to (see Image File Management (Chart Controls) for more information):

<appSettings>
    <add key="ChartImageHandler" value="storage=file;timeout=20;dir=c:\Temp\;" />
</appSettings>

 

10. Lastly, deploy your solution to SharePoint and add your webpart to the page. Hopefully it will look something like the one below (or exactly the same, if you stole my data)!


The only real difference when adding a chart to a Visual Webpart is that you need to register the charting controls on the ascx page and add the chart control directly to the page's markup (you don't need to initialize the control in the OnInit event). A rough sketch of that markup follows.
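
As a rough illustration (this markup is not from Matthew's article; the control and series names are arbitrary examples), the top of the .ascx might look like this, using the same 3.5 charting assembly registered in web.config above:

<%@ Register Assembly="System.Web.DataVisualization, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
    Namespace="System.Web.UI.DataVisualization.Charting" TagPrefix="asp" %>

<asp:Chart ID="ExampleChart" runat="server" Width="600px" Height="400px">
    <Series>
        <asp:Series Name="ExampleSeries" ChartType="Column" />
    </Series>
    <ChartAreas>
        <asp:ChartArea Name="ExampleChartArea" />
    </ChartAreas>
</asp:Chart>

The series can then be populated from your list data in the code-behind, just as the article does for the standard webpart.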

 

-------------------------------

 

Read the entire article here:

SharePoint 2010: Adding Charts to Standard Webparts and Visual Webparts

 

Thanks again to Matthew for the fantastic contribution!

   - User Ed

SBCraft - Small Basic Featured Game


Go check out SBCraft and leave a comment with your thoughts about the game!

 

Download SBCraft from TechNet Gallery

  

More information from the creator of SBCraft, Ardiezc Quazhulu:

A Minecraft-like game. To play, just extract the archive (the password is "sbcraft") and run SBCraft.exe. You DO NOT have to put the SBCraft folder into C:\; just run the executable.

You HAVE to read the README.txt file. It will tell you everything.

 

About Ardiezc: I am 14 years old, and I began programming TI-84 calculators when I was 12. I then broadened my senses, and taught myself some Small Basic. I love video games, and even more, trying to figure out how they work. When I get older I want to be a computer software engineer for the rest of my life. I am now working on "SBCraft" as my main project.  The password is "sbcraft" (No Quotes).

 

Thanks Ardiezc for this great contribution! Leave any questions or feedback below!

   - Ninja Ed

How to Deploy SAP on Microsoft Private Cloud with Hyper-V 3.0


Windows Server 2012 includes powerful virtualization technologies that enable customers to increase flexibility, reduce hardware costs and improve performance.

With this blog we are releasing detailed documentation to customers and partners on how to deploy and configure Windows Hyper-V as a consolidation platform for SAP.

 

1. SAP on Windows Hyper-V Private Cloud Documentation Structure

The SAP on Microsoft Private Cloud chapters are organized as follows:

 

Chapter 1. Microsoft Private Cloud Solution for SAP: Hardware, Network & SAN - details the server, network and storage requirements.

Chapter 2. Details how to move existing Hyper-V 2.0/3.0 or VMware VMs to the Microsoft Private Cloud.

Chapter 3. Details the recently released record SAP 3-tier benchmark (42,000 benchmark users, 240,000 SAPS).

Chapter 4. All SAP on SQL Server customers are advised to implement several basic security hardening procedures documented in this blog.

Chapter 5. Details how to monitor the Private Cloud infrastructure; integration with SAP LVM is discussed in this blog.

Chapter 6. Microsoft Private Cloud Solution for SAP: Configuration of Cluster Shared Volumes, VHDX, Disks & LUNs.

Chapter 7. HA/DR concepts are detailed in SAP High Availability in virtualized Environments running on Windows Server 2012 Hyper-V Part 1: Overview.

Chapter 8. System Copy and Backup/Restore will be documented in this chapter.

Chapter 9. Microsoft Private Cloud Solution for SAP: Landscape Design - details how to consolidate a SAP landscape onto Hyper-V, how to size VMs and how many physical servers are required.

 

2. SAP on Windows Hyper-V Private Cloud Documentation - Question & Answer

Customers and Partners with questions are welcome to post in this blog.

 

 

 

 

1. Microsoft Private Cloud Solution for SAP: Hardware, Network & SAN


Microsoft Private Cloud for SAP requires specific Hardware, Software, Network and Storage configurations. 

These configurations have been tested in lab environments and piloted at customer sites. 

This document details the CPU, RAM, Network and other configurations recommended to achieve consistent stable performance. 

The file 1. Microsoft Private Cloud Solution for SAP: Hardware, Network & SAN.docx is attached to this blog.

Customers and Partners are welcome to post questions in this blog.

thanks

 

 

 

 

6. Microsoft Private Cloud Solution for SAP: Configuration of Cluster Shared Volumes, VHDX, Disks & LUNs


Microsoft Private Cloud for SAP requires specific configuration of Cluster Shared Volumes, Virtual Hard Disks and LUNs. 

These configurations have been tested in lab environments and piloted at customer sites. 

This document details the recommended configurations and naming conventions.  It is strongly recommended to follow this guidance. 

The file 6. Microsoft Private Cloud Solution for SAP: Configuration of Cluster Shared Volumes, VHDX, Disks & LUNs.docx is attached to this blog.

Customers and Partners are welcome to post questions in this blog.

thanks


9. Microsoft Private Cloud Solution for SAP: Landscape Design


Microsoft Private Cloud for SAP is built on fixed Virtual Machine and physical server "building blocks".

These configurations have been tested in lab environments and piloted at customer sites. 

This document details how to consolidate an entire SAP landscape onto the four different Virtual Machine sizes and how to calculate how many physical servers are required.

The file 9. Microsoft Private Cloud Solution for SAP: Landscape Design.docx is attached to this blog.

Customers and Partners are welcome to post questions in this blog.

thanks

 

"Remove Mapping" hassles in TFSPreview

$
0
0

I'm happily crunching away on my new machine, setting up Visual Studio and such. I've mapped a workspace using TFSPreview to my c:\dev folder. Somehow I accidentally wound up with a second mapping to c:\dev. Attempting to pull down any files results in the following wonderful error message:

In Source Control Explorer, the workspace shows as "not mapped", but when I try to set the mapping, up comes the same useless dialog seen above.

I attempted to go the long route in the Visual Studio IDE. File / Source Control / Advanced / Remove Workspaces. Again, up comes the dialog of doom, and again I'm left with no route to remove or modify the offending workspace.

RESOLUTION: For starters, I channel my inner Esteban Garcia (one of my favorite Florida TFS gurus) and open the "VS2012 x64 Native Tools Command Prompt". Then I type "tf workspace" and remove the offending entries. If you get an error message that it can't resolve the workspace, type "tf workspaces" on the command line to get a list of workspaces you can edit. Once you've removed the offending workspace, you can re-add one for a different local location and avoid the conflict. A rough sketch of the commands is below.
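
As a rough sketch (the collection URL, workspace name and owner below are hypothetical placeholders; run "tf workspaces" first to see your own), the cleanup looks something like this:

rem List the workspaces the server knows about for this collection
tf workspaces /collection:https://youraccount.tfspreview.com/DefaultCollection

rem Delete the duplicate workspace by workspacename;owner
tf workspace /delete MYMACHINE;you@example.com /collection:https://youraccount.tfspreview.com/DefaultCollection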

Posting the full error message for the SEO gods in the hope this helps someone else out in a sticky situation.
Microsoft Visual Studio
Remove Mapping
Multiple identities found matching workspace name
Please specify one of the following workspace specs:

Technical Calendar - July 2013


Boston Windows Dev Meetup events

Northeast User Group Map and Calendars

[July 2013 event calendar]
Event Legend:

Microsoft-sponsored (FREE)

Microsoft community-sponsored (FREE)

Other community-sponsored (free or nominal charge)

Registration fee required

[Sample Of Jun 29th] How to use bing search API in Windows Azure


 


Sample Download:

CS Version: http://code.msdn.microsoft.com/How-to-use-bing-search-API-4c8b287e

VB Version: http://code.msdn.microsoft.com/How-to-use-bing-search-API-dfde7b10 

The Bing Search API offers multiple source types (or types of search results). You can request a single source type or multiple source types with each query. For instance, you can request web, image, news, and video results for a single search query.
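
As a rough, hedged sketch (not taken from the linked samples; the Composite endpoint, the Sources and Query parameters, and the account-key basic authentication are assumptions based on the Windows Azure Marketplace version of the API), a combined query from C# might look like this:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

class BingSearchSketch
{
    static void Main()
    {
        // Placeholder: your Windows Azure Marketplace account key
        const string accountKey = "<account key>";

        var client = new HttpClient();

        // The Marketplace uses basic authentication with the account key as the password
        var credentials = Convert.ToBase64String(Encoding.ASCII.GetBytes(":" + accountKey));
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", credentials);

        // Composite query asking for web, image and news results in a single call
        var url = "https://api.datamarket.azure.com/Bing/Search/Composite" +
                  "?Sources=%27web%2Bimage%2Bnews%27&Query=%27windows%20azure%27&$format=json";

        string json = client.GetStringAsync(url).Result;
        Console.WriteLine(json);
    }
}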

You can find more code samples that demonstrate the most typical programming scenarios by using Microsoft All-In-One Code Framework Sample Browser or Sample Browser Visual Studio extension. They give you the flexibility to search samples, download samples on demand, manage the downloaded samples in a centralized place, and automatically be notified about sample updates. If it is the first time that you hear about Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage http://1code.codeplex.com/.

Inverted Charting


So I considered naming this week's post "Climb Up On My Roof Sunny Boy", but I'd probably get sued by Al Jolson's estate. Though, as we're now officially inverted here at chez Derbyshire, it did seem appropriate. What I wasn't expecting was the domestic strife that is a direct result. They don't tell you about that in the glossy brochures...

...(read more)