
Welcome. Sign in to Visual Studio.


Applications today are leveraging the cloud to deliver personalized experiences and offer new capabilities. So it’s no surprise that the tool used to build those applications is also putting the connected developer at the center of the IDE.

In Visual Studio 2012, a few features already offered connected experiences that brought online services to specific parts of Visual Studio. Team Explorer was one of the first, connecting developers to team development and collaboration tools in the cloud through Team Foundation Service. The Windows Store integration in Windows Store projects lets you reserve, associate, and publish your Windows Store applications from within the IDE.

In Visual Studio 2013 Preview you can sign in to Visual Studio with a Microsoft account to enable features like synchronized settings that will roam with you to your other Visual Studio devices. This is just the beginning of a personalized and productive connected experience that over time will include more features taking advantage of the primary Microsoft account to deliver value to you, the developer.

In this post, I want to share some of what’s going on under the covers: the concepts we’ve defined as part of this new capability and how we’ve arrived at the experiences that make up the connected IDE.

One human, many online identities

Our first step was to understand how developers use online identities across their work and personal lives. Most developers we surveyed actively used two or more Microsoft accounts in their regular development. Some online identities were created to represent their work personas and manage assets associated with an organization like their employer or a consulting client. Other identities were created to be shared and represent a team activity, like credentials used to publish apps on the Windows Store. Among the online identities a developer used, one was often the primary account for personal activity such as email, recognition, and other personal information. Typically this primary online identity was also associated with mobile devices like their Windows 8 tablets and phones.

To model how developers work with multiple online identities, we are introducing a top-level online identity that you establish by signing in with your existing Microsoft account. It is the primary online identity for the Visual Studio IDE and represents you, the human. This identity is used to synchronize your settings across all your devices and stays active even when you use a feature like Team Explorer or Store publishing with its own connections. You can sign in to Visual Studio on all your devices with this personal identity, and Visual Studio will download your preferred settings, like theme and key bindings, and keep every device signed in under this identity in sync. We’ll have more detail about how we built the roaming settings experience and the settings we roam in another post.

To enable switching between connections that may use different identities without constant credential prompts, we added secure credential storage for the connections you’ve used. Team Explorer now uses these stored connections to remember credentials for multiple Team Foundation Service accounts. Team Explorer can also switch between team projects in different Team Foundation Service accounts, each with its own identity, without prompting you to authenticate each time you switch accounts.

Visual Studio automatically keeps you signed in to your primary online identity and remembers the credentials, so settings immediately start roaming and you can quickly access Team Foundation Service accounts without having to enter your password each time. The credential storage is specific to the Windows user and is only available to that user. To disconnect a connection you need to manually sign out, and Visual Studio will remove those credentials from the device.

Welcome. Sign in to Visual Studio.

One of the important benefits of synchronizing your Visual Studio settings is to make setting up a new device quick and easy. To get you up and running on new machines more quickly we redesigned the first launch experience to integrate your online identity so Visual Studio starts up with your preferred settings.

Below I’ll describe the two “first launch” experiences you will see: the true first launch, on your first Visual Studio 2013 device, where you establish a profile and associated settings; and the first use on each subsequent device.

The first time you sign in on your first Visual Studio 2013 device, we’ll ask you for some information to personalize your profile as well as your preferred theme color and initial environment settings. We’ll remember these choices for you. If you sign in to a new device with Visual Studio 2013, we will download and set your choices automatically. Of course you can always change these and other settings any time and we’ll make sure they roam to all your devices.

First Launch Sign in dialog, enter credentials, select default settings and theme

You can use any valid Microsoft account to sign in to the Visual Studio 2013 Preview. For the best experience, we recommend signing in with the Microsoft account associated with your MSDN subscription or Team Foundation Service account. If you have multiple Microsoft accounts, just pick the one you use most often, like the account associated with your Windows 8 device. For now we only support Microsoft accounts, but we are looking at expanding our options in the future.

The identity card

If you skipped signing in when first running VS2013, you can sign in anytime from the identity card in the upper right corner of the IDE. Once you sign in, the identity card will give you quick access to useful identity information: your name and avatar, active TFS account or server, team project, and username as well as shortcuts to other connected IDE tasks.

The identity card

The Account Settings dialog also enables you to access your Visual Studio profile and to sign out from Visual Studio. When you sign out of Visual Studio from the Account Settings dialog, we disconnect your primary online identity. After signing out, your personal information is removed from the identity card, and Visual Studio stops roaming settings to or from this device but leaves behind the last settings we synced before you signed out. Signing out from Account Settings is not a global sign-out, so you will still need to sign out of other connected experiences within Visual Studio separately.

Why have a 14 day trial on a Preview release?

We think many of you will sign in and leverage the capabilities that come with signing in, so we want to make sure our online services can handle all of our users registering and synchronizing settings across all their devices with Visual Studio 2013. Leading up to this Preview release we did load simulation internally, and we wanted to extend this verification to real use. In the coming weeks, we will be monitoring service health, measuring service responsiveness, improving performance, and responding to live site issues as they come up, as well as reviewing your feedback on all the connected experiences. By asking all of you to sign in to this pre-release, we hope to gather the usage data needed to scale out and support millions of connected users by the time we ship.

The 14 day trial period lets users download and use the product offline and then sign in at a time that works for them. As your trial gets close to expiring, we’ll remind you to sign in with notifications in the new notification hub. At the end of the trial period you will be required to sign in to unlock Visual Studio, so don’t wait until the last minute.

2 day and 7 day notification to update license

When we release Visual Studio 2013, we will support the same ways to unlock the product as Visual Studio 2012 including volume licensed builds and entering your own product key. Once you unlock Visual Studio with a product key you can still optionally sign in later to start roaming settings across all your devices.

What to do when there is a service outage?

We work very hard to offer a reliable service with minimal downtime, but downtime can still occur, whether from scheduled maintenance that improves the service or from an unscheduled event when we run into trouble. Features like push notifications for roaming settings and periodic polling make sure your Visual Studio connection stays up to date and minimize any impact to you if an outage does occur. We take every outage on our live sites very seriously, with dedicated response teams that respond to automated monitoring reports as well as customer feedback.

If you encounter a problem with the experiences I described, the first place to check is the visualstudio.com service status site. This is where our ops team will publish any outages that affect visualstudio.com, including those that impact the connected experiences in the Visual Studio client. We’ll keep this site updated with progress as an incident is investigated and follow up with a wrap-up postmortem once it is resolved. To ask a question about any service on the live site, use the Team Foundation Service Forum site.

This is just the beginning…

There are many new opportunities to personalize and improve your Visual Studio experiences as we connect you to new cloud services and capabilities. You’ll see more features throughout Visual Studio use your primary identity to connect to online services and expose new connected features. You’ll also see Visual Studio do a better job of remembering credentials for more connected experiences. Stay tuned for more on these and other changes in a later post.

Feedback

We want to hear your feedback about the new connected IDE experiences to make sure we build the best product for you. As you try these new experiences, sign in with your Microsoft account and roam your settings, then use Send-a-Smile to tell us what is working well and what areas you would like to see us improve. If you find a bug, use the Connect site to let us know. Bugs logged through Connect go directly onto the engineering team’s backlog and are also available for other customers to follow to resolution. If you have ideas for what else you’d like to see, create a suggestion on User Voice for the community to vote on.

Finally, thank you for taking the time to try out our features and letting us know what you think.


Anthony Cangialosi– Lead Senior Program Manager, Visual Studio Platform Team

Short Bio – Anthony Cangialosi is a lead program manager for the Visual Studio platform team, which works on the core features that all teams in Visual Studio build on and all developers use. Anthony joined the Visual Studio team in 2001 and has worked on a variety of areas including mobile device development, the Visual Studio SDK, and the Visual Studio ecosystem.


What’s new for C++ AMP in Visual Studio 2013


Since the first release of C++ AMP in Visual Studio 2012 nearly 8 months ago, we have been working hard to bring you the next set of C++ AMP features. The BUILD 2013 day 2 keynote demo provided a snapshot of C++ AMP in Visual Studio 2013. In this post, we will delve into the C++ AMP features available in Visual Studio 2013 Preview.

Support for shared CPU\GPU memory

CPU\GPU data transfer efficiency on accelerators that share physical memory with the CPU is now significantly enhanced, because redundant copying of data between GPU and CPU memory is eliminated. Depending upon how the code was written, C++ AMP applications that run on integrated GPU and WARP accelerators should see no (or significantly reduced) time spent on copying data. This feature is available only on Windows 8.1 and is turned on by default for WARP and some integrated GPUs. Additionally, developers can opt into the feature programmatically through a set of APIs.
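Opting in looks roughly like the following minimal sketch, based on the accelerator members exposed for this feature; treat the exact calls as illustrative rather than definitive:

    #include <amp.h>
    #include <iostream>
    using namespace concurrency;

    int main()
    {
        accelerator acc; // the default accelerator

        // On Windows 8.1, WARP and some integrated GPUs report shared-memory support.
        if (acc.supports_cpu_shared_memory)
        {
            // Opt in programmatically: make allocations on this accelerator
            // CPU-accessible by default, eliminating redundant CPU/GPU copies.
            acc.set_default_cpu_access_type(access_type_read_write);
        }

        std::wcout << acc.description << std::endl;
    }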

Enhanced support for textures

In Visual Studio 2013, we added a number of features to enhance support for textures. These include:

  • Access to hardware texture sampling capabilities
  • Support for staging textures
  • A redesigned texture_view (more consistent with the array_view design)
  • A more complete and performant set of texture copy APIs including section copy
  • Better interop support for textures including a much bigger set of DXGI formats
  • Support for mipmaps
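As a rough, hypothetical sketch of the redesigned texture_view (the constructor overloads, sizes, and the single-component writable-view usage here are assumptions based on the amp_graphics design, so treat this as illustrative):

    #include <amp.h>
    #include <amp_graphics.h>
    using namespace concurrency;
    using namespace concurrency::graphics;

    int main()
    {
        // A 64x64 single-component float texture on the default accelerator.
        texture<float, 2> tex(64, 64);

        // A writable view over the texture.
        texture_view<float, 2> wv(tex);

        parallel_for_each(tex.extent, [=](index<2> idx) restrict(amp)
        {
            wv.set(idx, 0.5f); // write through the view
        });

        // A read-only view over the same texture, consistent with array_view style.
        texture_view<const float, 2> rv(tex);
        (void)rv;
    }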

Improved C++ AMP debugging experience

The debugging experience for C++ AMP code has been improved on multiple fronts, and we had previously announced a series of improvements.

Beyond those, in Visual Studio 2013 we enabled the following features:

  • Side-by-side CPU\GPU debugging. Currently mixed mode debugging is available on Windows 8.1 for the WARP accelerator.
  • Ability to debug using the WARP accelerator instead of the single-threaded ref accelerator. Using WARP for debugging provides a much faster debugging experience.

Faster C++ AMP runtime

We have worked to improve the performance of the C++ AMP runtime in order to provide even faster application performance. The work includes

  • Reduced parallel_for_each launch overheads
  • Optimized texture copy performance
  • Optimized performance of copying small data sizes between the CPU and accelerator

Array_view API improvements

In Visual Studio 2013, the following improvements have been made to the array_view abstraction:

  • Ability to create array_view without a data source
  • Ability to synchronize to a specific accelerator.
  • Performant array_view indexing operators on CPU
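A small sketch of the first two items (assuming the Visual Studio 2013 overloads described above; the accelerator choice is arbitrary):

    #include <amp.h>
    using namespace concurrency;

    int main()
    {
        // An array_view without a data source: the runtime manages the backing store.
        array_view<int, 1> av(1024);

        parallel_for_each(av.extent, [=](index<1> idx) restrict(amp)
        {
            av[idx] = idx[0] * 2;
        });

        // Synchronize the results to a specific accelerator rather than to the CPU.
        accelerator acc;
        av.synchronize_to(acc.default_view);
    }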

Additional changes

Apart from the changes listed above, we also took time to refine other parts of C++ AMP. These changes include:

  • New APIs to enable clean AMP runtime shutdown
  • Improved the accuracy and helpfulness of C++ AMP runtime exception messages
  • Improved the accuracy of ETW events for better profiling experience
  • Ability to lock/unlock accelerator_views to allow safe access to shared resources between C++ AMP and Direct3D APIs.

We are excited to bring you the next set of C++ AMP features, and in the coming weeks we will discuss these new features in depth. We hope you will take the time to download Visual Studio 2013 Preview and send us your feedback, comments and questions – below or in our MSDN forum.

 

IntelliSense for jQuery in WebMatrix


I recently had the opportunity to take a day-long class about jQuery from the good folks at Wintellect. The class went great, and I wrote all of my code for the class in WebMatrix. You might recall from my previous blogs that I am a big fan of WebMatrix, but at first one thing was missing from WebMatrix's arsenal of cool features: for WebMatrix to really be useful as an editor for jQuery, I wanted IntelliSense support for jQuery. Thankfully, even though IntelliSense support for jQuery is not built-in, adding it is extremely easy, and I thought that would make a great subject for today's blog.

To start things off, let's take a look at a jQuery sample that is little more than a Hello World sample:

    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="utf-8" />
        <title>jQuery Test Page</title>
    </head>
    <body>
        <script src="http://ajax.microsoft.com/ajax/jQuery/jquery-2.0.0.min.js" type="text/javascript"></script>
        <script>
            $(function() {
               $("#bar").text($("#foo").text());
               $("#foo").text("This is some custom text");
            });
        </script>
        <h1 id="foo">This is the first line</h1>
        <h2 id="bar">This is the second line</h2>
    </body>
    </html>

This example does very little: it loads the jQuery library from Microsoft's AJAX Content Delivery Network (CDN), and it uses jQuery to replace the text in a couple of HTML tags. (The example isn't really important - getting IntelliSense to work is the topic du jour.) This sample would look like the following illustration if you opened it in WebMatrix 3:

jQuery in WebMatrix

When you are using a JavaScript library for which there is no built-in support, Microsoft's developer tools allow you to add IntelliSense support by adding Reference Directives to your page, and the files that you would use for your reference directives are available at the same Microsoft CDN where you can get the jQuery library:

http://www.asp.net/ajaxlibrary/cdn.ashx

In order to use IntelliSense for jQuery, you need to download the appropriate jquery-n.n.n-vsdoc.js file for the version of jQuery that you are using and store it in your website. For example, if you are using jQuery version 2.0.0, you would add a script reference to http://ajax.aspnetcdn.com/ajax/jQuery/jquery-2.0.0.min.js, and you would download the http://ajax.aspnetcdn.com/ajax/jQuery/jquery-2.0.0-vsdoc.js file to your website.

Like many developers, I usually add a folder named scripts in the root of my website, and this is where I will typically store the jquery-n.n.n-vsdoc.js file that I am using. Once you have added the appropriate jquery-n.n.n-vsdoc.js file to your website, all that you need to do is add the appropriate reference directive to your script, as I demonstrate in the highlighted section of the following code sample:

    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="utf-8" />
        <title>jQuery Test Page</title>
    </head>
    <body>
        <script src="http://ajax.microsoft.com/ajax/jQuery/jquery-2.0.0.min.js" type="text/javascript"></script>
        <script>
            /// <reference path="scripts/jquery-2.0.0-vsdoc.js" />
            $(function() {
               $("#bar").text($("#foo").text());
               $("#foo").text("This is some custom text");
            });
        </script>
        <h1 id="foo">This is the first line</h1>
        <h2 id="bar">This is the second line</h2>
    </body>
    </html>

Once you have added the reference directive for your jquery-n.n.n-vsdoc.js file, IntelliSense will begin working for jQuery in WebMatrix, as shown in the following illustration:

jQuery IntelliSense in WebMatrix

In Closing...

One last thing that I would like to mention is that it is always a good idea to load JavaScript libraries like jQuery from a CDN, and there are lots of CDNs to choose from. There are some additional steps that you can take to ensure that your website works with jQuery even if the CDN is down, but that subject is outside the scope of this blog. ;-]

C++ AMP Highlighted in BUILD 2013 Keynote


Over the past two days at BUILD 2013, Microsoft unveiled a lot of developer goodness with the Windows 8.1 Preview and previews of Visual Studio 2013 and .NET 4.5.1. Apart from these, the BUILD conference also used the latest version of C++ AMP to demonstrate how C++ developers can take advantage of parallelism on the GPU. In the day 2 keynote demo, Steven Guggenheimer and John Shewchuk showed how C++ AMP noticeably improved an application's performance. Fast forward to the 01:59:00 mark in the recorded video to see the C++ AMP aspects of the talk. The demo was performed on Surface Pro hardware (a third-generation Intel Core i5 processor with Intel HD Graphics 4000) running the Windows 8.1 Preview OS. Once the demo code is published, we will update this post with more details, so stay tuned.

C++11/14 STL Features, Fixes, And Breaking Changes In VS 2013


I'm Stephan T. Lavavej, and for the last six and a half years I've been working with Dinkumware to maintain the C++ Standard Library implementation in Visual C++.  It's been a while since my last VCBlog post, because getting our latest batch of changes ready to ship has kept me very busy, but now I have time to write up what we've done!

 

If you missed the announcement, you can download 2013 Preview right now!

 

Note: In this post, whenever I say "2013" without qualification, I mean "2013 Preview and RTM".  I will explicitly mention when things will appear between Preview and RTM.

 

Quick Summary

The STL in Visual C++ 2013 features improved C++11 conformance, including initializer lists and variadic templates, with faster compile times.  We've also implemented features from the upcoming C++14 Standard, including make_unique and the transparent operator functors.

 

Compiler Features

As a reminder, here are the C++11 Core Language features that have been added to the compiler in Visual C++ 2013 Preview (in addition to the features in Visual C++ 2012, of course):

 

* Default template arguments for function templates

* Delegating constructors

* Explicit conversion operators

* Initializer lists and uniform initialization

* Raw string literals

* Variadic templates

 

They were available in the "Visual C++ Compiler November 2012 Community Technology Preview" (and they were covered in my video walkthrough), but the compiler team has made lots of progress since then.  That is, tons and tons and tons of bugs have been fixed - many of which were reported by users of the Nov CTP (thanks!).

 

As Herb Sutter just announced in "The Future Of C++" at the Build conference (with a companion blog post), more C++11 Core Language features will be implemented in 2013 RTM:

 

* Alias templates

* Defaulted functions (except for rvalue references v3)

* Deleted functions

* Non-static data member initializers (NSDMIs)

 

(About that caveat: "rvalue references v3" is how I refer to C++11's rules for automatically generating move constructors and move assignment operators, plus the rules for suppressing automatically generated copies when moves are declared.  There simply isn't enough time between 2013 Preview and RTM for the compiler team to implement this with RTM-level quality.  As a consequence, requesting memberwise move constructors and move assignment operators with =default will not be supported.  The compiler team is acutely aware of how important this stuff is, including to the STL, and it is one of their highest priorities for post-2013-RTM.)

 

Additionally, some C99 Core Language features will be implemented in 2013 RTM:

 

* C99 _Bool

* C99 compound literals

* C99 designated initializers

* C99 variable declarations

 

Please see Herb's announcement video/post for more information, especially a post-2013-RTM C++11/14 conformance roadmap, including (among many other things) rvalue references v3, constexpr, and C++14 generic lambdas.

 

STL Features

The STL in Visual C++ 2013 Preview has been fully converted over to using the following C++11 features:

 

* Explicit conversion operators

* Initializer lists

* Scoped enums

* Variadic templates

 

In 2013 RTM, this list will be extended to:

 

* Alias templates

* Deleted functions

 

I say "converted over" because the STL has been faking (I suppose "simulating" would be a more dignified word) some of these features with library-only workarounds, with varying degrees of success, ever since Visual C++ 2008 SP1.  Going into further detail:

 

* In the Core Language, explicit conversion operators are a general feature - for example, you can have explicit operator MyClass().  However, the Standard Library currently uses only one form: explicit operator bool(), which makes classes safely boolean-testable.  (Plain "operator bool()" is notoriously dangerous.)  Previously, we simulated explicit operator bool() with operator pointer-to-member(), which led to various headaches and slight inefficiencies.  Now, this "fake bool" workaround has been completely removed.

 

* We didn't attempt to simulate initializer lists, although I accidentally allowed a broken/nonfunctional <initializer_list> header to ship in Visual C++ 2010, confusing users who noticed its presence.  It was removed in Visual C++ 2012 to avoid confusion.  Now that the compiler supports initializer lists, <initializer_list> is back, it actually works, and we've implemented all of the std::initializer_list constructors and other functions mandated throughout the Standard Library.

 

* Scoped enums were implemented in the Visual C++ 2012 compiler, but due to a long story involving a compiler bug with /clr, we weren't able to use them in the STL for that release.  In Visual C++ 2013, we were able to get rid of our fake scoped enums (simulated by wrapping traditional unscoped enums in namespaces, which is observably imperfect).

 

* Over the years, we've simulated variadic templates with two different systems of "faux variadic" preprocessor macros - the first system involved repeatedly including subheaders, while the second system (more elegant, as far as crawling horrors go) eliminated the subheaders and replaced them with big backslash-continued macros that were stamped out by other macros.  Functions that were supposed to be true variadic, like make_shared<T>(args...), were actually implemented with overloads: make_shared<T>(), make_shared<T>(arg0), make_shared<T>(arg0, arg1), etc.  Classes that were supposed to be true variadic, like tuple<Types...>, were actually implemented with default template arguments and partial specializations.  This allowed us to bring you make_shared/tuple/etc. years ago, but it had lots of problems.  The macros were very difficult to maintain, making it hard to find and fix bugs in the affected code.  Spamming out so many overloads and specializations increased compile times, and degraded Intellisense.  Finally, there was the infinity problem.  We originally stamped out overloads/specializations for 0 to 10 arguments inclusive, but as the amount of variadic machinery increased from TR1 through C++0x's evolution to C++11's final form, we lowered infinity from 10 to 5 in Visual C++ 2012 in order to improve compile times for most users (we provided a way to request the old limit of 10 through a macro, _VARIADIC_MAX).

 

In Visual C++ 2013, we have thoroughly eradicated the faux variadic macros.  If you want tuples with 50 types, you can have them now.  Furthermore, I am pleased to announce that switching the STL over to real variadic templates has improved compile times and reduced compiler memory consumption.  What usually happens during the STL's development is that the Standard tells us to implement more stuff, so we do, and more stuff takes more time and more memory to compile (which can be mitigated by proper use of precompiled headers).  Then the compiler's testers notice the reduced performance and complain, and I respond with a stony face and a clear conscience, because the Standard told me to.  But in this case, faux variadics were so incredibly bloated that removing them made a big observable difference.  (Parsing one variadic template isn't free, but it's a lot cheaper than parsing 6 or whatever overloads.)
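For instance, code like the following trivial illustration now compiles without any _VARIADIC_MAX gymnastics:

    #include <memory>
    #include <tuple>

    int main()
    {
        // Well past the old default ceiling of 5 template arguments:
        std::tuple<int, int, int, int, int, int, int, int, int, int, int, int>
            t(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12);

        // make_shared is genuinely variadic now, not a family of stamped-out overloads.
        auto sp = std::make_shared<std::tuple<int, int, int, int, int, int, int>>(
            1, 2, 3, 4, 5, 6, 7);
    }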

 

The precise numbers will change as the compiler, the STL, and its dependencies (notably the CRT and Concurrency Runtime) are modified, but when I checked in the final variadic rewrite, I ran some tests and I can share the concrete numbers.  My test included all STL headers in x86 release mode.  I didn't actually use anything, but the compiler still has to do a lot of work just to parse everything and also instantiate what the STL uses from itself, so this is a reasonable way to measure the overhead of just dragging in the STL without using it extensively.  Comparing Visual C++ 2012 (which defaulted to _VARIADIC_MAX=5) to Visual C++ 2013, preprocessed file size decreased from 3.90 MB to 3.13 MB.  Compile time decreased from 2307 ms to 2029 ms.  (I measured only the compiler front-end; the back-end took 150-250 ms without optimizations and it isn't of interest here.  Additionally, for complicated reasons that I'll avoid going into here, I excluded the relatively brief time needed to generate a preprocessed translation unit - but as you can see, counting that time would just increase the differences further.)  Finally, I measured compiler memory consumption by generating a precompiled header (a PCH is a compiler memory snapshot), which also decreased from 40.2 MB to 33.3 MB.  So, the STL is smaller and faster to compile according to every metric.  If you had to increase _VARIADIC_MAX to get your programs to compile with Visual C++ 2012, the improvements will be even more dramatic (for completeness, I measured _VARIADIC_MAX=10 as 6.07 MB, 3325 ms, and 58.5 MB respectively).

 

(Note: After I performed these measurements at the end of March 2013, the compiler team has significantly reworked some of their internal data structures.  As a result, compiler memory consumption in Preview and especially RTM may significantly differ from my original measurements.  However, that doesn't affect my conclusion: considered in isolation, using real variadic templates improves every metric.)

 

* The Standard requires a few things in the STL to be alias templates.  For example, ratio_add<ratio<7, 6>, ratio<11, 10>> is required to be an alias for ratio<34, 15>.  Previously, we simulated alias templates with structs containing "type" typedefs (sometimes named "other").  This workaround will be completely eliminated in 2013 RTM.
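In code, the ratio_add example reads like this (a small compile-time sketch, valid once 2013 RTM has alias templates):

    #include <ratio>

    // ratio_add<A, B> is now a true alias for the reduced result, not a
    // struct wrapping a "type" typedef.
    using A = std::ratio<7, 6>;
    using B = std::ratio<11, 10>;
    using Sum = std::ratio_add<A, B>; // the same type as std::ratio<34, 15>
    static_assert(Sum::num == 34 && Sum::den == 15, "7/6 + 11/10 == 34/15");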

 

* The Standard declares over 100 deleted functions throughout the STL, usually to make classes noncopyable.  Previously, we simulated this with the traditional C++98/03 workaround: private unimplemented declarations.  In 2013 RTM, we will eliminate this workaround, marking free functions as =delete and making member functions public with =delete as mandated by the Standard (except type_info, for horrible reasons which this margin is too narrow to contain).

 

You may have noticed above that there are some features in the compiler list that aren't in the library list.  The reason is simple: not all Core Language features have an observable impact on the Standard Library.  For example, raw string literals don't require corresponding Standard Library implementation changes (although they make <regex> more convenient to use).  Similarly, uniform initialization (excluding initializer lists) and NSDMIs don't affect the STL.  You can say pair<int, int> p{11, 22}, but that doesn't involve any new or changed machinery on our side.  As for defaulted functions, they usually appear in the STL for moves, which (as I explained earlier) is not yet supported by the compiler.  There are a few defaulted functions other than moves, we just haven't updated them yet because implementing them "normally" has few adverse consequences.  Finally, delegating constructors and default template arguments for function templates don't observably affect the Standard Library's interface, although we're using them internally.  (Interestingly, their internal use is actually necessary - delegating constructors are the only way to implement pair's piecewise constructor, and default template arguments for function templates are the only way to constrain tuple's variadic constructor.)

 

You may also have noticed that I mentioned C++14 in the title.  That's because we've implemented the following Standard Library features that have been voted into the C++14 Working Paper:

 

* The "transparent operator functors" less<>, greater<>, plus<>, multiplies<>, etc.

* make_unique<T>(args...) and make_unique<T[]>(n)

* cbegin()/cend(), rbegin()/rend(), and crbegin()/crend() non-member functions

* The <type_traits> alias templates make_unsigned_t, decay_t, etc. (implemented in RTM but not Preview)
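Here's a small sketch exercising the first three items in the list above:

    #include <algorithm>
    #include <functional>
    #include <iterator>
    #include <memory>
    #include <string>
    #include <vector>

    int main()
    {
        // make_unique: exception-safe, no naked new.
        auto p = std::make_unique<std::string>(5, '*'); // "*****"
        auto a = std::make_unique<int[]>(10);           // 10 value-initialized ints

        // Transparent operator functors deduce their argument types.
        std::vector<int> v = { 3, 1, 2 };
        std::sort(v.begin(), v.end(), std::greater<>()); // descending: 3, 2, 1

        // Non-member cbegin()/cend() yield const_iterators even for non-const v.
        auto it = std::cbegin(v);
        (void)it; (void)p; (void)a;
    }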

 

(Please note that there is an FAQ at the bottom of this post, and "Q1: Why are you implementing C++14 Standard Library features when you haven't finished the C++11 Core Language yet?" is answered there.)

 

For more information, you can read my proposals N3421 "Making Operator Functors greater<>" (Standardese voted in), N3588 "make_unique" (background), N3656 "make_unique (Revision 1)" (Standardese voted in), and my proposed resolution for Library Issue 2128 "Absence of global functions cbegin/cend" which was accepted along with the rest of N3673 "C++ Library Working Group Ready Issues Bristol 2013".  (A micro-feature guaranteeing that hash<Enum> works was also voted in, but it was implemented back in Visual C++ 2012.)

 

Walter E. Brown (who is unaffiliated with Microsoft) proposed the <type_traits> alias templates in N3546 "TransformationTraits Redux", which is unquestionably the best feature voted into C++14.  Anyone who dares to question this assertion is hereby sentenced to five years of hard labor writing typename decay<T>::type instead of decay_t<T>.
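In code, the difference is just spelling (a two-line compile-time sketch):

    #include <type_traits>

    typedef std::decay<const int&>::type T1; // C++11 spelling
    using T2 = std::decay_t<const int&>;     // C++14 alias template (2013 RTM)
    static_assert(std::is_same<T1, T2>::value, "both are plain int");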

 

Additionally, James McNellis (who now maintains our CRT) used the transparent operator functors to reduce <algorithm>'s size by 25%.  For example, sort(first, last) and sort(first, last, comp) previously had nearly-duplicate implementations, differing only in how they compared elements.  Now, sort(first, last) just calls sort(first, last, less<>()).  This cleanup was later extended to algorithms implemented in various internal headers, plus the ones lurking in <numeric> and the member algorithms in <list> and <forward_list>.  (A small part of the compile time improvement can probably be attributed to this.)  James also contributed the fine-grained container requirements overhaul and several bugfixes mentioned below - thanks James!

 

Fixes

That's it for the features.  Now, for the fixes.  This time around, I kept track of all of the STL fixes we made between Visual C++ 2012 and 2013.  This includes both the bugs that were reported through Microsoft Connect, and the bugs that were reported internally (whether filed by myself when I notice something wrong, by our testers, or by other Microsoft teams).  My list should be exhaustive, with a few exceptions: I'm not counting bugs filed against our tests when they didn't involve broken library code, I've tried to omit the (happily, few) bugs that were introduced and then fixed after 2012 shipped, and I'm not counting bugs that were reported against the libraries but were actually resolved in the compiler (e.g. a few type traits bugs were actually bugs in the compiler hooks we depend on).  Also, only formally filed bugs are listed here.  Sometimes we notice and fix bugs that were never filed in our database (e.g. I noticed alignment_of behaving incorrectly for references and fixed that, but nobody had ever reported it).  I'm including our internal bug numbers (e.g. DevDiv#1729) in addition to Connect bug numbers when available so if anyone asks me for further details, I can easily look up the bug.

 

There were a few major overhauls:

 

* <atomic>, whose entire purpose in life is blazing speed, contained unnecessarily slow implementations for many functions.  Wenlei He from our compiler back-end (code generation) team contributed a major rewrite of <atomic>'s implementation.  Our performance on all architectures (x86/x64/ARM) should now be optimal, or very close to it.  (DevDiv#517261/Connect#770885)

 

* <type_traits> was significantly reworked in conjunction with compiler changes.  (Some type traits, like is_array, can be implemented with ordinary C++ code.  Other type traits, like is_constructible, must rely on "compiler hooks" for accurate answers.)  This fixed virtually all known type traits bugs, including is_pod, is_assignable, and other type traits behaving incorrectly for void (DevDiv#387795/Connect#733088, DevDiv#424157), is_scalar behaving incorrectly for nullptr_t (DevDiv#417110/Connect#740872), result_of not working with movable-only arguments (DevDiv#459824), the is_constructible family of type traits behaving incorrectly with references (DevDiv#517460), many type traits being incorrectly implemented in terms of other type traits (DevDiv#520660; for example, is_move_assignable<T> is defined as is_assignable<T&, T&&>, but our implementation wasn't doing exactly that), aligned_union returning insufficiently large types for storage (DevDiv#645232/Connect#781713), and alignment_of emitting spurious warnings for classes with inaccessible destructors (DevDiv#688225/Connect#786574 - fixed in RTM but not Preview).

 

* In C++98/03, life was easy.  STL containers required their elements to be copy-constructible and copy-assignable, unconditionally (C++03 23.1 [lib.container.requirements]/3).  In C++11, as movable-only types like unique_ptr and additional member functions like emplace_back() were introduced, the container requirements were rewritten to be fine-grained.  For example, if you have a list<T> and you populate it with emplace_back(a, b, c), then T needs to be destructible (obviously) and constructible from (A, B, C) but it doesn't need to support anything else.  This is basically awesome for users, but it requires lots of attention to detail from implementers.  James fixed many bugs while overhauling this, but the ones that were formally reported were vector<DefaultConstructible>(10) not compiling (DevDiv#437519), vector<T>(first, last)'s requirements being too strict (DevDiv#437541), container move constructors requiring too much from their elements (DevDiv#566619/Connect#775715), and map/unordered_map<K, V> op[] requiring V to be more than default-constructible (DevDiv#577613/Connect#776500).
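For illustration, the list<T>/emplace_back(a, b, c) case mentioned above looks like this (Widget is a hypothetical type):

    #include <list>
    #include <memory>

    struct Widget
    {
        Widget(int, double, const char*) {} // all that emplace_back(a, b, c) needs
    };

    int main()
    {
        std::list<Widget> l;
        l.emplace_back(1, 2.0, "three"); // no copying or moving of Widget required

        // Movable-only element types are fine wherever only moves are required:
        std::list<std::unique_ptr<int>> ups;
        ups.push_back(std::make_unique<int>(5));
    }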

 

* 2013 RTM will contain a <ratio> overhaul, implementing alias templates (DevDiv#693707/Connect#786967) and thoroughly fixing numerous bugs (DevDiv#704582/Connect#788745 was just the tip of the iceberg).

 

Then there were individual bugfixes:

 

* std::find()'s implementation has an optimization to call memchr() when possible, but find() was simultaneously calling memchr() too aggressively leading to incorrect results (DevDiv#316853/Connect#703165), and not calling it aggressively enough leading to suboptimal performance (DevDiv#468500/Connect#757385).  We rewrote this optimization to be correct and optimal in all cases.

 

* Operators for comparing shared_ptr/unique_ptr to nullptr_t were missing (DevDiv#328276).  Some comparisons compiled anyways due to the construction of temporaries, but the Standard mandates the existence of the operators, and this is observable if you look hard enough.  Therefore, we added the operators.

 

* shared_future was convertible to/from future in ways prohibited by the Standard (DevDiv#361611).  We prevented such conversions from compiling.

 

* In <random>, subtract_with_carry's streaming operator performed a "full shift" (shifting all of the bits out of an integer in a single operation), which triggers undefined behavior and doesn't actually work on ARM (DevDiv#373497).  We now follow the Standard's rules so this works on all architectures.

 

* Taking the addresses of numeric_limits' static const data members didn't work with the /Za compiler option (DevDiv#376524).  Now it does.

 

* <ostream> provided unnecessary overloads of std::endl (DevDiv#380718/Connect#730916).  While they didn't appear to be causing any problems, we eliminated them, so only the Standard-mandated overloads are present.

 

* As required by the Standard, tuple_element<I, array<T, N>> now enforces I < N (DevDiv#410190/Connect#738181).

 

* <filesystem>'s directory_iterator was returning paths that were too short (DevDiv#411531).  (Note that recursive_directory_iterator worked correctly.)  We fixed directory_iterator to follow N1975, the Filesystem V2 draft.  (Filesystem V3 is on our radar, but it won't be implemented in 2013 RTM.)

 

* <filesystem> contained a handwritten implementation called _Strcpy() (DevDiv#422681).  Now it uses the CRT's string copying functionality, and it internally passes around references to arrays for additional bounds safety.

 

* <ostream>'s op<<() worked with std::hexfloat, but <istream>'s op>>() didn't (DevDiv#425415/Connect#742775).  Now it does.

 

* In a previous release, iostreams was changed to work correctly with the /vd2 compiler option, but this machinery sometimes emitted compiler warnings.  The compiler gained a new pragma, #pragma vtordisp, and iostreams now uses this, avoiding such spurious warnings (DevDiv#430814).

 

* Due to a typo, _VARIADIC_MAX=7 didn't compile (DevDiv#433643/Connect#746478).  We fixed this typo, then later eradicated the machinery entirely.

 

* system_error::what()'s return value didn't follow the Standard (DevDiv#453372/Connect#752770).  Now it does.

 

* codecvt_one_one wouldn't compile without various using-declarations (DevDiv#453373/Connect#752773).  It no longer requires such workarounds.

 

* Due to a misplaced parenthesis, <chrono>'s duration_cast sometimes didn't compile (DevDiv#453376/Connect#752794).  We fixed the parenthesis.

 

* Our tests found an extremely obscure deadlock under /clr when holding the locale lock and throwing an exception (e.g. when a std::locale is constructed from a bogus name).  We rearranged our code to fix this (DevDiv#453528).

 

* On ARM, we realized that we were decrementing shared_ptr/weak_ptr's refcounts in a multithreading-unsafe manner (DevDiv#455917).  (Note: x86/x64 were absolutely unaffected.)  Although we never observed crashes or other incorrect behavior even after focused testing, we changed the decrements to use sequential consistency, which is definitely correct (although potentially slightly slower than optimal).

 

* The Standard doesn't guarantee that tuple_size can be used with things other than tuples (or pairs or arrays).  In particular, tuple_size<DerivedFromTuple> isn't guaranteed to work (DevDiv#457214/Connect#753773).  Instead of silently compiling and returning 0, we changed our implementation to static_assert about such cases.

 

* C++11 says that list::erase() shouldn't invalidate end iterators (DevDiv#461528).  This is a physical consequence of std::list's representation, so release mode was always correct, but our debug checks complained about invalidation.  We fixed them so they now consider end iterators to be preserved.

 

* Constructing a regex with the flags regex::icase | regex::collate resulted in case-sensitive matching, contrary to the Standard (DevDiv#462743).  Now it results in case-insensitive matching.

 

* Constructing a std::thread resulted in a memory leak reported by _CrtDumpMemoryLeaks() (DevDiv#467504/Connect#757212).  This was not a severe, unbounded leak - what happened was that one-time initialization of an internal data structure was allocating memory marked as a "normal block" (observed by the leak tracking machinery), and it also wasn't being registered for cleanup at CRT shutdown.  We fixed this by marking this allocation as a "CRT block" (which is excluded from reporting by default) and also registering it for cleanup.

 

* Calling wait_for()/wait_until() on futures obtained from packaged_tasks was returning future_status::deferred immediately, which is never supposed to happen (DevDiv#482796/Connect#761829).  Now it returns either future_status::timeout or future_status::ready as required by the Standard.

 

* C++11's minimized allocator interface doesn't require allocators to provide a nested rebind struct, but VC didn't correctly implement this (DevDiv#483844/Connect#762094).  We've fixed this template machinery to follow the Standardese, N3690 17.6.3.5 [allocator.requirements]/3: "If Allocator is a class template instantiation of the form SomeAllocator<T, Args>, where Args is zero or more type arguments, and Allocator does not supply a rebind member template, the standard allocator_traits template uses SomeAllocator<U, Args> in place of Allocator::rebind<U>::other by default. For allocator types that are not template instantiations of the above form, no default is provided."

 

* Another minimized allocator interface bug - in debug mode, std::vector was directly using an allocator instead of going through allocator_traits, which resulted in minimal allocators not working (DevDiv#483851/Connect#762103).  We fixed this, and audited all of the containers to rule out similar problems.

 

* A bug in the Concurrency Runtime powering std::condition_variable resulted in crashes (DevDiv#485243/Connect#762560).  ConcRT fixed this.

 

* vector<bool>, humanity's eternal nemesis, crashed on x86 with indices over 2 billion (DevDiv#488351/Connect#763795).  Note that 2 billion packed bits occupy just 256 MB, so this is entirely possible.  We fixed our math so this works.

 

* pointer_traits didn't compile with user-defined void pointers (DevDiv#491103/Connect#764717).  We reworked our implementation so this compiles (instead of trying to form void& which is forbidden).

 

* Although it was correct according to the Standard, merge()'s implementation was performing more iterator comparisons than necessary (DevDiv#492840).  We changed this (and its related implementations) to be optimal.

 

* time_put's return value was correct for char but incorrect for wchar_t, because the implementations had unintentionally diverged (DevDiv#494593/Connect#766065).  Now wchar_t works like char.

 

* C++11 says that istreambuf_iterator::operator*() should return charT by value (DevDiv#494813).  Now we do that.

 

* Constructing a std::locale from a const char * returned from setlocale() could crash, because std::locale's constructor internally calls setlocale()!  (DevDiv#496153/Connect#766648)  We fixed this by always storing the constructor's argument in a std::string, which can't be invalidated like this.

 

* <random>'s independent_bits_engine and shuffle_order_engine weren't calling their _Init() helper functions in their constructors (DevDiv#502244/Connect#768195).  That was bad.  Now we're good.

 

* pow(-1.0, complex<double>(0.5)) took an incorrect shortcut, returning NaN instead of i (DevDiv#503333/Connect#768415).  We fixed the shortcut.

 

* Constructing shared_ptr from nullptr was allocating a reference count control block, which is forbidden by the Standard (DevDiv#520681/Connect#771549).  Now such shared_ptrs are truly empty.

 

* The Standard is very strict about unique_ptr::reset()'s order of operations (DevDiv#523246/Connect#771887).  N3690 20.9.1.2.5 [unique.ptr.single.modifiers]/4: "void reset(pointer p = pointer()) noexcept; Effects: assigns p to the stored pointer, and then if the old value of the stored pointer, old_p, was not equal to nullptr, calls get_deleter()(old_p). [ Note: The order of these operations is significant because the call to get_deleter() may destroy *this. —end note ]"  We now follow the Standard exactly.

 

* Interestingly, the Standard permits iostreams to be tied to themselves, but this triggered stack overflows in our implementation (DevDiv#524705/Connect#772293).  We now detect self-tying and avoid crashing.

 

* The Standard requires minmax_element() to find the last biggest element (DevDiv#532622), unlike max_element() which finds the first biggest element.  This choice is not arbitrary - it is a fundamental consequence of minmax_element()'s implementation.  We fixed minmax_element() to follow the Standard, and carefully audited it for correctness.

 

* Our implementation declared put_time() as taking tm * (DevDiv#547347/Connect#773846).  Now it takes const tm * as required by the Standard.

 

* In the Nov 2012 CTP, which was released before our Standard Library changes were ready, the compiler team shipped a "fake" <initializer_list> header.  This basically worked properly, except that it didn't have a newline at the end of the file, and that infuriated the /Za compiler option (DevDiv#547397/Connect#773888).  Now we have the "real" <initializer_list> from Dinkumware, and we've verified that there's a newline at the end.

 

* system_clock::to_time_t() attempted to perform rounding, but triggered integer overflow when given enormous inputs (DevDiv#555154/Connect#775105).  We now perform truncation, as permitted by the Standard, making us immune to integer overflow.

 

* The Standard says that forward_iterator_tag shouldn't derive from output_iterator_tag (DevDiv#557214/Connect#775231), but it did in our implementation.  We've stopped doing that, and we've changed the rest of our code to compensate.

 

* Due to an obscure compiler bug interacting with operator fake-bool(), unique_ptrs with lambda deleters weren't always boolean-testable (DevDiv#568465/Connect#775810).  Now that unique_ptr has explicit operator bool(), this bug has been completely eradicated.

 

* Between Visual C++ 2010 and 2012, we introduced a regression where parsing floating-point numbers with iostreams (e.g. "iss >> dbl") would get the last bit wrong, while the CRT's strtod() was unaffected.  We've fixed iostreams to get all the bits right (DevDiv#576315/Connect#776287, also reported as DevDiv#616647/Connect#778982 - fixed in RTM but not Preview).

 

* <random>'s mt19937 asserted that 0 isn't a valid seed (DevDiv#577418/Connect#776456).  Now it's considered valid, like the Standard says.  We fixed all of the engines to accept 0 seeds and generate correct output.

 

* <cmath>'s binary overloads weren't constrained like the unary overloads, resulting in compiler errors in obscure cases (DevDiv#577433/Connect#776471).  Although the Standard doesn't require this to work, we've constrained the binary overloads so it works.

 

* inplace_merge() is one of a few very special algorithms in the Standard - it allocates extra memory in order to do its work, but if it can't allocate extra memory, it falls back to a slower algorithm instead of failing.  Unfortunately, our implementation's fallback algorithm was incorrect (DevDiv#579795), which went unnoticed for a long time because nobody runs out of memory in the 21st century.  We've fixed inplace_merge()'s fallback algorithm and audited all fallback algorithms for correctness (including stability).  We've also added a regression test (capable of delivering a Vulcan nerve pinch to operator new()/etc. on demand) to ensure that this doesn't happen again.

 

* Due to an off-by-one error, future_errc's message() and what() didn't work (DevDiv#586551).  This was introduced when the Standard started saying that future_errc shouldn't start at 0, and we changed our implementation accordingly - but didn't notice that the message-translation was still assuming 0-based indexing.  We've fixed this.

 

* basic_regex<char>'s implementation permitted high-bit characters, either directly or in character ranges, after an old bugfix - but it rejected high-bit characters specified through regex hex escapes, either directly or in character ranges (DevDiv#604891).  Now both are permitted.

 

* The Standard requires std::functions constructed from null function pointers, null member pointers, and empty std::functions to be empty.  Our implementation was constructing non-empty std::functions storing null function pointers/etc., which would crash when invoked.  We fixed this to follow the Standard (DevDiv#617384/Connect#779047 - fixed in RTM but not Preview).

 

* pointer_traits<shared_ptr<Abstract>> didn't work due to a template metaprogramming subtlety (DevDiv#643180/Connect#781594) - the compiler really doesn't want to see abstract classes returned by value, even if we're just doing it for the purposes of decltype.  We've fixed this machinery so it works for abstract classes.

 

* In certain cases, our iterator debugging machinery was taking locks followed by no-ops, which is pointlessly slow (DevDiv#650892).  This happened in two cases: _ITERATOR_DEBUG_LEVEL=1 which is never the default, and deque.  We've fixed this so we take locks only in _ITERATOR_DEBUG_LEVEL=2 when we have actual work to protect.

 

* C++11 says that list::splice() doesn't invalidate iterators, it just transfers the affected iterators (DevDiv#671816/Connect#785388).  This is a physical guarantee, but our debug checks considered such iterators to be invalidated.  We've updated the checks so they rigorously follow C++11's rules.  (Note: forward_list::splice_after() is still affected; we plan to fix this in the future, but not in 2013 RTM.)

 

* align()'s implementation in <memory> didn't follow the Standard (DevDiv#674552).  Now it does.

 

* Lambdas capturing reference_wrappers wouldn't compile in certain situations (DevDiv#704369/Connect#788701).  Now that we've fixed conformance problems in reference_wrapper, this code compiles cleanly.

 

* std::cin no longer overheats the CPU when you hold down spacebar (xkcd#1172).

 

Breaking Changes

On that note, these features and fixes come with source breaking changes - cases where you'll have to change your code to conform to C++11, even though it compiled with Visual C++ 2012.  Here's a non-exhaustive list, all of which have been observed in actual code:

 

* You must #include <algorithm> when calling std::min() or std::max().

 

* If your code acknowledged our fake scoped enums (traditional unscoped enums wrapped in namespaces), you'll have to change it.  For example, if you were referring to the type std::future_status::future_status, now you'll have to say std::future_status.  Note that most code is unaffected - for example, std::future_status::ready still compiles.

 

* Similarly, if your code acknowledged our faux alias templates, you'll have to change it for 2013 RTM.  For example, instead of allocator_traits<A>::rebind_alloc<U>::other you'll have to say allocator_traits<A>::rebind_alloc<U>.  Interestingly, although ratio_add<R1, R2>::type will no longer be necessary and you should say ratio_add<R1, R2>, the former will continue to compile.  That's because ratio<N, D> is required to have a "type" typedef for a reduced ratio (which will be the same type if it's already reduced).

 

* explicit operator bool() is stricter than operator fake-bool().  explicit operator bool() permits both explicit conversions to bool (e.g. given shared_ptr<X> sp, both static_cast<bool>(sp) and bool b(sp) are valid) and "contextual conversions" to bool (these are the boolean-testable scenarios, e.g. if (sp), !sp, sp && whatever).  However, explicit operator bool() forbids implicit conversions to bool, so you can't say bool b = sp, and you can't say return sp; given a bool return type.
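Concretely, with shared_ptr (a small sketch):

    #include <memory>

    int main()
    {
        std::shared_ptr<int> sp = std::make_shared<int>(42);

        bool b1 = static_cast<bool>(sp); // OK: explicit conversion
        bool b2(sp);                     // OK: direct-initialization
        if (sp && b1 && b2) {}           // OK: contextual conversions
        // bool b3 = sp;                 // error: implicit conversion is forbidden
        // return sp;                    // error, in a function returning bool
    }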

 

* Now that we're using real variadic templates, we aren't defining _VARIADIC_MAX and its unindicted co-conspirators.  We won't complain if you're still defining _VARIADIC_MAX, but it'll have no effect.  If you acknowledged our faux-variadic-template macro machinery in any other way, you'll have to change your code.

 

* In addition to ordinary keywords, STL headers now strictly forbid macroizing the context-sensitive keywords "override" and "final".

 

* reference_wrapper/ref()/cref() now strictly forbid binding to temporary objects.

 

* <random> now strictly enforces its compiletime preconditions.

 

* Various STL type traits have the precondition "T shall be a complete type".  This is now enforced more strictly by the compiler, although we do not guarantee that it is enforced in all situations.  (STL precondition violations trigger undefined behavior, so the Standard doesn't guarantee enforcement.)

 

* The STL does not attempt to support /clr:oldSyntax.

 

Frequently Asked Questions

Q1: Why are you implementing C++14 Standard Library features when you haven't finished the C++11 Core Language yet?

 

A1: That's a good question with a simple answer.  Our compiler team is well aware of the C++11 Core Language features that remain to be implemented.  What we've implemented here are C++14 Standard Library features.  Compiler devs and library devs are not interchangeable - I couldn't implement major compiler features if my life depended on it (even static_assert would take me months to figure out), and I like to think the reverse is true, although rocket scientists are probably better at pizza delivery than pizza deliverers are at rocket science.

 

Q2: Fair enough, but you mentioned "C++14 generic lambdas" earlier.  Why is your compiler team planning to implement any C++14 Core Language features before finishing all C++11 Core Language features?

 

A2: As Herb likes to say, "C++14 completes C++11".  The compiler team is pursuing full C++14 conformance, and views all C++11 and C++14 features as a unified bucket of work items.  They're prioritizing these features according to customer demand (including library demand) and implementation cost, so they can deliver the most valuable features as soon as possible.  The priority of a feature isn't affected by when it was voted into the Working Paper.  As a result, their post-2013-RTM conformance roadmap places highly valuable C++14 features (e.g. generic lambdas) before less valuable C++11 features (e.g. attributes).  Again, please see Herb's announcement video/post for more information.

 

Q3: What about the C99 Standard Library, incorporated into C++11 by reference?

 

A3: Good news - my colleague Pat Brenner worked with Dinkumware to pick up substantial chunks of C99 and integrated them into Visual C++ 2013's CRT.  We're not done yet, but we're making progress.  Unfortunately, I didn't have time to deal with the corresponding wrapper headers in the STL (<cmeow> wrapping <meow.h>).  Time was extremely limited, and I chose to spend it on getting the variadic template rewrite checked in.  I may be able to get the wrapper headers into 2013 RTM, but I cannot promise that yet.

 

Q4: Will these compiler/library features ship in Visual C++ 2012 Update N, or will we have to wait for Visual C++ 2013?

 

A4: You'll have to wait for Visual C++ 2013.  I know this isn't what most people want to hear, so please allow me to explain.

 

I'm a programmer, and if you're reading this, I assume you're a programmer too.  So, as programmers, let's look at the following diff together.  This is how <tuple> has changed from Visual C++ 2012 to 2013, as viewed through our internal diff tool "odd":

 

[screenshot: side-by-side diff of <tuple>, Visual C++ 2012 vs. 2013]

Pay special attention to the visual summary on the left.  In technical terms, this diff is a horrifying monstrosity.  <tuple>'s code is now wonderful, but the changes required to get to this point were basically a complete rewrite.  <functional> and the other headers received similar (but lesser) changes.

 

The VS Update mechanism is primarily for shipping high-priority bugfixes, not for shipping new features, especially massive rewrites with breaking changes (which are tied to equally massive compiler changes).

 

Major versions like Visual C++ 2013 give us the freedom to change and break lots of stuff.  There's simply no way we can ship this stuff in an Update.

 

Q5: What about the bugfixes?  Can we get those in an Update?

 

A5: This is an interesting question because the answer depends on my choices (whereas in the previous question, I wouldn't be allowed to ship such a rewrite in an Update even if I wanted to).

 

Each team gets to choose which bugfixes they take to "shiproom" for consideration to be included in an Update.  There are things shiproom won't let us get away with (e.g. binary breaking changes are forbidden outside of major versions), but otherwise we're given latitude to decide things.  I personally prioritize bandwidth over latency - that is, I prefer to ship a greater total number of bugfixes in every major version, instead of shipping a lesser total number of bugfixes (over the same period of time) more frequently in multiple Updates.

 

Going into more detail - backporting fixes takes a nonzero amount of time (especially as branches diverge due to accumulated changes).  It's also riskier - as you may have heard, C++ is a complicated world, and apparently simple changes can have unintended consequences.  Even if it's as small as emitting a spurious warning, we really don't want Updates to break stuff.  Fixing stuff in major versions gives us time to fix the fixes and get everything exactly right.

 

We do backport STL fixes from time to time (e.g. in Visual C++ 2010 RTM there was a std::string memory leak caused by the Small String Optimization interacting badly with move semantics - that was really bad, so we backported the 3-line fix from 2012 to 2010 SP1), but it's rare.

 

Q6: Are you targeting the C++11 International Standard, or a C++14 Working Paper?

 

A6: We're usually targeting the current Working Paper (N3690 as of right now), because of the Core/Library Issues that have been resolved since C++11.  In the STL, I consider any nonconformance to the current Working Paper to be a bug.

 

Q7: How do you pronounce "tuple"?

 

A7: It's "too-pull".  It does not rhyme with "supple"!

 

Thanks for reading this extremely long post, and I hope you enjoy using VS 2013's STL.

 

Stephan T. Lavavej

Senior Developer - Visual C++ Libraries

stl@microsoft.com

SharePoint 2013 Search Architecture Pt 2 - Crawl and Feed


Several things haven't changed in SharePoint 2013 search. Search still uses a componentized model that is still based on a Shared Services architecture. Simply stated, you still provision a Search Service Application + Proxy. The Search Admin UI is exposed by clicking on the Search Service Application via Central Administration\Application Management\Manage Service Applications. As I stated in the introductory blog, the search engine still runs under mssearch.exe and invokes a daemon process during a crawl to go retrieve content. The act of fetching content works very similarly to how it did in SharePoint 2010. Finally, the crawler still isn't responsible for the index itself.

This blog series will go through what's changed with crawl and feed, and the components that make up the crawl and feed portion of a crawl. I'll write about how these components work together, plus the architecture and scaling of these components.

 

Crawl and Feed Components Basics

The most important thing I want you to take away from this blog is that the crawl and feed work has now been split into two components in SharePoint 2013. In SharePoint 2010, this was all handled by the crawl component. That is, the crawl component was responsible for not only fetching the content but also extracting metadata, links, etc. during the crawl by using a variety of plug-ins, all running under the context of MSSearch.exe. As data passed through the plug-ins, the index was built in memory on the crawler and then propagated over to the Query Component to be indexed.

Now with SharePoint 2013, we use both the Crawl Component and the Content Processing Component for this purpose. The Crawl Component still fetches data, but instead of processing it and extracting metadata, links, and other rich information, we simply pass crawled items over to the Content Processing Component, which runs under Noderunner.exe, to do this work. One could say that mssearch is half the process it used to be because this additional processing has been stripped from it and given to another component/process. I don't want to downplay its importance though, because without crawl, you have no index and no search. Plus, several improvements have been made to the crawler because we are offloading a lot of this work and implementing more common-sense approaches to crawling, like the two new crawl types (Continuous Crawl and Cleanup Crawl). I will talk about content processing in more detail in the next blog, as this blog is geared toward crawling and feeding the Content Processing Component.

 

 

Crawl Component\Crawl Database and Distribution

A crawler consists of a crawl component and a crawl database. Back in SharePoint 2010, you had a unique relationship in that a crawl component mapped to a unique crawl database. That relationship doesn't exist in SharePoint 2013 search, so a crawl component will automatically communicate with all crawl databases if there is more than one. This is a cool new concept to take in, because in SharePoint 2010 search, requiring the mapping of a crawl component to a crawl database often resulted in lopsided database sizes.

For Example in SharePoint 2010:

Crawl Component 1 ----> CrawlDB1

Crawl Component 2 ----> CrawlDB2

Assuming crawl component 1 is crawling a very large host defined as a content source, you could easily end up with lopsided database sizes, where CrawlDB1 is much larger than CrawlDB2. This required more administrative effort in that you would often implement host distribution rules to override this default behavior. This is also why it didn't make sense to have a secondary crawl database when only one crawl component exists. If you would like to know more about this, see my original 2010 search crawl blog here.

In SharePoint 2013, all crawl components use all crawl databases so we no longer have a unique mapping between a crawl component and crawl database. A single large host is pushed out across multiple crawl databases. Yes, one crawl component can utilize multiple crawl databases for a single host.

Important Note: This is only true if that large single host (web application) is utilizing more than one content database.

This is an important note because in SharePoint 2013, we no longer distribute crawl items to crawl DBs based on URL. We now distribute crawl items based on content database ID. So if I have a single large host (web application) that is utilizing two content databases, the crawl items associated with each content database will be distributed to a different crawl database, assuming you have at least two crawl DBs. Crawl distribution is an interesting topic that brings up many questions, so I added some sample questions and answers below to fill any technical gaps.

 

 

Question and Answer Time

Question: What do you mean we assign crawl items to crawl databases based on database id?

Answer: Each web app is associated with at least one content database. So if I have two web apps, then I have at least two content databases. During a crawl with two crawl databases, all hosts associated with content database 1 will be assigned to crawl database 1, and all hosts associated with content database 2 will be assigned to crawl database 2.
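Conceptually, the routing behaves like a stable hash of the content database ID across the available crawl databases. The following C# sketch is only my own illustration of the idea, not SharePoint's actual algorithm:

using System;

static class CrawlDistribution
{
    // Illustration only - not SharePoint's real implementation.
    // Every item from a given content database lands in the same crawl DB,
    // regardless of its URL (SharePoint 2010 keyed the distribution on URL instead).
    static int AssignCrawlDatabase(Guid contentDatabaseId, int crawlDatabaseCount)
    {
        return (contentDatabaseId.GetHashCode() & 0x7FFFFFFF) % crawlDatabaseCount;
    }
}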

 

Question: Can I provision one crawl component and multiple crawl databases and all crawl databases will be used?

Answer: Yes, assuming you have more than one content database! The crawler will distribute crawl items across all crawl databases, although it's a good idea to provision a second crawl component for redundancy reasons.

 

Question: I provisioned a new secondary crawl database and new items being crawled are still using the original crawl database instead of the new one. Why is this happening?

Answer: After provisioning a new crawl database, all previously crawled content will still be mapped to the original crawl database. This is a fancy way of saying the content database IDs were already mapped to the original crawl database, so any new items picked up that reside in one of these content databases will still be utilizing the original crawl database. In order to get some content databases reassigned to the new crawl database, perform a full crawl.

 

Question: Will the second crawl database get utilized if I create a new web application/site collection/site?

Answer: Not at first, because continuous crawl will not pick up new items from newly created web applications until the cleanup crawl runs. The cleanup crawl is basically an incremental crawl and runs by default every 4 hours. In this case, the second crawl database will start populating only after that incremental runs; subsequent continuous crawls will then pick up new items from the new web app and add them to crawl database 2.

 

Question: Can I provision multiple crawl components with a single crawl database?

Answer: Yes, you can provision the crawl topology this way as well, assuming you're crawling fewer than 20 million items, which is the recommended limit per crawl database.

 

 

Scaling Crawl for Fault Tolerance and Performance

In SharePoint 2013, both fault tolerance and performance are gained with the addition of new crawl components. In order to gain fault tolerance back in SharePoint 2010, two crawl components (one of them a mirror) needed to be provisioned per crawl database. Since we no longer have a unique relationship between crawl components and crawl databases, you automatically gain fault tolerance by simply provisioning an additional crawl component. In a scenario where you have 3 crawl components, if any one crawl component goes down, the remaining crawl components will pick up the slack.

As far as performance, the benefit of adding redundant crawl components is that more documents per second (DPS) are processed. Yes, you can crawl more aggressively with two crawl components than with one. Whether you require one crawl component or several depends on a variety of factors, like the following:

1. How often are items changed or added to sites?
2. How aggressively are you crawling?
3. Are you satisfied with crawl freshness times?
4. How powerful is the hardware hosting your crawl component?
5. Do you require redundancy?

The usual sign that you need to provision additional crawl components for performance reasons is high CPU. During a crawl, the CPU load rises in conjunction with high DPS (documents per second). Sometimes this might be acceptable, but if it's consistently pegging CPU, it might be time to offload some of this processing by provisioning a secondary crawl component on a different server. It's also recommended to monitor network load and disk load to ensure they aren't bottlenecks during a crawl. The network load is generated when content is downloaded by the crawler from hosts. The disk load is generated when we temporarily store these crawled items, which are later picked up by the Content Processing Component. Again, I suspect the most likely scenario for provisioning additional crawl components is high CPU + DPS during crawls.

 

Question: What about scaling out the associated Crawl Database?

Answer: In terms of performance, Microsoft recommends one crawl database per 20 million items. So if I have a 100 million item index, I need 5 crawl databases. In terms of redundancy with crawl databases, you can leverage either SQL mirroring or SQL AlwaysOn.

 

Question: Where do I provision additional crawl components and additional crawl databases?

Answer: This is now all done through PowerShell. If you need to update the current search topology by adding a crawl component, you essentially clone the active topology into a new one, add your desired components, and activate it - all within PowerShell. Please see the resources section for more information.

 

 

 

Crawl Behind the Scenes (Advanced)

When a crawl starts, MSSearch.exe invokes a daemon process called MSSDmn.exe. This loads the required protocol handlers necessary to connect to and fetch the content, and it hands the content off to MSSearch.exe for further processing. Initially, the mssdmn.exe process calls a site's sitedata web service using the "GetChanges" SOAP action to request/receive the latest changes. The site data web service resides within IIS under a site's _vti_bin directory.

 

[screenshot: the sitedata web service under a site's _vti_bin directory in IIS]

 

The art of fetching the latest changes is done by comparing the latest change the crawler has processed, which lives in the crawl DB's msschangelogcookies table, against the event cache table of the associated content database. Once the changes are returned from IIS to the mssdmn.exe process, it will go fetch the changes and, after successfully retrieving them, pass them off to mssearch.exe for further processing. This process rinses and repeats until all of the changes are processed. Great tools for seeing these transactions behind the scenes are Fiddler, ULS logs, and Network Monitor.

 

 

 

Feeding Behind the Scenes (Advanced)

The process of feeding crawled items from the crawl component to the content processing component is an interesting one. First, remember I mentioned that the search process back in 2010 consisted of multiple plug-ins to extract data and index it. In SharePoint 2013, only one plug-in exists, called the Content Plugin, and it's responsible for routing crawled items over to the content processing component. Also, in order for the crawl component to feed the content processing component, a connection is established with the content processing component and a session is created. One way to look behind the scenes at crawled items being fed by the crawl component to the content processing component is to set SharePoint Server Search\Crawler to Verbose in Central Administration. The following is an example of what this looks like in the ULS logs:

12/02/2012 16:56:57.14        mssearch.exe (0x0340)        0x0CB8        SharePoint Server Search        Crawler:Content Plugin        ai6x1        Verbose        CSSFeeder::SubmitCurrentBatch: submitting document doc id ssic://11; doc size 9909;       

12/02/2012 16:56:57.14        mssearch.exe (0x0340)        0x0CB8        SharePoint Server Search        Crawler:Content Plugin        ai6x1        Verbose        CSSFeeder::SubmitCurrentBatch: submitting document doc id ssic://12; doc size 9281;       

12/02/2012 16:57:00.14        mssearch.exe (0x0340)        0x0CB8        SharePoint Server Search        Crawler:Content Plugin        af7y6        Verbose        CSSFeedersManager::CallbackReceived: Suceess = True strDocID = ssic://11       

12/02/2012 16:57:00.14        mssearch.exe (0x0340)        0x0CB8        SharePoint Server Search        Crawler:Content Plugin        af7y6        Verbose        CSSFeedersManager::CallbackReceived: Suceess = True strDocID = ssic://12

 

 

The following information I picked up from my homey Brian Pendergrass, thanks Brian!

It's important to understand that during this feeding, the entire crawled item isn't sent. During the crawl, the crawler will temporarily store crawled items on local disk as blobs. What the crawl component feeds the content processing component is the metadata of the crawled item and a pointer to the blob sitting on the network share. At a later time, the content processing component will go retrieve the associated blob and start processing it.

 

Stay tuned for Part 3, coming soon!

 

Thanks,

Russ Maxwell, MSFT


Autoscaling applications on Windows Azure


Hi everyone,

During the Build 2013 event, we announced several new Windows Azure features. One that really caught my attention was autoscaling: growing or shrinking the environment automatically, with no manual intervention, directly on the platform.

Currently, autoscaling works with Cloud Services, Virtual Machines and Web Sites. It works by monitoring either CPU or the number of messages in a storage queue (the latter only for Cloud Services and Virtual Machines).

Below I'll show how to set up autoscaling for Cloud Services; in future posts I'll show how to do it for Virtual Machines and Web Sites.

The first step is to open the cloud service details and navigate to the Scale tab, as shown below:

[screenshot: the Scale tab of the cloud service]

Autoscaling is configured individually for each role of your cloud service:

[screenshot: per-role scale settings]

To enable autoscaling, you must choose between the CPU and Queue options; the best choice depends on your application. In my scenario, I'll enable CPU-based autoscaling for my web role:

[screenshot: CPU-based autoscale settings for the web role]

During the process, you can adjust a few settings:

  • Instance Range: indicates the minimum and maximum number of instances your application can have. For the minimum, I recommend at least 2 instances so that the platform SLA can be met;
  • Target CPU: in the example above I left the default configuration. It indicates the average CPU range, over the last hour, within which your application will operate. If the average CPU falls below the configured minimum, instances will be removed; if the average CPU rises above the configured maximum, new instances will be added to the environment;
  • Scale Up By: indicates the number of instances that will be added when the environment needs to grow;
  • Scale Up Wait Time: the time Windows Azure waits for the environment to stabilize before taking any further scale-up action;
  • Scale Down By: indicates the number of instances that will be removed when the environment needs to shrink;
  • Scale Down Wait Time: similar to Scale Up Wait Time; the time Windows Azure waits for the environment to stabilize before taking any further scale-down action.

It's interesting that you can take more aggressive or more conservative measures with your environment. For example, in the case above I'm being more aggressive when growing my environment (adding 2 instances at a time) and more conservative when it needs to shrink (removing 1 instance at a time). One point of attention is the Target CPU configuration: minimum and maximum values too close to the extremes (0% – 100%) can make autoscaling hard to trigger - it would only occur when the application is practically idle (wasting a lot of resources) or under very heavy load (which may be too late). Conversely, minimum and maximum values too close to each other (50% – 51%) can make autoscaling happen all the time.

Now, to enable autoscaling on my worker role, I'll use the following configuration:

[screenshot: queue-based autoscale settings for the worker role]

The main difference in this configuration is that you specify the storage account and the name of a queue in that account. In addition, you need to indicate the number of messages each machine should process (target per machine). The number of instances is calculated by dividing the number of messages in the queue by the number of messages per machine: the larger the configured number, the fewer the instances; the smaller the number, the more instances.
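As a rough illustration of that division rule (my own sketch, not actual platform code), the target instance count works out to something like this:

using System;

static class QueueAutoscale
{
    // Illustration only: desired instances = ceiling(queue length / target per machine),
    // clamped to the configured instance range.
    static int TargetInstanceCount(int queueLength, int targetPerMachine, int minInstances, int maxInstances)
    {
        int desired = (int)Math.Ceiling(queueLength / (double)targetPerMachine);
        return Math.Max(minInstances, Math.Min(maxInstances, desired));
    }
    // Example: 500 queued messages with a target of 100 per machine => 5 instances.
}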

RG


Windows Azure Web Sites : Cannot upload a Self-Signed Certificate created with PowerShell


As SSL functionality was added to Windows Azure Web Sites, I started playing around with it. I was trying to upload self-signed certificates when I ran into an issue.

I created a self-signed certificate using Windows PowerShell ISE (the New-SelfSignedCertificate cmdlet). Below is a snippet of the command I ran:

New-SelfSignedCertificate -CertStoreLocation cert:\LocalMachine\My -DnsName www.kaushalz.com

I exported the certificate in the PFX format and then tried uploading the certificate to WAWS.

[screenshot: uploading the certificate in the Windows Azure portal]

But it threw an error as shown below:

[screenshot: the upload error]

I clicked on DETAILS, and it showed this:

[screenshot: the error details]

Out of curiosity, I wondered if WAWS allowed self-signed certificates to be uploaded at all. So I created a self-signed certificate via IIS Manager, exported it in PFX format, and tried uploading it to WAWS. This was successful - no errors at all.

Even a self-signed certificate created using selfssl.exe tool could be uploaded to WAWS.

It seems that a certificate created using PowerShell is missing keyset permissions, which doesn't work well with WAWS. I see this as a limitation of PowerShell. However, I'm no PowerShell expert, so I can't confirm whether anything more can be done.

 


Windows Azure news from Build 2013


Hi everyone,

This week we announced several new Windows Azure features at the Build event. Here's a super-quick summary of the main news:

  • General availability of Windows Azure Web Sites;
  • General availability of Windows Azure Mobile Services;
  • AutoScale for Cloud Services, Virtual Machines and Web Sites (I've already written a first post on the subject here);
  • MSDN subscribers no longer need a credit card to activate their subscriptions.

Scott Guthrie wrote a very detailed post on the subject; I recommend reading it.

RG

Announcing EASTester


The Exchange Server Interoperability Guidance documents have some really good sample code.  I've leveraged some of its code to build a tool with capabilities which have been helpful in resolving EAS issues.  This tool's code and binaries are now published on CodePlex.

The current functionality in this application will cover many basic issues people run into while troubleshooting EAS.  Further, it will convert several forms of EAS payloads into XML.

EASTester
https://eastester.codeplex.com/

Its primary capabilities include:

  • A basic EAS request submission screen which allows you to submit EAS XML; the tool will encode it as EAS WBXML, submit it to Exchange, and decode the response back into XML.
  • If you export the binary stream of WBXML content from a network capture tool into a file, you can use this tool to read the file and convert its contents into XML.
  • It can convert WBXML in a hex string, comma-delimited hex string, or integer string into XML.
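As a tiny illustration of what that last conversion involves (my own sketch, not EASTester's actual code), a comma-delimited hex string first has to be turned back into raw WBXML bytes before it can be decoded to XML:

using System;
using System.Linq;

static class WbxmlStrings
{
    // Turn a comma-delimited hex string (e.g. "0x03,0x01,0x6A,0x00")
    // into the raw WBXML bytes that a decoder can then render as XML.
    static byte[] FromCommaDelimitedHex(string text)
    {
        return text.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries)
                   .Select(token => Convert.ToByte(token.Trim().Replace("0x", ""), 16))
                   .ToArray();
    }
}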

I hope that you find EASTester useful.

 

Extensions and Binding Updates for Business Messaging Open Standard Spec OASIS AMQP


From:

David Ingham, Program Manager, Windows Azure Service Bus

Rob Dolin, Program Manager, Microsoft Open Technologies, Inc.

 

We’re pleased to share an update on four new extensions, currently in development, that greatly enhance the Advanced Message Queuing Protocol (AMQP) ecosystem.

First a quick recap - AMQP is an open standard wire-level protocol for business messaging.  It has been developed at OASIS through a collaboration among:

  • Larger product vendors like Red Hat, VMware and Microsoft
  • Smaller product vendors like StormMQ and Kaazing
  • Large user firms like JPMorgan Chase and Deutsche Börse with requirements for extremely high reliability.
  • Government institutions
  • Open source software developers including the Apache Qpid project and the Fedora project

In October of 2012, AMQP 1.0 was approved as an OASIS standard.

 

EXTENSION SPECS: The AMQP ecosystem continues to expand while the community continues to work collaboratively to ensure interoperability.  There are four additional extension and binding working drafts being developed and co-edited by ourselves, JPMorgan Chase, and Red Hat within the AMQP Technical Committee and the AMQP Bindings and Mappings Technical Committee:

  • Global Addressing – This specification defines a standard syntax for representing AMQP addresses to enable routing of AMQP messages through a variety of network topologies, potentially involving heterogeneous AMQP infrastructure components. This enables more uses for AMQP ranging from business-to-business transactional messaging to low-overhead “Internet of Things” communications.
  • Management – This specification defines how entities such as queues and pub/sub topics can be managed through a layered protocol that uses AMQP 1.0 as the underlying transport. The specification defines a set of standard operations including create, read, update and delete, as well as custom, entity-specific operations. Using this mechanism, any AMQP 1.0 client library will be able to manage any AMQP 1.0 container, e.g., a message broker like Azure Service Bus. For example, an application will be able to create topics and queues, configure them, send messages to them, receive messages from them and delete them, all dynamically at runtime without having to revert to any vendor-specific protocols or tools.
  • WebSocket Binding – This specification defines a binding from AMQP 1.0 to the Internet Engineering Task Force (IETF) WebSocket Protocol (RFC 6455) as an alternative to plain TCP/IP. The WebSocket protocol is the commonly used standard for enabling dynamic Web applications in which content can be pushed to the browser dynamically, without requiring continuous polling. The AMQP WebSocket binding allows AMQP messages to flow directly from backend services to the browser at full fidelity. The WebSocket binding is also useful for non-browser scenarios as it enables AMQP traffic to flow over standard HTTP ports (80 and 443) which is particularly useful in environments where outbound network access is restricted to a limited set of standard ports.
  • Claims-based Security – This specification defines a mechanism for the passing of granular claims-based security tokens via AMQP messages. This enables interoperability of external security token services with AMQP such as the IETF's OAuth 2.0 specification (RFC 6749) as well as other identity, authentication, and authorization management and security services.

All of these extension and binding specifications are being developed through an open community collaboration among people from vendor organizations, customer organizations, and independent experts. 

LEARNING ABOUT AMQP: If you’re looking to learn more about AMQP or understand its business value, start at: http://www.amqp.org/about/what.

CONNECTING WITH THE COMMUNITY: We hope you’ll consider joining some of the AMQP conversations taking place on LinkedIn, Twitter, and Stack Overflow.

TRY AMQP: You can also find a list of vendor-supported products, open source projects, and customer success stories on the AMQP website: http://www.amqp.org/about/examples. We’re biased, but you can try our favorite hosted implementation of AMQP: the Windows Azure Service Bus. Visit the Developers Guide for links to getting started with AMQP in .NET, Java, PHP, or Python.
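For the .NET route, a minimal sketch looks something like the following (the namespace, key, and queue name are placeholders; the TransportType=Amqp setting in the connection string is what switches the Service Bus client onto AMQP):

using Microsoft.ServiceBus.Messaging; // Windows Azure Service Bus SDK

class AmqpSample
{
    static void Main()
    {
        // Placeholder connection string; TransportType=Amqp selects the AMQP 1.0 transport.
        var factory = MessagingFactory.CreateFromConnectionString(
            "Endpoint=sb://yournamespace.servicebus.windows.net/;" +
            "SharedSecretIssuer=owner;SharedSecretValue=yourkey;TransportType=Amqp");

        var client = factory.CreateQueueClient("yourqueue");
        client.Send(new BrokeredMessage("Hello over AMQP"));
        factory.Close();
    }
}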

Let us know how your experience with AMQP has been so far, whether you're a novice user or an active contributor to the community.

Thanks—

-- Dave and Rob

Authenticate your Exchange client in Office 365


The Exchange Online server for your Office 365 account uses a different default authentication scheme than Exchange on-premises. In most cases, you won't notice the difference – except when you're using the EWS Managed API to call Exchange Web Services (EWS). Then you'll quickly find out that something has changed, because your client that worked before will start failing with a 401 Unauthorized error when it runs Autodiscover.

Exchange Online uses basic authentication, and chances are your client is expecting to use NTLM authentication. The problem is simple, and the fix is simple too: you need to change your code to create a set of credentials for basic authentication.

Typically, that means changing a line of code that looks like this one:

NetworkCredential credentials = CredentialCache.DefaultNetworkCredentials;

To one that looks like this one:

NetworkCredential credentials = new NetworkCredential(UserSMTPEmailAddress, SecurelyStoredPassword);

Of course, the devil is in the details, and in this case it's that parameter called SecurelyStoredPassword. You'll need to make sure that your client gets the user's password and stores it, for example, in a SecureString object instead of a plain text string. For a more recent example, you can take a look at the Credential Locker for Windows 8 store and desktop apps.
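Putting the pieces together, here's a minimal sketch of an EWS Managed API client authenticating to Exchange Online with basic authentication. The mailbox address is a placeholder, and GetSecurelyStoredPassword stands in for whatever secure retrieval mechanism your client uses:

using System;
using Microsoft.Exchange.WebServices.Data;

class ExchangeOnlineClient
{
    static void Main()
    {
        ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2013);

        // Explicit credentials for basic authentication - not DefaultCredentials.
        service.Credentials = new WebCredentials("user@contoso.com", GetSecurelyStoredPassword());

        // Autodiscover redirects in Office 365; only follow HTTPS redirections.
        service.AutodiscoverUrl("user@contoso.com",
            redirectionUrl => redirectionUrl.StartsWith("https://", StringComparison.OrdinalIgnoreCase));
    }

    static string GetSecurelyStoredPassword()
    {
        // Placeholder: pull the password from secure storage (e.g. the Credential Locker).
        throw new NotImplementedException();
    }
}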

WCF Data Services 5.6.0 Alpha


Today we are releasing updated NuGet packages and tooling for WCF Data Services 5.6.0. This is an alpha release and as such we have both features to finish as well as quality to fine-tune before we release the final version.

You will need the updated tooling to use the portable libraries feature mentioned below. It takes us a bit of extra time to get the tooling up to the download center, but we will update this blog post with a link when the tools are available for download.

What is in the release:

Visual Studio 2013 Support

The WCF DS 5.6.0 tooling installer has support for Visual Studio 2013. If you are using the Visual Studio 2013 Preview and would like to consume OData services, you can use this tooling installer to get Add Service Reference support for OData. Should you need to use one of our prior runtimes, you can still do so using the normal NuGet package management commands (you will need to uninstall the installed WCF DS NuGet packages and install the older WCF DS NuGet packages).

Portable Libraries

All of our client-side libraries now have portable library support. This means that you can now use the new JSON format in Windows Phone and Windows Store apps. The core libraries have portable library support for .NET 4.0, Silverlight 5, Windows Phone 8 and Windows Store apps. The WCF DS client has portable library support for .NET 4.5, Silverlight 5, Windows Phone 8 and Windows Store apps. Please note that this version of the client does not have tombstoning, so if you need that feature for Windows Phone apps you will need to continue using the Windows Phone-specific tooling.

URI Parser Integration

The URI parser is now integrated into the WCF Data Services server bits, which means that the URI parser is capable of parsing any URL supported in WCF DS. We are currently still working on parsing functions, with those areas of the code base expected to be finalized by RTW.

Public Provider Improvements

In the 5.5.0 release we started working on making our providers public. In this release we have made it possible to override the behavior of included providers with respect to properties that don’t have native support in OData v3. Specifically, you can now create a public provider that inherits from the Entity Framework provider and override a method to make enum and spatial properties work better with WCF Data Services. We have also done some internal refactoring such that we can ship our internal providers in separate NuGet packages. We hope to be able to ship an EF6 provider soon.

Known Issues

With any alpha, there will be known issues. Here are a few things you might run into:

  • We ran into an issue with a build of Visual Studio that didn’t have the NuGet Package Manager installed. If you’re having problems with Add Service Reference, please verify that you have a version of the NuGet Package Manager and that it is up-to-date.
  • We ran into an issue with build errors referencing resource assemblies on Windows Store apps. A second build will make these errors go away.

We want feedback!

This is a very early alpha (we think the final release will happen around the start of August), but we really need your feedback now, especially in regards to the portable library support. Does it work as expected? Can you target what you want to target? Please leave your comments below or e-mail me at mastaffo@microsoft.com. Thank you!


Announcing TouchDevelop v3.0 for Windows Phone 8: unified experience, new language features, NFC, speech and tile APIs


The TouchDevelop Team is thrilled to announce a major update of the TouchDevelop app for Windows Phone 8 that brings you a unified experience with the TouchDevelop Web App, new language features, NFC, new speech APIs, and new tile APIs. It also comes with a new set of script templates and a whole new documentation system.

We’d like to thank all of our beta testers – we have heard and addressed many of the issues and requests we received from you!

If you have a Windows Phone 8 device then you can install the latest version from the Windows Phone Store.

Like TouchDevelop on Facebook to stay up to date.

Is the update available on a Windows Phone 7 device?

No. This update is only available on Windows Phone 8 devices. Devices running a Windows Phone 7.x OS can continue to use the previous version of the TouchDevelop app. (The previous version for Windows Phone 7 devices is currently unavailable but will be downloadable again shortly.)

Migrating from the old to the new app

If you have been using the previous TouchDevelop app v2.11, then you should first make sure that you have published all of your scripts, or that they were successfully backed up to the cloud. You can view your latest backups online, or you can log in to the TouchDevelop Web App to see your scripts in the cloud.

After you update the app, when you log in with the same credentials that you used before, all of your scripts will get synchronized back to the updated app.

Unified experience, new language features

The new TouchDevelop app for Windows Phone 8 is a complete re-implementation based on the same code base as the TouchDevelop Web App at touchdevelop.com. This means that you can use all of the new language features of the Web App, including pages and boxes, OAuth v2.0 support, new event handlers, uploading and searching for art, libraries using records, async web requests, JSON builder, and more.

The new TouchDevelop app for Windows Phone 8 is better than the Web App, as it gives you access to many more sensors and data providers: Calendar, Camera, Compass, Contacts, Gyroscope, Media, Microphone, Phone, Orientation, and now also NFC, Windows Phone speech APIs.

You can seamlessly switch between the new TouchDevelop app for Windows Phone 8 and the Web App that runs on iPad, iPhone, Android, PC, Mac (and also Windows Phone 8). When you log in with the same credentials, then all of your scripts will be synchronized between your devices.

New script templates and documentation system

When you tap on "create script", you will find a new set of new script templates that help you create different kinds of games and utility apps. The templates are integrated with the whole new documentation system that you can reach by tapping on the big "Docs" button in the "chat & learn" section.

Near Field Communication (NFC), Proximity

Use Near Field Communication (NFC) to send and receive messages. NFC allows smart-phones and similar devices to establish radio communication with each other when they are in close proximity. Depending on your device capabilities, you can write tags, send messages and receive messages.

Learn more by trying out this sample script: nfc my music /kkuj - a little script that uses NFC to launch Nokia Music for the active song on another phone.

New speech APIs

You have access to new speech APIs that use the speech engine built into Windows Phone 8, rather than any slow cloud service.
Sample script: voice to text to voice /uzmd - a script that uses speech recognition and text-to-speech together.

New tiles APIs

We simplified how to create your own tiles. You can use the new tiles APIs that make it easier to pin and use a default tile. The old tile APIs are obsolete and won’t work.

Wider live tiles

When you pin the new TouchDevelop app as a wide tile on your start screen and enable push notifications in the app's settings, you will see your latest notifications on the tile.

You can turn off notifications if you no longer wish to receive them.

Other API changes

This app update no longer supports the radio APIs, as Windows Phone 8 does not currently support radio. This update also drops support for the home singleton and the motion APIs, as they were rarely used.

Will scripts written with the new app work on a Windows Phone 7 device?

If you use new language features that were not available in the TouchDevelop v2.11 app for Windows Phone 7, then that script, and all scripts derived from it, will not be visible on a Windows Phone 7 device.

Can scripts exported as Windows Phone apps use all the new features?

Not yet. At this time, you can only export a script as a Windows Phone app for the Windows Phone Store if the script doesn’t use any of the new features (and is not derived from a script that uses a new feature). We will enable the export of Windows Phone 8 apps that use the new features in the near future.

Feedback

The new TouchDevelop app is a big change from the older app. We understand that you might be missing some particular functionality of the old app, or you might even find some bugs in the new app. Please don’t hesitate to send us feedback at touchdevelop@microsoft.com.

Stay in touch

If you want to stay up-to-date, like TouchDevelop on Facebook.

Interview on Timeboxing for Harvard Business Review


A while back I was asked to do an interview on timeboxing for a Harvard Business Review book.   They didn’t end up using it.   It might be just as well since I think it works better as a blog post, especially if you have a passion for learning how to use timeboxing to help you master time management and get great results.

One of the interesting points is that when I originally responded to the questions, I gave myself a 20 minute timebox to answer as best I could within that timebox.   So my answers were top of mind and pretty much raw and real.  I simply wrote what came to mind, and then offered to follow up with a call if they needed any elaboration.

With that in mind, here are the secrets of using timeboxing to master productivity and time management …

 

1.  How do you personally use time-boxing in your working life? Has it helped you overcome procrastination?

I use timeboxing as a way to invest my time and to set boundaries.   It’s probably one of the most effective tools in my time management toolbox for making things happen, as well as enjoying the journey as I go.

Parkinson’s Law teaches us that “Work expands so as to fill the time available for its completion.”  I find this to be true.   I often use timeboxing to set boundaries because when something is unbounded, it’s easy to make it bigger than it needs to be.   And when it’s too big, it’s easy to procrastinate.  To overcome procrastination, I simply ask myself, “How much can I do in 20 minutes?”  (20 minutes is an effective chunk of time I learned to optimize around in college.)   Using 20 minute timeboxes helps make it a game, and it gives me a chance to improve my efficiency.   I’ve learned to tackle many problems using 20 minute chunks.   On the flip side, I also use timeboxing to defeat “perfectionism.”   To do this, I focus on “What’s good enough for now, within the timebox I have?” versus chasing the moving target of perfection.   To bake in continuous improvement, I then “version perfection.”   So I might do a quick version within a timebox to be “good enough for now”, but in another timebox I’ll make another pass to take it to the next level.   This way I am learning and improving, but never getting bogged down or overwhelmed.

2. Some say time-boxing helps you find balance in your personal life – enabling you to properly balance periods of work and rest. Do you find this to be true? How so?

Timeboxing is probably one of the best ways I know to find balance.   When we’re out of balance, it’s usually because we’re either over-investing in an area or under-investing in another.   For example, I like to think of spreading my time across a few key areas of investment:  mind, body, emotions, career, money, relationships, and fun.   If I’m underinvesting in an area, I’ll set a minimum.  For example, let’s say I’m under-invested in body, then I’ll add a timebox to my week and set a minimum, such as 3 hours a week, or “run for 30 minutes each day.”   Maybe I’m over-investing in an area, such as career, in which case, I might cut back 60 hours to be 50 hours or 50 hours to be 40 hours, etc. for the week.  

Setting these minimums and maximums when I need them helps me establish better boundaries, even if they seem arbitrary. They are way more effective than going until I run out of energy or burn out or get too tired, and they are way more effective than completely ignoring or forgetting about an area to invest in. Even just asking how much time you are investing in one of these areas helps you start to pay more attention to what counts.

 

3. How does time-boxing boost your efficiency?

Timeboxing can help you stay focused, as well as set a better pace.  For example, maybe I can sprint for a minute, but not for five.  When you put a time limit in place, you effectively designate the time to be fully focused on the task at hand.  If you use small timeboxes, then you can effectively treat your task more like a sprint versus a marathon, because you know it’s short-burst.  

One thing that’s important to keep in mind is that it’s easy to fatigue the deliberate thinking part of our brain.  If you’ve ever felt like your brain hurts or you need a break from concentrating on something, then you know what I mean.   Rather than “march on”,  in general, you are more effective by thinking in bursts and taking little breaks.   Some people say take breaks, every ten minutes, others say take breaks every  twenty minutes or forty minutes.  I’ve learned that your mileage varies, and what’s important is that you have to test taking breaks at intervals that work for you, and you will likely find that it largely depends on the type of task and your level of engagement.

4. Do you find time-boxing to be a good motivational tool?

The beauty is that with timeboxing you can turn any task or goal into a game.   Going back to my earlier example, where I see “How much can I do in 20 minutes?”, I can treat this like a game of improvement.  I can try to do more each time.  That’s the quantity game.  I can also play the quality game.    For example, I tend to use a timebox of 20 minutes to write my blog posts.   If I’m playing the quantity game, then I might see how many little ideas I can come up with to say about the topic.  If I’m playing the quality game, I might see how I can take one little idea and elaborate on it, and give myself enough time to wordsmith and tweak the fine points.

On a daily basis, I tend to use my “power hours” for getting results. My power hours are the times in the day in which I am “in the zone” and firing on all cylinders. I find that I tend to be my strongest at 8:00am, 10:00am, 2:00pm, and 4:00pm. I use these power hours, these one-hour timeboxes, to tackle my toughest challenges and to move the ball forward. Once I realized these are my most powerful hours, I started to guard them more closely and use them to produce my greatest results within the shortest amounts of time. Using my power hours to get results helps me exponentially improve my productivity. Rather than something dragging on, I can blast through it pretty fast. Simply reshuffling my work around within the same time I already spend has been one of the greatest game changers in my personal productivity.

I've also extended this to teams, in two ways. First, I make sure that people on the team know their power hours and use them more effectively. Second, I use the natural rhythms and energy of the day to plan and execute work. For example, one of the practices I use I call “Ten at Ten.” At 10:00am, our team takes ten minutes to touch base on priorities, progress, and blockers. We go around the team and ask three simple questions: 1) What did you get done? 2) What are you working on? and 3) Where do you need help? It sounds simple, but it's highly effective for keeping the team moving forward, embracing the results, and using their power hours. I've experimented with longer meetings and different times of the day, but I found this “Ten at Ten” strategy to be the most effective. Following this meeting, since I'm in my natural power hour, I can then throw my energy into debottlenecking the team, moving some of the tough rocks forward, or pairing with somebody on a key challenge they are facing.

 

5. Is it possible to use time-boxing to get done what you need from others?

I think when it comes to getting others to get done what we need, we're touching on things beyond timeboxing. For example, one key to getting something done by others is to have them “sign up” for the work, versus “assigning the work” to them. If they are part of the process and you have buy-in, then they will naturally want to do the work versus resist the work. It's also important to have the person who will do the work estimate the work. This helps set expectations better, as well as account for how long the work actually takes. Sometimes there are deadlines of course, but if it's about having somebody sign up to do their best work, it's important they have a say in how long it should take. This improves personal accountability if they internalize the schedule.

 

If we assume somebody wants to do the work, then the next thing to focus on is when it will be done. This is where timeboxing comes into play. If you're working within a timebox, then you can work backwards from when it's due. For example, aside from timeboxes within the day, I also think of timeboxes in terms of a day, a week, and a month. Beyond the month, I tend to think in terms of quarters. If I need somebody to do something for me, I now make it a habit to tell them when I need it by. I used to make the mistake of just asking for the work. Giving a date makes it easier for them because they see what timeframe I'm operating within. Here is the art part, though. Sometimes people think they can't do the work justice within the timebox, so what I do is reset expectations and help them see the minimum types of things they might do within the timeframe. For example, if I need quick feedback on something, I'll let somebody know that I just need high-level or directional feedback at this stage; otherwise, they won't think it's reasonable to do a detailed, comprehensive review, which is not even what I want at that stage. That's another reason why timeboxes can help. They force you to put expectations on the table and get clarity on what's good enough for now versus what's the end-in-mind, and how to chunk up value along the way.

 

I do think one of the most powerful tools for any longer-term project is milestones.   Chunking up the timeline into meaningful milestones helps everybody see key dates to drive to.   Effectively, this also chunks up the project into smaller timeboxes or windows of time.   It then becomes easier to focus on identifying the value within a particular timebox to reach the milestone.   The other advantage of this approach, when it comes to driving results from others, is that you can do milestone reviews.   People like to look good in front of their peers, so it naturally encourages them to do the work, to be seen as reliable and effective.  

That’s really timeboxing in a nutshell.  It’s simply treating time as a limited resource, and setting limits (both minimums and maximums) to help you stay balanced, stay focused, and get great results.

You Might Also Like

How To Use Timeboxing for Getting Results

Timebox Your Day

Time-boxes, Rhythm, and Incremental Value

Crafting Your 3 Wins for the Day Using Agile Results

Agile Results On a Page

Use Custom Attributes to initialize test environments

Some tests can be quite complex, perhaps having prerequisites that consist of various steps, querying initial conditions, loading test data, etc. You can use attributes to specify various test configurations. The sample below shows how to create your own attribute class and how to retrieve and use it in various parts of your code. I use this technique in conjunction with an XML config file (using System.Configuration.ConfigurationManager) to specify configurations that are global to all tests, which…
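The post is truncated in this feed, but here is a hedged sketch of the general technique it describes; the attribute name and properties below are my own invention, not the author's:

using System;
using System.Reflection;

// A custom attribute that declares the configuration a test method needs.
[AttributeUsage(AttributeTargets.Method)]
public sealed class TestEnvironmentAttribute : Attribute
{
    public bool LoadTestData { get; set; }
    public string DataFile { get; set; }
}

public static class TestSetup
{
    // Retrieve the attribute from a test method and initialize accordingly.
    public static void Initialize(MethodInfo testMethod)
    {
        var config = (TestEnvironmentAttribute)Attribute.GetCustomAttribute(
            testMethod, typeof(TestEnvironmentAttribute));

        if (config != null && config.LoadTestData)
        {
            Console.WriteLine("Loading test data from " + config.DataFile + "...");
            // ...load files, set up prerequisites, query initial conditions...
        }
    }
}

A test method would then be decorated with something like [TestEnvironment(LoadTestData = true, DataFile = "orders.xml")], and the test framework's initialization hook would call TestSetup.Initialize with the method's MethodInfo.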

Microsoft Dynamics Lifecycle Services Released!


 


Microsoft Dynamics Lifecycle Services (LCS) is designed to manage and optimize customer implementations – powered by Azure. The customer creates project workspaces for implementation, upgrade, and maintenance activities, and invites implementation experts from partner organizations and Microsoft into these workspaces for the execution of related activities. Partners will be able to create projects for learning and presales purposes. This enables partners to build out expertise around LCS and demonstrate it in sales cycles.

Take it for a test drive at  https://lifecycleservices.dynamics.com or https://lcs.dynamics.com.

Overview of what’s available in LCS V1:

COLLABORATION WORKSPACE

· Customer-managed collaboration workspace

· Cloud-based, secure environment

· Project management using SureStep or other methodologies

· Project-specific dashboard

· Tools & data that connect multiple lifecycle phases to enable better decision making by key stakeholders

Business process modeler

· Aligns industry processes (APQC) with process maps for Microsoft Dynamics AX.

· Create cross-functional flowcharts based on rich metadata.

· Map processes & perform fit-gap analysis.

· Integration between BPM & RapidStart.

· Quickly generate documents and flowcharts using the updated TaskRecorder (KB#2863182).

CODE & UPGRADE ANALYSIS

· Leverages a cloud-based rule engine to analyze code and identify potential best practice, performance and upgradeability issues.

· Generates actionable reports in Excel & HTML that can be imported into MorphX IDE as actionable to-do’s for developers.

License sizing estimator

· Estimate how many licenses you need (CAL)

· Model the effect of custom roles on license needs.

· Get automatic CAL-level estimates

USAGE PROFILER

· Model user & batch loads.

· See graphic representation of load volumes.

· Identify a starting point for infrastructure sizing.

Issue search

· Search a repository of reported, in-progress & fixed issues.

· Identify specific code objects & lines of code affected by hotfixes.

· Get notifications for issue status changes, and new fixes for AX functional areas.

DIAGNOSTICS

· Automate & monitor health checks for AX ecosystem.

· Collect data from AX environments.

· Evaluate data against pre-defined rules

· Generate reports to provide actionable corrective actions


[Build2013] Day 3 - Summary of the Sessions I Attended


The third day of Build 2013 had no keynote and, being the last day, the event wrapped up earlier than the other days (at 3 PM). Here is a quick summary of the sessions I attended.

 

Windows Runtime Internals: Understanding the Threading Model

A session that explains in depth how threading is used in WinRT and how it behaves. Since this is an advanced session, if you have built WinRT apps before and had questions about the WinRT threading model, it will answer many of them. It also covers the differences around the UI thread and the improvements added in Windows 8.1. Recommended for Windows 8 developers.

   

Cutting Edge Games on Windows Tablets

With PC form factors and usage patterns changing, this session introduced how to get cutting-edge games running on Windows tablets. It covered graphics, programming languages, package size, controls, screen size, split-screen and multi-display scenarios, cross-platform development, and more, and wrapped up by comparing the cutting-edge game support in Windows 8 and Windows 8.1. If you build cutting-edge PC games, there is a lot here you can apply when porting an existing game to the Windows 8.x platform or building a new one.

   

Building .NET Applications for Devices and Services

With developer targets and application patterns evolving, this session explained how .NET addresses native, device, and service environments. NuGet makes it easy to install dependent libraries, and Portable Class Libraries can be used to share code across Microsoft platforms such as .NET, Silverlight and Windows Phone. Not much of it is brand new, but it's a session worth watching if you're considering .NET for device and service development.

   

From Android or iOS: Bringing Your OpenGL ES Game to the Windows Store

This session first covered the steps and caveats involved in bringing iPhone and Android games built on OpenGL ES to the Windows Store, and in the second half Halfbrick (well known for Fruit Ninja) came on stage and candidly shared what they went through. (Windows 8 runs on many device types with varying hardware specs, so it's necessary to start testing for this from the early stages.) If you have a game built on OpenGL and are considering a Windows Store version, this session will help a lot, although the middle portion moves a bit fast.

   

I'll watch the other good sessions later as VODs and post short summaries on this blog.

   
