When I memcpy a struct into a std::atomic of that struct, why does the result not match?



Consider the following code:



// Code in italics is wrong.
struct Point3D { float x, y, z; };

std::atomic<Point3D> currentPoint;

bool LoadCurrentPointFromFile(HANDLE file)
{
    DWORD actualBytesRead;
    if (!ReadFile(file, &currentPoint, sizeof(Point3D),
                  &actualBytesRead, nullptr)) return false;
    if (actualBytesRead != sizeof(Point3D)) return false;
    return true;
}



This code tries to load a Point3D structure from a file directly into a std::atomic. However, the customer found that the results were not properly loaded, and suspected there may be a bug in the ReadFile function, because the value that should have been in the z member ended up in y, the value that should have been in the y member ended up in x, and the value that should have been in the x member wasn't loaded at all.



The ReadFile function is working fine. What's wrong is that you aren't using the std::atomic variable properly.



The contents of a std::atomic variable are not directly accessible. You have to use methods like store and load. There are operator overloads which make atomic variables appear to be regular variables, but at no point can you get the address of the underlying Point3D storage.



Processors have restrictions on the operands on which they can natively perform atomic operations. Some restrictions concern the size of the operand: most processors do not support atomic operations on 12-byte objects, and it's not reasonable to expect a processor to perform an atomic operation on a memory object that is megabytes in size, after all. Other restrictions are based on layout, such as whether the object is suitably aligned.



In the cases where the object cannot be managed atomically by the processor, the standard library steps in and adds a lock, and operations on the atomic variable take the lock to ensure that the operation is atomic. The reason everything is shifted is that the code took the address of the atomic variable itself, which includes the internal lock. And the value you intended to read into x didn't vanish: it overwrote the lock!



Access to the contents of the atomic variable must be done
by the appropriate methods on the atomic variable.



bool LoadCurrentPointFromFile(HANDLE file)
{
    DWORD actualBytesRead;
    Point3D point;
    if (!ReadFile(file, &point, sizeof(Point3D),
                  &actualBytesRead, nullptr)) return false;
    if (actualBytesRead != sizeof(Point3D)) return false;
    currentPoint.store(point);
    return true;
}


There's a presentation from CppCon 2017 that covers std::atomic from start to finish, including performance characteristics. I'm going to consider this video to be homework, because next time I'm going to chatter about it.


(Cross Post) Announcing public preview of soft delete for Azure Storage Blobs


Today we are excited to announce the public preview of soft delete for Azure Storage Blobs! The feature is available in all regions, both public and private.

When turned on, soft delete enables you to save and recover your data when blobs or blob snapshots are deleted. This protection extends to blob data that is erased as the result of an overwrite.

For more details on the feature, see the Microsoft Azure blog and the complete soft delete documentation.

Server Error 0x80004005 Request timed out.


I wrote this post and this lab about the impact of having debug=true in your web.config file.  The fact is, when you are running in a production environment, you do not want to have debug=true.  However, I was writing a series of NETSH tracing posts, and I needed a request to run without timing out so that all my scenarios completed; those took longer than the default request timeout, so I got the Request timed out YSOD seen in Figure 1 because I had deployed the application correctly, without debug=true.  Note that I had deployed my code to an IIS server and was not testing from within Visual Studio or IIS Express.


Figure 1, request timed out, 0x80004005

Server Error in '/' Application.

Request timed out. 
  Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace 
    for more information about the error and where it originated in the code. 
 Exception Details: System.Web.HttpException: Request timed out.
Source Error: 
 An unhandled exception was generated during the execution of the current web request. Information regarding the origin and 
   location of the exception can be identified using the exception stack trace below.
Stack Trace: 
[HttpException (0x80004005): Request timed out.]
Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.7.2053.0 

So, I set debug=true in my web.config, which overrides the request timeout setting.  Don’t do this in production because you would create many other problems.  It does make sense that the timeout is disabled while in debug mode, because when your code execution hits a breakpoint the timeout should not get triggered…
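If all you actually need is a longer window for one long-running request, a less invasive alternative to debug=true is to raise the execution timeout itself. Here is a minimal sketch, assuming a hypothetical code-behind page named LongRunningPage (the page name is mine, not from the original code); the same effect can be achieved site-wide with the httpRuntime executionTimeout attribute in web.config:

using System;
using System.Web.UI;

public partial class LongRunningPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Roughly equivalent to <httpRuntime executionTimeout="300" /> in web.config,
        // but scoped to this page; the value is in seconds (the default is 110).
        Server.ScriptTimeout = 300;
    }
}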

I am testing from a client, which calls an ASP.NET Web Forms application that executes this code, for example:

using (TcpClient client = new TcpClient())
{
  try
  {
    client.Connect("###.###.###.###", 80);
    var status = $"Did the client connect? {client.Connected.ToString()}";
    client.Close();
    var statusLabel = labelStatus.Text;
  }
  catch (Exception ex)
  {
    labelStatus.Text += $"<br /><br />{System.DateTime.Now.ToString()} - This is iteration number: {i.ToString()} " +
      $"there was an exception: {ex.Message.ToString()}";
  }
}
...
using (HttpClient client = new HttpClient())
{
    Task.Run(() => RequestDataFailHttps(i).Wait());
}
...
using (var client = new HttpClient())
{
  try
  {
     var result = await client.GetStringAsync("https://***brokenlink***.com");
  }
  catch(Exception ex)
  {
     labelStatus.Text += $"<br /><br />{System.DateTime.Now.ToString()} - This is iteration number: {i.ToString()} " +
       $"with an exception of: {ex.Message.ToString()}";
  }
}

The result is displayed on the page (Figure 2) and logged in my NETSH trace.


Figure 2, request timed out, 0x80004005

April 17-18. Free online conference: Data Platform


Today, data is everywhere! Hybrid data platform solutions unlock the potential hidden in your data, both on-premises and in the cloud, and open up opportunities for business transformation. Take a fresh look at the capabilities you already have.

Take advantage of the efficiency and flexibility of cloud solutions: move your databases to the cloud without changing code. Get access to analytics capabilities and build forecasts faster with Azure.

The modern world is data-driven and needs a completely new solution for storing data. That solution must scale fully on demand and pause when it is not needed. It must be designed for the exponential growth of data of all types, provide secure access to information, and offer predictive analytics capabilities to transform the business.

At this online event you will learn about:

  • Capabilities for improving performance, protecting data, using built-in analytics tools, and migrating data management systems to the cloud.
  • Predictive analytics and Machine Learning tools.
  • Working with big data: Azure SQL Data Warehouse.
  • Azure Cosmos DB capabilities.
  • Designing and implementing database solutions for Microsoft SQL Server and Microsoft Azure SQL Database.
  • Designing for high availability, disaster recovery, and scalability.
  • Business transformation with artificial intelligence.
  • Migration projects from competing platforms.
  • Ad hoc analysis with Spark in Azure.

The event is free, but registration is required!


The second day will traditionally be devoted to a deeper, more detailed look at the technologies and their importance, to building a modern cloud architecture, and to where and how to get the training needed to fully meet today's standards for professionals.

By attending this online session, you will be able to prepare thoroughly for exam 70-473 Designing and Implementing Cloud Data Platform Solutions.

Key topics:

  • Designing and implementing database solutions for Microsoft SQL Server and Microsoft Azure SQL Database.
  • Designing and implementing security.
  • Designing for high availability, disaster recovery, and scalability.
  • Monitoring and managing database implementations in Azure.

The event is free, but registration is required!


Discover Microsoft Azure!
You can find more information on the Azure Atlas site.

SharePoint 2013/2016 Usage Reports fail to show the correct number of clicks from anonymous access


 

Symptom

When using a Search Center with anonymous access, click statistics are not computed correctly for anonymous users, but other users' accesses are registered properly.

ULS logs will show entries like this:

w3wp.exe SharePoint Foundation CSOM ajwqj Medium Request does not have SPBasePermissions.UseRemoteAPIs permission. Need to check it when each API is accessed

 

Repro Steps

  • Set up web application for anonymous users:
  • From CA / Application Management, select the web Application and click on Authentication Providers, then the default Zone
  • From the Edit Authentication form, select Enable Anonymous Access and click Save
  • Navigate to the site collection / Site Settings / Site Permissions and click the Anonymous Access button on the ribbon
  • Configure anonymous access for the Entire Web Site
  • Close the browser and start a new browser session as an anonymous user

NOTE: When you set up anonymous access for the site, you will need the Require Use Remote Interfaces Permission checked

 

Results

When using the Search Center as anonymous, the clicks will not be computed.

 

Expected Results

Anonymous and non-anonymous clicks should be computed.

 

Cause

The UseRemoteAPIs permission is somehow disabled on the Search Center web site.

 

Solution

Run the following script:

 

 

LCS (March 2018 – release 2) release notes


The Microsoft Dynamics Lifecycle Services (LCS) team is happy to announce the immediate availability of the release notes for LCS (March 2018, release 2). 

Updates to the methodology 

In this release of LCS, we have added the ability for customers to modify and delete any new phases that are appended to a methodology. This applies to methodologies in Implementation projects and Create and Learn projects. We have received feedback that new phases added to a methodology were locked and prevented users from making further changes to the phases. We addressed this issue by ensuring that any new phases that are added to a methodology are not locked by default, which means that you can continue to make edits.  

Application upgrade 

With this release, we have improved the workflow of application upgrade requests. When the Microsoft Dynamics Service Engineering (DSE) team completes the upgrade process, the service request status will change to Ready for Validation. The system is available to you at this stage. Customers will validate and then change the status of the service request to Validation Successful or Validation Failed.

This new workflow enables a rollback of the upgrade even after an upgrade is successfully completed by DSE. If the status is changed to Validation Failed, a rollback of the upgrade is initiated. 

For more information on application upgrades, refer to scenario 3 in the topic, Process for moving to the latest update. 

Users can create, publish, and update solution packages with Global (restricted) scope and access packages that are managed at solution level  

Solution packages with Global (restricted) scope allow users to: 

  • Publish a solution with Restricted scope 
  • Manage users for Global (restricted) scope solution at solution level, such as add or delete.
  • Create and publish new versions of the solution package in which customers will receive automatic notification that a new version is available for update 

 This feature is available through Preview feature management. Click the Preview feature management tile to access the features available for preview. 

Select the preview feature, SharedToolAsset - Solution package, and set Preview feature enabled to Yes. 

Dynamics 365 - Translation Service for documentation

Dynamics 365 - Translation Service (DTS) introduced a new preview feature, Dynamics 365 Translation Service – Documentation Translation Support. When you enable this feature, you can submit a translation request for a Microsoft Word (.docx) file in DTS.

To enable this feature, go to LCS Preview feature management, select Dynamics 365 Translation Service - Documentation Translation Support, and set Preview feature enabled to Yes.

To create a new request, go to the DTS dashboard and select to create a new request. In the New translation request window, select Documentation in the File type field to proceed with a .docx file translation request.

After the request is submitted with source .docx files, DTS will generate the target language .docx files in a side-by-side text review format and the final output .docx in the source file format. You can review and edit the translations, and regenerate the files in the target language with your edits included.

  

 

TFS 2018 Update 2 RC1


We have released Team Foundation Server 2018 Update 2 RC1. Update 2 is the first "feature" update for TFS 2018 based on our updated release approach.

Here are some key links:
TFS 2018.2 RC1 Release Notes
TFS 2018.2 RC1 Web Installer
TFS 2018.2 RC1 ISO
TFS 2018.2 RC1 Express Web Installer
TFS 2018.2 RC1 Express ISO

RC1 is a go-live release that is fully supported for installation in your production environment. We've tested and used it in our own environment in addition to having run it on VSTS. RC1 is available in all languages, but you may notice that some strings are still in English. We expect to have all the strings localized by RC2. We'd love for you to install and use it in production and report any problems on Developer Community. As with all our Updates, it should be a seamless upgrade – no breaking changes, no changed pre-reqs, just a better product. We expect to release an RC2 and then the final RTW release.

Update 2 is the last “feature” release for TFS 2018. The only other update planned for TFS 2018 is Update 3, which will be a roll up of bug fixes, much like Update 1 for 2018. Our next major release will be TFS 2019 in November.

Update 2 includes many features from VSTS since our September deployment. You can see the details of the features in the release notes. Here are a few highlights:

Pull Requests

There have been many updates to pull requests, including pull request labels and mentions for pull requests. Also, the PR notifications now include the thread context. When a reply is made to a PR comment, the prior replies show in the body of the notification, so you won't need to open the web view to get the context.

Work

There are now two new macros for queries. @MyRecentActivity shows the work items you've viewed recently, and @RecentMentions returns the items you were mentioned in over the last 30 days.

We have also added support for the Not In query operator. You can query for work items "Not In" a list of states, IDs, or many other fields, without nested "Or" clauses.

Build and Release

We have made several enhancements to multi-phase builds. You can use a different agent queue for each build phase, run tests in parallel, give scripts access to the OAuth token for each phase, and run a phase only under specific conditions.

In Release, you can now use release gates to react to health checks in your release pipelines.

Package

You can now set retention policies in package feeds to automatically clean up older, unused package versions. Also, the Packages page has been updated to use the standard layout and filter bar.

Wiki

There are a bunch of improvements to the wiki, such as search, referencing work items, and pasting rich text. You can also preview the wiki page as you're editing.

 

RC1 includes all the features we have planned for Update 2. Please report any problems on Developer Community or call customer support if you need immediate help. We'd love for you to install RC1 and we're looking forward to your feedback.

Dealing with control/junk characters in the message body using code


Emails, meeting invitations, and NDR messages sometimes have control characters. Those characters were likely added by a sending application or API that was not written properly (due to a bug or an encoding setting issue). It's also possible that something in the transport path of the message has altered the body and added such characters. Add-ins on the sending side of an email may alter the body of a message and introduce issues – an example is an add-in which adds message footer text. Also, please keep in mind that some controls are sensitive to control characters and may not display characters correctly, may truncate text due to a NULL or other control character, or may throw an error.

In contrast, if you are seeing more than just a few control characters and the whole body looks like gibberish, then something else may be the culprit. If the full message body (or part of it) is scrambled (looks like gibberish or fake Chinese), then there may be issues with the language encoding settings in the email or with what is being used to view the body.

Generally, Exchange stores the body it gets without removing control characters. Code should always read bodies and handle issues with control characters (usually the action is to remove them). Exchange does not scan and remove control (i.e. junk) characters from bodies. Outlook is a very mature email client which is built to handle many types of bad data, and from what I've seen it usually does a very good job of removing the problem characters when the message is read or forwarded.

The way to make your code reliable is to read the full text and remove the problem characters. You should also consider following up on what added those characters and ask the application's owner to fix their code.
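To make that concrete, here is a minimal sketch of the kind of cleanup described above: strip control characters from a body string while keeping ordinary whitespace. This is illustrative only and is not an Exchange or EWS API; it is plain .NET string handling.

using System;
using System.Linq;

static class BodyCleaner
{
    // Removes control characters from a message body while keeping
    // tab, carriage return, and line feed intact.
    public static string RemoveControlCharacters(string body)
    {
        if (string.IsNullOrEmpty(body)) return body;

        return new string(body
            .Where(c => !char.IsControl(c) || c == '\t' || c == '\r' || c == '\n')
            .ToArray());
    }
}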


Deadline extended for connecting VSTS accounts to AzureAD


On January 5, 2018, I announced that Visual Studio Team Services will no longer allow creation of new MSA users with custom domain names backed by AzureAD.  While most customers agree with the direction of this change, I got clear feedback that they could not connect their VSTS to AzureAD by the March 31 deadline.  Based on this feedback, we are changing our tactics and extending the deadline to the end of September.

PLEASE NOTE: VSTS will continue to work the way it does today and current users will never lose access. The only restriction starting in October will be on the creation of new MSA users; existing users will continue to have access to VSTS.

From talking to hundreds of customers going through this process, we have heard clear feedback that customers see this as a Microsoft problem, not something specific to VSTS.  They’re looking for Microsoft to provide a clear path to not only change the way users log in to VSTS but to ensure their complete developer ecosystem stays intact, including access to their VS and Azure subscriptions.  We have updated our documentation to provide clearer guidance on how you can continue to access these as well as migrate them to an AzureAD identity if you so choose.

We are extending the deadline to allow administrators to move to AzureAD by the end of September, at which time no new MSAs can be created with custom domain names backed by AzureAD.  We are also working on a new technical solution that will allow new users to sign into VSTS with their AzureAD identity while existing users continue to sign in with their MSA identity.  I will post more about this once we have a firm plan in place.

Please let me know if you have any other questions or concerns. Thank you for your continued use and support of VSTS.

Thank you,

Justin Marks, Principal PM, VSTS Identity

Known Issue: Unexpected Client Error occurs on the Environment details page in LCS for Dynamics 365 for Finance and Operations on-premises


A recent change is causing an Unexpected Client Error on the Environment details page in LCS. This error only occurs for Dynamics 365 for Finance and Operations on-premises environments. Although the error occurs, there is no impact on the on-premises environment or the features in LCS that allow you to interact with the on-premises environment. We are actively working on resolving this issue. We will update this blog post once the fix has been released.

Important changes to OpenAPI import and export


We introduced some changes in how OpenAPI import and export works in API Management. We made these changes based on customer feedback and to better align with OpenAPI semantics. The new behavior is in effect in the March 28 release when using Azure portal (including integrated Swagger editor) or management API version 2018-01-01 (or later).

Add new API via OpenAPI import

For each operation found in the OpenAPI document, a new operation will be created with the Azure resource name and display name set to operationId and summary, respectively.

If operationId is not specified, the Azure resource name value will be generated by combining the HTTP method and path template, e.g. get-foo.

If summary is not specified, the display name value will be generated by combining the HTTP method and path template, e.g. Get - /foo.

Update an existing API via OpenAPI import

During import, the existing API is changed to match the API described in the OpenAPI document. Each operation in the OpenAPI document is matched to an existing operation by comparing its operationId value to the Azure resource name of the existing operation.

If a match is found, the existing operation's properties will be updated "in-place".

If a match is not found, a new operation will be created using the rules described in the section above. For each new operation, the import will attempt to copy policies from an existing operation with the same HTTP method and path template.

All existing unmatched operations will be deleted.

To make import more predictable, please follow these guidelines:

  • Make sure to specify operationId properties for every operation.
  • Refrain from changing operationId values.
  • Never change operationId and HTTP method or path template at the same time.

Export API as OpenAPI

For each operation, its Azure resource name will be exported as operationId, and its display name will be exported as summary.

What’s new in VSTS Sprint 131 Update


The Sprint 131 Update of Visual Studio Team Services (VSTS) has rolled out to all accounts and includes several features that were prioritized and influenced by feedback over the last several months.

The Work Items hub, which is now generally available, brings important work together into a new hub with several views for items you’re following, your activity, and, of course, items assigned to you. I showed this new hub to another team in Microsoft yesterday and they were eager to start using it since it enables them to drop some of the queries they often run.

We also added virtual machine support to Azure DevOps Projects in this update so you can retain that lower-level control over your app. If you’re new to VSTS or ready to start a new project, Azure DevOps Projects makes it super simple to get started with a sample app and full pipeline, complete with telemetry, right in the Azure Portal.

For those who want to use their Visual Studio subscription with an alternate email, such as on customer projects, you can now use an AAD-based alternate email account to leverage that existing subscription, whether your authentication uses Azure Active Directory or a Microsoft Account.

Check out the full release notes for more.

How to build a chat bot in 10 minutes


Written by Natalie Afshar

This blog post is inspired by a workshop on Azure and Chatbots by Ray Fleming at the Microsoft Learning Partner Summit in January 2018.

It was Arthur C Clarke’s third law that "any sufficiently advanced technology is indistinguishable from magic."  And Ray Kurzweil likened modern-day technology to the spells and magic in Harry Potter: “Harry unleashes his magic by uttering the right incantation. Our incantations are the formulas and algorithms underlying our modern-day magic. With just the right sequence we can get a computer to read a book out loud or understand human speech…”

Through Microsoft Azure’s cognitive services and bot framework, we can help our students and teachers discover exactly that.

In this post I’ll outline some simple bots you can take advantage of: 

But first of all: What is the bot framework? And how do you explain a bot?

The bot framework lies within Microsoft’s cloud offering, Azure, with a multifaceted host of solutions best summarised by the graphic below. The bot framework lies under the intelligence offerings, along with Microsoft’s cognitive services, and Cortana.

So what is a bot?

When I first heard of a chat bot, I imagined small robots buzzing around to complete tasks. Actually, a bot is an application that performs one or more automated tasks. You can find them all over the internet.

Chatbots use conversation as the interface.

What can bots do?
  1. Information Retrieval: Lookup, reference, and information-seeking scenarios backed by a data source. E.g. “What subjects are offered for year 12 in 2018?” “When are the trains leaving on Thursday?”
  2. Transactional: Look up info and make amendments, in scenarios backed by a data source. “Upgrade my account to plan B” or “book two tickets for film A on Monday using my credit card”
  3. Advisory Role: Prescriptive guidance via ‘expert systems’ based on user input. “Are these school shoes appropriate?” “Should I add an additional component to my service plan?”
  4. Social Conversations: Ability to sense sentiment and engage in open-ended conversation within the bot's area of expertise. “Your product is terrible, I would like a refund.” “I have had a terrible experience, who can I talk to?”

Bot services can be used in a very different way to how we conventionally think about IT, and setting them up can be simple enough that students can take it on as a task. It is making a request to an internet service to do something for you: translate a text, hold a conversation, or tell you what is in an image.

Explore Bots around the web

One example is Microsoft’s CaptionBot that can interpret photos by suggesting a caption. You can play around with this tool by uploading your own photo or link a photo from the web.

Other bots that exist around the web include: Microsoft summarize bot, Bing image bot, Bing news bot, build bot and Murphy bot, to name a few.

Murphy Bot is an online chatbot running on Azure that is powered by the intelligence of Microsoft Cognitive Services, including the knowledge of Bing. You can chat with Murphy using Skype and ask it hypothetical "what if ..." questions like "what if I were superman?" Murphy will try to respond with an image that visualizes an answer to your question. Murphy is brand new and still learning so it sometimes doesn't have an answer right away, but the more people interact with it, the more creative it will become, gradually improving the results.

Make a bot right now: QnA maker

One quick and easy way to create a bot, right now, is through Qnamaker.ai – it is a chatbot service that runs on Microsoft Azure, and is a super quick way to build a bot.

It will take you 5-10 minutes: simply link it to an existing FAQ document or webpage and it will generate a bot to answer Q&As. These can be embedded into a variety of channels, from websites to Skype, SMS or voice services, WhatsApp, Facebook Messenger or even WeChat.

Here’s a bot that I made in about 10 minutes.

Are there scenarios you could imagine using a bot within your school or university? Such as admissions information or building a student service bot?

If your school has a STEM club, perhaps you could build a bot for parents of the school as a lesson for students.

Bots built on Microsoft Azure have language understanding built into them. For example, if you ask “can I donate clothes”, even if that exact phrase isn’t on the website, the bot will work out what you are trying to ask and give you an answer.
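If you want to go one step beyond the portal and call your knowledge base from code, the published QnA endpoint is just a REST service. The sketch below is a rough illustration using HttpClient; the endpoint URL, knowledge base ID, key, and the exact authorization header depend on your own QnA Maker service and its version, so treat all of those values as placeholders copied from your service's settings page.

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class QnaBotClient
{
    // Placeholder values; your QnA Maker settings page shows the real ones.
    const string Endpoint = "https://<your-qna-service>/qnamaker/knowledgebases/<kb-id>/generateAnswer";
    const string EndpointKey = "<your-endpoint-key>";

    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("Authorization", "EndpointKey " + EndpointKey);

            // Ask the knowledge base the same kind of question discussed above.
            var body = new StringContent("{\"question\": \"Can I donate clothes?\"}",
                                         Encoding.UTF8, "application/json");
            HttpResponseMessage response = await client.PostAsync(Endpoint, body);

            // The response is JSON containing the best-matching answers and their confidence scores.
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}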

Although still in its early stages, one future use for this in education can be asking a bot to find you all the photos of the school library with students standing in front of it, or parents sending a photo of black school shoes via text message to the bot to get feedback if they will be approved by the school’s shoe policy – or even texting the bot for ideas on a healthy school lunch recipe while at the supermarket.

If you are interested in learning more about Machine Learning, to use in your classrooms or to explore more generally, visit our Cognitive Services website for information on how our intelligence APIs work, such as our vision, speech, language, knowledge and search APIs.

 

Our mission at Microsoft is to equip and empower educators to shape and assure the success of every student. Any teacher can join our effort with free Office 365 Education, find affordable Windows devices and connect with others on the Educator Community for free training and classroom resources. Follow us on Facebook and Twitter for our latest updates.

Takeaways from the 2018 CSUN Assistive Technology Conference


A group of people wearing blue T-shirts at CSUN

Microsoft had a huge presence at CSUN 2018!

Last week concluded the 33rd annual CSUN Assistive Technology Conference, a gathering of leaders and change-makers working to make technology more accessible. This year’s conference was filled with thought-provoking sessions and impactful demonstrations from experts in academia, advocacy organizations, government agencies and businesses.

It’s an honor to be involved in this event, and this year was a record year for us. With over 80 people from Microsoft in attendance, we were able to host 23 sessions (a record) led by engineers, program managers and researchers, and to showcase our latest accessible technology at our booth in the Grand Hall. We were so grateful for all the questions, comments and feedback gained throughout the conference, especially at our reception. We took away reams of feedback and nuggets of joy from experts across the industry. Every bit will help us on the journey to empower all people and organizations to achieve more.

Jenny Lay-Flurrie, chief accessibility officer at Microsoft, kicked off with a talk about the progress made toward making Microsoft's products and services more accessible. From inclusive hiring initiatives to technology for school, work and home, we are making progress toward our initiative to empower more people to achieve their goals.

Highlights

  • Making education more accessible was a key theme with discussions around tactics and resources for engaging students with disabilities. Microsoft demoed Project Torino, which is focused on developing a physical programming language for teaching computational thinking skills and basic programming concepts to children ages 7–11 who are blind or have low vision.
  • Swetha Machanavajhala discussed her work on the Hearing AI app for people that are deaf or hard-of-hearing. It leverages Microsoft’s artificial intelligence platform to deliver visualized audio for your environment.
  • Some sessions offered a look at exploratory uses of virtual and augmented reality in the classroom or for therapy purposes, as well as the presence of many wearables and head-mounted displays, such as the Windows Mixed Reality Headsets, in the Expo Hall.
  • Several speakers addressed the importance of including people with disabilities in the review process during product development. If you want to make products that work for everyone, you need real feedback from a diverse set of customers. Megan Lawrence discussed the new Accessibility User Research Collective, a partnership between Microsoft and the Shepherd Center to improve the quality and quantity of user feedback from the disability community.
  • You can find all of the presentation decks from Microsoft's CSUN 2018 sessions right here.

Musician Stevie Wonder (left) pictured with college student Veronica Lewis (right).

One of the high points of the event for many attendees was the opportunity to meet world-renowned musician, Stevie Wonder. He is a fan of the power of technology to change the world for people with sight loss, and met folks from Seeing AI (which he is a fan of!) and Soundscape. It was great to see his support for Microsoft, attendees of the conference and the role that technology can play to empower people with disabilities.

On behalf of the entire Microsoft Accessibility team, we’d like to thank CSUN for hosting another incredible event. We left deeply inspired and were thrilled to speak with so many attendees who stopped by our booth, participated in our sessions and joined our reception. The insightful feedback we gain from events like CSUN is invaluable as we work on the next phase of our accessibility journey at Microsoft. Please keep it coming!

How to Check Your Azure Management Tools Version


Both the Azure PowerShell SDK and Azure CLI 2.0 are frequently updated to stay current.  Older versions have a limited shelf life.

If a certain Azure management scenario isn't working as expected, it may be because of stale tools. Here are commands to check the version of the tools you may currently have installed:

Azure Powershell:

Get-Module -ListAvailable -Name Azure -Refresh

Azure CLI 2.0: 

az --version

The version number corresponding to the most recent release for each of these projects is documented in the Github release history:

If keeping the installed version of these tools up to date is an annoyance, I would suggest checking out Azure Cloud Shell as a possible alternative.  The Cloud Shell feature is maintained by the Azure Portal and tracks the most recent version of PowerShell or the CLI automatically.  It doesn't require you to install PowerShell or the CLI locally on your machine.  You access it through the browser, but it feels pretty close to what you'd get from a local shell. More details at: https://azure.microsoft.com/en-us/features/cloud-shell/

Azure Event Hubs Geo-DR configurations, now enabled in all Azure regions

$
0
0

Geo-disaster recovery for Azure Event Hubs is now enabled in all Azure regions. You can now enable this configuration on your existing namespaces and new namespaces in the region of your choice.

This feature gives you the flexibility of picking a primary region and a primary namespace in it to pair with a secondary namespace in a different region. Once paired, the feature keeps the entities in sync between these regions. So, in the event of a regional disaster where Event Hubs is unavailable in the primary region, you initiate a fail-over. Once the fail-over is initiated, the Event Hubs in the secondary region takes charge so you can continue with your application.

The Geo-DR feature also introduces the alias configuration, which represents the pairing of the primary and secondary namespaces. Once geo-paired, you can obtain the connection string for the alias, which clients can use to talk to Event Hubs. With this, when a fail-over is initiated, the connection to the primary is cut and the clients then talk to the secondary namespace without any changes on the client side.
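For example, a client written against the Microsoft.Azure.EventHubs package can simply be handed the alias connection string and never needs to change when a fail-over happens. A minimal sketch (the connection string values are placeholders for your own alias and event hub):

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;

class AliasSenderSample
{
    static async Task Main()
    {
        // Placeholder alias connection string copied from the Geo-DR alias in the portal;
        // it stays valid after a fail-over, so this code does not change.
        const string aliasConnectionString =
            "Endpoint=sb://<your-alias>.servicebus.windows.net/;SharedAccessKeyName=<key-name>;SharedAccessKey=<key>;EntityPath=<your-event-hub>";

        var client = EventHubClient.CreateFromConnectionString(aliasConnectionString);
        await client.SendAsync(new EventData(Encoding.UTF8.GetBytes("hello from the alias")));
        await client.CloseAsync();
    }
}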

Please note: This release does not include data disaster recovery. This feature currently covers metadata-only recovery.

Please see the full documentation of the feature, including the code samples here: https://docs.microsoft.com/azure/event-hubs/event-hubs-geo-dr

This feature is available only on Standard Event Hubs namespaces. Geo-DR configurations for Event Hubs can be enabled using either PowerShell or the Azure CLI.

Enjoy this new capability and feel free to leave us your valuable feedback/comments below!

 

Happy Event-ing!

Event Hubs team

NAV on Docker 0.0.5.5 or…– What’s new


As some users of NAV on Docker have noticed, the images get rebuilt from time to time. We typically rebuild all images when we have changes to the generic layer which might be of value to users of NAV on Docker. This blog post describes what's new since the last blog post on 0.0.4.1 (December 2nd 2017).

A lot of small improvements have happened, but for most users the changes aren't really visible if you "just" use NAV containers for development or test.

0.0.5.5 - 2018.03.29

Remove tenant data from the App database. For performance reasons, we left the tenant part in the App database when switching to multitenancy. This, however, made Export-NavContainerDatabasesAsBacpac create incorrect bacpac files.

Test Assemblies, used for the performance test framework, are included in images built on generic 0.0.5.5 or newer. They are located in C:\Test Assemblies inside the container.

Task Scheduler is now enabled by default for developer preview and Business Central Sandboxes.

TestToolkit and Test objects for local developer preview and business central sandbox containers

0.0.5.4 - 2018.03.24

In order to stay with the new naming strategy of the Microsoft dotnet images, NAV containers are now using microsoft/dotnet-framework:4.7.1-windowsservercore-ltsc2016 as the base image.

TenantEnvironmentType is set to Sandbox for Business Central Sandboxes and new tenants are mounted as sandbox tenants.

Container age is checked on restart of the container, meaning that a container whose age exceeds 90 days will be unable to start once stopped.

0.0.5.3 - 2018.02.26

Generate symbols and setup NAV for dual development between AL and C/AL.

Restore the bakfile to a separate folder to avoid conflicts with the database in the container.

0.0.5.2 - 2018.02.23

Support for Azure SQL in the ARM template (http://aka.ms/getnavext), which allows you to deploy bacpacs to Azure SQL as part of your Azure Resource Manager Template.

Support for TLS 1.2, which has become a requirement for github and other download places.

Bugfix: ClickOnce failure without AcsUri defined.

0.0.5.1 - 2018.02.17

AAD support for Windows Client using ClickOnce

0.0.5.0 - 2018.02.15

Support for Azure Active Directory (AAD) authentication

0.0.4.5 - 2018.02.03

Fail fast if the amount of memory assigned to the NAV container isn't at least 3 GB.

Support for multitenancy.

0.0.4.4 - 2017.12.19

Support for sharing URLs as folders to containers to allow for script overriding in Azure Container Services and more.

Bugfix: Report preview not working due to missing t2embed.dll

0.0.4.3 - 2017.12.09

Include upgradetoolkit and extensions from the DVD on the container.

0.0.4.2 - 2017.12.04

Support for specifying custom config settings for Service Tier, Web Client on Windows Client as a parameter for Docker. Example: --env CustomNavSettings=EnableTaskScheduler=true

Split the install scripts to version specific folders to simplify source code.


 

Alongside all of these changes, the navcontainerhelper PowerShell module has also been updated together with the NAV ARM Templates to give a better experience when deploying test, development and demo environments.

Enjoy

Freddy Kristiansen
Technical Evangelist

Get Image Meta Data (Part 1)


 

Let's build on what we learned from the “ComputerVisionMetaData” project we built in the post Computer Vision – Analyze (Get Meta Data of Image). Let's first begin by creating a new Windows Forms App project called “GetImageMetaData” in Visual Studio.

If you want to add this project to source control such as TFS or Git, select the Add to Source Control option. If you do not wish to at this time, or are not sure what this is, then verify that this option is unchecked and click OK.

In a few moments you will be presented with the Project Template that we will begin to build our code on.

Within the Solution Explorer Window Click on References

Using Visual Studio 2017, my project has automatically been assigned the references shown in this image, but we need a few more.

Right-click on either “GetImageMetaData” or “References” and select “Manage NuGet Packages…”

You should be presented with the NuGet Package Manager.

Select Newtonsoft.JSON and be sure to Install the latest version.

When installation is completed, in the Solution Explorer click on “Form1.cs”.

Expand Form1.cs in the Solution Explorer

Now click on Form1() and in your main window you should see the default code that was in this template.

At the top, add the following using statements:

using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
using System.IO;
using System.Net.Http;

After you have added all the necessary using statements, locate the following lines, and just after the brace

public partial class Form1 : Form
{

add the following constants

const string skey = "your vision api key here";
const string uriBase = "https://westus.api.cognitive.microsoft.com/vision/v1.0/";
const string apiMethod = "analyze";

At this point your code should now look like this…

Now just after the following block of code...

public Form1()
{
    InitializeComponent();
}

add the following block of code

private void cmdBrowse_Click(object sender, EventArgs e)
{
    var dlg = new OpenFileDialog();
    dlg.Filter = "jpg|*.jpg";
    var result = dlg.ShowDialog();
    if (result == DialogResult.OK)
        txtPicFilename.Text = dlg.FileName;
}

after this add the next block of code

public static async Task<AnalysisResults> MakeAnalysisRequest(string imageFilePath)
{
    HttpClient client = new HttpClient();
    client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", skey);
    AnalysisResults results;

    // Ask the Vision API for categories, a description, and color information.
    string requestParameters = "visualFeatures=Categories,Description,Color&language=en";
    string uri = uriBase + apiMethod + "?" + requestParameters;

    HttpResponseMessage response = null;
    byte[] byteData = GetImageAsByteArray(imageFilePath);

    using (ByteArrayContent content = new ByteArrayContent(byteData))
    {
        content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/octet-stream");
        response = await client.PostAsync(uri, content);
        string contentstring = await response.Content.ReadAsStringAsync();
        Console.WriteLine("\nResponse:\n");

        // Parse the JSON response and keep both the parsed object and an indented string.
        var jsonObj = JObject.Parse(contentstring); // JsonConvert.DeserializeObject(contentstring);
        string indentedJson = JsonConvert.SerializeObject(jsonObj, Formatting.Indented);
        results = new AnalysisResults() { JsonObj = jsonObj, jsonStr = indentedJson };
    }

    return results;
}

And then finally add the last block of code

public static byte[] GetImageAsByteArray(string imageFilePath)
{
    FileStream fileStream = new FileStream(imageFilePath, FileMode.Open, FileAccess.Read);
    BinaryReader binaryReader = new BinaryReader(fileStream);
    return binaryReader.ReadBytes((int)fileStream.Length);
}

Your code should now look like the following...

 

 

At this time ignore the red squiggly lines
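The squiggles are there because the AnalysisResults class and the Analyze button handler don't exist yet; Part 2 adds the class and the designer code that wires up cmdAnalyze_Click. As a preview, and purely as a sketch of my own (not the code from this series), the handler that eventually ties everything together could look something like this, using the txtPicFilename, txtResults, and txtDescription controls defined in Part 2:

private async void cmdAnalyze_Click(object sender, EventArgs e)
{
    if (!File.Exists(txtPicFilename.Text))
    {
        MessageBox.Show("Please select a .jpg file first.");
        return;
    }

    AnalysisResults results = await MakeAnalysisRequest(txtPicFilename.Text);

    // Show the raw, indented JSON plus the first caption the Vision API suggested.
    txtResults.Text = results.jsonStr;
    txtDescription.Text = results.JsonObj["description"]["captions"][0]["text"].ToString();
}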

Continue to Get Image Meta Data (Part 2)

Get Image Meta Data (Part 2)


Continuing from the “Get Image Meta Data (Part 1)” blog post, we should now have the first part of the code written, which will send the captured image to the Microsoft Cognitive Services Vision API. In this next section we will build the image collection piece with a user-friendly UI, as well as create a new class.

First, let's create the additional class.

In the Solution Explorer, right-click on the project name (in this example it's “GetImageMetaData”) and select Add Class.


In the Add new class window, name the class “AnalysisResults”.


 

Click on Add

Now you should see a new class added to your code that has the following code within it…


 

In between the braces of

class AnalysisResults
{

}

Add the following code

public dynamic JsonObj;
public string jsonStr;


Before we continue change the “class AnalysisResults” to “public class AnalysisResults”

namespace GetImageMetaData
{
public class AnalysisResults
{
public dynamic JsonObj;
public string jsonStr;
}
}


I know there's not much to it, but let's just continue, shall we?

Now, we could create the text boxes manually and define actions for them in the user interface, but that will be for another blog. To speed things up we will just add the needed code to set up the interface.

In the Solution Explorer, double-click on “Form1.Designer.cs”, which is under “Form1.cs”.


After the brace of

private void InitializeComponent()
{

remove the existing code after the brace and replace with the following block of code.

this.label1 = new System.Windows.Forms.Label();
this.txtPicFilename = new System.Windows.Forms.TextBox();
this.cmdBrowse = new System.Windows.Forms.Button();
this.cmdAnalyze = new System.Windows.Forms.Button();
this.label2 = new System.Windows.Forms.Label();
this.txtResults = new System.Windows.Forms.TextBox();
this.txtDescription = new System.Windows.Forms.TextBox();
this.SuspendLayout();
//
// label1
//
this.label1.AutoSize = true;
this.label1.Location = new System.Drawing.Point(23, 23);
this.label1.Name = "label1";
this.label1.Size = new System.Drawing.Size(157, 25);
this.label1.TabIndex = 0;
this.label1.Text = "Picture Filename";
//
// txtPicFilename
//
this.txtPicFilename.Anchor = ((System.Windows.Forms.AnchorStyles)(((System.Windows.Forms.AnchorStyles.Top | System.Windows.Forms.AnchorStyles.Left)
| System.Windows.Forms.AnchorStyles.Right)));
this.txtPicFilename.Location = new System.Drawing.Point(28, 52);
this.txtPicFilename.Name = "txtPicFilename";
this.txtPicFilename.Size = new System.Drawing.Size(818, 29);
this.txtPicFilename.TabIndex = 1;
//
// cmdBrowse
//
this.cmdBrowse.Anchor = ((System.Windows.Forms.AnchorStyles)((System.Windows.Forms.AnchorStyles.Top | System.Windows.Forms.AnchorStyles.Right)));
this.cmdBrowse.Location = new System.Drawing.Point(863, 45);
this.cmdBrowse.Name = "cmdBrowse";
this.cmdBrowse.Size = new System.Drawing.Size(65, 45);
this.cmdBrowse.TabIndex = 2;
this.cmdBrowse.Text = "...";
this.cmdBrowse.UseVisualStyleBackColor = true;
this.cmdBrowse.Click += new System.EventHandler(this.cmdBrowse_Click);
//
// cmdAnalyze
//
this.cmdAnalyze.Anchor = ((System.Windows.Forms.AnchorStyles)((System.Windows.Forms.AnchorStyles.Top | System.Windows.Forms.AnchorStyles.Right)));
this.cmdAnalyze.Location = new System.Drawing.Point(946, 45);
this.cmdAnalyze.Name = "cmdAnalyze";
this.cmdAnalyze.Size = new System.Drawing.Size(185, 45);
this.cmdAnalyze.TabIndex = 4;
this.cmdAnalyze.Text = "Analyze";
this.cmdAnalyze.UseVisualStyleBackColor = true;
this.cmdAnalyze.Click += new System.EventHandler(this.cmdAnalyze_Click);
//
// label2
//
this.label2.AutoSize = true;
this.label2.Location = new System.Drawing.Point(23, 115);
this.label2.Name = "label2";
this.label2.Size = new System.Drawing.Size(76, 25);
this.label2.TabIndex = 5;
this.label2.Text = "Results";
//
// txtResults
//
this.txtResults.Anchor = ((System.Windows.Forms.AnchorStyles)((((System.Windows.Forms.AnchorStyles.Top | System.Windows.Forms.AnchorStyles.Bottom)
| System.Windows.Forms.AnchorStyles.Left)
| System.Windows.Forms.AnchorStyles.Right)));
this.txtResults.Location = new System.Drawing.Point(28, 183);
this.txtResults.Multiline = true;
this.txtResults.Name = "txtResults";
this.txtResults.ScrollBars = System.Windows.Forms.ScrollBars.Vertical;
this.txtResults.Size = new System.Drawing.Size(1103, 520);
this.txtResults.TabIndex = 6;
//
// txtDescription
//
this.txtDescription.Anchor = ((System.Windows.Forms.AnchorStyles)(((System.Windows.Forms.AnchorStyles.Top | System.Windows.Forms.AnchorStyles.Left)
| System.Windows.Forms.AnchorStyles.Right)));
this.txtDescription.Location = new System.Drawing.Point(28, 143);
this.txtDescription.Name = "txtDescription";
this.txtDescription.ReadOnly = true;
this.txtDescription.Size = new System.Drawing.Size(1095, 29);
this.txtDescription.TabIndex = 7;
//
// Form1
//
this.AutoScaleDimensions = new System.Drawing.SizeF(11F, 24F);
this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font;
this.ClientSize = new System.Drawing.Size(1161, 735);
this.Controls.Add(this.txtDescription);
this.Controls.Add(this.txtResults);
this.Controls.Add(this.label2);
this.Controls.Add(this.cmdAnalyze);
this.Controls.Add(this.cmdBrowse);
this.Controls.Add(this.txtPicFilename);
this.Controls.Add(this.label1);
this.Name = "Form1";
this.Text = "getMetadata v1.0";
this.ResumeLayout(false);
this.PerformLayout();

Then, after the #endregion, add the following field declarations:

private System.Windows.Forms.Label label1;
private System.Windows.Forms.TextBox txtPicFilename;
private System.Windows.Forms.Button cmdBrowse;
private System.Windows.Forms.Button cmdAnalyze;
private System.Windows.Forms.Label label2;
private System.Windows.Forms.TextBox txtResults;
private System.Windows.Forms.TextBox txtDescription;

The final code should now look like this:

namespace GetImageMetaData
{
partial class Form1
{
/// <summary>
/// Required designer variable.
/// </summary>
private System.ComponentModel.IContainer components = null;

/// <summary>
/// Clean up any resources being used.
/// </summary>
/// <param name="disposing">true if managed resources should be disposed; otherwise, false.</param>
protected override void Dispose(bool disposing)
{
if (disposing && (components != null))
{
components.Dispose();
}
base.Dispose(disposing);
}

#region Windows Form Designer generated code

/// <summary>
/// Required method for Designer support - do not modify
/// the contents of this method with the code editor.
/// </summary>
private void InitializeComponent()
{
this.label1 = new System.Windows.Forms.Label();
this.txtPicFilename = new System.Windows.Forms.TextBox();
this.cmdBrowse = new System.Windows.Forms.Button();
this.cmdAnalyze = new System.Windows.Forms.Button();
this.label2 = new System.Windows.Forms.Label();
this.txtResults = new System.Windows.Forms.TextBox();
this.txtDescription = new System.Windows.Forms.TextBox();
this.SuspendLayout();
//
// label1
//
this.label1.AutoSize = true;
this.label1.Location = new System.Drawing.Point(23, 23);
this.label1.Name = "label1";
this.label1.Size = new System.Drawing.Size(157, 25);
this.label1.TabIndex = 0;
this.label1.Text = "Picture Filename";
//
// txtPicFilename
//
this.txtPicFilename.Anchor = ((System.Windows.Forms.AnchorStyles)(((System.Windows.Forms.AnchorStyles.Top | System.Windows.Forms.AnchorStyles.Left)
| System.Windows.Forms.AnchorStyles.Right)));
this.txtPicFilename.Location = new System.Drawing.Point(28, 52);
this.txtPicFilename.Name = "txtPicFilename";
this.txtPicFilename.Size = new System.Drawing.Size(818, 29);
this.txtPicFilename.TabIndex = 1;
//
// cmdBrowse
//
this.cmdBrowse.Anchor = ((System.Windows.Forms.AnchorStyles)((System.Windows.Forms.AnchorStyles.Top | System.Windows.Forms.AnchorStyles.Right)));
this.cmdBrowse.Location = new System.Drawing.Point(863, 45);
this.cmdBrowse.Name = "cmdBrowse";
this.cmdBrowse.Size = new System.Drawing.Size(65, 45);
this.cmdBrowse.TabIndex = 2;
this.cmdBrowse.Text = "...";
this.cmdBrowse.UseVisualStyleBackColor = true;
this.cmdBrowse.Click += new System.EventHandler(this.cmdBrowse_Click);
//
// cmdAnalyze
//
this.cmdAnalyze.Anchor = ((System.Windows.Forms.AnchorStyles)((System.Windows.Forms.AnchorStyles.Top | System.Windows.Forms.AnchorStyles.Right)));
this.cmdAnalyze.Location = new System.Drawing.Point(946, 45);
this.cmdAnalyze.Name = "cmdAnalyze";
this.cmdAnalyze.Size = new System.Drawing.Size(185, 45);
this.cmdAnalyze.TabIndex = 4;
this.cmdAnalyze.Text = "Analyze";
this.cmdAnalyze.UseVisualStyleBackColor = true;
this.cmdAnalyze.Click += new System.EventHandler(this.cmdAnalyze_Click);
//
// label2
//
this.label2.AutoSize = true;
this.label2.Location = new System.Drawing.Point(23, 115);
this.label2.Name = "label2";
this.label2.Size = new System.Drawing.Size(76, 25);
this.label2.TabIndex = 5;
this.label2.Text = "Results";
//
// txtResults
//
this.txtResults.Anchor = ((System.Windows.Forms.AnchorStyles)((((System.Windows.Forms.AnchorStyles.Top | System.Windows.Forms.AnchorStyles.Bottom)
| System.Windows.Forms.AnchorStyles.Left)
| System.Windows.Forms.AnchorStyles.Right)));
this.txtResults.Location = new System.Drawing.Point(28, 183);
this.txtResults.Multiline = true;
this.txtResults.Name = "txtResults";
this.txtResults.ScrollBars = System.Windows.Forms.ScrollBars.Vertical;
this.txtResults.Size = new System.Drawing.Size(1103, 520);
this.txtResults.TabIndex = 6;
//
// txtDescription
//
this.txtDescription.Anchor = ((System.Windows.Forms.AnchorStyles)(((System.Windows.Forms.AnchorStyles.Top | System.Windows.Forms.AnchorStyles.Left)
| System.Windows.Forms.AnchorStyles.Right)));
this.txtDescription.Location = new System.Drawing.Point(28, 143);
this.txtDescription.Name = "txtDescription";
this.txtDescription.ReadOnly = true;
this.txtDescription.Size = new System.Drawing.Size(1095, 29);
this.txtDescription.TabIndex = 7;
//
// Form1
//
this.AutoScaleDimensions = new System.Drawing.SizeF(11F, 24F);
this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font;
this.ClientSize = new System.Drawing.Size(1161, 735);
this.Controls.Add(this.txtDescription);
this.Controls.Add(this.txtResults);
this.Controls.Add(this.label2);
this.Controls.Add(this.cmdAnalyze);
this.Controls.Add(this.cmdBrowse);
this.Controls.Add(this.txtPicFilename);
this.Controls.Add(this.label1);
this.Name = "Form1";
this.Text = "getMetadata v1.0";
this.ResumeLayout(false);
this.PerformLayout();

}

#endregion

private System.Windows.Forms.Label label1;
private System.Windows.Forms.TextBox txtPicFilename;
private System.Windows.Forms.Button cmdBrowse;
private System.Windows.Forms.Button cmdAnalyze;
private System.Windows.Forms.Label label2;
private System.Windows.Forms.TextBox txtResults;
private System.Windows.Forms.TextBox txtDescription;
}
}

Continue to Get Image Meta Data (Part 3), where we will build, compile, and test.

