
Azure Cloud Service: how to log IIS requests

I would like to see the internals of http requests as they arrive at my cloud service in IIS. I have looked at quite a bit of information, but can't quite see how to enable it. For example, this site gives some helpful information. Following that page, what I have achieved is this:

  1. Connected to my Cloud Service in Visual Studio and updated the Log Directories transfer period to 1 minute (I tried different buffer sizes, but that had no effect).

    (screenshot: enabling IIS log transfer)


  2. Selected the cloud service in Visual Studio and chose to view diagnostic data. Then chose IIS logs inside Windows Azure Log directories:

    (screenshot: selecting the IIS logs)


  3. At this point I get a 404 error:

    (screenshot: 404 error)


So, it seems that while I have set up the potential to receive the log files, they are not actually being generated. It would be ideal if I could do this all without redeploying, but I think that I might need to enable something in web.config - just not sure what. I have read up on the link provided in this answer, but can't find what I actually need to enable beyond what I have already done. Any pointers would be great; I'm happy to try a different (non-IIS) approach if that is easier.

Update

So, based on the helpful advice from MikeWo and kwill, I did some further digging. First I made sure that the storage account is configured correctly. It does seem to be: firstly, because this is the same account I use for my web application, where the users' uploaded files turn up correctly; and secondly, because I enabled Infrastructure logs using the same process I did for IIS, and those logs do turn up:

(screenshot: infrastructure logs)

That made me think that the IIS logs were not being generated in the first place. So, I used remote desktop to connect to the server. Using IIS Manager, I first looked at the logging for the server:

(screenshot: IIS logging at the server level)

The log file location doesn't exist. So it seems that the folder hasn't even been created and nothing is being logged here. Next, I looked at the logging for my site:

(screenshot: logging for the site)

In this case, there were log files, but they were from over a month ago. Finally, I followed the instructions here and here in the hope that I would be able to increase the logging level on the site.

appcmd set config /section:httpLogging /dontLog:False
appcmd.exe set config "<mysite>" -section:system.webServer/httpLogging /dontLog:"false" /commit:apphost
appcmd.exe set config "<mysite>" -section:system.webServer/httpLogging /selectiveLogging:"LogAll" /commit:apphost

The commands were successful, but this seemed to have no effect; I didn't see any more logs turn up in my IIS folders.

I then tried:

appcmd set config /section:httpLogging /dontLog:False /commit:WEBROOT

And received:

Description: The configuration section 'system.webServer/httpLogging' cannot be read because it is missing a section declaration

I am not too keen to go changing config files on the server, but I would if I got some reassurance that I am going down the right path. Also, I realise that these changes wouldn't be persistent, but I just wanted to see if I could get anything to work at all.
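Before changing anything else, a read-only check like the one below (just a sketch, using the same "<mysite>" placeholder as above) at least shows whether the httpLogging section is declared and what it currently contains, without touching any config files:

%windir%\system32\inetsrv\appcmd.exe list config "<mysite>" -section:system.webServer/httpLogging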

These are the diagnostics settings in Visual Studio; it is using the same storage account as my application, which works fine:

(screenshot: diagnostics settings)

It is apparent that the IIS diagnostic files are turning up in the correct location to be transferred to storage:

(screenshot: local IIS diagnostic files)

Solution

kwill's answer put me on the right track in the end. The IIS folder did exist, but the last log was from over a month ago. I added a dummy file and it turned up in the blob. I will add a separate question about why the IIS logs are not being updated.
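For anyone wanting to try the same check, something along these lines over remote desktop is one way to drop a dummy file into the folder (the path below is just a placeholder for whatever IIS Manager shows as the site's log file location):

echo dummy log entry > "<log file location shown in IIS Manager>\test.log"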

The first thing I'd check is the connection string to the storage account you've set up for the diagnostics data. The next thing I'd do, if that looks correct, is use a storage tool to look into the storage account and make sure that the WAD-IIS-LogFiles container has been created. This is where Windows Azure Diagnostics (WAD) drops the files. You can use the storage tool in Visual Studio's Server Explorer, or something else like the free Cerebrata Azure Explorer (I'm biased since I work for Cerebrata, but there are many storage tools to choose from).
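If you prefer a command line over a GUI tool, something like the following also shows whether the container is there (this assumes the Azure CLI is installed; the account name and key are placeholders, and note that the actual container name is lower-case):

az storage container list --account-name <storageaccount> --account-key <key> --output table
az storage blob list --container-name wad-iis-logfiles --account-name <storageaccount> --account-key <key> --output table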

My guess is that you are getting the "ContainerNotFound" exception because it's not in the storage account. It should be automatically created when a transfer occurs, so this leads me to believe the transfers aren't happening, either due to a lack of data to put there or a bad configuration.

The Buffer size in the settings is just how much space you want to set aside on the individual local instances to buffer the data that is later transferred. There is a maximum amount of space you can configure for all diagnostics. You need to make sure this buffer has a value and that it's the same as the sum of the directory quotas you are putting in there. In your example you have 1024 for three different directories, but then NONE for the directories buffer total; this should be 3072 for what you have here.

As pointed out in the comment below by @kwill, the Buffer is the local space reserved for the logs that get transferred, and a value of None is acceptable. I've added this edit here for those that might not read through the comments to see the correction.

The transfer period controls how often the data is copied over to the storage account. Note that a transfer period of a minute is pretty aggressive if your site sees a ton of traffic and generates a lot of diagnostics data. Transferring a lot of data every minute adds to the resource load on the machine and takes up some of the bandwidth you are allotted for moving the data to the storage account.

Also, you can change the diagnostics configuration remotely via the Server Explorer plug-in, the API, or another tool. When you do this, the new values are written to BLOB storage, which the WAD agent on the instances polls to pick up changes. This should NOT require a redeployment and shouldn't cause a recycle of the machines either; however, it does take a little while for the WAD agent to pick up the change. You can configure that polling interval to be lower with tools that expose it (the VS Server Explorer doesn't).
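You can actually see that remotely-written configuration in the storage account itself: with the version of diagnostics in use here (WAD 1.x), the agent keeps its per-instance configuration blobs in a container named wad-control-container, so listing that container (same placeholder credentials as above, assuming the Azure CLI) shows the configuration blobs and their last-modified times:

az storage blob list --container-name wad-control-container --account-name <storageaccount> --account-key <key> --output table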

Mike's answer is very good for WAD troubleshooting. But for your specific issue, assuming you have your storage accounts configured correctly per Mike's answer, you just need to wait a bit longer. WAD won't transfer the IIS logs until IIS has released the lock on the files, which typically won't happen for up to an hour (until IIS starts using a new log file at the top of the hour).
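If you just want to see entries show up in the log file on disk sooner, one option is to flush the HTTP.SYS log buffer from an elevated command prompt; note this only writes buffered entries to the file, it doesn't make IIS release the file any earlier for the transfer:

netsh http flush logbuffer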

If you are getting other data in diagnostics storage (i.e. you see your logs or perf counter data) then you know the storage accounts are set up correctly.

The path should be C:\Resources\Directory\{DeploymentID}.{Rolename}.DiagnosticStore (see here). If you are getting Infrastructure logs written to your storage account then you must have that folder, since that is where the diagnostics configuration and cache files live.
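A quick way to confirm that folder (and any buffered IIS logs under it) over remote desktop is simply to list it; the deployment ID and role name in the path below are placeholders specific to your deployment:

dir C:\Resources\Directory
dir /s /b "C:\Resources\Directory\<DeploymentID>.<RoleName>.DiagnosticStore" | findstr /i ".log"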
