Microsoft Exchange RPC Client Access High CPU
This means that under certain conditions, having too many cores can actually lead to a high CPU condition. Hyper-Threading can also have an effect here, since a 16-core Hyper-Threaded server will appear to Exchange as having 32 cores. This is one of several reasons why we recommend leaving Hyper-Threading disabled.
These are just a few examples, but they show that staying within the product group's server sizing recommendations is extremely important. Scaling out rather than up is better from a cost standpoint, a high availability standpoint, and a product design standpoint.
Rather, the server just looks "busy": CPU utilization is high, but no single process appears to be the cause. There are times, though, when a single process is driving the CPU up. In this section we will go over some tricks with Performance Monitor (perfmon) to narrow down the offending process and dig a bit into why it may be happening.

Perfmon Logs

Perfmon is great, but what if you were not capturing perfmon data when the problem happened? Luckily, Exchange includes the ability to capture daily performance data, and this feature is turned on by default.
The built-in log capturing feature has to balance gathering useful data against disk space, so it does not capture every single counter and it samples only on a one-minute interval. In most cases this is enough to get started. If you find you need a more robust counter set or a shorter sample interval, you can use ExPerfWiz to set up a more custom capture.
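Once you have a daily capture, the first question is whether it actually covers the high CPU window. Here is a minimal sketch of checking that programmatically, assuming the .blg has been exported to CSV (for example with `relog daily.blg -f CSV -o daily.csv`); the server name `EXCH1` and the sample data are made up for illustration:

```python
import csv
import io

def high_cpu_windows(csv_text, counter_suffix, threshold=90.0):
    """Return (timestamp, value) samples where the counter exceeds threshold.

    Assumes a perfmon log exported to CSV: the first column is the sample
    timestamp and the remaining columns are counter paths such as
    \\SERVER\Processor(_Total)\% Processor Time.
    """
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    # Find the column whose counter path ends with the counter we want.
    col = next(i for i, name in enumerate(header)
               if name.endswith(counter_suffix))
    hits = []
    for row in reader:
        try:
            value = float(row[col])
        except ValueError:
            continue  # perfmon writes blank fields for missing samples
        if value > threshold:
            hits.append((row[0], value))
    return hits

# Synthetic two-sample capture for illustration:
sample = (
    '"(PDH-CSV 4.0)","\\\\EXCH1\\Processor(_Total)\\% Processor Time"\n'
    '"04/01 10:00:00.000","35.2"\n'
    '"04/01 10:01:00.000","97.8"\n'
)
print(high_cpu_windows(sample, "% Processor Time"))
# -> [('04/01 10:01:00.000', 97.8)]
```

If this returns nothing, the capture does not contain the event and you need a different day's log.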
A tip here: if you want to collect this information regularly and from multiple servers, check out this blog post. The first counter to examine is Processor(_Total)\% Processor Time, which gives you an idea of the total CPU utilization for the server. This is important because, first and foremost, you need to make sure the capture contains the high CPU condition.
With this counter, a CPU utilization increase should be easy to spot. If it was a brief burst, you can then zoom into the time it happened to get a closer look at what else was going on. Note the difference between Processor and Process: Processor is scaled as an overall percentage (0 to 100) and can break down values by individual core, while Process is scaled by the core count of the server (0 to cores x 100) and can break down values by individual process.
If you are looking at a perfmon capture and don't know the total number of cores, just look at the highest number in the instances window under the Processor counter. It is a zero-based collection, each number representing a core. If 23 is the highest number, you have 24 cores.
Now that you know that there was a high CPU condition and when it occurred, we can start narrowing down what caused it. During this phase of troubleshooting it may be best to change the vertical scale of the perfmon window: right-click in the window, choose Properties, go to the Graph tab, and change the maximum to the core count x 100. In our 16-core example you would change it to 1600. Look for any specific process that takes up more CPU than the others and rises in tandem with the overall CPU utilization.
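The two bits of arithmetic above (inferring the core count from the zero-based Processor instances, and computing the vertical-scale maximum for Process counters) can be sketched as:

```python
def core_count(instances):
    """Infer the core count from Processor counter instance names.

    The instances are a zero-based collection of core numbers plus the
    "_Total" aggregate, so the core count is the highest number + 1.
    """
    return max(int(name) for name in instances if name.isdigit()) + 1

def process_scale_max(cores):
    """Vertical-scale maximum for Process\\% Processor Time.

    The Process counter runs from 0 to 100 per core, so set the graph
    maximum to core count x 100 when comparing processes.
    """
    return cores * 100

instances = ["_Total"] + [str(n) for n in range(24)]  # highest core is 23
print(core_count(instances))      # -> 24
print(process_scale_max(16))      # -> 1600 for the 16-core example
```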
If there isn't one in particular, you don't have a single process causing the issue. This tends to point to some of the topics covered in the previous sections, such as sizing, load, and CPU throttling.

Mapping w3wp instances to application pools

Let's say you do find one particular process causing the high CPU condition, and suppose it shows up in perfmon as "w3wp#1". What exactly are you supposed to do with that?
Exchange runs multiple application pools in IIS for the various protocols it supports. We need to find out which application pool "w3wp#1" maps to. Luckily, perfmon has the information we need; you just need to know how to find it.
First, add the Process\ID Process counter for each w3wp instance and click on any of the counters to see its value; this gives you the PID behind "w3wp#1". Then look at the W3SVC_W3WP object, whose instance names combine the PID with the application pool name (for example, 4812_MSExchangeSyncAppPool). A matching PID tells us that w3wp#1 belongs to the Exchange ActiveSync application pool. You may also want to set the vertical scale back to 100, then right-click the counter and choose "Scale Selected Counters". I should also note here that, due to Managed Availability health checks, an application pool is sometimes restarted.
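One way to map perfmon's w3wp process instances (w3wp, w3wp#1, ...) to IIS application pools is to match the PID from Process\ID Process against the W3SVC_W3WP instance names. This sketch assumes the common "PID_AppPoolName" instance naming; the PIDs and pool names here are made-up examples, so verify the naming in your own capture:

```python
def map_w3wp_to_pool(process_pids, w3svc_instances):
    """Map perfmon process instances to IIS application pools.

    process_pids: {"w3wp#1": 4812, ...} read from Process\\ID Process.
    w3svc_instances: W3SVC_W3WP instance names, assumed to be formatted
    as "<pid>_<app pool name>".
    """
    by_pid = {}
    for name in w3svc_instances:
        pid, _, pool = name.partition("_")
        if pid.isdigit():
            by_pid[int(pid)] = pool
    return {inst: by_pid.get(pid, "unknown")
            for inst, pid in process_pids.items()}

pids = {"w3wp": 2208, "w3wp#1": 4812}            # hypothetical PIDs
pools = ["2208_MSExchangeOWAAppPool",
         "4812_MSExchangeSyncAppPool"]            # hypothetical instances
print(map_w3wp_to_pool(pids, pools))
# -> {'w3wp': 'MSExchangeOWAAppPool', 'w3wp#1': 'MSExchangeSyncAppPool'}
```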
When an application pool recycles, the PID changes, and the w3wp instance number may change as well. You will need to verify the instance after a recycle to make sure you are still looking at the right process.

What is the process doing?

Now that we've narrowed it down to w3wp#1 and know that ActiveSync is the cause of our issue, we can start to dig into troubleshooting it specifically.
These methods can be used on other application pools as well, but this example is specific to ActiveSync. The most common thing to look for is a burst in activity: check the protocol's request counter, such as MSExchange ActiveSync\Requests/sec. Whether there was a burst or not, you now know if increased request traffic led to the CPU increase. If it did, we need to find the cause of the traffic. Also look at message delivery activity; if it ticks up right before the CPU increase, it tells you that a burst of incoming messages likely triggered it, since new mail prompts devices to sync. You can then review the transport logs for clues.
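Checking whether a request burst immediately precedes the CPU spike is easy to do over exported counter samples. A minimal sketch, assuming parallel per-minute sample lists from the CPU counter and a protocol request counter (the thresholds here are arbitrary examples, not recommendations):

```python
def burst_before(cpu, requests, cpu_threshold, req_threshold):
    """Check whether a request burst immediately precedes a CPU spike.

    cpu and requests are parallel per-minute sample lists, e.g. from
    Processor(_Total)\\% Processor Time and a protocol counter such as
    MSExchange ActiveSync\\Requests/sec.
    """
    for i, value in enumerate(cpu):
        if value > cpu_threshold:
            # Look at the sample just before the first spike.
            return i > 0 and requests[i - 1] > req_threshold
    return False

cpu = [30, 32, 35, 96, 97]
reqs = [120, 118, 640, 655, 660]   # request burst starts one sample earlier
print(burst_before(cpu, reqs, cpu_threshold=90, req_threshold=400))
# -> True: traffic ramped up right before the CPU did
```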
If it wasn't message delivery, it may have been some mobile device activity that caused it.

Garbage Collection (GC)

If there was no noticeable increase in request traffic or message delivery before the CPU increase, there may be something inside the process causing it.
Garbage collection is a common trigger. You can look at the .NET CLR Memory\% Time in GC counter for the process; if it is elevated, also look at the related allocation counters, such as .NET CLR Memory\Allocated Bytes/sec. I want to note very clearly that if you encounter this, garbage collection throughput usually isn't the root of the problem. It is another symptom. Increases of this type usually indicate abnormal load being placed on the system.
It is much better to find and eliminate the root cause than to start changing garbage collector settings to compensate. You really need a baseline of your environment to know what "normal" is, but you can definitely compare this counter to the overall CPU utilization to see if client requests are causing the increase.
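Comparing GC time against a baseline is just a matter of flagging samples that are well above what you consider normal. A minimal sketch, assuming samples from a GC-time counter such as .NET CLR Memory\% Time in GC and a baseline you measured for your own environment (the numbers below are invented):

```python
def gc_anomalies(time_in_gc, baseline, factor=3.0):
    """Return indexes of samples where GC time is well above baseline.

    time_in_gc: per-sample values of a GC-time counter for the process.
    baseline: the value you consider normal for this server.
    factor: how many times over baseline counts as abnormal.
    """
    return [i for i, v in enumerate(time_in_gc) if v > baseline * factor]

# Baseline of 2.5% with two abnormal samples in the middle:
print(gc_anomalies([2.1, 2.4, 18.9, 22.5, 2.2], baseline=2.5))
# -> [2, 3]
```

Flagged samples are where to start correlating with request traffic, not a cue to retune the garbage collector.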
If you get to the point where you have narrowed down the client type causing your issue, Log Parser Studio (LPS) is usually the next step. LPS contains several built-in queries to help you easily analyze traffic for the various protocols used by Exchange, and they cover just about anything you'd need to find out. If you need more and know a bit of TSQL, you can even write your own. LPS is covered in depth in Kary Wall's blog post.
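To give a feel for what an LPS query does, here is a stand-in sketch of the same idea in plain Python: grouping parsed IIS log records by user agent to see which client generates the most hits. The record shape and user-agent strings are assumptions for illustration, not the LPS API:

```python
from collections import Counter

# Minimal stand-in for an LPS-style query along the lines of
#   SELECT cs(User-Agent), COUNT(*) ... GROUP BY cs(User-Agent)
# Assumes each parsed log record is a dict with a "user_agent" key.
def hits_by_client(records):
    """Count hits per user agent, busiest client first."""
    return Counter(r["user_agent"] for r in records).most_common()

records = [
    {"user_agent": "Apple-iPhone/1208.321"},
    {"user_agent": "Apple-iPhone/1208.321"},
    {"user_agent": "Outlook/16.0"},
]
print(hits_by_client(records))
# -> [('Apple-iPhone/1208.321', 2), ('Outlook/16.0', 1)]
```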
Conclusion

Performance is a vast topic, and I don't expect this blog post will make you an expert immediately, but hopefully it has given you enough tips and tricks to start tracking down Exchange high CPU issues on your own. If there are other topics you would like to see us blog about in the realm of Exchange performance, please leave feedback below. Happy troubleshooting!

Marc Nivens

Tags: Exchange