The database was slow yesterday, can you please check what's the problem?
Of course, I first had a short discussion about whether he really means the DB, or whether the correct statement is "the application is slow". Some other questions needed to be asked as well, e.g. whether it's a response-time or a throughput issue, when it was last "good", what "bad" and "good" mean in numbers (seconds per experience, requests per second), whether all experiences on this DB are affected or only a subset, and whether it can be reproduced or at least occurs in a known pattern. I also had to ask for the DB name and the users affected.
At some point I should create a comprehensive checklist of these questions, as I regularly miss some of them. But that's not today's topic.
As the answers were very vague (at least I got a DB & username and a rough timestamp when it might have started), I checked the DB. I'm aware it would be more precise to focus on the user experience, but I know the developers quite well; it would take a lot of effort & management pressure to convince them to implement proper instrumentation.
So I went for the low-hanging fruit first. ASH/AWR showed this picture:
There is a period of higher activity in the database, and it's quite visible where it started and ended. Some more analysis made it obvious that the number of calls didn't change dramatically. Data from the application (yes, there was at least an activity log) showed that the number of requests did in fact follow a daily pattern, but one that didn't look like the picture above. The pattern of demand was similar to other days; yet on any other day AWR showed no such picture, only a "breathing" that follows the application's demand.
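Just to give an idea of the shape of that check - a sketch against AWR (the statistic name and the time window are assumptions, and as the values are cumulative the deltas break across instance restarts):

-- Sketch: call rate per AWR snapshot, to see whether demand really changed
select s.instance_number,
       s.snap_id,
       to_char(s.end_interval_time, 'DD.MM HH24:MI') as snap_end,
       st.value - lag(st.value) over
         (partition by st.instance_number order by st.snap_id) as user_calls_delta
from   dba_hist_snapshot s
join   dba_hist_sysstat  st
       on  st.dbid            = s.dbid
       and st.instance_number = s.instance_number
       and st.snap_id         = s.snap_id
where  st.stat_name = 'user calls'
and    s.end_interval_time > sysdate - 2      -- rough window around the incident
order  by s.instance_number, s.snap_id;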
Through this conversation we implicitly generated a (still very blurry) "good" pattern to compare the "bad" one against.
These bits of information were not perfect, but they were at least something I could use for further investigation.
AWR showed much higher IO times than normal. I cross-checked the most active SQL_IDs from a "good" period against the "bad" one and could confirm they are IO-bound in "good" as well, so they might well be affected by this unusual IO.
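The cross-check can be done with something along these lines against DBA_HIST_SQLSTAT, run once for a "good" and once for a "bad" snapshot range (the bind variables and the top-20 cut-off are placeholders):

-- Sketch: share of IO wait in the elapsed time of the top SQL_IDs
select sql_id,
       sum(executions_delta)                              as execs,
       round(sum(elapsed_time_delta)/1e6, 1)              as elapsed_s,
       round(100 * sum(iowait_delta)
                 / nullif(sum(elapsed_time_delta), 0), 1) as io_pct
from   dba_hist_sqlstat
where  snap_id between :begin_snap and :end_snap
group  by sql_id
order  by sum(elapsed_time_delta) desc
fetch first 20 rows only;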
On the DB cluster TFA is installed, and so is oswatcher (again, aggregates only, but still much better than nothing). A small awk script analysed the data there (I just don't know why oswatcher doesn't have a SQL interface similar to osquery) and showed huge response times, but no queueing on the Linux hosts.
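The awk part is nothing fancy - a rough sketch of the idea only, as the oswiostat layout and especially the column position of await depend on the iostat version, so the field numbers below are assumptions:

# Sketch: pull per-device await values out of an oswatcher oswiostat archive.
# oswatcher separates samples with "zzz ***" timestamp lines; check the header
# of your own files for the real column of await (assumed to be $10 here).
/^zzz/                   { ts = $0; next }          # remember sample timestamp
/^(sd|dm-|emcpower|asm)/ {
    if ($10 + 0 > 50)                               # await > 50 ms, arbitrary threshold
        printf "%s  %-12s await=%s ms\n", ts, $1, $10
}

Run against the archive, e.g. awk -f high_await.awk archive/oswiostat/*.dat.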
This would now require digging deeper through the storage layers (FC SAN and several big central storage boxes). Unfortunately I don't have access to them, their performance data is not externalized right now, and the SAN admin was too busy with other things, like compiling reports for senior managers 🤦.
But at least I was somewhat lucky: this company implements a nice naming scheme which gives ASM disks much more information than just /dev/mapper/anything. As the name of the storage box is part of the ASM disk path, with some knowledge about the servers that could possibly be involved at all (e.g. only servers in this datacenter, with some other attributes) and a centralized repository, I got a list of all DB servers and DBs potentially involved on that layer.
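A sketch of the idea only (the path layout /dev/mapper/<box>_<lun> and the regexp are purely illustrative - the real naming scheme is site-specific):

-- Sketch: which storage box serves which diskgroup, derived from the disk path
select dg.name                                                      as diskgroup,
       regexp_substr(d.path, '/dev/mapper/([^_]+)_', 1, 1, null, 1) as storage_box,
       count(*)                                                     as disks,
       round(sum(d.total_mb)/1024)                                  as total_gb
from   v$asm_disk      d
join   v$asm_diskgroup dg on dg.group_number = d.group_number
group  by dg.name, regexp_substr(d.path, '/dev/mapper/([^_]+)_', 1, 1, null, 1)
order  by diskgroup, storage_box;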
With this list and another part of our repository (in this case periodic v$sysstat snapshots [aggregates of aggregates - guess the accuracy there]) I could compile a list of other DBs with high IO numbers during that time.
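The repository query was roughly of this shape (table and column names are made up for illustration, the real repository looks different):

-- Sketch: which DBs pushed the most read IO through the storage in the window?
select db_name,
       round(sum(value_delta)/1024/1024/1024) as phys_read_gb
from   repo_sysstat_snaps
where  stat_name = 'physical read total bytes'
and    snap_time between :window_start and :window_end
group  by db_name
order  by phys_read_gb desc;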
Checking these databases, one stood out as the top producer of IO:
Some additional analysis showed that the load on this particular DB created sufficient IO to saturate one subsystem in the storage box. As a result, all other systems which use the same subsystem suffered as well. To cross-check this, the storage admin (who had completed his report in the meantime) provided a list of other DBs using that subsystem - and they showed similar patterns at exactly that time.
Even with all the data collected, this accumulation of coincidences is still no proof of what was going on. But it was likely enough for management to think about remediation activities.
Here the question might come up why only the admins of this one application showed up and complained about their system. During the investigation it was visible that other DBs showed similar patterns, so other applications were likely suffering as well.
I invested some time to get an answer to this question. It's not related to the initial goal of the activities, only to satisfy my curiosity and improve my knowledge of the whole landscape.
It turned out these other applications also suffered, but the impact was much smaller, for two reasons:
- The portion of DB IO time in their user experience was relatively small. Even these high response times didn't cause a lot of harm.
- The requirements of the most affected application were stricter. It can be explained by the time a person accepts to wait after a click: if that click is on a webpage, most readers of this blog understand what's going on behind the scenes, and a few tenths of a second up to a few seconds are acceptable for an answer. But if you flip a switch to turn on the light, you expect the light to shine immediately (i.e. within less time than the human senses can grasp).
After the analysis was done and roughly explained to the manager of the most affected application, he invited us to a meeting.
The manager, some people with different roles in development and ops, a representative of the "culprit application which caused the high load" and I were there. He opened the meeting with a summary (condensed by me to):
We and they share the same storage, they did some heavy query and we suffered.
Technically this is right, but the way it comes across to those not experienced in IT leads directly to the manager's request:
We must ensure they do not cause such IO load anymore.
This might seem legitimate, but it does not lead to the manager's real goal: make his application reasonably fast in as many situations as possible. So I had to kill this branch of the discussion for a moment and explain why they are not the real problem:
As the whole IT landscape is a shared system on various layers (different hosts share the same storage system, different virtual hosts share the same VM layer, different instances share the same DB cluster, different sessions share the same instance, ...), it is an unreasonably big effort to define resource limits, quotas and the like for all components involved - especially in a highly dynamic environment with hundreds of DBs, thousands of servers and zillions of processes.
Most people in the room showed some shock & horror and claimed they had not been aware of the high level of complexity in the basic infrastructure.
With some effort we can limit the IO consumption of the particular DB which caused the issue this time. But next time (and there will be a next time) we can only be sure that this particular DB is not involved.
As that's not sufficient, the next request was:
Let's then monitor the storage box and raise an alarm every time a component is saturated.
A nice idea, isn't it?
But whom to alarm?
The SAN admin of course.
And what should he do then? He can't reduce the load, and he can't increase the storage capacity either (at least not short-term, without budget, vendor involvement, ...).
So let's inform those who are connected to this resource!
In our example that would have been two dozen applications - most of which didn't suffer or didn't care.
Then only the top 5.
In our case you wouldn't have been alerted.
…This also doesn't lead to anything useful.
Around that time I raised a simple question:
Who in this room is responsible for the (suffering part of the) application?
---
<silence>
(The manager's face went pale for a second.)
After a few moments, an external developer (not an employee) raised his hand and muttered something like "If no one wants to, I can take that".
I was slightly bored by this meeting, so to speed things up my next question was:
Is every IO your application doing [during the critical activity] necessary?
The argument in short goes like this: if you are less dependent on IO timing, even bad IOs can't do that much harm. I call this
make it robust against impacts out of your control
The meeting ended close to that point. The newfound architect & the developers promised to see if they can make their application more robust; the manager wants to raise awareness in all IT departments of what a shared environment is and how it can affect business processes within the company.
I have mixed feelings: the analysis seems valid, but it was full of educated guesses & shortcuts - this doesn't feel right, as it relies on "experience" and "luck"; there is a method which describes a more structured way, I just felt unable to follow its steps. I hope I could transform the initial reaction (finger-pointing at another team) into something more valuable for the company: use fewer resources, make your application robust.
But I doubt my effort will have a lasting effect.