Hi Carrie,
Thanks again for the wonderful article. It is very helpful, and you are making all of us on the forum a little more like Carrie :)
I have a couple of questions in mind.
We are a finance company and aborting batch queries is not an option for us.
However, I would like to abort the bad queries that are user-written ad hoc.
We have a penalty box where, based on CPU, queries are demoted to a rogue bucket, but they still consume a lot of CPU. What would we be missing?
Most of the time we see that the explain plan is bad, so we demote the query and it doesn't get enough resources; but it actually executes with less CPU than estimated, and then we have users complaining about the delay. How can we fix this?
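For context, this is roughly how I try to spot queries that were demoted on a bad estimate; it is only a sketch, assuming DBQL is enabled and that columns such as EstProcTime, AMPCPUTime, and FinalWDID exist in DBC.QryLogV on our release (names and units may differ), and the WDID value is just a placeholder for our rogue workload:

/* Hedged sketch: compare the optimizer's estimate to actual CPU for queries
   that ended up in the rogue workload, to find ones demoted on a bad plan.
   Column names and the view DBC.QryLogV may vary by Teradata release;
   123 is a placeholder for the rogue workload's WDID. */
SELECT  QueryID
     ,  UserName
     ,  StartTime
     ,  EstProcTime                           /* estimated processing time */
     ,  AMPCPUTime                            /* actual total AMP CPU seconds */
     ,  EstProcTime - AMPCPUTime AS EstimateGap
FROM    DBC.QryLogV
WHERE   FinalWDID = 123                       /* placeholder rogue WDID */
AND     CAST(StartTime AS DATE) >= CURRENT_DATE - 7
AND     AMPCPUTime  < 10                      /* actually cheap...            */
AND     EstProcTime > 600                     /* ...but estimated expensive   */
ORDER BY EstimateGap DESC;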
In Viewpoint, can we have the query aborted at the query level instead of the session level? Aborting a session may lead to loss of work, as many people use volatile tables.
What should the criteria in TASM be for defining a bad query and having it aborted? Should it be time-based or CPU-based, and in either case, what threshold would you recommend?
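To make the question concrete, this is the kind of screen I have been experimenting with from DBQL history; it is only a sketch, the thresholds (1000 CPU seconds, 4x skew) are illustrative rather than recommendations, and the column names are assumed from our release:

/* Hedged sketch: one possible "bad query" screen combining a CPU threshold
   with CPU skew, rather than elapsed time alone. Thresholds are placeholders
   and column names may differ by Teradata release. */
SELECT  UserName
     ,  QueryID
     ,  AMPCPUTime                                          /* total AMP CPU seconds */
     ,  MaxAMPCPUTime * NumOfActiveAMPs AS ImpactCPU        /* CPU if every AMP worked as hard as the busiest one */
     ,  (MaxAMPCPUTime * NumOfActiveAMPs) / NULLIFZERO(AMPCPUTime) AS CPUSkewRatio
FROM    DBC.QryLogV
WHERE   CAST(StartTime AS DATE) >= CURRENT_DATE - 7
AND     AMPCPUTime > 1000                                   /* placeholder CPU threshold */
AND     (MaxAMPCPUTime * NumOfActiveAMPs) / NULLIFZERO(AMPCPUTime) > 4   /* badly skewed */
ORDER BY ImpactCPU DESC;

Would something along these lines be a reasonable basis for an abort rule, or is elapsed time the better criterion?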
Thanks in advance for your help.