You mentioned both multivalue compression and block-level compression. There is no overhead when reading a table that contains multivalue compression.
Please refer to the Database Administration Manual if you need help in understanding how multivalue compression works. Page 183 of the 14.10 version (B035-1093-112A) says:
==========================
Using Multivalue Compression
MVC compresses repeating values in a column when you specify the value in a compression list in the column definition.
When data in the column matches a value specified in the compression list, the database stores the value only once in the table header, regardless of how many times it occurs as a field value for the column. The database then stores a smaller substitute value, often as small as 2 bits, in each row where the value occurs.
MVC generally provides the best cost/benefit ratio compared to other methods. Because it requires minimal resources to uncompress the data during query processing, you can use MVC for hot (frequently-used) data without compromising query/load performance. MVC is also considered the easiest to implement of all the compression methods.
=============================
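As a concrete illustration of the compression list described in that excerpt, here is a minimal DDL sketch; the table, columns, and values are made up for illustration only:

CREATE TABLE Sales_Fact
 ( Sale_Id      INTEGER,
   Region_Code  CHAR(2) COMPRESS ('NE','SE','NW','SW'),  -- listed values are stored once in the table header
   Status_Cd    CHAR(1) COMPRESS ('A','C')               -- each row keeps only a small substitute value
 )
PRIMARY INDEX (Sale_Id);

Rows whose Region_Code is one of the four listed values carry only the small substitute value; any other value is stored normally in the row.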
Stats Manager does not read base tables for the Analyze jobs. It reads DBQL log tables and other dictionary tables. So if only the non-dictionary base tables are compressed with block-level compression, I don't believe there will be any impact on what Stats Manager does within Analyze jobs.
But Collect jobs could run somewhat longer due to the extra CPU required to decompress data blocks in order to collect statistics against base tables. In our testing back in 13.10, we found elapsed time was 3% to 9% longer with BLC on the table, compared to no BLC. That was on an EDW platform.
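To make that range concrete: a statistics collection that takes 100 seconds without BLC might take roughly 103 to 109 seconds against the same table with BLC, because each data block must be decompressed before its rows can be scanned. The work a Collect job issues is an ordinary COLLECT STATISTICS request; the table and column names here are only illustrative:

COLLECT STATISTICS
  COLUMN (Region_Code),
  COLUMN (Sale_Id)
ON Sales_Fact;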
If you are on an appliance where everything is compressed automatically, the overhead of Stats Manager due to compression will be equivalent to the overhead of doing anything else in the system. Because everything is compressed, there is no uncompressed baseline to compare against, so the Collect jobs will simply experience the same overhead that any other access to base table data incurs from block-level compression.
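Block-level compression is controlled either by the platform default (as on the appliances) or at the table level. As a sketch only, assuming the table-level BLOCKCOMPRESSION option available in this release, a table that opts in explicitly might be defined like this; the table name is made up:

CREATE TABLE Sales_Fact_BLC ,BLOCKCOMPRESSION = MANUAL  -- table-level BLC setting; valid values and defaults vary by release and platform
 ( Sale_Id      INTEGER,
   Region_Code  CHAR(2)
 )
PRIMARY INDEX (Sale_Id);

SHOW TABLE Sales_Fact_BLC;  -- displays the DDL, including any BLOCKCOMPRESSION clause

On an appliance the system-wide setting applies even when no such clause appears in the DDL, which is why there is no uncompressed baseline to compare against.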
Thanks, -Carrie