2 posts • Page 1 of 1
I'm a newbie to BI and need some small clarifications related to aggregates and InfoCubes. Below are my questions.
1. After activating an InfoCube, the BI system creates F and E fact tables for it. Data is first loaded into the F fact table, and then we need to move the data into the E fact table (which is used to compress the data). I want to know what compression means here: does it aggregate (summation of key figures based on characteristics), or is it just a move of data from the F fact table to the E fact table? I read somewhere that data in the F fact table arrives request by request, while the E fact table data is maintained as a single request. Correct me if I'm wrong.
2. After creating aggregates on an InfoCube, will data come from both the E and F fact tables of the InfoCube, or only from a single fact table?
3. Since an aggregate contains summarized data, and aggregates have their own E and F fact tables as well, why do we maintain two separate fact tables for aggregates?
Your help is highly appreciated.
(1) Compressing the fact table moves the data from the F table to the E table and removes the request dimension values. This allows the incoming E table records to be compared against the existing E table records, and where the key figure aggregation definitions allow two records with matching full keys to be aggregated, they are. There is also an additional technical feature, "zero elimination", which checks the values of all the key figures on a single record and deletes the record if they are all zero. Use zero elimination with caution, as some reporting requirements need a row to show in the query even if the key figure value is zero.
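The mechanics above can be sketched in a few lines of Python. This is purely illustrative, not SAP code: the field names `request` and `amount` are invented stand-ins for the request dimension and a key figure, and summation stands in for the key figure aggregation definition.

```python
from collections import defaultdict

def compress(f_table, zero_elimination=False):
    """Illustrative sketch (not SAP code): compress F-table records into an
    E-table by dropping the request ID and summing the key figure for rows
    whose remaining characteristic keys match."""
    e_table = defaultdict(float)
    for row in f_table:
        # Drop the made-up 'request' field; the remaining characteristics form the full key.
        key = tuple(sorted((k, v) for k, v in row.items()
                           if k not in ("request", "amount")))
        e_table[key] += row["amount"]
    if zero_elimination:
        # Zero elimination: delete records whose key figures are all zero.
        e_table = {k: v for k, v in e_table.items() if v != 0}
    return e_table

f_rows = [
    {"request": 1, "material": "M1", "plant": "P1", "amount": 10.0},
    {"request": 2, "material": "M1", "plant": "P1", "amount": -10.0},
    {"request": 2, "material": "M2", "plant": "P1", "amount": 5.0},
]
compressed = compress(f_rows, zero_elimination=True)
# M1/P1 nets to zero and is eliminated; only the M2/P1 record survives.
```

Note how the two M1/P1 rows from different requests collapse into one record once the request ID is gone, which is exactly why a compressed request can no longer be identified individually.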
(2) This is best described from the query user's point of view. The query is written above an InfoCube (with aggregates defined in the InfoCube). The current navigation step (execution) of the query will be run on the smallest / tightest matching aggregate. If the user then navigates (like drill down by line item) and the same aggregate does not have the line item characteristic, then the next smallest / tightest matching aggregate will be used. When the definition of the current query navigation no longer matches any of the aggregates, the InfoCube itself will be used (both F and E together).
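A rough sketch of that selection logic, under the simplifying assumption (mine, not SAP's) that "smallest / tightest" can be approximated by the aggregate with the fewest characteristics that still covers the navigation step:

```python
def pick_source(query_chars, aggregates, cube_name="INFOCUBE"):
    """Sketch of aggregate selection (assumed logic, not SAP's actual OLAP
    processor): use the smallest aggregate whose characteristics cover the
    query's current navigation; otherwise fall back to the InfoCube itself."""
    candidates = [
        (name, chars) for name, chars in aggregates.items()
        if query_chars <= chars  # aggregate must contain every requested characteristic
    ]
    if not candidates:
        return cube_name  # no match: read the cube's F and E tables together
    # Approximate "smallest / tightest" by fewest characteristics.
    return min(candidates, key=lambda nc: len(nc[1]))[0]

# Hypothetical aggregate definitions on one cube.
aggregates = {
    "AGG_SMALL": {"company", "month"},
    "AGG_WIDE": {"company", "month", "line_item"},
}
pick_source({"company", "month"}, aggregates)               # -> "AGG_SMALL"
pick_source({"company", "month", "line_item"}, aggregates)  # -> "AGG_WIDE" (drill-down)
pick_source({"company", "customer"}, aggregates)            # -> "INFOCUBE"
```

The drill-down by line item forces the switch from AGG_SMALL to AGG_WIDE, and a characteristic no aggregate carries (here, customer) forces the fallback to the cube.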
(3) The request dimension in a cube is the delta key for downstream data loads to other DataTargets, and it is what makes it possible to roll back / remove / delete a prior data load. If, for whatever reason, the BW administrator determines that a request must be deleted from the InfoCube, this can only be done simply if the records in the InfoCube still carry the request number. So once you have compressed a request, it can no longer easily be identified and deleted. The existence of F and E tables in the cube, and separately in each aggregate, gives you, the BW administrator, several options for managing how many requests to keep uncompressed in case they need to be deleted. One option for small to medium volume InfoCubes is to keep "X" requests uncompressed in the InfoCube and always compress all requests in the aggregate. This keeps the aggregate better optimised for the user experience while still allowing the BW administrator to delete the last few requests and then drop and reload the aggregate. For large volume InfoCubes it is recommended to keep "X" requests uncompressed in both the InfoCube and the aggregate: rebuilding an aggregate from the InfoCube can take hours, downtime you might not be able to afford if this is an emergency repair during month-end reporting.
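To make the rollback point concrete, here is a minimal sketch (again with invented field names, not SAP code) of why the uncompressed F table supports request deletion: the repair is just a filter on the request ID, which compressed E-table rows no longer have.

```python
def delete_request(f_table, request_id):
    """Sketch (not SAP code): deleting a load from the uncompressed F table
    is a simple filter on the request ID. Compressed E-table rows no longer
    carry a request ID, so this repair is impossible there."""
    return [row for row in f_table if row["request"] != request_id]

f_rows = [
    {"request": 101, "material": "M1", "amount": 10.0},
    {"request": 102, "material": "M1", "amount": 7.0},
]
repaired = delete_request(f_rows, 102)  # the bad load (request 102) is removed
```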
In all circumstances you should be compressing the aggregate, even if that means keeping the last 90 days of requests uncompressed. The main advantage is that all the older data will compress, which optimises historical reporting. Remember that the fewer characteristics in the aggregate, the better the compression will be (except in some cases where key figure exception aggregation and time-dependent master data enforce a minimum granularity). Where you have aggregates targeting a query's default navigation (the first time it is run) or charts and graphs, you will make a big difference to these highly compressible data sets, because what might be millions of records in the InfoCube's F/E tables could be only thousands of records in the aggregate's E table.
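The "fewer characteristics, better compression" point can be sketched as a simple rollup. The data and characteristic names here are hypothetical; the point is only that dropping a fine-grained characteristic collapses many records into few.

```python
from collections import defaultdict

def rollup(rows, chars):
    """Sketch: aggregate rows down to only the listed characteristics.
    Fewer characteristics means fewer distinct keys, hence fewer records."""
    out = defaultdict(float)
    for r in rows:
        out[tuple(r[c] for c in chars)] += r["amount"]
    return dict(out)

# Hypothetical document-level detail: 1000 rows for one company.
rows = [{"company": "C1", "doc": f"D{i}", "amount": 1.0} for i in range(1000)]
detail = rollup(rows, ["company", "doc"])  # document granularity: 1000 records
summary = rollup(rows, ["company"])        # company granularity: 1 record
```

A thousand document-level rows become a single company-level record, which is the same effect that turns millions of cube records into thousands of aggregate records.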