Applies to:

Oracle Database 12.1.0.2

Author:

Mahesh Reddy M

Case Study:

The Oracle 12c In-Memory option was introduced in the 12.1.0.2 patch set. This feature stores copies of tables, partitions, and other database objects in columnar format in the In-Memory (IM) column store. The IM column store is an optional area of the SGA; it is not a replacement for the buffer cache. Instead, both memory areas can store the same data in different formats.

IM Column Store Performance Benefits:

If database objects are populated in the IM column store, the database can perform scans, joins, and aggregations much faster than from disk. In-Memory gives the most benefit for:

  1. Queries that select a small number of columns from a table with a large number of columns

  2. Queries that scan a large number of rows and apply filters

  3. Queries that aggregate data

  4. Queries that join a small table to a large (fact) table
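
As a sketch of these access patterns, the query below combines several of them: it selects few columns, filters many rows, aggregates, and joins a small table to a large fact table. The h_supplier table and the column names are assumptions for illustration only (h_partsupp appears later in this study):

```sql
-- Hypothetical example: h_supplier and the column names are illustrative.
-- Few columns selected, a filter applied, an aggregate computed,
-- and a small table (h_supplier) joined to a large fact table (h_partsupp).
SELECT s.s_name,
       MAX(ps.ps_supplycost) AS max_cost
FROM   quest.h_partsupp ps
       JOIN quest.h_supplier s
         ON s.s_suppkey = ps.ps_suppkey
WHERE  ps.ps_availqty > 1000
GROUP  BY s.s_name;
```

Queries of this shape scan many rows but touch few columns, which is exactly the access pattern the columnar IM store accelerates.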

Study 1:

The Oracle 12c In-Memory option was tested on Dell infrastructure.

Operating system: RHEL6.5

Database: 12.1.0.2

Memory: 512 GB

SGA size: 450 GB

In-Memory size: 300 GB

Database size: 1 TB

Enable the In-Memory option by setting the INMEMORY_SIZE parameter:

SQL> ALTER SYSTEM SET inmemory_size=300G SCOPE=SPFILE;

The change takes effect only after an instance restart.

We can verify the In-Memory settings using:

SQL> show parameter inmemory

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------
inmemory_clause_default              string
inmemory_force                       string      DEFAULT
inmemory_max_populate_servers        integer     30
inmemory_query                       string      ENABLE
inmemory_size                        big integer 300G
inmemory_trickle_repopulate_servers_ integer     10
percent
optimizer_inmemory_aware             boolean     TRUE

 

After that, create a tablespace with the INMEMORY attribute:

SQL> CREATE TABLESPACE quest DATAFILE '+DATA' SIZE 10G DEFAULT INMEMORY;

We can change the In-Memory attributes at any time using an ALTER statement:

SQL> ALTER TABLESPACE quest DEFAULT INMEMORY MEMCOMPRESS FOR CAPACITY HIGH;

Any table created in this tablespace automatically inherits the INMEMORY attribute and becomes eligible for population into the IM column store.
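
As a minimal sketch of this inheritance (the demo table name and columns are assumptions for illustration), a table created in the quest tablespace picks up the tablespace's INMEMORY default, which can be confirmed in DBA_TABLES:

```sql
-- Hypothetical demo table; it inherits INMEMORY from the quest tablespace default.
CREATE TABLE quest.im_demo (
  id   NUMBER,
  name VARCHAR2(50)
) TABLESPACE quest;

-- Confirm the inherited attribute (INMEMORY shows ENABLED).
SELECT table_name, inmemory, inmemory_compression
FROM   dba_tables
WHERE  owner = 'QUEST'
AND    table_name = 'IM_DEMO';
```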

Using Benchmark Factory (BMF) 7.0, we can load the data into a particular tablespace. If the INMEMORY attribute is enabled at the tablespace level before loading, the load takes less time than it does without the attribute.

Please refer to this link:

http://www.toadworld.com/platforms/oracle/b/weblog/archive/2014/09/12/benchmarking-oracle-12c-in-memory-column-store.aspx

After loading the data, we can use different compression techniques and priority levels to populate the data into the IM column store. Objects are populated either immediately after the database is opened (according to a prioritized list) or after they are scanned (queried) for the first time.
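
For example, the compression level and population priority can be set per table. The statement below uses the h_partsupp table from this study; the clause combination shown is one of several valid options, not the exact setting used in the benchmark:

```sql
-- Populate h_partsupp eagerly at database open (PRIORITY CRITICAL),
-- compressed for query performance rather than capacity.
ALTER TABLE quest.h_partsupp
  INMEMORY MEMCOMPRESS FOR QUERY HIGH
  PRIORITY CRITICAL;
```

With the default PRIORITY NONE, population instead happens lazily on the first scan of the object.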

Here are my runtime results on the Dell infrastructure.

We can compress and populate the data into the IM column store using different methods.

The table h_partsupp has 59.59 GB of original data. We used the different compression techniques to populate the table into the IM column store and measured the compression ratio.

Compression Technique            Ratio   Compressed Data (GB)
-------------------------------  ------  --------------------
MEMCOMPRESS FOR QUERY HIGH       1.92    31.08
MEMCOMPRESS FOR QUERY LOW        1.67    35.64
MEMCOMPRESS FOR CAPACITY LOW     3.04    19.63
MEMCOMPRESS FOR CAPACITY HIGH    4.88    12.21
NO MEMCOMPRESS                   1.17    50.86


After that, we can run a query against the table and compare the performance with and without In-Memory.

SQL> SELECT MAX(ps_supplycost) FROM h_partsupp;

Compression Technique            Elapsed Time (sec)
-------------------------------  ------------------
MEMCOMPRESS FOR QUERY HIGH       15.97
MEMCOMPRESS FOR QUERY LOW        18
MEMCOMPRESS FOR CAPACITY LOW     18.13
MEMCOMPRESS FOR CAPACITY HIGH    99.86
No In-Memory                     2961


As per the results, In-Memory gives up to 185x better performance (2961 s without In-Memory vs. 15.97 s with MEMCOMPRESS FOR QUERY HIGH).
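
The compression ratios reported above can also be checked from the data dictionary. A query along these lines (the columns shown are from the V$IM_SEGMENTS view) compares on-disk size with in-memory size for each populated segment:

```sql
-- On-disk bytes vs. in-memory bytes for each populated segment;
-- the ratio column corresponds to the compression ratios in the tables above.
SELECT segment_name,
       ROUND(bytes / 1024 / 1024 / 1024, 2)         AS disk_gb,
       ROUND(inmemory_size / 1024 / 1024 / 1024, 2) AS inmemory_gb,
       ROUND(bytes / inmemory_size, 2)              AS compression_ratio,
       populate_status
FROM   v$im_segments;
```

BYTES_NOT_POPULATED in the same view shows how much of a segment is still waiting to be loaded into the column store.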

Study 2:

The table h_part has 29.27 GB of original data. We used the different compression techniques to populate the table into the IM column store and measured the compression ratio.

 

Compression Technique            Ratio   Compressed Data (GB)
-------------------------------  ------  --------------------
MEMCOMPRESS FOR QUERY HIGH       3.32    8.43
MEMCOMPRESS FOR QUERY LOW        2.77    10.09
MEMCOMPRESS FOR CAPACITY LOW     5.77    4.85
MEMCOMPRESS FOR CAPACITY HIGH    7.96    3.51
NO MEMCOMPRESS                   1.24    22.49

 

After that, we can run a query against the table and compare the performance with and without In-Memory.

SQL> SELECT p_name FROM quest.h_part WHERE p_type = 'SMALL BRUSHED NICKEL';

 

Compression Technique            Elapsed Time (sec)
-------------------------------  ------------------
MEMCOMPRESS FOR QUERY HIGH       6.04
MEMCOMPRESS FOR QUERY LOW        6.26
MEMCOMPRESS FOR CAPACITY LOW     8.86
MEMCOMPRESS FOR CAPACITY HIGH    19
NO MEMCOMPRESS                   9.10
No In-Memory                     30