Using HUGETLB in Oracle and “ORA-27125: unable to create shared memory segment”



On SLES9 hugetlb is enabled by default and it can cause problems.
A good resource to resolve this kind of issue can be found here:
http://linux.inet.hr/oracle10g_on_debian.html

However, on Oracle9i, if you wish to use this feature (without the DISABLE_HUGETLBFS=1 trick) you will need patch #3386122 from Metalink, which is available for x86 only. I asked for a backport to x86-64 but I’m still waiting.

On 10g you don’t need any patch.
The common way to avoid ORA-27125 on 10g is to disable hugetlb with:
 

linux: # cd $ORACLE_HOME/bin
linux: # mv oracle oracle.bin

cat >oracle <<"EOF"
#!/bin/bash
# wrapper: make sure hugetlbfs is disabled before exec'ing the real binary
export DISABLE_HUGETLBFS=1
exec $ORACLE_HOME/bin/oracle.bin "$@"
EOF

linux: # chmod +x oracle


This is necessary because the environment variables are not passed to the oracle binary by Java applications such as dbca (otherwise defining DISABLE_HUGETLBFS=1 in your environment would be enough).
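For a manual startup from a shell, setting the variable in the oracle user’s environment is enough; a minimal sketch, assuming you connect with sqlplus as the oracle user:

linux: $ export DISABLE_HUGETLBFS=1
linux: $ sqlplus / as sysdba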

If you are experiencing:

ORA-12547: TNS:lost contact

this is most likely due to a badly generated wrapper. Make sure you didn’t create the file above with “vi”; use “cat” as shown instead.

(Another cause could be a wrong re-linking phase. Check if “relink all” works correctly).
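To inspect the wrapper itself, a quick sanity check (head -1 should print the #!/bin/bash shebang and cat -A makes stray control characters visible):

linux: # cd $ORACLE_HOME/bin
linux: # head -1 oracle
linux: # cat -A oracle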

But why not use this new kernel feature?

To allocate huge pages, run:

echo 2048 > /proc/sys/vm/nr_hugepages

By default a huge page is 2 MB, so with 2048 pages you are reserving 4 GB of RAM for huge pages.
You can check this in /proc/meminfo.
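If you prefer to size nr_hugepages from your SGA rather than picking a round number, a rough sketch (the 4096 MB SGA value here is only an assumption; the page size is read from /proc/meminfo):

linux: # SGA_MB=4096
linux: # PAGE_KB=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)
linux: # echo $(( (SGA_MB * 1024 + PAGE_KB - 1) / PAGE_KB ))   # pages needed to cover the SGA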

You can even mount a hugetlbfs file system.

Create a mount point:

mkdir /dev/hugetlbfs

and mount it:

mount -t hugetlbfs -o uid=59,gid=55,mode=0777 hugetlbfs /dev/hugetlbfs

This last bit is not mandatory but can be useful for debugging and advanced management.
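To verify the mount and its permissions, a quick check:

linux: # grep hugetlbfs /proc/mounts
linux: # ls -ld /dev/hugetlbfs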

Now you no longer need DISABLE_HUGETLBFS=1 or the wrapper around the “oracle” binary.

To make the changes permanent, add these lines to /etc/sysctl.conf:

vm/nr_hugepages=2048
vm/disable_cap_mlock=1

And this to /etc/fstab (only if you wish to mount a hugetlbfs):

hugetlbfs            /dev/hugetlbfs       hugetlbfs  mode=0777,uid=59,gid=55      0 0

This will allow your oracle user to access the memory in a way similar to the shm mechanism.

If you don’t want to reboot your machine you can issue these commands:

sysctl -p
mount -a

which apply all the variables in sysctl.conf to /proc and mount all the file systems listed in fstab.
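A quick way to confirm both took effect before touching the database:

linux: # sysctl vm.nr_hugepages
linux: # mount | grep hugetlbfs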

To check whether your instance is using hugetlbfs, use:
grep Huge /proc/meminfo

HugePages_Total:  1024
HugePages_Free:    940
Hugepagesize:     2048 kB

If your instance is using huge pages, the Total and Free values will differ.
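If you want a single figure for the pages actually in use, a small awk sketch over the same output:

linux: # awk '/HugePages_Total/ {t=$2} /HugePages_Free/ {f=$2} END {print t-f, "huge pages in use"}' /proc/meminfo

With the output above it would print “84 huge pages in use”.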

If you are using a kernel without the disable_cap_mlock patch, there is another solution (maybe future SLES9 kernels will include the patch).
From kernel 2.6.7 onward, place the following lines in your sysctl.conf:

vm/nr_hugepages=2048
vm/hugetlb_shm_group=55

Where hugetlb_shm_group contains the gid of the oracle group (usually dba).

This is a more secure way to give access to hugetlb than disable_cap_mlock.
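To find the gid to put in hugetlb_shm_group (assuming your oracle group is called dba):

linux: # getent group dba
linux: # id -g oracle   # primary group id of the oracle user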

If you encounter:

ORA-00385: cannot enable Very Large Memory with new buffer cache parameters

Then you have use_indirect_data_buffers=true in your spfile. Set it to false (I still haven’t found a way to use HUGETLB together with use_indirect_data_buffers=true).
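A sketch of checking and changing the parameter from a shell as the oracle user (it is a static parameter, so the instance must be restarted afterwards):

sqlplus -s / as sysdba <<"SQL"
show parameter use_indirect_data_buffers
alter system set use_indirect_data_buffers=false scope=spfile;
exit
SQL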
 
