ORA-27086: unable to lock file – already in use Netapp NFS

My 10.2 database crashed suddenly.
I could not log in with "sqlplus / as sysdba" and could not shut the instance down.

I had to reboot the server.

After the OS came back, the database failed to open with these errors:

This is a classic symptom of a NetApp problem: the filer likes to hold file locks open on NFS mounts.
There is a standard procedure for clearing those locks; see, for instance, document 429912.1 on Metalink.

ORA-01157 ORA-01110 ORA-27086 after crash prevents database from opening (Doc ID 429912.1)
Modified: 01-Mar-2013

As root on the NetApp, from the prompt:
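For a Data ONTAP 7-mode filer, the lock-clearing sequence commonly quoted from Doc 429912.1 looks roughly like the following sketch. Verify it against the note itself before running; <db_host> is a placeholder for the database server's hostname as the filer knows it:

```
priv set advanced
sm_mon -l <db_host>
priv set
```

sm_mon -l releases the NLM locks the filer is still holding on behalf of that client, after which Oracle can lock its datafiles again.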

Thanks, team!!!
I was able to bring it back online after that.

This server's hostname was changed about six months ago.
The NetApp prompt still shows the old hostname.
That could be why the stale locks were held.

Reference: https://forums.oracle.com/thread/653757

Please note the part of the reference I don't agree with:
do not copy control*.ctl files unless you know exactly what you are doing.
In my case, I did not touch any control or data files;
I only restarted the Oracle instance several times.

Best of luck!

Maximum Number of Files in a Single Directory for Netapp NFS mounts on Linux: maxdirsize

I hit the file-number limit on a NetApp NFS mount.

The relevant parameter is [maxdirsize].

Starting with Data ONTAP 6.5, the maximum number of subdirectories a single directory may have is 99998 (100K). Data ONTAP 6.4 and earlier versions were restricted to 65534 (64K) subdirectories. This number may not be changed. To understand the reason for this limit, see the section below on hard links and subdirectory implementation.

The size of a directory is also limited: maxdirsize defaults to 1% of system memory.
That works out to 80 MB on a NetApp unit with 8 GB of memory.

You can increase this parameter online for an individual volume.

For the best performance:

- The most important point is to avoid a high number of files in a single directory.
- Create sub-directory structures and place files at the bottom of the directory tree.
- Fewer than 1000 files per directory is ideal.
- Avoid deep directory structures. A depth of less than 5 is ideal; anything above 8 or 9 results in poor performance.
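A quick sketch of the shallow fan-out layout these guidelines describe: files are spread across two-level subdirectories keyed by a hash prefix, so no single directory grows large and the tree stays only three levels deep. The base path and filename here are made up for illustration.

```shell
# Place each file under <base>/<h1>/<h2>/, where h1 and h2 are taken
# from a hash of the filename. Every directory stays small.
base=/tmp/fanout_demo
name="datafile_042.dbf"
h=$(printf '%s' "$name" | md5sum | cut -c1-4)
d1=$(printf '%s' "$h" | cut -c1-2)
d2=$(printf '%s' "$h" | cut -c3-4)
mkdir -p "$base/$d1/$d2"
touch "$base/$d1/$d2/$name"
find "$base" -type f
```

With two hex characters per level this gives 256 x 256 buckets, which keeps per-directory counts well under the 1000-file guideline even for millions of files.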

What are the performance impacts of changing the size of maxdirsize?

Performance issues are hard to quantify but easy to state in a general sense. Lookups in a large directory consume a lot of CPU. An additional impact is that when a directory is loaded into memory, the entire directory tree is loaded. Parts of it may fall out of memory through non-use, but there is still a cost in reading the directory from disk and finding space in memory to store it.

Hope this helps,

References:
http://serverfault.com/questions/76018/maximum-number-of-files-in-a-single-directory-for-netapp-nfs-mounts-on-linux

Post about wafl.dir.size.max:
https://communities.netapp.com/message/5790

Emergency Solution:
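On a Data ONTAP 7-mode filer, the online change is a one-liner of roughly this shape (a sketch of the vol options syntax; <volname> is a placeholder, and the value is in KB, so 163840 would raise the limit to about 160 MB):

```
vol options <volname> maxdirsize 163840
```

This takes effect immediately for that volume, but it is a stopgap; restructuring the directory tree as described above is the real fix.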

How to install fonts (.ttf) on CentOS for an individual user

The official CentOS documentation is here: http://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-x-fonts.html

1. To add fonts system-wide, copy the new fonts into the /usr/share/fonts/ directory. It is a good idea to create a new subdirectory, such as local/ or similar, to help distinguish between user-installed and default fonts.

To add fonts for an individual user, copy the new fonts into the .fonts/ directory in the user’s home directory.

2. Use the fc-cache command to update the font information cache, as in the following example:

fc-cache <path-to-font-directory>

In this command, replace <path-to-font-directory> with the directory containing the new fonts (either /usr/share/fonts/local/ or /home/<user>/.fonts/).

For example:

I only have a normal (non-root) user, lambert@milliondollarserver.com.
To read Chinese from Firefox:
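A sketch of the per-user steps, assuming you already have a Chinese TrueType font file on hand (the filename chinese-font.ttf below is just a placeholder, not a real download):

```shell
# Per-user font install on CentOS: no root needed.
mkdir -p "$HOME/.fonts"
# "chinese-font.ttf" is a placeholder for your actual font file;
# the copy is guarded so the sketch runs even without one present.
[ -f chinese-font.ttf ] && cp chinese-font.ttf "$HOME/.fonts/"
# Rebuild the per-user font cache (fc-cache comes with fontconfig);
# guarded so the sketch still exits cleanly where fontconfig is absent.
command -v fc-cache >/dev/null && fc-cache "$HOME/.fonts" || true
```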

Start Firefox and enjoy.