How to Check Whether Transparent HugePages (THP) Are Enabled in RHEL 6 and 7

1) Check the THP setting in sysfs: “/sys/kernel/mm/redhat_transparent_hugepage/enabled” on RHEL 6, or “/sys/kernel/mm/transparent_hugepage/enabled” on RHEL 7.

always   -  always use THP
madvise  -  use THP only for regions marked with madvise(MADV_HUGEPAGE)
never    -  disable THP

If it is set to “always”, the khugepaged kernel thread is started automatically.
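For a quick check, read the file directly; the currently active setting is shown in square brackets (the path below is the RHEL 7 one, use the redhat_transparent_hugepage path on RHEL 6):

$ cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never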

$ ps -eaf | grep khugepaged
root 188 2 0 Feb05 ? 00:04:37 [khugepaged]

2) Check /proc/meminfo.

If THP is enabled :

$ grep AnonHugePages /proc/meminfo
AnonHugePages: 8450048 kB

If THP is disabled:

$ grep AnonHugePages /proc/meminfo
AnonHugePages: 0 kB

3) For RHEL 6.2 and later, check /proc/vmstat:

If THP is enabled:

$ egrep 'trans|thp' /proc/vmstat
nr_anon_transparent_hugepages 4127
thp_fault_alloc 12446176
thp_fault_fallback 7205480
thp_collapse_alloc 27143
thp_collapse_alloc_failed 14815
thp_split 310753
thp_zero_page_alloc 16
thp_zero_page_alloc_failed 0

If THP is disabled:

$ egrep 'trans|thp' /proc/vmstat
nr_anon_transparent_hugepages 0
thp_fault_alloc 0
thp_fault_fallback 0
thp_collapse_alloc 0
thp_collapse_alloc_failed 0
thp_split 0
thp_zero_page_alloc 0
thp_zero_page_alloc_failed 0

4) Check THP usage per process (list smaps entries with AnonHugePages greater than 4 kB, together with the owning process):

$ grep -e AnonHugePages /proc/*/smaps | awk '{ if($2>4) print $0} ' | awk -F "/" '{print $0; system("ps -fp " $3)} '

/proc/64068/smaps:AnonHugePages: 2048 kB
UID PID PPID C STIME TTY TIME CMD
oracle 64068 1 0 Feb10 ? 01:50:36 ora_diag_RACNODE1
/proc/64085/smaps:AnonHugePages: 4096 kB
UID PID PPID C STIME TTY TIME CMD
oracle 64085 1 0 Feb10 ? 16:39:31 ora_dia0_RACNODE2
/proc/64085/smaps:AnonHugePages: 6144 kB
...
..
.

Error in invoking target ‘agent nmhs’ of makefile

The errors below occurred while installing Oracle Database 11.2.0.4 on Linux 7.4. The same error is also reported when relinking the binaries.

INFO: collect2: error: ld returned 1 exit status

INFO: make[1]:
INFO: *** [/u01/app/oracle/product/11.2.0/dbhome_1/sysman/lib/emdctl] Error 1

INFO: make[1]: Leaving directory `/u01/app/oracle/product/11.2.0/dbhome_1/sysman/lib'

INFO: make: *** [emdctl] Error 2

INFO: End output from spawned process.
INFO: ----------------------------------
SEVERE: oracle.sysman.oii.oiil.OiilActionException: Error in invoking target 'agent nmhs' of makefile '/u01/app/oracle/product/11.2.0/dbhome_1/sysman/lib/ins_emagent.mk'. See '/u01/app/oracle/product/11.2.0/dbhome_1/install/relinkActions2018-05-25_09-48-55-AM.log' for details.
at oracle.sysman.oii.oiis.OiisMakeDeps.invokeMakefile(OiisMakeDeps.java:537)
at oracle.sysman.oii.oiis.OiisMakeDeps.doRelink(OiisMakeDeps.java:614)
at oracle.sysman.oii.oiis.OiisMakeDeps.doOperation(OiisMakeDeps.java:799)
at oracle.sysman.oii.oiis.OiisMakeDeps.main(OiisMakeDeps.java:809)

WORKAROUND

In the $ORACLE_HOME/sysman/lib/ins_emagent.mk file, change the line

$(MK_EMAGENT_NMECTL)

to

$(MK_EMAGENT_NMECTL) -lnnz11

and then retry the failed make target (for an interactive install, click “Retry” in the installer).
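If you prefer to script the edit, below is a minimal sketch using sed (it assumes GNU sed, as on Linux; back up the makefile first and verify the change before retrying):

$ cp $ORACLE_HOME/sysman/lib/ins_emagent.mk $ORACLE_HOME/sysman/lib/ins_emagent.mk.bak
$ sed -i 's/\(\$(MK_EMAGENT_NMECTL)\)\s*$/\1 -lnnz11/' $ORACLE_HOME/sysman/lib/ins_emagent.mk
$ grep 'MK_EMAGENT_NMECTL' $ORACLE_HOME/sysman/lib/ins_emagent.mk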


How to Remove Oracle RAC Software

1) Check to make sure no database is still using this ORACLE_HOME:

$ srvctl status database -thishome

2) De-install the Oracle RAC home binaries cleanly by running:

$ /u01/app/oracle/product/12.1.0/dbhome_1/deinstall/deinstall

— logs from the deinstall session:

Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DECONFIG TOOL START ############


######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /u01/app/oracle/product/12.1.0/dbhome_1
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/12.2.0.1/grid
The following nodes are part of this cluster: racnode1,racnode2,racnode3
Active Remote Nodes are racnode2,racnode3
Checking for sufficient temp space availability on node(s) : 'racnode1,racnode2,racnode3'

## [END] Install check configuration ##


Network Configuration check config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2018-05-23_11-58-31-AM.log

Network Configuration check config END

Database Check Configuration START

Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2018-05-23_11-58-38-AM.log

Use comma as separator when specifying list of values as input

Specify the list of database names that are configured in this Oracle home []:
Database Check Configuration END
Oracle Configuration Manager check START
OCM check log file location : /u01/app/oraInventory/logs//ocm_check5009.log
Oracle Configuration Manager check END

######################### DECONFIG CHECK OPERATION END #########################


####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/12.2.0.1/grid
The following nodes are part of this cluster: racnode1,racnode2,racnode3
Active Remote Nodes are racnode2,racnode3
The cluster node(s) on which the Oracle home deinstallation will be performed are:racnode1,racnode2,racnode3
Oracle Home selected for deinstall is: /u01/app/oracle/product/12.1.0/dbhome_1
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Checking the config status for CCR
racnode1 : Oracle Home exists with CCR directory, but CCR is not configured
racnode2 : Oracle Home exists with CCR directory, but CCR is not configured
racnode3 : Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-05-23_11-58-02-AM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-05-23_11-58-02-AM.err'

######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2018-05-23_11-59-55-AM.log

Network Configuration clean config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2018-05-23_11-59-55-AM.log

De-configuring Listener configuration file on all nodes...
Listener configuration file de-configured successfully.

De-configuring Naming Methods configuration file on all nodes...
Naming Methods configuration file de-configured successfully.

De-configuring Local Net Service Names configuration file on all nodes...
Local Net Service Names configuration file de-configured successfully.

De-configuring Directory Usage configuration file on all nodes...
Directory Usage configuration file de-configured successfully.

De-configuring backup files on all nodes...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

Oracle Configuration Manager clean START
OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean5009.log
Oracle Configuration Manager clean END

######################### DECONFIG CLEAN OPERATION END #########################


####################### DECONFIG CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
#######################################################################


############# ORACLE DECONFIG TOOL END #############

Using properties file /tmp/deinstall2018-05-23_11-57-44AM/response/deinstall_2018-05-23_11-58-02-AM.rsp
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL TOOL START ############

####################### DEINSTALL CHECK OPERATION SUMMARY #######################
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-05-23_11-58-02-AM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-05-23_11-58-02-AM.err'

######################## DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to racnode1
Setting REMOTE_NODES to racnode2,racnode3
Setting CLUSTER_NODES to racnode1,racnode2,racnode3
Setting CRS_HOME to false
Setting oracle.installer.invPtrLoc to /tmp/deinstall2018-05-23_11-57-44AM/oraInst.loc
Setting oracle.installer.local to false

## [END] Preparing for Deinstall ##

Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/oracle/product/12.1.0/dbhome_1' from the central inventory on the local node : Done

Delete directory '/u01/app/oracle/product/12.1.0/dbhome_1' on the local node : Done

The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is in use by Oracle Home '/u01/app/oracle/product/11.2.0/dbhome_1'.

Detach Oracle home '/u01/app/oracle/product/12.1.0/dbhome_1' from the central inventory on the remote nodes 'racnode2,racnode3' : Done

Delete directory '/u01/app/oracle/product/12.1.0/dbhome_1' on the remote nodes 'racnode2,racnode3' : Done

The Oracle Base directory '/u01/app/oracle' will not be removed on node 'racnode2'. The directory is in use by Oracle Home '/u01/app/oracle/product/11.2.0/dbhome_1'.

The Oracle Base directory '/u01/app/oracle' will not be removed on node 'racnode3'. The directory is in use by Oracle Home '/u01/app/oracle/product/11.2.0/dbhome_1'.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2018-05-23_11-57-44AM' on node 'racnode1'
Clean install operation removing temporary directory '/tmp/deinstall2018-05-23_11-57-44AM' on node 'racnode2,racnode3'

## [END] Oracle install clean ##


######################### DEINSTALL CLEAN OPERATION END #########################


####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
Successfully detached Oracle home '/u01/app/oracle/product/12.1.0/dbhome_1' from the central inventory on the local node.
Successfully deleted directory '/u01/app/oracle/product/12.1.0/dbhome_1' on the local node.
Successfully detached Oracle home '/u01/app/oracle/product/12.1.0/dbhome_1' from the central inventory on the remote nodes 'racnode2,racnode3'.
Successfully deleted directory '/u01/app/oracle/product/12.1.0/dbhome_1' on the remote nodes 'racnode2,racnode3'.
Oracle Universal Installer cleanup was successful.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL TOOL END #############
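Tip: to preview what the deinstall tool would deconfigure without actually removing anything, it also supports a check-only mode (verify the option against “deinstall -help” for your version):

$ /u01/app/oracle/product/12.1.0/dbhome_1/deinstall/deinstall -checkonly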

SRVCTL Useful Commands

1) To check the status of databases configured in this Oracle home:

$ srvctl status database -thishome

2) To check the status of databases of the same Oracle product version:

$ srvctl status database -thisversion
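For example, the first command lists the status per instance of each database registered in this home (hypothetical database and node names; the exact wording varies by version):

$ srvctl status database -thishome
Instance racdb1 is running on node racnode1
Instance racdb2 is running on node racnode2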

Asmcmd Pwcopy With PRCD-1163 PRCR-1071 CRS-0245 ASMCMD-9453 Errors

After a standby database was rebuilt, the following errors occurred while copying the database password file from the primary into the standby ASM disk group:

ASMCMD> pwcopy --dbuniquename RACTEST '/tmp/orapwractest' '+DATA/RACTEST/orapwractest'
copying /tmp/orapwractest -> +DATA/RACTEST/orapwractest
PRCD-1163 : Failed to modify database RACTEST
PRCR-1071 : Failed to register or update resource ora.ractest.db
CRS-0245: User doesn't have enough privilege to perform the operation
ASMCMD-9453: failed to register password file as a CRS resource

The password file itself was copied into ASM disk group ‘+DATA’ successfully, but updating the database resource configuration failed.

WORKAROUND

Rerun the same command without the “--dbuniquename RACTEST” option.

ASMCMD> pwcopy /tmp/orapwractest +DATA/RACTEST/orapwractest
copying /tmp/orapwractest -> +DATA/RACTEST/orapwractest

Then, if needed, use ‘srvctl’ as the ‘oracle’ user to re-configure the database:

$ srvctl modify database -d RACTEST -pwfile '+DATA/RACTEST/orapwractest'

$ srvctl config database -d RACTEST
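The config output should then show the new password file location, for example (abbreviated, hypothetical output):

Database unique name: RACTEST
...
Password file: +DATA/RACTEST/orapwractest
...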

How to Migrate SQL Plan Baselines Between Oracle Databases

After a database upgrade or patching, or after a database is migrated onto new infrastructure, it is common to see performance degrade because SQL execution plans have changed.

One remedy is to create SQL plan baselines on the old database and then copy them to the new database.
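If the source database does not yet have a baseline for the statement of interest, one can normally be captured from the cursor cache first (a brief sketch; the sql_id below is hypothetical):

SQL> var n number
SQL> begin
       :n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'g3jy87t2q0ahh');
     end;
     /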

1) Identify the SQL plan baseline(s) to be migrated from source database:

SQL> select sql_handle, plan_name, sql_text, enabled, accepted from 
dba_sql_plan_baselines 
where enabled='YES' 
and accepted='YES' 
and PLAN_NAME='SQL_PLAN_7ns0utmam6vc64a63fd18';

SQL_HANDLE           PLAN_NAME                      SQL_TEXT ENA ACC
-------------------- ------------------------------ -------- --- ---
SQL_7a601accd5336d86 SQL_PLAN_7ns0utmam6vc64a63fd18 UPDATE . YES YES

2) Create a staging table on the source database:

SQL> BEGIN
       DBMS_SPM.CREATE_STGTAB_BASELINE(
         table_name      => 'spm_tab',
         table_owner     => 'jamesh',
         tablespace_name => 'USERS');
     END;
     /

PL/SQL procedure successfully completed.


SQL> select table_name,owner from dba_tables where owner='JAMESH';

TABLE_NAME   OWNER
------------ ------
SPM_TAB      JAMESH

3) Pack the SQL plan baseline(s) into the staging table (spm_tab) created above:

var x number
begin
  :x := DBMS_SPM.PACK_STGTAB_BASELINE(
          table_name  => 'spm_tab',
          table_owner => 'jamesh',
          sql_handle  => 'SQL_7a601accd5336d86',
          plan_name   => 'SQL_PLAN_7ns0utmam6vc64a63fd18');
end;
/

PL/SQL procedure successfully completed.

4) Export staging table with datapump:

$ expdp directory=exp_dir dumpfile=jamesh_spm_tab.dmp \
  tables=jamesh.spm_tab

5) Transfer the datapump file onto target database server.

6) Import staging table onto target database:

$ impdp directory=imp_dir dumpfile=jamesh_spm_tab.dmp \
  tables=jamesh.spm_tab remap_schema=... remap_tablespace=...

7) Unpack the SQL plan baselines in staging table into target database:

SQL> var x number
SQL> begin
       :x := DBMS_SPM.UNPACK_STGTAB_BASELINE('SPM_TAB', 'JAMESH');
     end;
     /

PL/SQL procedure successfully completed.

8) Verify the SQL plan baselines have been migrated onto target database:

SQL> select sql_handle, plan_name, sql_text, enabled, accepted, fixed
     from dba_sql_plan_baselines
     where sql_handle = 'SQL_7a601accd5336d86';

SQL_HANDLE           PLAN_NAME                      SQL_TEXT ENA ACC FIX
-------------------- ------------------------------ -------- --- --- ---
SQL_7a601accd5336d86 SQL_PLAN_7ns0utmam6vc64a63fd18 UPDATE   YES YES NO

9) Display execution plans for this SQL handle of a SQL plan baseline:

SQL> set linesize 120
SQL> set pagesize 2000
SQL> select * from TABLE(DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE(
       'SQL_7a601accd5336d86', 'SQL_PLAN_7ns0utmam6vc64a63fd18'));

Syntax:

DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE (
   sql_handle      IN VARCHAR2 := NULL,
   plan_name       IN VARCHAR2 := NULL,
   format          IN VARCHAR2 := 'TYPICAL')
 RETURN dbms_xplan_type_table;
Parameter    Description
-----------  ------------------------------------------------------------------
sql_handle   SQL statement handle. Identifies the SQL statement whose plan(s)
             are to be displayed.
plan_name    Plan name. Identifies a specific plan. The default NULL means all
             plans associated with the identified SQL statement are explained
             and displayed.
format       Format string that determines what plan information is displayed.
             One of 'BASIC', 'TYPICAL', or 'ALL', each representing a common
             use case.

How to Update Opatch Utility of GI Home

Because access to the GI home is restricted, you might get the errors below when extracting the OPatch utility into the GI home:

checkdir error:  cannot create /u01/app/12.2.0.1/grid/OPatch
                 Permission denied
                 unable to process OPatch/config/opatch.properties.

Below is an appropriate way to extract the OPatch utility:

1) Make a backup of the current OPatch utility:

$ cd /u01/app/12.2.0.1/grid/
$ tar -cvf /tmp/opatch_old.tar ./OPatch

2) Empty the OPatch directory:

$ cd /u01/app/12.2.0.1/grid/OPatch
$ rm -rf ./*

3) Extract the OPatch utility from patch 6880880 into the GI home (/u01/app/12.2.0.1/grid):

$ unzip p6880880_122010_Linux-x86-64.zip  -d /u01/app/12.2.0.1/grid

$ which opatch
/u01/app/12.2.0.1/grid/OPatch/opatch

$ opatch version

OPatch Version: 12.2.0.1.13
OPatch succeeded.
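If the GI home's OPatch directory is not on your PATH, you can call the utility by its full path to verify the version (using the GI home from this example):

$ /u01/app/12.2.0.1/grid/OPatch/opatch version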

Query on v$asm_diskgroup Hangs

Due to an underlying SAN LUN issue, after manually dismounting the problem disk group, queries on the v$asm_* views hang forever.

1) Check the blocking sessions: the blocking session is SID=426, and the final blocking session is SID=392.

SQL> select SID,LAST_CALL_ET,EVENT,BLOCKING_SESSION,FINAL_BLOCKING_SESSION 
from v$session where BLOCKING_SESSION is not null;

SID        LAST_CALL_ET EVENT                BLOCKING_SESSION  FINAL_BLOCKING_SESSION
---------- ------------ -------------------- ----------------- ------------------------
 12         2313        enq: DD - contention 426               392
 17         9526        enq: DD - contention 426               392
 23         7726        enq: DD - contention 426               392
 25       527598        enq: DD - contention 426               392
 40       384765        enq: DD - contention 426               392
 418      132675        enq: DD - contention 426               392
 425        3397        enq: DD - contention 426               392
 426      530330        rdbms ipc reply      392               392
 444       15982        enq: DD - contention 426               392

...
..
.

2) Check what session SID=392 is: it is the ASM RBAL background process.

SQL> select s.sid,p.SPID,p.PNAME,p.BACKGROUND 
from v$process p, v$session s 
where s.PADDR=p.ADDR and s.sid in ( 426, 392);

SID        SPID                     PNAME B
---------- ------------------------ ----- -
 392       11670                    RBAL  1
 426       24638

$ ps -eaf | grep 11670
grid 11670 1 0 Mar22 ? 00:11:26 asm_rbal_+ASM1

3) Check which processes are still holding the dismounted disk group's devices open.

$ lsof | grep ASMD_TEST
oracleasm 1246 grid 4r BLK 253,10 0t0 18446612242469980264 /dev/oracleasm/disks/ASMD_TEST1
oracle_20 2087 grid 14r BLK 253,10 0t0 18446612242469980264 /dev/oracleasm/disks/ASMD_TEST1
oracle_33 3371 grid 14r BLK 253,10 0t0 18446612242469980264 /dev/oracleasm/disks/ASMD_TEST1
oracle_47 4754 grid 17r BLK 253,10 0t0 18446612242469980264 /dev/oracleasm/disks/ASMD_TEST1
...
..
.

$ fuser /dev/oracleasm/disks/ASMD_TEST1
/dev/oracleasm/disks/ASMD_TEST1: 1246 2087 3371 4754 ... ... ..

In this case, bouncing the ASM instance is required to resolve the issue.
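Because the ASM instance is managed by the clusterware stack (and usually serves the OCR and voting files), bouncing it generally means restarting the stack on the affected node. A hedged sketch, run as root on that node during a maintenance window:

# crsctl stop crs
# crsctl start crs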

How to Prevent ASM Diskgroup Automatic Mount

1) To disable automatic mounting of a given ASM disk group on all RAC nodes, run the command below as the grid user:

$ srvctl disable diskgroup -diskgroup testdg

2) To disable automatic mounting of a given ASM disk group on a specific RAC node, run the command below as the grid user:

$ srvctl disable diskgroup -diskgroup testdg -node racnode1

3) To check the status of the ASM disk group:

$ srvctl status diskgroup -diskgroup testdg -detail -verbose
Disk Group testdg is running on racnode1, racnode2
Disk Group testdg is disabled

Note: You cannot run the srvctl start command on a disabled object until you first re-enable it (see the example below).
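To mount the disk group again later, re-enable it and then start it (a brief sketch; adjust the disk group and node names to your environment):

$ srvctl enable diskgroup -diskgroup testdg -node racnode1
$ srvctl start diskgroup -diskgroup testdg -node racnode1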


DataPump “ORA-31623: a job is not attached to this session via the specified handle”

The errors below occurred while exporting tables using Data Pump:

$ expdp userid=... tables=... directory=... dumpfile=...
..

UDE-31623: operation generated ORACLE error 31623
ORA-31623: a job is not attached to this session via the specified handle
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3905
ORA-06512: at "SYS.DBMS_DATAPUMP", line 5203
ORA-06512: at line 1

CAUSE

The streams pool is too small, and there is no free SGA memory available for it to grow.
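To see how the relevant SGA components are currently sized before resizing, a quick check against v$sga_dynamic_components (a simple sketch):

SQL> select component, current_size/1024/1024 as current_mb
     from v$sga_dynamic_components
     where component in ('streams pool', 'shared pool', 'DEFAULT buffer cache');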

SOLUTION

Increase streams_pool_size:

SQL> show parameter streams_pool_size

NAME              TYPE        VALUE
----------------- ----------- ------------------------------
streams_pool_size big integer 20M

SQL> alter system set streams_pool_size=128M;

If there is not enough free memory available, flush memory first:

SQL>  alter system flush SHARED_POOL;
System altered.

-- OR 

SQL> alter system flush BUFFER_CACHE;
System altered.

SQL> alter system set streams_pool_size=128M;

For AMM or ASMM, after the Data Pump job completes, reset the parameter so the system manages it automatically again.

SQL> alter system reset streams_pool_size;