PUP.NickoffSecurity

I used macOS several years ago and wondered whether antivirus software is essential. The one I bought from the App Store was Thor Antivirus, which the App Store removed in 2016. I believed there was a security issue or scam software inside it, but the App Store offered no clue as to why.

A credible storyteller on security flaws in major antivirus software from the App Store is Thomas Reed of Malwarebytes. After updating my old Malwarebytes (version 1.3.1.628, rules 2018-01-23), I discovered that my purchased software, Thor Antivirus, is flagged as PUP.NickoffSecurity (a Potentially Unwanted Program), a detection named after Nicolae Popescu, a Romanian wanted by the FBI.

[Screenshot: Malwarebytes detecting PUP.NickoffSecurity in Thor Antivirus]

Hope this helps.

Bunditj

root.sh execution in 12cR2

In 12cR2, root.sh gives you the option of whether or not to set up TFA (Trace File Analyzer), even if you are just installing the Oracle software on a single host.

[root@localhost ~]# /u01/app/oracle/product/12.2.0.1/dbhome_1/root.sh                                                                   

Performing root user operation.
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/12.2.0.1/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.

Now product-specific root actions will be performed.

Do you want to setup Oracle Trace File Analyzer (TFA) now ? yes|[no] :

You might hit errors like the following when your choice here is yes:

Installing TFA on localhost:
HOST: localhost TFA_HOME: /u01/app/oracle/tfa/localhost/tfa_home

.-----------------------------------------------------------------------------.
| Host      | Status of TFA | PID | Port  | Version    | Build ID             |
+-----------+---------------+-----+-------+------------+----------------------+
| localhost | RUNNING       | 868 | 17094 | 12.2.1.0.0 | 12210020161122170355 |
'-----------+---------------+-----+-------+------------+----------------------'

Running Inventory in All Nodes...

Enabling Access for Non-root Users on localhost...
ERROR: /u01/app/oracle/tfa/localhost/tfa_home/database does not exists 
ERROR: /u01/app/oracle/tfa/localhost/tfa_home/output does not exists 
ERROR: /u01/app/oracle/tfa/localhost/tfa_home/output/inventory does not exists

Summary of TFA Installation:
.--------------------------------------------------------------.
|                          localhost                            |
+---------------------+----------------------------------------+
| Parameter           | Value                                  |
+---------------------+----------------------------------------+
| Install location    | /u01/app/oracle/tfa/localhost/tfa_home |
| Repository location | /u01/app/oracle/tfa/repository         |
| Repository usage    | 0 MB out of 10240 MB                   |
'---------------------+----------------------------------------'


TFA is successfully installed...

You can fix those errors simply by creating the missing directories, then re-running root.sh:

[root@localhost ~]# mkdir -p /u01/app/oracle/tfa/localhost/tfa_home/database
[root@localhost ~]# mkdir -p /u01/app/oracle/tfa/localhost/tfa_home/output/inventory
[root@localhost ~]# /u01/app/oracle/product/12.2.0.1/dbhome_1/root.sh                                                                   
Performing root user operation.
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/12.2.0.1/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Do you want to setup Oracle Trace File Analyzer (TFA) now ? yes|[no] :
yes

Installing Oracle Trace File Analyzer (TFA).
Log File: /u01/app/oracle/product/12.2.0.1/dbhome_1/install/root_localhost.localdomain_2017-03-02_23-19-46-274611268.log
Finished installing Oracle Trace File Analyzer (TFA)

Lastly, verify your TFA installation:

[root@localhost ~]# /u01/app/oracle/product/12.2.0.1/dbhome_1/bin/tfactl status

.------------------------------------------------------------------------------------------------.
| Host      | Status of TFA | PID | Port  | Version    | Build ID             | Inventory Status |
+-----------+---------------+-----+-------+------------+----------------------+------------------+
| localhost | RUNNING       | 868 | 17094 | 12.2.1.0.0 | 12210020161122170355 | COMPLETE         |
'-----------+---------------+-----+-------+------------+----------------------+------------------'


EZConnect on specific CDB using same PDB name

Here is something I came across when connecting to the same PDB name in two CDBs on the same host. Below is my listener status output; notice I have two CDBs, cdb001 and cdb002, and both register the same PDB service name, pdb1:

Service "pdb1" has 2 instance(s).
 Instance "cdb001", status READY, has 1 handler(s) for this service...
 Instance "cdb002", status READY, has 1 handler(s) for this service...

If, like me, you often use EZConnect to reach several database instances in your company, you don't want to keep updating your tnsnames.ora file. With a 12c Multitenant Database, you sometimes have to connect to the same specific PDB name across different CDBs.

In the usual fashion, you define an EZConnect string as hostname/service_name. However, when the service_name you give is a PDB name registered by two different CDB instances, the listener may hand you either one, so you can land in a different CDB at random.

Hence, to reliably reach the required CDB, simply append the CDB's instance name after the PDB's service name; EZConnect supports hostname/service_name/instance_name, for example:

hostname/PDB/CDB

sqlplus sys@\"hostname/pdb1/cdb001\" as sysdba
sqlplus sys@\"hostname/pdb1/cdb002\" as sysdba
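
To double-check which CDB you actually landed in, query the instance after connecting (a quick sanity check; hostname remains a placeholder as above):

sqlplus sys@\"hostname/pdb1/cdb002\" as sysdba

SQL> select instance_name from v$instance;

Here instance_name should come back as cdb002, confirming the listener routed you to the CDB you asked for.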

Enterprise Manager 13c – Consolidation Planner

Previously, when I had to decide between server isolation and server consolidation onto a different platform, I could use TPC-C results (if they existed) or SPECint_rate_2006 (mostly available). That is, however, very cumbersome: checking, verifying, designing, and deciding on optimal performance within our tight budget.

I had a chance to test Oracle EM Cloud Control 13c and found its Consolidation Planner to be a valuable reference, especially for speeding up the decision on whether to isolate or consolidate our servers. Here is what the tool summarized, based on SPECint_rate_2006, in our test environment. (The benchmark basis is adjustable based on your experience and your own benchmarks.)

[Screenshot: Consolidation Planner summary based on SPECint_rate_2006]

Apart from CPU, you can also see memory utilization and a disk storage perspective, including IOPS and throughput.

[Screenshot: memory utilization and disk storage views, including IOPS and throughput]

It saved my day.

Global Temporary Table

An easy question came to me: how can I tell whether our existing global temporary tables (GTTs) were created with "on commit delete rows" (the default if not specified) or "on commit preserve rows"?

The data dictionary column ALL_TABLES.DURATION is our answer. For a permanent table it is NULL. SYS$SESSION – rows are preserved for the duration of the session. SYS$TRANSACTION – rows are deleted after COMMIT.

hr(cdbdev_2)@PDBSAMPLE> create global temporary table gtt_preserve 
(id int) on commit preserve rows;

Table created.

hr(cdbdev_2)@PDBSAMPLE> create global temporary table gtt_delete 
(id int) on commit delete rows;

Table created.

hr(cdbdev_2)@PDBSAMPLE> create global temporary table gtt_default 
(id int);

Table created.

hr(cdbdev_2)@PDBSAMPLE> create table non_gtt 
(id int);

Table created.

hr(cdbdev_2)@PDBSAMPLE> select table_name, temporary, duration 
from all_tables where table_name like '%GTT%';

TABLE_NAME      T DURATION
--------------- - ---------------
NON_GTT         N
GTT_DELETE      Y SYS$TRANSACTION
GTT_PRESERVE    Y SYS$SESSION
GTT_DEFAULT     Y SYS$TRANSACTION

4 rows selected.

(NON_GTT appears because its name also matches the '%GTT%' pattern; note that TEMPORARY is N and DURATION is empty for it.)

Oracle : TO_YMINTERVAL function

From 11.1 onwards, Oracle's year-to-month interval function TO_YMINTERVAL accepts the nominal duration (year, month) together with a time designator in the ISO 8601 format. However, specifying the time part doesn't produce the result you might expect. You can browse the document describing this behavior here.

On my 10g sandbox, this function supports only the nominal duration. With the ISO 8601 format you don't have to specify the month; with the SQL (ISO/IEC 9075) format you do, otherwise the error appears (ORA-01867: the interval is invalid).

10g> select sysdate
, sysdate + TO_YMINTERVAL('10-0') ISO_IEC_9075_2003 
, sysdate + TO_YMINTERVAL('P10Y') ISO_8601_2004 
from dual;

SYSDATE              ISO_IEC_9075_2003    ISO_8601_2004
-------------------- -------------------- --------------------
26-SEP-2016 10:33:36 26-SEP-2026 10:33:36 26-SEP-2026 10:33:36

The ISO/IEC 9075:2003 format doesn't support a time component at all; you have to switch to the ISO 8601 format for that. This example tries to add 1 hour on top:

10g> select sysdate
, sysdate + TO_YMINTERVAL('10-0') ISO_IEC_9075_2003
, sysdate + TO_YMINTERVAL('P10YT1H') ISO_8601_2004
from dual;

ERROR at line 3:
ORA-01867: the interval is invalid

As Oracle's 10g documentation notes, the time designator is not supported until version 11.1.

11g> select sysdate
, sysdate + TO_YMINTERVAL('10-0') ISO_IEC_9075_2003
, sysdate + TO_YMINTERVAL('P10YT1H') ISO_8601_2004
from dual;

SYSDATE              ISO_IEC_9075_2003    ISO_8601_2004
-------------------- -------------------- --------------------
26-SEP-2016 10:39:06 26-SEP-2026 10:39:06 26-SEP-2026 10:39:06

Notice that even though we added 1 more hour, the time portion of the result is unchanged: the function is not fully ISO 8601 compliant yet. The latest version at the time of writing, 12.1, behaves the same.

12c> select sysdate
, sysdate + TO_YMINTERVAL('10-0') ISO_IEC_9075_2003 
, sysdate + TO_YMINTERVAL('P10YT1H') ISO_8601_2004 
from dual;

SYSDATE              ISO_IEC_9075_2003    ISO_8601_2004
-------------------- -------------------- --------------------
26-SEP-2016 10:43:49 26-SEP-2026 10:43:49 26-SEP-2026 10:43:49
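
If you actually need that extra hour, one workaround (my own sketch, not from the original examples) is to add the time part separately as a day-to-second interval:

12c> select sysdate
, sysdate + TO_YMINTERVAL('P10Y') + TO_DSINTERVAL('0 01:00:00') plus_10y_1h
from dual;

For the sysdate above, plus_10y_1h should come out as 26-SEP-2026 11:43:49.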


pgrep on AIX

It is annoying, from time to time, when I switch to supporting non-Linux platforms, especially AIX. I love using pgrep to shorten the ps command when looking up a PID (process ID), but pgrep has never existed on AIX.

This very simple script mimics pgrep; place it in /usr/local/bin or wherever your environment variable PATH points.

#!/bin/ksh
# pgrep alike for AIX

# skip any leading option beginning with - (e.g. -l, -f); they are
# accepted for compatibility but have no effect
while [ $# -gt 0 ]
do
  if [[ "$1" != "-"* ]]; then
    break
  fi
  shift
done

# list PID and full command line, filter by the remaining pattern,
# and drop the grep process itself
ps -o pid,args -e | grep "$1" | grep -v grep
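
To install it, something along these lines works (the file name pgrep.sh and the target directory are just one choice; any directory on your PATH will do):

[root@host01 ~]# cp pgrep.sh /usr/local/bin/pgrep
[root@host01 ~]# chmod 755 /usr/local/bin/pgrep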

Here is an example of the output on my AIX platform:

[oracle@host01 ~]$ pgrep -lf pmon
 9699392 asm_pmon_+ASM
14680280 ora_pmon_cdb
18088032 ora_pmon_emrep

In summary, this script ignores any option starting with - (minus) and uses the remaining argument as the search pattern for grep.

runInstaller attachHome Error

From time to time, I have to (re)attach another ORACLE_HOME, since the existing one was recently configured with the EM Cloud Control 12c agent.

This is the error that occurred in my runInstaller:

[grid@host01 ~]$ /u01/app/12.1.0/grid/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=/u01/app/12.1.0/grid ORACLE_HOME_NAME=OraGI12Home1

Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 14848 MB    Passed
org.xml.sax.SAXParseException: <Line 1, Column 1>: XML-20108: (Fatal Error) Start of root element expected.
        at oracle.xml.parser.v2.XMLError.flushErrorHandler(XMLError.java:422)
        at oracle.xml.parser.v2.XMLError.flushErrors1(XMLError.java:287)
        at oracle.xml.parser.v2.NonValidatingParser.parseRootElement(NonValidatingParser.java:414)
….
        at oracle.sysman.oii.oiic.OiicAttachHome.main(OiicAttachHome.java:692)
'AttachHome' failed.
'AttachHome' failed.

My first thought was that every runInstaller execution reads the contents of the Oracle Central Inventory. You can determine its location from /etc/oraInst.loc:

[grid@host01 ~]$ cat /etc/oraInst.loc 
inventory_loc=/u01/app/oraInventory
inst_group=oinstall

[grid@host01 ~]$ ls -l /u01/app/oraInventory/ContentsXML
total 16
-rw-rw----    1 grid     oinstall        329 Jun 21 17:22 comps.xml
-rw-rw----    1 grid     oinstall          0 Jun 21 17:21 inventory.xml
-rw-rw----    1 grid     oinstall        292 Jun 21 17:22 libs.xml

And here is the root cause: inventory.xml on host01 is empty (zero bytes, as shown above), which is exactly why the XML parser fails with "Start of root element expected" at line 1, column 1. Luckily, I have a second node containing the correct contents:

[grid@host02 ~]$ ls -l /u01/app/oraInventory/ContentsXML/
total 24
-rw-rw---- 1 grid oinstall 329 Jun 22 18:40 comps.xml
-rw-rw---- 1 grid oinstall 2646 Jun 22 18:39 inventory.xml
-rw-rw---- 1 grid oinstall 292 Jun 22 18:40 libs.xml
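
The good copies from the second node then replaced all three files on host01. A minimal sketch of that copy, assuming the grid user can scp directly between the nodes:

[grid@host01 ~]$ scp grid@host02:/u01/app/oraInventory/ContentsXML/*.xml /u01/app/oraInventory/ContentsXML/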

Once they were replaced, my attachHome run worked:

[grid@host01 ~]$ /u01/app/12.1.0/grid/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=/u01/app/12.1.0/grid ORACLE_HOME_NAME=OraGI12Home1

Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 14848 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'AttachHome' was successful.

adrci utilities (11g onwards)

If you have struggled with housekeeping the log files located under $ORACLE_BASE (a.k.a. the Diagnostic Directory Structure), Oracle has intelligently provided some useful adrci subcommands to control and purge them automatically, without any old-fashioned scripting.

Here is a simple script making use of adrci. I assume your ORACLE_BASE is /u01/app/oracle (adjust it according to your setting), and that you'd like to keep and purge files on a 168-hour (7-day) life cycle, with the two policies below set accordingly:

  • The short life policy (SHORTP_POLICY) covers trace files, core dumps, and IPS information (package files).
  • The long life policy (LONGP_POLICY) covers incident information, incident dumps, and the alert log.

Reference: 11g utilities

for i in `adrci exec="show home -base /u01/app/oracle" | tail -n +2`
do
  # the policies are in hours, but purge -age takes minutes: 168 hours = 10080 minutes
  adrci exec="set home $i; show control; set control (SHORTP_POLICY=168, LONGP_POLICY=168); purge -age 10080"
done


In addition, I stumbled upon this OTN thread: the action above won't apply to Grid Infrastructure (GI) when you implement either Oracle Restart or Oracle Clusterware with role separation.

You can simply change the base path in the code from oracle to grid (the GI owner):

for i in `adrci exec="show home -base /u01/app/grid" | tail -n +2`
do
  # the policies are in hours, but purge -age takes minutes: 168 hours = 10080 minutes
  adrci exec="set home $i; show control; set control (SHORTP_POLICY=168, LONGP_POLICY=168); purge -age 10080"
done
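
Afterwards, you can spot-check a single home to confirm the new policies took effect; the home string below (diag/rdbms/orcl/orcl) is just an illustrative example from a typical layout:

adrci exec="set home diag/rdbms/orcl/orcl; show control"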