Wednesday, 31 December 2014

How to migrate an existing pre-12c database to a 12c CDB

Every now and again it is necessary to migrate an existing database to a new version of the Oracle software, and most of the time this has to be done keeping services offline for the least time possible.

I'm used to migrating pre-10g releases to 11.2, and now we are ready to tackle the migration to the 12c release.

To be continued...

Exploring Oracle Data Guard - Part 3

Now it's time to get our hands dirty with the process of creating a physical standby database.

6) Copy the orapw password file of the primary database to the standby server :

scp orapwTESTDG oracle@oracle10gr2standbybis:$ORACLE_HOME/dbs
     


7) Add the following alias to the tnsnames.ora file of the primary database :

 TESTDGS =
         (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.51)(PORT = 1521))
            (CONNECT_DATA =
               (SERVER = DEDICATED)
               (SERVICE_NAME = TESTDGS)
              )
         )





NB: The service name TESTDGS is going to be managed by the listener process of the standby database.
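Since the standby is usually mounted rather than open while redo is being shipped, it also helps to register the TESTDGS service statically with the standby listener. A minimal listener.ora fragment on the standby server might look like this (the ORACLE_HOME path is an assumption, and the TESTDG SID is taken from the file names above, so adapt both to your installation):

```
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = TESTDGS)
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
      (SID_NAME = TESTDG)
    )
  )
```

Reload the listener afterwards with "lsnrctl reload".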
 
To be continued.....

Friday, 19 December 2014

Exploring Oracle Data Guard - Part 2

Let's get the ball rolling, shall we ?

Server : oracle10gr2primary , oracle10gr2standby

Preparing Database :

On primary database server : oracle10gr2primary 

1) Verify if archiving is enabled and verify archive log files :

SQL> archive log list;
 Database log mode              Archive Mode
 Automatic archival             Enabled
 Archive destination            USE_DB_RECOVERY_FILE_DEST
 Oldest online log sequence     1
 Next log sequence to archive   3
 Current log sequence           3


2) If it isn't already enabled, enable 'force logging':

    SQL> alter database force logging;
              Database altered.


3) Configure standby redo logs:

SQL> alter database add standby logfile group 11 '/u01/oradata/TESTDG/standby_redo01.log' size 50m;
SQL> alter database add standby logfile group 12 '/u01/oradata/TESTDG/standby_redo02.log' size 50m;
SQL> alter database add standby logfile group 13 '/u01/oradata/TESTDG/standby_redo03.log' size 50m;
SQL> alter database add standby logfile group 14 '/u01/oradata/TESTDG/standby_redo04.log' size 50m;



NB1: The standby redo logs must be the same size as the online redo logs.


NB2: There should be at least one more standby redo log group than online redo log groups.

NB3: Frankly speaking it isn't strictly necessary to create the standby redo logs on the primary database during this phase, because the primary will only use them after a switchover, when it takes on the standby role.
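Once created, the standby redo logs can be checked against the online redo logs with a couple of quick queries (the sizes should match, and there should be one extra standby group):

```sql
-- Online redo logs: size and number of groups
SQL> select group#, thread#, bytes/1024/1024 as mb from v$log;

-- Standby redo logs: same size, one group more
SQL> select group#, thread#, bytes/1024/1024 as mb, status from v$standby_log;
```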

4) Ensure that Flash Recovery Area is enabled :

SQL> show parameter recovery

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      /u01/orabackup
db_recovery_file_dest_size           big integer 8048M


5) Modify the init parameter file of the primary database according to these values:


alter system set db_unique_name='TESTDGP' scope=spfile;
alter system set log_archive_config='DG_CONFIG=(TESTDGP,TESTDGS)' scope=spfile;
alter system set log_archive_dest_1='LOCATION=/u02/oraarch/TESTDG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=TESTDGP' scope=spfile;
alter system set log_archive_dest_2='SERVICE=TESTDGS LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=TESTDGS' scope=spfile;
alter system set log_archive_dest_state_1=ENABLE scope=spfile;
alter system set log_archive_dest_state_2=ENABLE scope=spfile;
alter system set log_archive_max_processes=8 scope=spfile;
alter system set log_archive_format='%t_%s_%r.dbf'  scope=spfile;
alter system set standby_file_management='AUTO' scope=spfile;
alter system set fal_server='TESTDGS' scope=spfile;
alter system set fal_client='TESTDGP' scope=spfile;
alter system set standby_archive_dest='/u02/oraarch/TESTDG' scope=spfile;
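After restarting the instance, it's worth double-checking that the new settings were picked up, for example:

```sql
SQL> show parameter db_unique_name
SQL> show parameter log_archive_dest_2
SQL> show parameter fal

-- Both destinations should report VALID once the standby is reachable
SQL> select dest_id, status, error from v$archive_dest_status where dest_id in (1,2);
```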



Tuesday, 9 December 2014

Exploring Oracle Data Guard - Part 1

Every now and again it is necessary to brush up on previous skills in order to resolve problems.

Today I'm going to get down to a handy Oracle feature called Oracle Data Guard.

This feature was introduced back in Oracle 7.3, and it offers the possibility to protect a database from failures, disasters, human errors and data corruption.

Since that first version lots of new options have come out, like for example 'Active Data Guard' and 'Snapshot Standby' in 11g.

In my opinion, the first step is to install a complete test environment made up of a primary and a standby database server; since the last production environment I managed is an Oracle 10gR2 release, this post series will end up installing a 10gR2 primary and standby database.

See you soon.

Byeeee


Friday, 28 November 2014

RMAN New Features Oracle 12c

Nowadays keeping up with the times is at the core of our job, so every now and again it's necessary to go over a new Oracle database release even if there isn't an immediate necessity.

In this post I would like to talk about the RMAN new features in a multitenant environment.

Let's get down to it, shall we ?

According to the Oracle 12c reference manual it is now possible to back up and recover an entire CDB with all its PDBs, only the CDB root container, or only one or more PDBs within the CDB.

Let's start from the beginning !

1) Backup an entire CDB container and all the PDB included :

RMAN> connect target /
RMAN> backup database;

In case you need it, it's also possible to back up all the archive logs by rewriting the previous command :

RMAN> connect target /
RMAN> backup database plus archivelog;

As you can see this command isn't so different from the backup command of previous releases.

NB: We have been able to back up the archived logs because we are connected at CDB level, and the redo and archived log files exist only at this level. The same isn't possible when we are connected at PDB level.
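To see the difference yourself, connect RMAN straight to a PDB; the service name pdb11 here is just an assumption based on the examples above. A plain BACKUP DATABASE then backs up only that PDB, while archived-log operations are refused:

```
rman target sys@pdb11

RMAN> backup database;                  -- backs up only pdb11
RMAN> backup database plus archivelog;  -- fails: archived logs belong to the CDB
```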


2) Backup only PDB :

RMAN> connect target /
RMAN> backup pluggable database pdb11;

It's also possible to backup multiple PDBs within the same command:

RMAN> backup pluggable database pdb10,pdb11;

3) Backup only the root container:

RMAN> backup database root;

4) Partial backup of a PDB:

RMAN> connect target /
RMAN> backup tablespace pdb10:users;
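Whichever variant you run, the result can be verified from the same RMAN session, for example:

```
RMAN> list backup summary;
RMAN> report schema;   -- when connected to the root this lists all containers
```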


More or less, that's all about the new backup commands in Oracle 12c.

Please remember to always go over these commands and verify them in a test environment first.

See you in my next blog !

Byeee !!!








Friday, 21 November 2014

Installing Oracle 12c Rac on VirtualBox - Part 2

Hi there,
here is the second part of "Installing Oracle 12c Rac on VirtualBox".

1) Install the required RPM packages :

yum install gcc
yum install kernel-uek-devel
yum search oracle-rdbms-server    # then install the matching *-preinstall package for your release

2) Create the Oracle and Grid users and groups:

groupadd -g 1500 dba
groupadd -g 1501 oinstall
groupadd -g 1502 asmadmin
groupadd -g 1503 asmdba
groupadd -g 1504 asmoper


useradd  -u 1500 -p oracle -g oinstall -G dba,asmdba oracle
useradd  -u 1501 -p oracle -g oinstall -G dba,asmdba,asmadmin,asmoper grid


NB: Following the Oracle reference book, I decided to keep the Oracle software owner separate from the Grid software owner.

3) Create the ORACLE_HOME and ORACLE_BASE directories :

mkdir -p /u01/app/12.1.0/grid
mkdir -p /u01/app/grid
mkdir -p /u01/app/oracle
chown -R grid:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/


4) Configure the operating system kernel parameters (typically in /etc/sysctl.conf) :

kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
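These values are normally loaded with "sysctl -p". Here is a small read-only sketch to compare the running values with the required ones (every sysctl parameter is exposed under /proc/sys, with the dots turned into slashes):

```shell
# Read-only check: print the current value of each required kernel parameter.
# /proc/sys mirrors the sysctl namespace with dots replaced by slashes.
for p in kernel.shmmni fs.file-max fs.aio-max-nr net.ipv4.ip_local_port_range; do
  path=/proc/sys/$(echo "$p" | tr . /)
  printf '%-32s %s\n' "$p" "$(cat "$path")"
done
```

If a value is lower than required, fix it in /etc/sysctl.conf and reload with "sysctl -p".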


5) Modify the Grid and Oracle resource limits :

File : /etc/security/limits.conf
   
 grid   soft    nproc    2047
 grid   hard   nproc   16384
 grid   soft    nofile    1024
 grid   hard   nofile    65536
 grid   soft    stack    10240
 grid   hard   stack    10240
   
oracle   soft    nproc    2047
oracle   hard   nproc   16384
oracle   soft    nofile    1024
oracle   hard   nofile    65536
oracle   soft    stack    10240
oracle   hard   stack    10240
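The limits only take effect on a new login session, so after editing the file it's worth logging in again as grid (or oracle) and checking what the shell actually got. A quick sketch:

```shell
# Print the limits in effect for the current shell session.
echo "soft nofile: $(ulimit -Sn)"   # open files, soft limit
echo "hard nofile: $(ulimit -Hn)"   # open files, hard limit
echo "soft nproc : $(ulimit -Su)"   # max user processes, soft limit
echo "soft stack : $(ulimit -Ss)"   # stack size in KB, soft limit
```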



6) Configuring shared storage: 

Oracle Clusterware needs shared storage to work, so it's necessary to configure both the virtual machines and the Linux servers.

Let's start with the virtual machine !

First of all you have to stop the Linux server, then in the Oracle VirtualBox client do the following:

1) Add a SCSI controller.

2) Create three new VDI disks with a fixed size.

 OcrVoting    6Gb
 ORADATA      6Gb
 ORAFRA       6Gb


3) From the File menu of VirtualBox, open the Virtual Media Manager utility.

4) Mark all of them as shareable disks.

All these steps end up creating shareable disks that we are going to use within our cluster, which is made up of two Linux servers.

Let's get the ball rolling, shall we ?

First of all we have to configure the new disks within the Linux server. There are different ways to do so: for example we can use udev rules, or ASMLib plus the multipath daemon. Both make the disks persistent across system reboots.
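As a taste of the udev approach, here is a hypothetical sketch that generates a rules file giving the shared partitions a fixed owner, group and mode on every boot. The device names (sdb1, sdc1, sdd1) and the asmadmin group are assumptions, so review and adapt the file before copying it under /etc/udev/rules.d/:

```shell
# Generate the rule file locally first, on purpose, so it can be reviewed
# before being installed under /etc/udev/rules.d/ and udev is reloaded.
RULES=99-oracle-shared-disks.rules
cat > "$RULES" <<'EOF'
KERNEL=="sdb1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sdc1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sdd1", OWNER="grid", GROUP="asmadmin", MODE="0660"
EOF
cat "$RULES"
```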

By the way, since this is only a test environment I won't install and configure the multipath daemon, but don't do the same in a production environment: it's dangerous !!!

To be continued.....

Byeee




Tuesday, 11 November 2014

Installing Oracle 12c Rac on VirtualBox - Part 1

In 2005 I worked as an Oracle DBA at a public department in Rome. While I was working there an important project was approved, which resulted in a new Oracle 10g RAC installation.

Oracle technicians implemented only the production environment, made up of 2 Oracle database nodes.

Since I've always thought that doing stuff is the best way to learn something, I decided to try the installation process out in a test environment.

Let's start installing Oracle 12c RAC on two VirtualBox servers. First of all we have to install a new Linux server, which we can then clone.

The following binary files are necessary :

1) VirtualBox-4.3.16.exe
2) Oracle Linux 6
3) linuxamd64_12c_database.zip
4) linuxamd64_12_grid.zip

Indeed Oracle 12.1.0.2 is certified with Oracle Linux Server 6.

The installation steps of a Linux server aren't the most important thing in this post, so you won't find all the details about them.

By the way don't worry because every now and again I'll go over it.

To be continued...

See you soon...

Bye

Thursday, 16 October 2014

MySQL Replication Master/Slave

I'm sorry I haven't finished the post before now, but I've been snowed under with work :)

Let's get the ball rolling, shall we ?

Since 2010 I've worked with MySQL replication and frankly speaking today I'm still sure and convinced that this is a really handy and smart feature.

Let's go over MySQL replication together, shall we ?

It is made up of at least 2 database servers, one Master and one Slave, and every DML and DDL SQL command executed on the Master node is replicated asynchronously to the Slave.

Using this feature it is possible to :

1) Use the Slave like an "online backup" database and use it just in case of failure of the Master node.

Be careful ! As I said before, replication uses an asynchronous algorithm, so if the Master fails its most recent data may not have reached the Slave.

2) Use the Master node only for DML SQL commands, like for example INSERT, and instead execute all the queries (SELECT) on the Slave node.

According to the MySQL reference manual it is possible to grow the cluster with lots of slave nodes, which ends up creating a scale-out solution.

You are looking forward to hints, aren't you ?

Let's get down to MySQL Replication configuration, shall we ?

First of all, as I said before you need at least 2 MySQL database servers, already configured and started.

Would it help if I linked a previous post to follow if you need to install 2 MySQL servers from scratch ? It may..... :)

http://alessiodba.blogspot.it/2013/11/mysql-56-e-oracle-linux-64.html

The installation process of that previous post has to end with the following replication configuration parameters added to the /etc/my.cnf file.


 mysqlvm01master :

 server-id = 1
 expire_logs_days        = 7
 max_binlog_size         = 512M
 log-slave-updates       = 1
 skip-slave-start


mysqlvm02slave01 :
server-id       = 2
max_binlog_size  = 1024M
expire_logs_days  = 7
log-slave-updates = 1
skip-slave-start
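After restarting both servers it's worth confirming that each node really picked up a distinct server-id, because duplicated ids silently break replication:

```sql
mysql> SHOW VARIABLES LIKE 'server_id';
-- expect 1 on mysqlvm01master and 2 on mysqlvm02slave01
```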



3) Add a new user on the Master and Slave:

GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repl'@'x.x.x.106' IDENTIFIED BY 'xxxxx';

GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repl'@'x.x.x.105' IDENTIFIED BY 'xxxxx';




This new user is going to be used to establish the connection between the Master and the Slave node.


4) Restart the Master and Slave nodes and look for the binary log files:


[mysql@mysqlvm01master]$ ls -l
        drwxr-xr-x. 2 mysql mysql   6 Sep 30 15:32 binlogs
        -rw-rw----. 1 mysql mysql 120 Oct  6 12:00 binlogs.000001
        -rw-rw----. 1 mysql mysql  33 Oct  6 12:00 binlogs.index
         drwxr-xr-x. 2 mysql mysql  78 Oct  3 12:27 innologs




[mysql@mysqlvm02slave]$ ls -l
    drwxr-xr-x. 2 mysql mysql   6 Oct  2 12:29 binlogs
    -rw-rw----. 1 mysql mysql 120 Oct  6 12:06 binlogs.000001
    -rw-rw----. 1 mysql mysql  33 Oct  6 12:06 binlogs.index
    drwxr-xr-x. 2 mysql mysql  78 Oct  6 11:33 innologs  
 


5) On the Master node determine the current binary log file name and position :

mysql> show master status;
+----------------+----------+--------------+------------------+-------------------+
| File           | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+----------------+----------+--------------+------------------+-------------------+
| binlogs.000001 |      120 |              |                  |                   |
+----------------+----------+--------------+------------------+-------------------+



6) On the Slave execute the following command :

mysql> change master to MASTER_HOST='mysqlvm01master',
    -> MASTER_USER='repl',
    -> MASTER_PASSWORD='xxxxxx',
    -> MASTER_LOG_FILE='binlogs.000001',
    -> MASTER_LOG_POS=120;



7) Start the slave process and check the status :

mysql> start slave;
mysql> show slave status\G
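In the show slave status output, these are roughly the first fields to check (annotated here for convenience):

```
Slave_IO_Running: Yes       -- the I/O thread is pulling binlog events from the Master
Slave_SQL_Running: Yes      -- the SQL thread is applying them on the Slave
Seconds_Behind_Master: 0    -- replication lag; NULL means one of the threads is stopped
Last_Error:                 -- empty unless the SQL thread hit a problem
```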

That's all, our Master/Slave replication is now ready to work !

I could say lots of other stuff about this feature, but for now that's all !






See you soon 

Bye


Wednesday, 1 October 2014

New Opportunity.....

Every day we have to make lots of decisions about our future. We choose for example what to eat and which roads to take to get home.

It happens automatically and we are used to doing so immediately.

Every now and again, instead, we have to make a decision about a new job offer, and what's more we have to do so within a short period of time.

That's really hard and difficult stuff !

Lots of thoughts immediately crop up, like for example :

  • Am I making the right decision ?
  • Will I be able to fit in with my new co-workers ?

And so on ..... Humankind is usually worried about unknown stuff. This is a really challenging decision.

Many centuries ago in ancient Greece, Athenians used to ask the Oracles for advice about what was going to happen in the future. For example Themistocles got the right advice from the Oracle of Delphi, and using this advice he was able to win against the Persian Empire in the naval battle near the island of Salamis.

Nowadays, frankly speaking, we know that he probably interpreted this advice according to his will.

So I believe that nobody can give you the right advice when you have to make a decision.

As Forrest Gump says : "Life is like a box of chocolates.... you never know what you're gonna get..!!"

So keep your fingers crossed !!!


See you soon.

Byeeee

Tuesday, 16 September 2014

When the holidays are over ......

I've been on holiday lately, and when it's over, I mean the day after, I usually feel different types of feelings.

More often than not I get sad and what's more I get annoyed.

Driving and working seem to be difficult and strange too. All my stuff ends up in failure !

After a while, my brain slowly starts up again and all the bad feelings fade away !

New ideas crop up, so keep an eye on the blog !

See you soon.

Bye..............


Thursday, 31 July 2014

How to automatically rebuild partitioned local index

Once in a while somebody gets a new job in a new company, and every now and again it happens in your own office !

This time it was the turn of Sebastano, a co-worker of mine.

He went to England on study leave, and after a while he got a new job there too.

Due to his leaving I now have to do his previous work as well, and rebuilding partitioned local indexes is one of those tasks.

At first glance it seems a boring and time-consuming task, but why on earth can't I automate this procedure ?

Following you'll find a complete PL/SQL Block to do it.

The PL/SQL block is only going to ask for the degree of parallelism and the schema name.

A "rebuild_local_index.sql" file will then be produced by the SQL*Plus client.

You can use it to rebuild all the partitioned local indexes owned by a schema.

PS: The writer doesn't know your environment, so try this procedure out before using it on a production database.

See you in my next blog !

Byeeeee

***************************************************

SET serveroutput on
set lin 300
accept vParamParallelism prompt "Degree: ";
accept vParamUtente prompt "SchemaName: ";

set echo off
set heading off
set feedback off
set term off
set lin 200
set verify off

spool rebuild_local_index.sql
select 'set echo on' from dual;
select 'set feedback on' from dual;

declare
  vParallelismo varchar2(2);
  vUtente       varchar2(50);
  vSQL          varchar2(255);
  vSQL_LOG_OFF  varchar2(255);
begin
  vParallelismo := '&vParamParallelism';
  vUtente       := '&vParamUtente';

  -- Generate a rebuild statement for every non-usable local index partition
  -- owned by the schema (the PMAX partition is skipped).
  for riga_cursore in (
    select 'alter index ' || INDEX_OWNER || '.' || INDEX_NAME ||
           ' rebuild partition ' || PARTITION_NAME ||
           ' nologging parallel ' || vParallelismo || ' ;' testo_sql
      from dba_ind_partitions
     where index_owner = vUtente
       and status != 'USABLE'
       and partition_name != 'PMAX'
     order by index_name)
  loop
    vSQL := riga_cursore.testo_sql;
    dbms_output.put_line(vSQL);
  end loop;

  -- After the rebuild, reset each touched index back to LOGGING and NOPARALLEL.
  for riga_cursore_log_off in (
    select 'alter index ' || vUtente || '.' || index_name ||
           ' logging noparallel;' testo_sql_log_off
      from dba_ind_partitions
     where index_owner = vUtente
       and LOGGING = 'NO'
       and partition_name != 'PMAX'
     group by index_name)
  loop
    vSQL_LOG_OFF := riga_cursore_log_off.testo_sql_log_off;
    dbms_output.put_line(vSQL_LOG_OFF);
  end loop;
end;
/

spool off
set term on



Thursday, 17 July 2014

MySQL Cluster NDB Restore


Every now and again you can get a failure with MySQL Cluster NDB too, and the only way to bring it back online is with the restore command.


Let's look at how, shall we ?

After the failure of the Data Nodes, we look at the log files and this is the error we are going to find :

"Error while reading REDO log. from......"

Something put them out of use !

Following you'll find the commands to execute a complete restore process :


1) If other nodes such as the Management or SQL Nodes are still online, bring them offline.

2) Bring the Management Nodes online : ndb_mgmd --configdir=/home/mysql/mysqlCluster -f /home/mysql/mysqlCluster/config.ini

3) Bring the Data Nodes online : ndbd --initial

Be careful !! The "--initial" option is going to reset and destroy all the information stored in the Data Nodes.
   
4) Leaving the SQL Nodes offline, execute the following commands on both Data Nodes:

On the first Data Node :

ndb_restore -r -m -b 1 -n 3 /home/mysql/mysqlClusterData/BACKUP/BACKUP-1

Backup Id = 1
Nodeid = 3
backup path = /home/mysql/mysqlClusterData/BACKUP/BACKUP-1
Opening file '/home/mysql/mysqlClusterData/BACKUP/BACKUP-1/BACKUP-1.3.ctl'
File size 21388 bytes
Backup version in files: ndb-6.3.11 ndb version: mysql-5.6.15 ndb-7.3.4
Stop GCP of Backup: 670480
Connected to ndb!!
Successfully restored table `test/def/ciccio`
Successfully restored table event REPL$test/ciccio
Create foreign keys
Create foreign keys done
Opening file '/home/mysql/mysqlClusterData/BACKUP/BACKUP-1/BACKUP-1-0.3.Data'
File size 1556696 bytes
_____________________________________________________
Processing data in table: mysql/def/NDB$BLOB_7_3(8) fragment 0
_____________________________________________________
Processing data in table: mysql/def/ndb_index_stat_sample(5) fragment 0
_____________________________________________________
Processing data in table: sys/def/NDB$EVENTS_0(3) fragment 0
_____________________________________________________
Processing data in table: mysql/def/ndb_apply_status(9) fragment 0
_____________________________________________________
Processing data in table: mysql/def/ndb_index_stat_head(4) fragment 0
_____________________________________________________
Processing data in table: test/def/ciccio(10) fragment 0
_____________________________________________________
Processing data in table: sys/def/SYSTAB_0(2) fragment 0
_____________________________________________________
Processing data in table: mysql/def/ndb_schema(7) fragment 0
Opening file '/home/mysql/mysqlClusterData/BACKUP/BACKUP-1/BACKUP-1.3.log'
File size 52 bytes
Restored 54663 tuples and 0 log entries
NDBT_ProgramExit: 0 - OK

   

On the second Data Node :

ndb_restore -r -b 1 -n 4 /home/mysql/mysqlClusterData/BACKUP/BACKUP-1

Backup Id = 1
Nodeid = 4
backup path = /home/mysql/mysqlClusterData/BACKUP/BACKUP-1
Opening file '/home/mysql/mysqlClusterData/BACKUP/BACKUP-1/BACKUP-1.4.ctl'
File size 21388 bytes
Backup version in files: ndb-6.3.11 ndb version: mysql-5.6.15 ndb-7.3.4
Stop GCP of Backup: 670480
Connected to ndb!!
Opening file '/home/mysql/mysqlClusterData/BACKUP/BACKUP-1/BACKUP-1-0.4.Data'
File size 1575096 bytes
_________________________________________________
Processing data in table: mysql/def/NDB$BLOB_7_3(8) fragment 1
_____________________________________________________
Processing data in table: mysql/def/ndb_index_stat_sample(5) fragment 1
_____________________________________________________
Processing data in table: sys/def/NDB$EVENTS_0(3) fragment 1
_____________________________________________________
Processing data in table: mysql/def/ndb_apply_status(9) fragment 1
_____________________________________________________
Processing data in table: mysql/def/ndb_index_stat_head(4) fragment 1
_____________________________________________________
Processing data in table: test/def/ciccio(10) fragment 1
_____________________________________________________
Processing data in table: sys/def/SYSTAB_0(2) fragment 1
_____________________________________________________
Processing data in table: mysql/def/ndb_schema(7) fragment 1
Opening file '/home/mysql/mysqlClusterData/BACKUP/BACKUP-1/BACKUP-1.4.log'
File size 52 bytes
Restored 55338 tuples and 0 log entries
NDBT_ProgramExit: 0 - OK



Following you'll find some handy notes about the options of the ndb_restore command:

-m = restore metadata
-r = restore records
-b = backup ID
-n = node ID

1) On the second Data Node we executed the ndb_restore command without the "-m" option, because the metadata had already been restored by the first execution on the first Data Node.

2) Due to the native table partitioning method, which is used to divide rows between the Data Nodes, we have to execute the ndb_restore command on both Data Nodes.

NB: The writer of this post doesn't know your environment, so it's necessary to try this out in a test cluster first.

This procedure ends up bringing the data back online; the last step is simply to restart both SQL Nodes with the following command: mysqld_safe &


Feel free to leave a comment or ask for further queries regarding the Post.

See you in my next Blog !

Byeeeee


Friday, 11 July 2014

Load Data Infile vs Date Type

Every now and again lots of data rows have to be loaded from a CSV text file, and once in a while it isn't well formatted. In these cases we could modify the data manually, but luckily we can use the options of the 'LOAD DATA INFILE' command instead.

Today I'm going to look into the 'SET' option.

Following you'll find a complete example. 

Let's start creating a table, shall we ?

create table table_ok (`subject` varchar(255), `data_appo` date, `prio` varchar(255), `taskkk` int(11),
`acco` varchar(255), `cont` varchar(255), `case_appo` varchar(255), `assign` varchar(255),  `last_mod` Date);


Here is the data row in detail :

"AAAAAA","6/5/2014","BBBBB","1","CCCCC","DDDDDD","1111111111","EEEEEEE","6/5/2014"

Looking at the second column we can see that the date isn't properly formatted; indeed MySQL stores and displays DATE values using the format "2014-06-05", which is completely different from the one seen above.

Here is the solution in detail:

load data infile '/home/mysql/appo.txt'
into table table_ok
fields terminated by ','
optionally  enclosed by '"'
LINES TERMINATED BY '\n'
(subject, @first_data, prio, taskkk, acco, cont, case_appo, assign, @second_data )
set data_appo = str_to_date(@first_data,'%d/%c/%Y'), last_mod = str_to_date(@second_data,'%d/%c/%Y') ;
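If you're not sure how a format string will read your values, STR_TO_DATE can be tested on its own before loading anything; note that in '%d/%c/%Y' the %d is the day and %c the month, so swap them if your file is month-first:

```sql
mysql> SELECT STR_TO_DATE('6/5/2014', '%d/%c/%Y');  -- reads 6 as the day, 5 as the month
mysql> SELECT STR_TO_DATE('6/5/2014', '%c/%d/%Y');  -- reads 6 as the month, 5 as the day
```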
 

Feel free to leave a comment or ask any further queries regarding the Post.

See you in my next Blog.


Byeeeeee


Monday, 9 June 2014

ROWNUM VS ORDER BY

As everybody knows, if I need to sort the result of a SELECT I have to include the 'ORDER BY' keyword at the end of the query.

Here is an example :

select NUMERI_GRAD from grad where sect_id = 101 and grade_type= 'FI' order by NUMERI_GRAD desc;

Output : 99,92,91,etc.

However, if I need to limit the result of the previous query, how can I get the right rows ?

We can include the ROWNUM keyword, can't we ?

Yes, but pay special attention to the result :

select NUMERI_GRAD from grad where sect_id = 101 and grade_type = 'FI' and rownum <4  order by NUMERI_GRAD desc;

Output :  99,88,77,etc.

We have a problem: the "NUMERI_GRAD" values aren't properly ordered. This is because the 'ROWNUM' predicate is applied before the 'ORDER BY'.

However a handy option to solve the problem could be the following :

select NUMERI_GRAD from (select NUMERI_GRAD from grad where sect_id = 101 and grade_type= 'FI' order by NUMERI_GRAD desc) where rownum < 4;

This query looks like an inline-view, doesn't it ?
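As a side note, if you are already on Oracle 12c the same top-N result can be written directly with the new row-limiting clause, without the inline view:

```sql
select NUMERI_GRAD
from grad
where sect_id = 101 and grade_type = 'FI'
order by NUMERI_GRAD desc
fetch first 3 rows only;
```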

See you in my next blog.

Byeeee



Wednesday, 28 May 2014

Installing Oracle 12c

Recently I've been chatting about lots of stuff with Marco, a co-worker of mine who now works for another company. He has got a new job there and lots of stuff to do, including installing and migrating Oracle database 12c.

He used to work with Oracle databases and is now looking for a complete procedure to do it quickly and well, so I said : would it help if I wrote it up ?

Let's help Marco, shall we ?

1) Requirements :

2 GB of RAM or more
6.5 GB of local disk space
Oracle Linux 6.x 

2) Required RPM modules :

binutils-*         
glibc-*            
libgcc-*           
libstdc++-*        
libaio-*           
libXext-*          
libXtst-*          
libX11-*           
libXau-*           
libxcb-*           
libXi-*            
make-*             
sysstat-*          
compat-libcap1-*   
compat-libstdc++-* 
gcc-*              
gcc-c++-*          
glibc-devel-*      
ksh*               
libstdc++-devel*   
libaio-devel*   



3) Required kernel parameters :
      
#Oracle 12c x SGA (16G)
kernel.shmall = 4194304
#Oracle 12c x Server (64G)
kernel.shmmax = 68719476736
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

4) Set the following shell limits for Oracle and Grid user in the /etc/security/limits.conf file:

oracle              soft    nproc    2047
oracle              hard   nproc   16384
oracle              soft    nofile    1024
oracle              hard   nofile    65536
oracle              soft    stack    10240
oracle              hard   stack    10240
         
grid              soft    nproc    2047
grid              hard   nproc   16384
grid              soft    nofile    1024
grid              hard   nofile    65536
grid              soft    stack    10240
grid              hard   stack    10240


5) Create the Oracle and Grid users :
As root execute:
groupadd -g 1000 oinstall
groupadd -g 1200 dba
useradd -u 1100 -g oinstall -G dba grid
useradd -u 1101 -g oinstall -G dba oracle
mkdir -p /u01/app/grid
chown -R grid:oinstall /u01
mkdir /u01/app/oracle
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/
passwd oracle
passwd grid

6) Add the following lines to the oracle and grid profiles:

--Oracle (.bash_profile)

if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -u 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi

PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin
#Setting $ORACLE_BASE (not $ORACLE_HOME) is recommended, 
#since it eases a few prompts  in the OUI runInstaller tool.
export ORACLE_BASE=/u01/app/oracle
#export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1

--Grid (.bash_profile)

if [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -u 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi

export ORACLE_BASE=/u01/app/grid
#export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH:$HOME/bin

7) Ensure that gcc is installed and check its version (on Oracle Linux 6 it's the 4.4.x series) :
[oracle@oracle12cvm07 ~]$ gcc --version
gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-3)
   

8) Install the ASMLib package :

ASMLib consists of the following packages :

oracleasm-support-2.1.8-1.el6.x86_64.rpm
oracleasmlib-2.0.4-1.el6.x86_64.rpm
kmod-oracleasm-x86_64.rpm

You can find "oracleasm-support-2.1.8-1.el6.x86_64.rpm" on the Oracle Linux 6.x CD.

You can download "oracleasmlib-2.0.4-1.el6.x86_64.rpm" from the Oracle site, while the
"kmod-oracleasm-x86_64.rpm" driver is already included in the UEK kernel.

9) Configuring ASMLib and create ASMLib disks :

As root execute :     /etc/init.d/oracleasm configure

Configuring the Oracle ASM library driver.
  
This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.
 
Default user to own the driver interface []: grid
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:        [  OK  ]

Creating the ASMLib disks :
 
/etc/init.d/oracleasm createdisk ORADATA1 /dev/sdb1
/etc/init.d/oracleasm createdisk ORAFRA1  /dev/sdc1
/etc/init.d/oracleasm createdisk OCRASM  /dev/sdd1
     
Be careful ! The disks /dev/sdb, /dev/sdc and /dev/sdd must be created and partitioned beforehand.


10) Installing the Grid Infrastructure :

unzip linuxamd64_12c_grid_2of2.zip
Execute as grid : ./runInstaller

Follow the graphical interface and complete the installation process.

11) Creating the ASM Disk Group :

As grid execute the following command : asmca

Follow the graphical interface and create at least two ASM diskgroup : ORADATA, ORAFRA

12) Installing the Database binaries :

unzip linuxamd64_12c_database_1of2.zip
unzip linuxamd64_12c_database_2of2.zip

Execute as Oracle : ./runInstaller

Follow the graphical interface and complete the installation process.
13) Creating the 12c database :

Execute as Oracle : dbca


Follow the graphical interface and complete the installation process.


NB: There is only one difference from the previous 11g process: with the 12c version you also have to decide whether to create a container database, but this will be a topic for later on.

See you in my next blog

Byeeeee

Wednesday, 14 May 2014

Quick Installation of MySQL Cluster NDB 7.3 - PART 4

Let's continue with the configuration, shall we ?

STEP 5 :  Configuring the Management node.

We have to create a directory like this : /home/mysql/mysqlCluster and the following config.ini file on both Management Nodes.

SECTION : [ndbd default] : Options affecting ndbd processes on all data nodes:
NoOfReplicas=2    # Number of replicas
DataMemory=80M    # How much memory to allocate for data storage
IndexMemory=18M   # How much memory to allocate for index storage


In this section we can configure the options of the Data Nodes; for example we can decide the amount of memory for data and index storage.

Be careful ! These options are going to affect both Data Nodes.

SECTION : [tcp default] :  # TCP/IP options:
portnumber=2202       # Easy to understand !

SECTION : [ndb_mgmd] : Management process options:
hostname=10.5.250.101           # Hostname or IP address of MGM node
datadir=/home/mysql/mysqlCluster  # Directory for MGM node log files


SECTION : [ndb_mgmd] :  Management process options:
hostname=10.5.250.102           # Hostname or IP address of MGM node
datadir=/home/mysql/mysqlCluster  # Directory for MGM node log files


In these sections we have to declare the IP addresses of both Management Nodes; according to the manual each node has to be specified in its own section.

SECTION :  [ndbd] : Options for data node "A" :
hostname=10.5.250.103                   # Hostname or IP address
datadir=/home/mysql/mysqlClusterData    # Directory for this data node's data files


This section refers to a single Data Node, which we'll refer to as Node "A", and in it we can configure the hostname and data directory.


SECTION : [ndbd] :  Options for data node "B" :
hostname=10.5.250.104                   # Hostname or IP address
datadir=/home/mysql/mysqlClusterData    # Directory for this data node's data files


This section has the same meaning as the previous one.

SECTION [mysqld] :  SQL node "A" options:
hostname=10.5.250.103      # Hostname or IP address

This section refers to a single SQL Node, which we'll call SQL Node "A", and in it we can configure the hostname.


SECTION : [mysqld] : # SQL node "B" options :

hostname=10.5.250.104           # Hostname or IP address 

This section has the same meaning as the previous section.
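Putting all the sections together, the complete config.ini would look roughly like this (hostnames and directories are the ones used in the snippets above):

```
[ndbd default]
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M

[tcp default]
portnumber=2202

[ndb_mgmd]
hostname=10.5.250.101
datadir=/home/mysql/mysqlCluster

[ndb_mgmd]
hostname=10.5.250.102
datadir=/home/mysql/mysqlCluster

[ndbd]
hostname=10.5.250.103
datadir=/home/mysql/mysqlClusterData

[ndbd]
hostname=10.5.250.104
datadir=/home/mysql/mysqlClusterData

[mysqld]
hostname=10.5.250.103

[mysqld]
hostname=10.5.250.104
```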

Now we are ready to start the MySQL Cluster but for today that's all.....

See you in my next blog.

Byeeeeee