In my previous post I started explaining how to quickly install the new MySQL Cluster NDB 7.3, and now I'm ready to continue.
Let's start, shall we?
STEP 2: On every SQL node, execute the following commands as root:
[root@MysqlVm04ClusterData1 mysql]# cd /usr/local/mysql
[root@MysqlVm04ClusterData1 mysql]# scripts/mysql_install_db --user=mysql
NB:
If we want to change the root password of the MySQL database or remove the anonymous users, we can execute the following command:
$BASE_DIR/bin/mysql_secure_installation
The "mysql_install_db" command is necessary to create the system databases on the SQL nodes. Afterwards we can modify the .bash_profile file of the mysql user, adding the following line:
export PATH=/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/local/mysql/bin
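Editing .bash_profile by hand works, but it's easy to script. A minimal, idempotent sketch, assuming the binaries live under /usr/local/mysql/bin as above (the PROFILE variable is a hypothetical override I added for testing; it falls back to the current user's .bash_profile):

```shell
# Append the MySQL bin directory to PATH in .bash_profile, idempotently:
# the export line is added only if it is not already present.
PROFILE="${PROFILE:-$HOME/.bash_profile}"
MYSQL_BIN=/usr/local/mysql/bin

touch "$PROFILE"
if ! grep -q "$MYSQL_BIN" "$PROFILE"; then
    echo "export PATH=\$PATH:$MYSQL_BIN" >> "$PROFILE"
fi
```

Running it twice still leaves a single export line, so it is safe to re-run on every node.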
STEP 3: On all data nodes we only have to make sure the MySQL binary directory is added to the PATH environment variable:
export PATH=/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/local/mysql/bin
To be continued ....
See you in my next Blog.
Byeee
Wednesday, 30 April 2014
Thursday, 17 April 2014
Quick Installation of MySQL Cluster NDB 7.3 - PART 1
Recently I took part in a MySQL event in Rome and, while the new version of MySQL Cluster was being presented by the technician, I got an idea!
Why not use this product in our new production environment? In fact, I'm still working on it, and Cluster NDB could be a wonderful solution for it.
So I decided to download the community edition of MySQL Cluster NDB 7.3 for Linux, and afterwards I got down to the installation process.
Below you'll find all the steps to quickly install a cluster made up of:
- Two Management nodes (Mysqlvm02ClusterMgm & Mysqlvm03ClusterMg),
- Two SQL nodes (Mysqlvm04ClusterData1 & Mysqlvm05ClusterData2),
- Two Data nodes (Mysqlvm04ClusterData1 & Mysqlvm05ClusterData2).
Be careful! The SQL nodes and Data nodes are on the same servers (Mysqlvm04ClusterData1 & Mysqlvm05ClusterData2) only because I haven't got enough memory and disk space on the virtual machine server I use for testing purposes. In your environment, feel free to create as many nodes as you prefer.
STEP 1: On every cluster node, execute the following commands as root:
mkdir -p /home/mysql/mysqlCluster/
mkdir -p /home/mysql/mysqlClusterData/
mkdir -p /home/mysql/download
groupadd -g 3000 mysql
useradd -u 3000 -g 3000 -d /home/mysql -s /bin/bash -c "MySQL DBA" mysql
passwd mysql
chown -R mysql:mysql /home/mysql
cp mysql-cluster-advanced-7.3.4-linux-glibc2.5-x86_64.tar /usr/local
cd /usr/local
tar xvf mysql-cluster-advanced-7.3.4-linux-glibc2.5-x86_64.tar
ln -s /usr/local/mysql-cluster-advanced-7.3.4-linux-glibc2.5-x86_64 /usr/local/mysql
chown -R mysql:mysql mysql
cd mysql
chown -R mysql:mysql *
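Before moving on, it's worth a quick sanity check that STEP 1 left the expected layout behind. A small sketch (the paths are the ones created above; the check function only reports OK or MISSING, so it can't break anything on a node):

```shell
# Report whether each path created in STEP 1 actually exists.
check() {
    if [ -e "$1" ]; then
        echo "OK: $1"
    else
        echo "MISSING: $1"
    fi
}

check /usr/local/mysql
check /home/mysql/mysqlCluster
check /home/mysql/mysqlClusterData
check /home/mysql/download
```

Any MISSING line means one of the earlier commands failed silently and should be repeated before continuing.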
According to the MySQL manual, the data nodes and management nodes don't require the mysqld binary; nevertheless, I preferred to install it anyway, in order to be able to change their roles later on.
STEP 2: On every SQL node, execute the following commands as root:
[root@MysqlVm04ClusterData1 mysql]# cd /usr/local/mysql
[root@MysqlVm04ClusterData1 mysql]# scripts/mysql_install_db --user=mysql
To be continued....
See you in my next blog.
Byeeeee
Tuesday, 8 April 2014
Impdp and Network Link
A handy option of the impdp command is 'network_link', which can be used to import one or more tables from a remote Oracle database into a local Oracle database.
Let's see how, shall we?
- In the local database we create a database link pointing to the remote database: CREATE DATABASE LINK DB_REMOTE CONNECT TO SCOTT IDENTIFIED BY "XXX" USING 'alias_tnsnames';
- In the remote database we grant the 'exp_full_database' privilege to the user that owns the database link we just created: grant exp_full_database to scott;
Finally in the local database we execute the following command:
impdp
userid=system tables=SCOTT.TABLE_NEW network_link=DB_REMOTE
directory=DATA_PUMP_DIR logfile=TABLE_NEW.log
remap_schema=OLD_SCHEMA:NEW_SCHEMA
remap_tablespace=OLD_TBL:NEW_TBL
Import: Release 11.2.0.1.0 - Production on Fri Feb 7 12:33:22 2014
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Starting "SYS"."SYS_IMPORT_TABLE_01":
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 288 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . imported "SCOTT"."TABLE_NEW"  1930168 rows
Job "SYS"."SYS_IMPORT_TABLE_01" completed at 12:33:33
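When the option list grows like this, a Data Pump parameter file keeps the command readable. A sketch reusing the names from the command above (the parfile path is an arbitrary choice of mine, and the final impdp line is only printed here, not executed):

```shell
# Write a parameter file equivalent to the impdp command above,
# then print the invocation that would use it.
PARFILE="${PARFILE:-./table_new.par}"

cat > "$PARFILE" <<'EOF'
tables=SCOTT.TABLE_NEW
network_link=DB_REMOTE
directory=DATA_PUMP_DIR
logfile=TABLE_NEW.log
remap_schema=OLD_SCHEMA:NEW_SCHEMA
remap_tablespace=OLD_TBL:NEW_TBL
EOF

echo "impdp userid=system parfile=$PARFILE"
```

The parameter file can be versioned and reused, which is handy when the same remap has to be repeated on several environments.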
Be careful: we still have to specify the directory option in the impdp command, because a log file is written there even though no dump file is created on the filesystem.
Look out! The author doesn't know your system environment, so I suggest trying this in a test environment first, before making mistakes in a production one.
See you in my next blog !
Byeeeeeee
Tuesday, 1 April 2014
Insert plus Connect By Level
Today I would like to show you a handy command to load tables with fake data.
I'm used to testing every new database server right after installing it, and with this SQL trick I can load a huge amount of data without effort.
Let's see how, shall we?
create table table_insert (
code number,
text varchar2(255));
insert into table_insert select rownum,'ZZZZZZZZZZZZZAAAAAAAAAAAAA' from dual connect by level <=100000;
commit;
This SQL command inserts 100,000 rows, and it's all done in one step.
Look out! You can specify more columns if you want, but make sure there's enough space available on the device, or you'll run into a disk-full problem.
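The same trick scales to less repetitive data: dbms_random can replace the fixed string. A sketch that only generates the SQL*Plus script (the scott/tiger credentials in the printed command are hypothetical placeholders; the script is written to disk, not run here):

```shell
# Generate a SQL*Plus script that fills table_insert with 100000 rows
# of random 26-character strings via CONNECT BY LEVEL.
SQLFILE="${SQLFILE:-./load_fake_data.sql}"
ROWS=100000

cat > "$SQLFILE" <<EOF
insert into table_insert
select rownum, dbms_random.string('U', $ROWS > 0 && echo 26 || echo 26)
from dual
connect by level <= $ROWS;
commit;
EOF

echo "sqlplus scott/tiger @$SQLFILE"
```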
See you in my next blog.
Byeee