Wednesday, November 20, 2013

Installing Python 3 on CentOS/Redhat 5.x / 6.x From Source

The latest release of the Python scripting language is Python 3.4.0. However, due to backwards incompatibilities with Python 2, it has not been adopted for CentOS / Redhat Linux 5. The primary reason is that the release of ‘yum’ (package management) used in EL5 requires Python 2. Because of this, Python cannot be upgraded in place to version 3 without breaking the package manager.

Therefore to use Python 3, it will need to be installed outside of /usr.

Installing

Below is the list of commands (with inline comments) required to compile and install Python 3 from source. The installation will be done into the prefix /opt/python3. This ensures the installation does not conflict with system software installed into /usr.
- - - - - - - - - - - - Step #1 below will install the following - - - - - - - - - - - - - - - - - -
Installed:
  bzip2-devel.i386 0:1.0.3-6.el5_5                bzip2-devel.x86_64 0:1.0.3-6.el5_5                expat-devel.i386 0:1.95.8-11.el5_8
  expat-devel.x86_64 0:1.95.8-11.el5_8            gdbm-devel.i386 0:1.8.0-28.el5                    gdbm-devel.x86_64 0:1.8.0-28.el5
  openssl-devel.i386 0:0.9.8e-22.el5_8.4          openssl-devel.x86_64 0:0.9.8e-22.el5_8.4          readline-devel.i386 0:5.1-3.el5
  readline-devel.x86_64 0:5.1-3.el5               sqlite-devel.i386 0:3.3.6-6                       sqlite-devel.x86_64 0:3.3.6-6

Dependency Installed:
  gdbm.i386 0:1.8.0-28.el5                         keyutils-libs-devel.x86_64 0:1.2-1.el5         krb5-devel.x86_64 0:1.6.1-70.el5
  libselinux-devel.x86_64 0:1.33.4-5.7.el5         libsepol-devel.x86_64 0:1.15.2-3.el5           libtermcap-devel.x86_64 0:2.0.8-46.1
  sqlite.i386 0:3.3.6-6

Dependency Updated:
  expat.i386 0:1.95.8-11.el5_8                     expat.x86_64 0:1.95.8-11.el5_8             gdbm.x86_64 0:1.8.0-28.el5
  krb5-libs.i386 0:1.6.1-70.el5                    krb5-libs.x86_64 0:1.6.1-70.el5            krb5-workstation.x86_64 0:1.6.1-70.el5
  libselinux.i386 0:1.33.4-5.7.el5                 libselinux.x86_64 0:1.33.4-5.7.el5         libselinux-python.x86_64 0:1.33.4-5.7.el5
  libselinux-utils.x86_64 0:1.33.4-5.7.el5         libsepol.i386 0:1.15.2-3.el5               libsepol.x86_64 0:1.15.2-3.el5
  openssl.i686 0:0.9.8e-22.el5_8.4                 openssl.x86_64 0:0.9.8e-22.el5_8.4         sqlite.x86_64 0:3.3.6-6

Complete!
- - - - - - - - - - - - If the packages from step #1 are already installed, we will see the following instead - - - - - - - - - - - - - -
[root@bhmed-dt-2q ~]# yum install openssl-devel bzip2-devel expat-devel gdbm-devel readline-devel sqlite-devel
Loaded plugins: logchanges, security
Setting up Install Process
Package openssl-devel-0.9.8e-22.el5_8.4.x86_64 already installed and latest version
Package openssl-devel-0.9.8e-22.el5_8.4.i386 already installed and latest version
Package bzip2-devel-1.0.3-6.el5_5.x86_64 already installed and latest version
Package bzip2-devel-1.0.3-6.el5_5.i386 already installed and latest version
Package expat-devel-1.95.8-11.el5_8.x86_64 already installed and latest version
Package expat-devel-1.95.8-11.el5_8.i386 already installed and latest version
Package gdbm-devel-1.8.0-28.el5.x86_64 already installed and latest version
Package gdbm-devel-1.8.0-28.el5.i386 already installed and latest version
Package readline-devel-5.1-3.el5.x86_64 already installed and latest version
Package readline-devel-5.1-3.el5.i386 already installed and latest version
Package sqlite-devel-3.3.6-6.x86_64 already installed and latest version
Package sqlite-devel-3.3.6-6.i386 already installed and latest version
Nothing to do
[root@bhmed-dt-2q ~]#
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
  1. # Install required build dependencies
    1. $ yum install openssl-devel bzip2-devel expat-devel gdbm-devel readline-devel sqlite-devel
  2. # Fetch and extract source. Please refer to http://www.python.org/download/releases to ensure the latest source is used.
    1. $ wget (use either archive)
      1. https://www.python.org/ftp/python/3.4.0/Python-3.4.0.tgz --no-check-certificate
      2. https://www.python.org/ftp/python/3.4.0/Python-3.4.0.tar.xz --no-check-certificate
    2. $ tar -xzf Python-3.4.0.tgz
    3. $ cd Python-3.4.0
  3. # Configure the build with a prefix (install dir) of /opt/python3, compile, and install.
    1. $ ./configure --prefix=/opt/python3
    2. $ make
    3. $ sudo make install
  4. # Python 3 will now be installed to /opt/python3.
    1. $ /opt/python3/bin/python3 -V
      1. Output: Python 3.4.0
  5. Ensure your Python 3 scripts and applications reference the correct interpreter.
    1. Use the shebang #!/opt/python3/bin/python3. With this in place the work here is generally complete; python3 can be accessed using this path.
  6. Now make sure your new Python is on your PATH.
    1. $ which python: if this returns a different version of Python (likely your system default), you will want to add your new Python 3.4 path; alternatively, running ./python inside the Python 3.4 directory should start your new 3.4 interpreter.
      1. echo $PATH
      2. export PATH=/YOUR/PYTHON/3.4_HOMEDIR/PATH:$PATH
      3. echo $PATH (this export is temporary; add it to your shell config files to make it permanent). Note that the new path is added to the front so it gets picked up before the system default.
  7. Now let's install some packages:
    1. # Create a virtual env, so this installation will not clash with the system default Python and you are not using the wrong pip
      1. $ pyvenv-3.4 py3.4env
    2. # Activate it:
      1. $ source py3.4env/bin/activate
    3. # This should point to the pip in your new Python 3.4 home
      1. (py3.4env) $ which pip
    4. # This checks that your env is clean; it should return no output
      1. (py3.4env) $ pip freeze
      2. (py3.4env) $ pip install numpy
      3. (py3.4env) $ pip install pytz
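The PATH change in step 6 can be sketched as a short script (a minimal example; /opt/python3 is the prefix used in this guide, adjust to your install location):

```shell
#!/bin/sh
# Prepend the new interpreter's bin directory so it is found before the
# system Python. /opt/python3 is the prefix used in this guide.
PY_PREFIX=/opt/python3
PATH="$PY_PREFIX/bin:$PATH"
export PATH
# The first PATH entry now wins when the shell resolves python3:
echo "$PATH" | cut -d: -f1
```

Because the new directory is prepended, it is searched before the system default; appending it instead would leave the EL5 system Python winning.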

Friday, October 18, 2013

CENTOS v 6.4, GIT v 1.7.1 and JENKINS v 1.535 CONTINUOUS INTEGRATION

Having tuned our Agile process to release iteratively and often, I decided it was time to spend some time looking at how we could introduce continuous integration into our PHP and JavaScript BDD workflows. Given that Travis isn’t suitable for most of our work (your code must be open source), I chose Jenkins as our CI server and was able to get up and running fairly quickly. I found a few resources that covered integrating Git with Jenkins, but I ended up doing a bit of digging myself, so I thought I’d quickly share the steps I followed.

Install Jenkins and Git

I provisioned a fresh CENTOS v 6.4 box, which meant I was able to follow the official Jenkins docs without any problems. If you’re using anything other than Ubuntu/Debian or Red Hat you may need to look elsewhere. The installation instructions outline simple Apache or Nginx vhost configurations that can be used to serve the Jenkins administration console. You’ll need to install git on the same box so that Jenkins can eventually pull down, or commit to, your git repositories. To install Git I simply ran $ sudo apt-get install git (Debian) or $ yum install git (CentOS).

Install the Git Plugin

Once you can access your Jenkins console, go to `Manage Jenkins -> Manage Plugins` from the home screen.
Open the ‘Available’ tab and find the plugin entitled Git Plugin. There is a filter box, but it didn’t work particularly well for me, so I ended up using Find in Chrome.

Create ssh key pair

Part of the installation process creates the user `jenkins`, which will invoke all Jenkins processes, including all git commands. Therefore you’ll need to provision a keypair for this user and then add the public key to your git repo. There’s a bit of a trick to doing the former, which I’ll cover now:

  1. Log in to your box and switch to the Jenkins user. The installation process doesn’t create a password, so you’ll need root/sudo permissions to do this. Run the command sudo su - jenkins. The ‘-’ specifies a login shell, and will switch you to jenkins’ home directory (for me this was /var/lib/jenkins).
  2. Create a .ssh directory in the jenkins home directory.
  3. Create the public/private key pair. There are many tutorials which cover using the ssh-keygen command to do this. The most important thing is not to set a passphrase; otherwise the jenkins user will not be able to connect to the git repo in an automated way.
  4. Add the public key to your Git repo. We use bitbucket so this was fairly straightforward for me and I imagine anyone reading this will have performed similar actions for all their devs keys in the past.
  5. Set a git user and email address. This is also mentioned in the git plugin documentation. Run:
     cd /srv/jenkins/jobs/project/workspace
     git config user.email "some@email.com"
     git config user.name "jenkins"
  6. Connect to the Git repo. This is a one-time step which will dismiss the ‘Are you sure you want to connect’ ssh prompt; again, jenkins won’t be able to deal with this interactively. Just run ssh git@your_git_server_url info.
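Steps 1-3 above can be sketched as follows (a non-authoritative sketch: the jenkins home is /var/lib/jenkins on the package install, and a variable is used here so the commands can be dry-run as an unprivileged user):

```shell
#!/bin/sh
# Provision a passwordless SSH keypair for the jenkins user.
# JENKINS_HOME defaults here to a demo directory; on a real box it is
# /var/lib/jenkins and these commands run as the jenkins user.
JENKINS_HOME="${JENKINS_HOME:-$HOME/jenkins-home-demo}"
mkdir -p "$JENKINS_HOME/.ssh"
chmod 700 "$JENKINS_HOME/.ssh"
# -N '' creates the key with no passphrase, so Jenkins can use it
# unattended; -q suppresses the interactive banner.
ssh-keygen -q -t rsa -N '' -f "$JENKINS_HOME/.ssh/id_rsa"
# This public key is what you paste into Bitbucket/GitHub:
cat "$JENKINS_HOME/.ssh/id_rsa.pub"
```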

Create a new Job in Jenkins

The official docs provide a good level of detail on how to configure a basic Jenkins job, so I’d recommend following them here. The git-plugin docs also provide some useful info, so there’s not too much more for me to say other than list the steps I followed:

  1. Select Git in Source Code Management.
  2. Enter your git repository URL. This performs an asynchronous check and will give you an error if it can’t connect. If you get an error, double-check you followed the ssh key steps in the previous section.
  3. Select the branch to build. This is branch jenkins will pull from when a build is started so enter whatever is suitable. We are building from develop so I entered this here.

Run a test Build

Once you’ve saved the job it’s worth kicking off a build to check everything’s working OK. You can do this in the main dashboard. Have a look at the latest build, you should see the commit id that was pulled down. If you dig around in the docs you’ll find that the plugin will have fetched your branch (in my case develop) and pulled from the remote repo. The repo is checked out into the job’s `workspace` directory and during the build the Jenkins user is cd’d into this directory. By adding your makefiles and build scripts into your repository it’s then a straightforward case of configuring the Jenkins job to execute these upon build. Again, this is covered in good detail in the docs.

Post Receive Hook

Finally, you’ll need to set up your Git repository to initiate a build each time you push to it. GitHub and Bitbucket both have hooks ready to be used, but if you’re self-hosted, a good place to start is the plugin documentation, which specifies the HTTP endpoint that can be used to trigger the job.


Wednesday, October 2, 2013

Sealing Packages within a JAR File

Packages within JAR files can be optionally sealed, which means that all classes defined in that package must be archived in the same JAR file. You might want to seal a package, for example, to ensure version consistency among the classes in your software.
You seal a package in a JAR file by adding the Sealed header in the manifest, which has the general form:
Name: myCompany/myPackage/
Sealed: true
The value myCompany/myPackage/ is the name of the package to seal.
Note that the package name must end with a "/".

An Example

We want to seal two packages firstPackage and secondPackage in the JAR file MyJar.jar.
We first create a text file named Manifest.txt with the following contents:
Name: myCompany/firstPackage/
Sealed: true

Name: myCompany/secondPackage/
Sealed: true

Warning: The text file must end with a new line or carriage return. The last line will not be parsed properly if it does not end with a new line or carriage return.

We then create a JAR file named MyJar.jar by entering the following command:
jar cfm MyJar.jar Manifest.txt myCompany/firstPackage/*.class myCompany/secondPackage/*.class
This creates the JAR file with a manifest with the following contents:
Manifest-Version: 1.0
Created-By: 1.7.0_06 (Oracle Corporation)

Name: myCompany/firstPackage/
Sealed: true

Name: myCompany/secondPackage/
Sealed: true

Sealing JAR Files

If you want to guarantee that all classes in a package come from the same code source, use JAR sealing. A sealed JAR specifies that all packages defined by that JAR are sealed unless overridden on a per-package basis.
To seal a JAR file, use the Sealed manifest header with the value true. For example,
Sealed: true
specifies that all packages in this archive are sealed unless explicitly overridden for particular packages with the Sealed attribute in a manifest entry.
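For example, to seal every package in the archive while leaving one package unsealed, the manifest could contain the following (myCompany/openPackage/ is a hypothetical package name used for illustration):

```
Sealed: true

Name: myCompany/openPackage/
Sealed: false
```

The archive-wide Sealed header sets the default, and the per-package entry overrides it for that one package.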

Friday, September 27, 2013

Viewing the Contents of a JAR File

The basic format of the command for viewing the contents of a JAR file is:
jar tf jar-file
Let's look at the options and argument used in this command:
  • The t option indicates that you want to view the table of contents of the JAR file.
  • The f option indicates that the JAR file whose contents are to be viewed is specified on the command line.
  • The jar-file argument is the path and name of the JAR file whose contents you want to view.
The t and f options can appear in either order, but there must not be any space between them.
This command will display the JAR file's table of contents to stdout.
You can optionally add the verbose option, v, to produce additional information about file sizes and last-modified dates in the output.

An Example

Let's use the Jar tool to list the contents of the TicTacToe.jar file we created in the previous section:
jar tf TicTacToe.jar
This command displays the contents of the JAR file to stdout:
META-INF/MANIFEST.MF
TicTacToe.class
audio/
audio/beep.au
audio/ding.au
audio/return.au
audio/yahoo1.au
audio/yahoo2.au
images/
images/cross.gif
images/not.gif
The JAR file contains the TicTacToe class file and the audio and images directories, as expected. The output also shows that the JAR file contains a default manifest file, META-INF/MANIFEST.MF, which was automatically placed in the archive by the JAR tool.
All pathnames are displayed with forward slashes, regardless of the platform or operating system you're using. Paths in JAR files are always relative; you'll never see a path beginning with C:, for example.
The JAR tool will display additional information if you use the v option:
jar tvf TicTacToe.jar
For example, the verbose output for the TicTacToe JAR file would look similar to this:
    68 Thu Nov 01 20:00:40 PDT 2012 META-INF/MANIFEST.MF
   553 Mon Sep 24 21:57:48 PDT 2012 TicTacToe.class
  3708 Mon Sep 24 21:57:48 PDT 2012 TicTacToe.class
  9584 Mon Sep 24 21:57:48 PDT 2012 TicTacToe.java
     0 Mon Sep 24 21:57:48 PDT 2012 audio/
  4032 Mon Sep 24 21:57:48 PDT 2012 audio/beep.au
  2566 Mon Sep 24 21:57:48 PDT 2012 audio/ding.au
  6558 Mon Sep 24 21:57:48 PDT 2012 audio/return.au
  7834 Mon Sep 24 21:57:48 PDT 2012 audio/yahoo1.au
  7463 Mon Sep 24 21:57:48 PDT 2012 audio/yahoo2.au
   424 Mon Sep 24 21:57:48 PDT 2012 example1.html
     0 Mon Sep 24 21:57:48 PDT 2012 images/
   157 Mon Sep 24 21:57:48 PDT 2012 images/cross.gif
   158 Mon Sep 24 21:57:48 PDT 2012 images/not.gif

Sunday, June 23, 2013

Automatic Repository Deployment and Promotion Process OBIEE 11g

A typical deployment process for an OBIEE 11g repository in most production environments resembles the following:

The development zone represents a series of developer machines modifying a repository either via:
  • Multi User Development Environment (MUDE) configuration
  • Local development machines, where each developer migrates their changes to a centralized OBIEE 11g dev/unit test box via a patch-merge process.

We're going to focus on the 'Production Deployment Path' that takes the repository from the Dev/Unit test machine and migrates it through the deployment path from Assembly Test through Production.

This production path is critical because it's at this point where the repository leaves the 'safe haven' of the developer environment and goes through various stages of testing, usually performed by another team. Each testing team will have their own BI Server and database that the repository must connect to for testing.

Usually, the repository remains the same through all environments except for:
  • Connection Pools
  • Environment specific server variables
We're going to perform the assembly test to production deployment process in a completely automated fashion by:
  1. Generating an XUDML file that contains connection pool information.
  2. Generating a new system test repository by applying the System test XUDML to the assembly test repository.
  3. Using WLST to upload the RPD to the specified environment.

Step 1: Generate the XUDML file for the assembly, system, staging and production environments:
We're going to create an eXtensible Universal Database Markup Language (XUDML for short) file that contains connection pools specific to each environment. This file is generated by biserverxmlgen and is essentially the repository exported to XML. The way to accomplish this in OBIEE 10g was UDML, which has since been deprecated and is not supported by Oracle - see Oracle Note 1068266.1.

Step 1.1 - Set Variables via bi-init.sh
 . /export/obiee/11g/instances/instance1/bifoundation/OracleBIApplication/coreapplication/setup/bi-init.sh

Note the space between the '.' and the '/'. The leading '.' sources bi-init.sh in your current shell, which is required for the environment variables it sets to persist in your session.
 
Step 1.2 - Generate XUDML file
Navigate to /export/obiee/11g/Oracle_BI1/bifoundation/server/bin/ and run:

biserverxmlgen -R C:\testconnpool\base.rpd -P Admin123 -O c:\testconnpool\test.xml -8

Replace base.rpd with your source RPD - i.e. if you want to generate connection pool information for assembly test, base.rpd should represent your assembly test repository.
  • -O generates the output XML file
  • -8 represents the UTF-8 formatting for the XML file
  • -P represents the password of the base repository

 
If you fail to set your session variables, you will encounter the following error:
"libnqsclusterapi64.so: open failed: No Such file or directory"
 

If you are successful, the XML file will be generated at the location specified with -O.

Step 1.3 Remove inapplicable entries
For connection pool migrations, your script should only include:

<?xml version="1.0" encoding="UTF-8" ?>
<Repository xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<DECLARE>
<ConnectionPool ......>
</ConnectionPool>
</DECLARE>
</Repository>

 
You will only need to re-generate this file if you change your connection pool information. This XUDML file will be used to update connection pools of your target environment.

Step 2: Apply XUDML file to base repository
Let's say you have an assembly test repository and a system test XUDML file. The biserverxmlexec script will take your assembly test repository and system test XUDML file and generate a 'system test repository', using the following command located in /export/obiee/11g/Oracle_BI1/bifoundation/server/bin/

biserverxmlexec -I input_file_pathname [-B base_repository_pathname] [-P password] -O output_repository_pathname

Where:
  • input_file_pathname is the name and location of the XML input file you want to execute.
  • base_repository_pathname (optional) is the existing repository file you want to modify using the XML input file. Do not specify this argument if you want to generate a new repository file from the XML input file.
  • password is the repository password. If you specified a base repository, enter the repository password for the base repository; if you did not, enter the password you want to use for the new repository. The password argument is optional: if you do not provide it, you are prompted to enter a password when you run the command. To minimize the risk of security breaches, Oracle recommends that you do not provide a password argument either on the command line or in scripts. Note that the password argument is supported for backward compatibility only, and will be removed in a future release.
  • output_repository_pathname is the name and location of the RPD output file you want to generate.

Example:
biserverxmlexec -I testxudml.txt -B rp1.rpd -O rp2.rpd
Give password: my_rpd_password

You now have a system test repository that you can upload to your applicable environment.
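Steps 1 and 2 can be wrapped in a small shell script, sketched below. This is only an illustration: the file names and password are placeholders, and the run() helper echoes the command when the Oracle binary is not on the PATH, so the sketch can be dry-run outside a BI server.

```shell
#!/bin/sh
# Dry-run sketch of the biserverxmlgen / biserverxmlexec flow above.
BASE_RPD=base.rpd       # assembly test repository (placeholder)
ENV_XUDML=systest.xml   # edited connection-pool XUDML for the target env
OUT_RPD=systest.rpd     # generated system test repository
RPD_PWD=Admin123        # example only; the interactive prompt is preferred

# Run the command if the tool exists, otherwise just echo it.
run() {
    if command -v "$1" >/dev/null 2>&1; then "$@"; else echo "DRY-RUN: $*"; fi
}

# Step 1: export the source repository to UTF-8 XML
run biserverxmlgen -R "$BASE_RPD" -P "$RPD_PWD" -O full.xml -8
# Step 2: apply the connection-pool XUDML to the base repository
run biserverxmlexec -I "$ENV_XUDML" -B "$BASE_RPD" -P "$RPD_PWD" -O "$OUT_RPD"
```

As noted above, Oracle recommends omitting -P and entering the password at the prompt rather than embedding it in scripts.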

Step 3: Upload Repository to BI Server via WLST
Many web sites show how to upload the repository via the FMW Enterprise Manager, but that is generally a lot slower and less efficient than scripting it.

The uploadRPD.py script below performs five tasks:
  1. Connects to WLST
  2. Locks the System
  3. Uploads the RPD
  4. Commits Changes
  5. Restarts BI Services
Copy the code below and save it as a python script (.py)


# uploadRPD.py - fill in the connection details and RPD path below
import jarray
import java.lang

user = ''
password = ''
host = ''
port = ''
rpdpath = '/path/path2/repository.rpd'
rpdPassword = ''

# Connect to the Admin Server, e.g. connect('weblogic', 'welcome1', 't3://host:7001')
connect(user, password, 't3://' + host + ':' + port)

# Be sure we are in the root
cd('../..')

print(host + ": Connecting to Domain ...")
try:
    domainCustom()
except:
    print(host + ": Already in domainCustom")

print(host + ": Go to biee admin domain")
cd('oracle.biee.admin')

# Go to the server configuration
print(host + ": Go to BIDomain.BIInstance.ServerConfiguration MBean")
cd('oracle.biee.admin:type=BIDomain,group=Service')
biinstances = get('BIInstances')
biinstance = biinstances[0]

# Lock the system
print(host + ": Calling lock ...")
cd('..')
cd('oracle.biee.admin:type=BIDomain,group=Service')
objs = jarray.array([], java.lang.Object)
strs = jarray.array([], java.lang.String)
try:
    invoke('lock', objs, strs)
except:
    print(host + ": System already locked")

cd('..')

# Upload the RPD
cd(biinstance.toString())
print(host + ": Uploading RPD")
biserver = get('ServerConfiguration')
cd('..')
cd(biserver.toString())
ls()
argtypes = jarray.array(['java.lang.String', 'java.lang.String'], java.lang.String)
argvalues = jarray.array([rpdpath, rpdPassword], java.lang.Object)
invoke('uploadRepository', argvalues, argtypes)

# Commit the changes
print(host + ": Committing Changes")
cd('..')
cd('oracle.biee.admin:type=BIDomain,group=Service')
objs = jarray.array([], java.lang.Object)
strs = jarray.array([], java.lang.String)
invoke('commit', objs, strs)

# Restart the system
print(host + ": Restarting OBIEE processes")
cd('../..')
cd('oracle.biee.admin')
cd('oracle.biee.admin:type=BIDomain.BIInstance,biInstance=coreapplication,group=Service')

print(host + ": Stopping the BI instance")
params = jarray.array([], java.lang.Object)
signs = jarray.array([], java.lang.String)
invoke('stop', params, signs)

BIServiceStatus = get('ServiceStatus')
print(host + ": BI ServiceStatus " + BIServiceStatus)

print(host + ": Starting the BI instance")
params = jarray.array([], java.lang.Object)
signs = jarray.array([], java.lang.String)
invoke('start', params, signs)

BIServerStatus = get('ServiceStatus')
print(host + ": BI ServerStatus " + BIServerStatus)

The aforementioned code works on scaled-out (clustered) environments, since there is only one active admin server. The code connects to the active admin server on your first node, and WLST propagates the changes to each node. You can validate this by navigating to the local repository folder of each node.

To run the script, load wlst located at :
 /export/obiee/11g/oracle_common/common/bin/wlst.sh

and perform the execfile command as follows:
execfile('/path/path1/path2/uploadRPD.py')

In conclusion, the entire repository deployment process can be executed by the following two scripts:
biserverxmlexec (provided by Oracle)
uploadRPD.py (see above)

Reference: Fusion Middleware Integrator's Guide for Oracle Business Intelligence Enterprise Edition

Saturday, June 1, 2013

GlassFish

GlassFish v4 requires Java 7:
  1. glassfish/config/asenv.bat [set AS_JAVA=C:\YYY\java\jdk1.7.0_51]
  2. glassfish/config/asenv.conf [AS_JAVA=C:\YYY\java\jdk1.7.0_51]
  3. An alternative to setting the AS_JAVA variable is to set JAVA_HOME environment variable to the jdk
  4. set JAVA_HOME=C:\YYY\java\jdk1.7.0_51
Restart the GlassFish domain server and any other server instances you might have:
  1. C:\CCC\netbeans\glassfish4\bin\asadmin stop-local-instance
  2. C:\CCC\netbeans\glassfish4\bin\asadmin stop-domain domain1
  3. C:\CCC\netbeans\glassfish4\bin\asadmin start-domain domain1
  4. C:\CCC\netbeans\glassfish4\bin\asadmin start-local-instance 
  5. asadmin start-domain [--verbose]
C:\CCC\netbeans\glassfish4\bin>asadmin version
asadmin enable-secure-admin
asadmin restart-domain
asadmin undeploy hello
# To save typing the admin user name and password every time you deploy or undeploy an application, create a password file pwdfile with the content:
AS_ADMIN_PASSWORD=your_admin_password
--
Add --passwordfile to the command: [asadmin --passwordfile pwdfile deploy /home/ee/glassfish/sample/hello.war]

as-install/bin/asadmin list-domains [List all domains]

Before you start the database, at least one domain must be running:
$ asadmin start-database
$ as-install/bin/asadmin start-database --dbhome directory-path
$ as-install/bin/asadmin start-database --dbhome as-install-parent/javadb [to start javadb]
$ as-install/bin/asadmin stop-database [stop database]


$ as-install/bin/asadmin deploy war-name [as-install/bin/asadmin deploy sample-dir/hello.war]
http://localhost:8080/hello
$ as-install/bin/asadmin list-applications
$ as-install/bin/asadmin undeploy war-name [as-install/bin/asadmin undeploy hello]

Automatically Deploy: copy application to as-install/domains/domain1/autodeploy
[Unix:] cp sample-dir/hello.war as-install/domains/domain-dir/autodeploy
[windows: ] copy sample-dir\hello.war as-install\domains\domain-dir\autodeploy
Undeploy:
$ cd as-install/domains/domain-dir/autodeploy
$ rm hello.war [unix] or $ del hello.war [windows]
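The autodeploy and undeploy steps above can be sketched as follows (a Unix sketch with placeholder paths; AS_INSTALL stands in for your GlassFish install root):

```shell
#!/bin/sh
# Autodeploy sketch: copying a WAR into the autodeploy directory is the
# whole deployment step, and removing it undeploys the application.
AS_INSTALL="${AS_INSTALL:-$HOME/glassfish4-demo}"
AUTODEPLOY="$AS_INSTALL/domains/domain1/autodeploy"
mkdir -p "$AUTODEPLOY"
: > hello.war                 # stand-in for a real WAR file
cp hello.war "$AUTODEPLOY/"   # deploy: the running server picks it up
ls "$AUTODEPLOY"
rm "$AUTODEPLOY/hello.war"    # undeploy
```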

admin default password : adminadmin


Changing Default GlassFish v3 Prelude Port Numbers 4848, 8080, and 8181
When you install GlassFish, it gives you default port numbers of:
  1. 4848 (for administration)
  2. 8080 (for the HTTP listener)
  3. 8181 (for the HTTPS listener)
Here are some examples that work in GlassFish v3 Prelude:
  1. To change the HTTP port to 10080: asadmin set server.http-service.http-listener.http-listener-1.port=10080
  2. To change the HTTPS port to 10443: asadmin set server.http-service.http-listener.http-listener-2.port=10443
  3. To change the administration server port to 14848: asadmin set server.http-service.http-listener.admin-listener.port=14848
  4. It's handy to know you can grep for server properties in GlassFish v3 Prelude as follows:
    1. asadmin get server | grep listener
  5. In GlassFish v3 Prelude, you can set port numbers for administration and the HTTP listener in the installer - but not for the HTTPS listener. You might find yourself needing to explicitly specify the administration port in your asadmin command. For example:
    1. $ asadmin set --port 14848 server.http-service.http-listener.http-listener-2.port=10443
    2. For GlassFish v2, use the asadmin get command as described here.

Managing Oracle 12c CDB’s and PDB’s – Cloning PDB’s – Part C


Cloning PDB’s

Cloning a database used to be a difficult, if not hectic, process. Before 12c the most efficient method to create a clone required RMAN, and even that method involves quite a number of steps. In 12c the effort required to clone a CDB is much the same as before, but not for PDBs: cloning a PDB is as easy as executing three or four simple commands. As PDBs hold the actual user data, this is a groundbreaking feature in many deployment scenarios.

In this article we will look at how to clone a PDB within the same CDB container, meaning both the target and the source PDBs will be in the same CDB. The same process can be used to clone to a different CDB with a very minor modification.

Preparing for the Clone process

The first step is to prepare the environment for cloning. This includes opening the source PDB in read-only mode and creating the directories where the target database files will be placed. We have CDB12c as the CDB container database and PDB1 as the source PDB for the clone. The new target database will be called PDB1_CLONE.
Log into the CDB and execute the following command.
SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB1 READ WRITE
SQL> alter pluggable database pdb1 close immediate;
Pluggable database altered.
SQL> alter pluggable database pdb1 open read only;
Pluggable database altered.
SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB1 READ ONLY

Next we will create the directory where data files will be stored.
$ mkdir -p /u02/app/oracle/oradata/cdb12c/pdb1_clone

After that we set the directory as the default file creation location for the entire instance. This is required so that files are created where we want them to be.
SQL> alter system set db_create_file_dest='/u02/app/oracle/oradata/cdb12c/pdb1_clone';
System altered.

Cloning the PDB

Once we have the environment ready, we can start the clone process. The clone itself is just one command, which goes like this:
SQL> create pluggable database pdb1_clone from pdb1;
Pluggable database created.
That’s it. The PDB has been cloned. Depending on the size of the database, the above command may take some time. If your source PDB is in a different CDB than your target, you need a database link from target to source; the above command will work with just the addition of a DB link reference at the end.

You can now open the source and target databases and confirm that cloning has been completed successfully.
SQL> alter pluggable database pdb1_clone open;
Pluggable database altered.
SQL> alter pluggable database pdb1 close immediate;
Pluggable database altered.
SQL> alter pluggable database pdb1 open;
Pluggable database altered.
SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB1 READ WRITE
PDB1_CLONE READ WRITE

As you can see, both the source and target databases are up and running. You can now log into your newly created database. The easiest way, if you are connected to the CDB as SYS, is to change the container as shown below.
SQL> alter session set container=pdb1_clone;
Session altered.
SQL> select file_name from dba_data_files;
FILE_NAME
----------------------------------------------------------------------------------------------------
/u02/app/oracle/oradata/cdb12c/pdb1_clone/CDB12c/
E070CD4A69AC0893E0450000FF00020F/datafile/o1_mf_system_8x2tx8l9_.dbf
/u02/app/oracle/oradata/cdb12c/pdb1_clone/CDB12c/
E070CD4A69AC0893E0450000FF00020F/datafile/o1_mf_sysaux_8x2tx8m5_.dbf

As SYS is a common user present in almost every PDB, you can easily switch between different containers. In Oracle Database 12c every database, whether CDB or PDB, is considered a container.

You can switch back to the CDB container in the same way. The result of the same SELECT statement will be different, confirming that you are now connected to a different database.
SQL> alter session set container=CDB$ROOT;
Session altered.

SQL> select file_name from dba_data_files;
FILE_NAME
----------------------------------------------------------------------------------------------------
/u01/app/oracle/oradata/cdb12c/system01.dbf
/u01/app/oracle/oradata/cdb12c/sysaux01.dbf
/u01/app/oracle/oradata/cdb12c/undotbs01.dbf
/u01/app/oracle/oradata/cdb12c/users01.dbf
/u01/app/oracle/oradata/cdb12c/cdata.dbf

Migrating Non-CDB database as PDB via Cloning

You can migrate any non-CDB database created in either 11.2.0.3 or 12.1 into your container database as a PDB. If the source database is running 11g, you will first have to upgrade it to 12.1. Once upgraded, you can follow the process outlined here to migrate the non-CDB database into your main container database as a PDB.

Suppose we have a non-CDB database named ORCL running on 12.1. The first step in migrating it as a PDB is to generate a manifest file. Shut down the database and restart it in read-only mode.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Total System Global Area 626327552 bytes
Fixed Size 2291472 bytes
Variable Size 276826352 bytes
Database Buffers 343932928 bytes
Redo Buffers 3276800 bytes
Database mounted.
SQL> alter database open read only;
Database altered.

Once the database has been started in read-only mode, run the following procedure to generate the manifest file. This file will be used to plug the database into our CDB as a PDB.
SQL> exec dbms_pdb.describe (pdb_descr_file=>'/u01/app/oracle/noncdb_orcl.xml');
PL/SQL procedure successfully completed.

Shut down the database so that there is no data inconsistency during cloning.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.


Now log into your CDB.
sqlplus sys/oracle as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Fri Jul 26 17:12:03 2013
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> show con_name
CON_NAME
------------------------------
CDB$ROOT
SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB12C READ WRITE

As you can see, apart from the seed we currently have only one PDB.

Because our ORCL database was using Oracle Managed Files, we have to set the db_create_file_dest parameter to make sure the files are copied where we want them.
SQL> alter system set db_create_file_dest='/u02/app/oracle/oradata/cdb12c/noncdb_orcl';
System altered.

As you might have guessed, the new PDB will be named noncdb_orcl. We are now ready to plug the non-CDB database into our CDB.
SQL> create pluggable database noncdb_orcl
2 using '/u01/app/oracle/noncdb_orcl.xml' copy;
Pluggable database created.

The database has been created and is almost ready to use, but one last step remains. Although optional, it is highly recommended: we have to run an Oracle-supplied SQL script to make sure the migration went smoothly. This script is also required if you plan to upgrade your CDB in the future.

To run the script we will have to open the newly created PDB.
SQL> alter pluggable database noncdb_orcl open;
Warning: PDB altered with errors.


The database opened with errors. You can ignore this message for now. Next, run the script while logged into the PDB as SYS.
SQL> @?/rdbms/admin/noncdb_to_pdb.sql

The script takes some time and produces lengthy output, but at the end it leaves the database in the state it was in when the script was started. In our case the database was open.

To make sure everything is now OK with the new PDB, you can bounce the database.
SQL> alter pluggable database noncdb_orcl close;
Pluggable database altered.
SQL> alter pluggable database noncdb_orcl open;
Pluggable database altered.
SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB12C READ WRITE
NONCDB_ORCL READ WRITE

Cloning vs Snapshot

When using the cloning method above, the clone requires exactly the same amount of disk space as that consumed by the source database. Fortunately, cloning also supports "SNAPSHOT COPY", which works like the traditional storage snapshot mechanism and greatly reduces the disk space requirement. However, this option is only supported with ACFS and Direct NFS Client storage. You can read more details in the CREATE PLUGGABLE DATABASE documentation.
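A snapshot clone uses the same statement as a full clone, with the SNAPSHOT COPY clause appended. A minimal sketch, assuming the source datafiles live on supported storage; the clone name pdb1_snap is hypothetical:

```sql
-- Clone pdb1 using copy-on-write snapshots instead of full datafile copies.
-- Requires the underlying storage (ACFS or Direct NFS Client) to support snapshots.
-- In 12.1 the source PDB must be open read-only while it is being cloned.
create pluggable database pdb1_snap from pdb1
  snapshot copy;

alter pluggable database pdb1_snap open;
```

Because the snapshot only stores changed blocks, the new PDB initially consumes almost no additional space.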

Sunday, May 26, 2013

Managing Oracle 12c CDB’s and PDB’s – Plugging and Unplugging PDB’s – Part B

One of the main features of Oracle Database 12c is the portable nature of pluggable databases (PDBs). You can easily unplug a PDB from one container database (CDB) and plug it into a different one. This ease of plugging and unplugging PDBs makes the 12c database truly cloud-ready. If you have just downloaded the Oracle Database software, you may want to read the previous article.

Oracle 12c PDB Multitenant Database

Let’s look at how to accomplish this task of plugging and unplugging PDB’s. There are two ways to do this, based essentially on whether or not you want to move the datafiles from one location to another. We will look at both methods here.

Using the NoCopy Method

The first method uses the NOCOPY option, which means you unplug the PDB from one CDB and plug it into another without moving the actual datafiles. Obviously this only works when both CDBs are on the same server.

In our examples here, we will use two CDBs named CDB12c and CDBNEW. The CDB12c container database has one PDB database named PDB3 and the CDBNEW container database has one PDB database named PDB10. The SQL commands below confirm the databases in the current environment.
SQL> select instance_name from v$instance;
INSTANCE_NAME
----------------
cdb12c
SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB3 READ WRITE
SQL> select instance_name from v$instance;
INSTANCE_NAME
----------------
cdbnew
SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB10 READ WRITE

First let’s move PDB10 from CDBNEW to CDB12c. Start by unplugging PDB10 from CDBNEW.
SQL> alter pluggable database pdb10 close immediate;
Pluggable database altered.
SQL> alter pluggable database pdb10 unplug into '/u01/app/oracle/oradata/pdb10_unplug.xml';
Pluggable database altered.
SQL> drop pluggable database pdb10 keep datafiles;
Pluggable database dropped.

Just three commands and the database is unplugged and ready to be moved. The first command closes the database, the second generates an XML manifest file, and the third drops the PDB from the CDB it is attached to. Note the KEEP DATAFILES clause, which makes sure the datafiles are not deleted.

To confirm that PDB10 is no longer part of CDBNEW issue the following command.
SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY

The next step is to plug the database into our second CDB, i.e. CDB12c. Log into CDB12c and run the following command to plug PDB10 into the CDB12c container database.
SQL> create pluggable database pdb10_nocopy using '/u01/app/oracle/oradata/pdb10_unplug.xml'
2 nocopy
3 tempfile reuse;
Pluggable database created.
Note the use of the XML manifest file and the NOCOPY clause, which makes sure the datafiles stay in their current location. The TEMPFILE REUSE clause is required to re-initialize the temporary files.

You can query V$PDBS to confirm that the database has been plugged in.
SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB3 READ WRITE
PDB10_NOCOPY MOUNTED

The newly plugged database is in the mount state. You can now open it for normal use.
SQL> alter pluggable database pdb10_nocopy open;
Pluggable database altered.

Using Copy Method

The second method uses the COPY clause to copy the datafiles from the old location to the new one. Almost all the steps are the same, except for the actual command that plugs the database into the new CDB. In this example we will move PDB3 from CDB12c to CDBNEW. Before unplugging PDB3 from CDB12c, first check the location of its datafiles, so we can verify later that the COPY clause did indeed move them.
SQL> select file_name from cdb_data_files where con_id=3;
FILE_NAME
----------------------------------------------------------------------
/u02/app/oracle/oradata/cdb12c/pdbtest/system01.dbf
/u02/app/oracle/oradata/cdb12c/pdbtest/sysaux01.dbf
(We have used the CDB_DATA_FILES view, which is new in 12c. Views starting with CDB_ exist only in CDBs; they do not exist in PDBs. They show information from all PDBs attached to the CDB as well as from the CDB itself; for example, CDB_DATA_FILES lists every datafile, whether it belongs to the CDB or to one of its PDBs. You can filter the output using the CON_ID column as shown above. DBA_DATA_FILES still exists in both PDBs and CDBs, but only shows the datafiles of the current database.)

Now that we have the current location of the datafiles, let’s move on to the unplugging phase.
SQL> alter pluggable database pdb3 close immediate;
Pluggable database altered.
SQL> alter pluggable database pdb3 unplug into '/u01/app/oracle/oradata/pdb3_unplug.xml';
Pluggable database altered.
SQL> drop pluggable database pdb3 keep datafiles;
Pluggable database dropped.
SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB10_NOCOPY READ WRITE


The unplugging part is almost the same: first we close the database, then create the manifest file, and finally drop the database from the CDB while keeping the datafiles intact. Querying V$PDBS confirms the process.
To plug PDB3 into our other CDB, i.e. CDBNEW, log into CDBNEW and use the following command.
SQL> create pluggable database pdb3_copy using '/u01/app/oracle/oradata/pdb3_unplug.xml'
2 copy
3 tempfile reuse;
Pluggable database created.

SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB3_COPY MOUNTED

SQL> alter pluggable database pdb3_copy open;
Pluggable database altered.

The command to plug it in is similar to the method above, except that this time we used the COPY clause, which moves the files to the default location of the CDBNEW container database. The subsequent commands show that PDB3_COPY has been created in CDBNEW. To verify the new location of the datafiles, use the following command.
SQL> select file_name from cdb_data_files where con_id=3;
FILE_NAME
----------------------------------------------------------------------
/u02/app/oracle/oradata/CDBNEW/E03A6382B68D4162E0450000FF00020F/datafile/
o1_mf_system_8wzvmf7n_.dbf
/u02/app/oracle/oradata/CDBNEW/E03A6382B68D4162E0450000FF00020F/datafile/
o1_mf_sysaux_8wzvmktl_.dbf

As you can see, the datafiles have been moved into the new default location of the CDBNEW container. The names may not be what you expect; that is because we did not provide any naming convention. You can use the FILE_NAME_CONVERT clause to specify the custom location where the datafiles should be copied.
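A sketch of the same plug-in operation using FILE_NAME_CONVERT; the target directory is an assumption chosen for illustration:

```sql
-- Copy the datafiles into a directory of our choosing instead of the
-- CDB's default location; each source path prefix is mapped to a target prefix.
create pluggable database pdb3_copy
  using '/u01/app/oracle/oradata/pdb3_unplug.xml'
  copy
  file_name_convert=('/u02/app/oracle/oradata/cdb12c/pdbtest',
                     '/u02/app/oracle/oradata/CDBNEW/pdb3_copy')
  tempfile reuse;
```

With this clause the copied files keep recognizable names under the target directory instead of the OMF-generated names shown above.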

Using As Clone … Move Method

The last scenario is plugging a database back into the same CDB from which it was unplugged. Suppose we have the following PDBs in our CDB.
SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB1 READ WRITE

Let’s unplug the PDB1 database.
SQL> alter pluggable database pdb1 close immediate;
Pluggable database altered.
SQL> alter pluggable database pdb1 unplug into '/u01/app/oracle/unplug_pdb1.xml';
Pluggable database altered.

The PDB1 database has been unplugged from the CDB container and hence can no longer be used. If you try to open it, you will get an error.
SQL> alter pluggable database pdb1 open;
alter pluggable database pdb1 open
*
ERROR at line 1:
ORA-65086: cannot open/close the pluggable database

What if you want to get it back up and running again? This is where the AS CLONE … MOVE method comes in handy: you can plug the database back in with a single command, giving it a new name.
SQL> create pluggable database pdb1_plug_move
2 as clone using '/u01/app/oracle/unplug_pdb1.xml'
3 move
4 file_name_convert=('pdb1','pdb1_plug_move');
Pluggable database created.
SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB1 MOUNTED
PDB1_PLUG_MOVE MOUNTED

The new database PDB1_PLUG_MOVE has been created as an exact replica of the PDB1 database. You can go ahead and open that database and drop the original PDB1.
SQL> alter pluggable database pdb1_plug_move open;
Pluggable database altered.
SQL> drop pluggable database pdb1 including datafiles;
Pluggable database dropped.
SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB1_PLUG_MOVE READ WRITE


Saturday, May 25, 2013

Managing Oracle 12c CDB and PDB – Part A

While the installation of Oracle Database 12c is more or less similar to that of 11g, the same cannot be said about the actual administration. Managing Oracle 12c PDBs and CDBs is different in many ways, mainly because of the architectural changes introduced in 12c. If you have just downloaded the Oracle Database software, you may want to read the articles I have written previously.

Before getting into the actual administration, let’s briefly review the CDB and PDB concepts, as it is very important to understand the multitenant environment the installer creates.

About Multitenant Architecture

There are three main concepts that we need to be familiar with.

CDB Components: 

The CDB is the main container database. It is much like a traditional database, except that it supports the multitenant architecture. PDBs plug into this container database, and data in the PDBs is accessed through the SGA and background processes of the CDB. Multiple PDBs can be plugged into a single container database. A CDB has three main components.
Oracle 12c Multitenant Architecture.

Root: 

The root database, CDB$ROOT, is the main container and holds the Oracle metadata and common users. A typical example of the metadata is the set of Oracle-supplied PL/SQL packages. Common users are users defined at the root level that have access to all databases plugged into the CDB. They are available across the multitenant architecture, similar to SYS and SYSTEM; however, their privileges may vary across the PDBs and the CDB. A CDB can have only one root.

Seed: 

The seed, named PDB$SEED, is a template used to create new PDBs. It is read-only: you cannot edit or modify objects within PDB$SEED. There is exactly one seed per CDB.

PDBs: 

From an end user’s perspective, PDBs are all they know and need to connect to. To them a PDB is no different from a non-CDB database (non-CDB is the term used for databases created in versions before 12c). Ideally a PDB corresponds to one application, so many applications can be hosted in a single CDB; Oracle currently supports up to 252 PDBs per CDB. Every PDB is fully compatible with previous versions of the Oracle database, and you can easily plug a non-CDB database into a CDB as a PDB, and vice versa.

Common and Local Users:

There are two types of users. Common users have the same identity in the root and in every PDB. What they are authorized to do may vary from database to database, but they have an identity in all of them.
Local users are local to individual PDBs and have no identity in other databases, so a local user named Scott can exist in two PDBs under the same name. A common user, by contrast, has a unique name across all databases.
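The distinction shows up directly in the CREATE USER syntax. A minimal sketch (the user names are assumptions; in 12c a common user name must begin with C##):

```sql
-- From the root: create a common user visible in every container.
alter session set container=CDB$ROOT;
create user c##dba_ops identified by oracle container=all;
grant create session to c##dba_ops container=all;

-- From inside a PDB: create a local user that exists only there.
alter session set container=pdb12c;
create user scott identified by tiger container=current;
grant create session to scott;
```

The CONTAINER clause is what decides the scope: ALL makes the user (and the grant) common, CURRENT keeps it local to the container you are connected to.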

CDB and PDB Administration:

There is a clear SoD (Separation of Duties) between the administrator accounts of a CDB and a PDB. A CDB administrator can manage the root database and can also perform some PDB-level operations, such as creating and dropping PDBs.
A PDB administrator, on the other hand, can only manage the individual PDB in which the account was created. They can manage space, manage other users, and move datafiles around, but only within that specific PDB; they have no access to other PDBs or to the CDB.

Connecting to CDB and PDB

During installation, if you select the option to install and create a database, Oracle creates at least CDB$ROOT and PDB$SEED. If you choose to create a PDB, it will create a PDB as well, with the name you specify. The screenshot below shows this option during database creation.
Of course you can also create pluggable databases later on. If you have followed our installation article, you will have one CDB root CDB$ROOT (named cdb12c), one PDB seed PDB$SEED, and one pluggable database named pdb12c.
It is worth mentioning that every PDB has its own service name, which users use to connect to it, just like in previous versions. However, a PDB does not have a separate instance: it uses the same instance and shares the memory structures (SGA) and background processes of the CDB. This feature is known as consolidation and is new in 12c. The ORACLE_SID environment variable therefore always holds the CDB instance name, and the following query always returns the instance name of the CDB, regardless of whether you are connected to the CDB or to any PDB attached to it.
SQL> select instance_name from v$instance;

With that said, let’s connect to the Oracle instance using the very familiar command.
$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Fri Jun 28 03:12:40 2013
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup
ORACLE instance started.
Total System Global Area 839282688 bytes
Fixed Size 2293928 bytes
Variable Size 562040664 bytes
Database Buffers 272629760 bytes
Redo Buffers 2318336 bytes
Database mounted.
Database opened.
SQL>

To check which container you are connected to, use the new con_name and con_id SQL*Plus commands.
SQL> show con_id
CON_ID
------------------------------
1
SQL> show con_name
CON_NAME
------------------------------
CDB$ROOT
SQL>

To connect to a PDB you need a proper username/password combination; OS-level authentication does not work for PDBs. This also means that you cannot connect to a PDB without configuring tnsnames.ora and starting the listener. For testing purposes, however, you can use the following EZConnect syntax to avoid tnsnames configuration for now. The listener must still be running for this to work.
$ sqlplus sys/oracle@VST-12c:1521/pdb12c as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Fri Jun 28 03:32:21 2013
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL>
To confirm that we are connected to the right database:
SQL> show con_name
CON_NAME
------------------------------
PDB12c
SQL> show con_id
CON_ID
------------------------------
3
SQL>
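For regular use you would add an entry for the PDB's service to tnsnames.ora instead of relying on EZConnect. A sketch, reusing the host and service from the example above (adjust for your environment):

```
PDB12C =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = VST-12c)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = pdb12c)
    )
  )
```

With this entry in place, you can connect with the usual `sqlplus sys/oracle@PDB12C as sysdba`.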

The following command, however, shows that the instance is the same. This is because of consolidation: all PDBs use the CDB’s instance.
SQL> select instance_name from v$instance;
INSTANCE_NAME
----------------
cdb12c

If you are connected to the CDB as SYS or any other common user, you can switch between containers on the fly, provided you have the relevant privileges. For example, if you are connected to the CDB as SYS and want to perform an operation in one of your PDBs as SYS as well, instead of opening another session you can simply switch containers as shown below.
SQL> show con_name
CON_NAME
------------------------------
CDB$ROOT
SQL> alter session set container=pdb12c;
Session altered.
SQL> show con_name
CON_NAME
------------------------------
PDB12C

Creating and dropping PDBs

To create a new PDB or drop an existing one, you have to connect to the CDB as SYSDBA. First let’s check which PDBs are currently attached to the CDB.
SQL> select pdb_name,status from cdb_pdbs;
PDB_NAME STATUS
------------------------------ -------------
PDB12c NORMAL
PDB$SEED NORMAL

As you can see, there are two PDBs: the seed and the one we created during installation. To create a new PDB, first create a directory to hold the datafiles for the new database, then use the following command.
SQL> create pluggable database pdbtest
2 admin user pdbtest_admin identified by oracle
3 roles = (DBA)
4 file_name_convert=('/u01/app/oracle/oradata/cdb12c/pdbseed','/u02/app/oracle/oradata/cdb12c/pdbtest');
Pluggable database created.
Elapsed: 00:00:17.08

I specifically turned on timing for this command. As you can see, it took just 17 seconds to create a new database; application deployment is going to be extremely fast with this new model. This is possible because, for every new database, Oracle no longer has to extract and copy files from a template, create and initialize an instance, and configure everything: all of that was done during CDB creation. Oracle merely copies the required datafiles and creates a service that can be used to connect through the CDB instance.
The command above creates only the SYSTEM and SYSAUX tablespaces for the new PDB. Files such as those of the undo tablespace and the redo logs are shared with the CDB.
Run the following command to confirm creation of database.
SQL> select pdb_name,status from cdb_pdbs;
PDB_NAME STATUS
------------------------------ -------------
PDB12c NORMAL
PDB$SEED NORMAL
PDBTEST NEW
Elapsed: 00:00:00.10

The status column shows NEW for our newly created database because it has never been opened. We will see next how to start and stop the CDB and PDBs.
To drop a PDB you must close it first.
SQL> alter pluggable database pdb12c close;
Pluggable database altered.
Elapsed: 00:00:02.25
SQL> drop pluggable database pdb12c including datafiles;
Pluggable database dropped.
Elapsed: 00:00:00.66
SQL> select name from v$pdbs;
NAME
------------------------------
PDB$SEED
PDBTEST
Elapsed: 00:00:00.06

Starting/Stopping CDB and PDBs

The traditional approach to starting and stopping databases is now only valid for the CDB, so to start and stop the CDB you use the familiar STARTUP and SHUTDOWN commands. PDBs are not automatically started with the CDB: even if your CDB is up and running, its PDBs may still be inaccessible. To view the current status of the PDBs, run the following command.
SQL> select name,open_mode
2 from v$pdbs;
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB12c MOUNTED
PDBTEST MOUNTED
Elapsed: 00:00:00.01

As discussed above, PDB$SEED cannot be opened read-write. Our two example PDBs are in the mounted state. You can open a PDB with the following command.
SQL> alter pluggable database pdbtest open;
Pluggable database altered.
Elapsed: 00:00:11.69
SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB12c MOUNTED
PDBTEST READ WRITE
Elapsed: 00:00:00.04

Likewise you can close a PDB using the following command.
SQL> alter pluggable database pdbtest close;
Pluggable database altered.
Elapsed: 00:00:00.92

To open/close all PDBs at once you can use the following commands.
SQL> alter pluggable database all open;
Pluggable database altered.
Elapsed: 00:00:05.96

SQL> alter pluggable database all close;
Pluggable database altered.
Elapsed: 00:00:01.33

If you are connected to a PDB and issue the STARTUP and SHUTDOWN commands, those commands open and close that PDB only. For example:
SQL> show con_name
CON_NAME
------------------------------
PDB12C
SQL> shutdown immediate;
Pluggable Database closed.
SQL> startup
Pluggable Database opened.

Automating Startup of PDBs

You can automate the startup of PDBs so that they are opened as soon as the CDB is up and running. The following code creates a trigger on the startup event to open all PDBs.
SQL> create or replace trigger sys.after_startup after startup on database
2 begin
3 execute immediate 'alter pluggable database all open';
4 end after_startup;
5 /
Trigger created.
Elapsed: 00:00:00.19

Let’s test this by shutting down and then starting up the CDB.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 839282688 bytes
Fixed Size 2293928 bytes
Variable Size 562040664 bytes
Database Buffers 272629760 bytes
Redo Buffers 2318336 bytes
Database mounted.
Database opened.
SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
------------------------------ ----------
PDB$SEED READ ONLY
PDB12c READ WRITE
PDBTEST READ WRITE
Elapsed: 00:00:00.18

Renaming a PDB

You can easily change the global name of a PDB with a few simple commands. This is simple because there is no separate instance with the same name attached to a PDB; PDBs use the instance of the CDB. However, you have to log into the specific PDB to change its name; you cannot do this from the CDB.
First close the database and open it in restricted mode.
SQL> alter pluggable database pdb_test close;
Pluggable database altered.
SQL> alter pluggable database pdb_test open restricted;
Pluggable database altered.

We are currently connected to the CDB. If we try to change the global name of PDB_TEST from here, we get the error shown below.
SQL> alter pluggable database pdb_test rename global_name to pdb3;
alter pluggable database pdb_test rename global_name to pdb3
*
ERROR at line 1:
ORA-65046: operation not allowed from outside a pluggable database

To do this we will first connect to PDB_TEST and then change the name from there.
$ sqlplus sys/oracle@localhost:1521/pdb_test as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Sat Jun 29 01:39:34 2013
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> show con_name
CON_NAME
------------------------------
PDB_TEST
SQL> alter pluggable database pdb_test rename global_name to pdb3;
Pluggable database altered.

The global name has been changed. Close the database and reopen it in normal read-write mode.
SQL> alter pluggable database close immediate;
Pluggable database altered.
SQL> alter pluggable database open;
Pluggable database altered.
SQL> select name,open_mode from v$pdbs;
NAME OPEN_MODE
------------------------------ ----------
PDB3 READ WRITE

Data Dictionary Views

There are new data dictionary views that start with CDB_ and are only visible when connected to the CDB as SYSDBA. They show information about all objects, whether they reside in the CDB or in some PDB. For example, the following command shows the names of all datafiles, whether they belong to the CDB, to PDB$SEED, or to our PDB, i.e. PDB12C.
SQL> select file_name from cdb_data_files;
FILE_NAME
----------------------------------------------------------------------------------------------------
/u02/app/oracle/oradata/cdb12c/pdb12c/system01.dbf
/u02/app/oracle/oradata/cdb12c/pdb12c/sysaux01.dbf
/u01/app/oracle/oradata/cdb12c/system01.dbf
/u01/app/oracle/oradata/cdb12c/sysaux01.dbf
/u01/app/oracle/oradata/cdb12c/undotbs01.dbf
/u01/app/oracle/oradata/cdb12c/users01.dbf
/u01/app/oracle/oradata/cdb12c/cdata.dbf
/u01/app/oracle/oradata/cdb12c/pdbseed/system01.dbf
/u01/app/oracle/oradata/cdb12c/pdbseed/sysaux01.dbf
9 rows selected.

These data dictionary views have an extra CON_ID column, which can be used to select objects belonging to a specific container.
SQL> select file_name from cdb_data_files
2 where con_id=2;
FILE_NAME
----------------------------------------------------------------------------------------------------
/u01/app/oracle/oradata/cdb12c/pdbseed/system01.dbf
/u01/app/oracle/oradata/cdb12c/pdbseed/sysaux01.dbf

Almost every data dictionary view has a corresponding CDB_* view. The normal DBA_* views show only objects belonging to the current container.
SQL> select file_name from dba_data_files;
FILE_NAME
----------------------------------------------------------------------------------------------------
/u01/app/oracle/oradata/cdb12c/system01.dbf
/u01/app/oracle/oradata/cdb12c/sysaux01.dbf
/u01/app/oracle/oradata/cdb12c/undotbs01.dbf
/u01/app/oracle/oradata/cdb12c/users01.dbf
/u01/app/oracle/oradata/cdb12c/cdata.dbf