Thursday, November 11, 2010

Compressing / Decompressing files in Linux / Unix

Both Linux and UNIX include various commands for compressing and decompressing (expanding) files. To compress files you can use the gzip, bzip2 and zip commands. To expand (decompress) a compressed file you can use gzip -d, bunzip2 (bzip2 -d) or unzip.

Compressing Files: 
Syntax: gzip {filename}
Description: gzip reduces the size of the given files using Lempel-Ziv coding (LZ77). Whenever possible, each file is replaced by one with the extension .gz (the original is replaced; see the sketch after this table for how to keep it).
Examples:
gzip mydata.doc
gzip *.jpg
ls -l

Syntax: bzip2 {filename}
Description: bzip2 compresses files using the Burrows-Wheeler block sorting text compression algorithm and Huffman coding. Compression is generally considerably better than that achieved by more conventional LZ77/LZ78-based compressors. Whenever possible, each file is replaced by one with the extension .bz2.
Examples:
bzip2 mydata.doc
bzip2 *.jpg
ls -l

Syntax: zip {.zip-filename} {filename-to-compress}
Description: zip is a compression and file packaging utility for Unix/Linux. The given files are stored in a single archive with the extension .zip.
Examples:
zip mydata.zip mydata.doc
zip data.zip *.doc
ls -l

Syntax: tar -zcvf {.tgz-file} {files}
        tar -jcvf {.tbz2-file} {files}
Description: GNU tar is an archiving utility, but it can also be used to compress large files. GNU tar supports archive compression through both gzip and bzip2. If you have more than two files, it is recommended to use tar instead of gzip or bzip2 alone.
-z: compress with gzip
-j: compress with bzip2
Examples:
tar -zcvf data.tgz *.doc
tar -zcvf pics.tar.gz *.jpg *.png
tar -jcvf data.tbz2 *.doc
ls -l
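Note that gzip and bzip2 replace the original file with the compressed version. If you want to keep the original as well, a minimal sketch (filenames are placeholders) is to write the compressed stream to standard output:

# compress to a new file, keeping the original intact
gzip -c mydata.doc > mydata.doc.gz
bzip2 -c mydata.doc > mydata.doc.bz2

# compare the sizes
ls -l mydata.doc mydata.doc.gz mydata.doc.bz2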

Decompressing Files: 

Syntax: gzip -d {.gz-file}
        gunzip {.gz-file}
Description: Decompress a file that was created using the gzip command. The file is restored to its original form.
Examples:
gzip -d mydata.doc.gz
gunzip mydata.doc.gz

Syntax: bzip2 -d {.bz2-file}
        bunzip2 {.bz2-file}
Description: Decompress a file that was created using the bzip2 command. The file is restored to its original form.
Examples:
bzip2 -d mydata.doc.bz2
bunzip2 mydata.doc.bz2

Syntax: unzip {.zip-file}
Description: Extract compressed files from a ZIP archive.
Examples:
unzip file.zip
unzip data.zip resume.doc

Syntax: tar -zxvf {.tgz-file}
        tar -jxvf {.tbz2-file}
Description: Untar (decompress and extract) files from an archive created using tar with the gzip or bzip2 filter.
Examples:
tar -zxvf data.tgz
tar -zxvf pics.tar.gz *.jpg
tar -jxvf data.tbz2


List the contents of an archive / compressed file
Sometimes you just want to look at the files inside an archive or compressed file without extracting it. All of the above commands support a file listing option, and most can also verify an archive without extracting it (see the sketch after the table).

Syntax: gzip -l {.gz-file}
Description: List the contents of a GZIP archive.
Example: gzip -l mydata.doc.gz

Syntax: unzip -l {.zip-file}
Description: List the contents of a ZIP archive.
Example: unzip -l mydata.zip

Syntax: tar -ztvf {.tar.gz-file}
        tar -jtvf {.tbz2-file}
Description: List the contents of a TAR archive.
Examples:
tar -ztvf pics.tar.gz
tar -jtvf data.tbz2
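To verify an archive without extracting it, each tool has a test flag; a quick sketch (filenames are placeholders):

gzip -t mydata.doc.gz     # test gzip integrity (silent on success)
bzip2 -t mydata.doc.bz2   # test bzip2 integrity
unzip -t data.zip         # test every file in the ZIP archive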

Tuesday, October 5, 2010

Password-less logins with OpenSSH

Because OpenSSH allows you to run commands on remote systems and see the results directly, as well as simply log in to systems, it's ideal for automating common tasks with shell scripts and cron jobs. One thing you probably won't want to do, though, is store the remote system's password in the script. Instead you'll want to set up SSH so that you can log in securely without having to give a password.

Thankfully this is very straightforward, with the use of public keys.
To enable the remote login you create a pair of keys, one of which you simply append to a file upon the remote system. When this is done you'll then be able to log in without being prompted for a password - and this also applies to any cron jobs you have set up.

If you don't already have a keypair generated you'll first of all need to create one.
If you do have a keypair handy already you can keep using that; by default the keys will be stored in one of the following pairs of files:
~/.ssh/identity and ~/.ssh/identity.pub    (an older SSH protocol 1 key)
~/.ssh/id_rsa and ~/.ssh/id_rsa.pub       (a newer RSA key)

If you have neither of the two files then you should generate one. The older key types should probably be ignored in favour of the newer RSA keytype (unless you're looking at connecting to an outdated installation of OpenSSH). We'll use the RSA keytype in the following example.

To generate a new keypair you run the following command:
user@localhost:~$ ssh-keygen -t rsa

This will prompt you for a location to save the keys, and a pass-phrase:
Generating public/private rsa key pair.
Enter file in which to save the key (/home/skx/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/skx/.ssh/id_rsa.
Your public key has been saved in /home/skx/.ssh/id_rsa.pub.

If you accept the defaults you'll have a pair of files created, as shown above, with no passphrase. This means that the key files can be used as they are, without being "unlocked" with a password first. If you're wishing to automate things this is what you want.

Now that you have a pair of keyfiles generated, or pre-existing, you need to append the contents of the .pub file to the correct location on the remote server.

Assuming that you wish to log in to the machine called mystery from your current host with the id_rsa and id_rsa.pub files you've just generated, you should run the following command:
$ ssh-copy-id -i ~/.ssh/id_rsa.pub username@mystery

This will prompt you for the login password for the host, then copy the keyfile for you, creating the correct directory and fixing the permissions as necessary.
The contents of the keyfile will be appended to the file ~/.ssh/authorized_keys2 for RSA keys, and to
~/.ssh/authorized_keys for the older key types.
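If ssh-copy-id isn't available on your system you can do the same thing by hand; a sketch, reusing the mystery host from the example above:

$ cat ~/.ssh/id_rsa.pub | ssh username@mystery \
    'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'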

Once this has been done you should be able to login remotely, and run commands, without being prompted for a password:
skx@lappy:~$ ssh mystery uptime
 09:52:50 up 96 days, 13:45,  0 users,  load average: 0.00, 0.00, 0.00

What if it doesn't work?
There are three common problems when setting up passwordless logins:
The remote SSH server hasn't been setup to allow public key authentication.
File permissions cause problems.
Your keytype isn't supported.

Each of these problems is easily fixable, although the first will require you to have root privileges upon the remote host.

If the remote server doesn't allow public key based logins you will need to update the SSH configuration. To do this edit the file /etc/ssh/sshd_config with your favourite text editor.
You will need to uncomment, or add, the following two lines:
RSAAuthentication yes
PubkeyAuthentication yes

Once that's been done you can restart the SSH server - don't worry this won't kill existing sessions:
$ /etc/init.d/ssh restart

File permission problems should be simple to fix. Upon the remote machine your .ssh directory must not be writable by any other user - for obvious reasons. (If it were writable by another user they could add their own keys to it, and log in to your account without your password!)

If this is your problem you will see a message similar to the following upon the remote machine, in the file /var/log/auth.log:
Jun  3 10:23:57 localhost sshd[18461]: Authentication refused:
 bad ownership or modes for directory /home/skx/.ssh
To fix this error you need to log in to the machine (with your password!) and run the following command:
$ cd
$ chmod 700 .ssh
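While you're there it's worth tightening the related permissions too, since sshd's strict mode checks will also reject logins when the home directory or key file is group- or world-writable; a sketch:

$ chmod go-w ~                       # home directory must not be writable by others
$ chmod 700 ~/.ssh                   # only you may access the .ssh directory
$ chmod 600 ~/.ssh/authorized_keys   # and the authorized_keys file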

Finally, if you're logging into an older system which has an older version of OpenSSH installed upon it which you cannot immediately upgrade, you might discover that RSA keys are not supported.

In this case use a DSA key instead - by generating one:
$ ssh-keygen -t dsa

Then appending it to the file ~/.ssh/authorized_keys on the remote machine - or using the ssh-copy-id command we showed earlier.

Note if you've got a system running an older version of OpenSSH you should upgrade it unless you have a very good reason not to. There are known security issues in several older releases. Even if the machine isn't connected to the public internet, and it's only available "internally" you should fix it.

Instead of using authorized_keys/authorized_keys2 you could also achieve a very similar effect with the use of the ssh-agent command, although this isn't so friendly for scripting commands.

This program allows you to type in the passphrase for any of your private keys when you login, then keep all the keys in memory, so you don't have password-less keys upon your disk and still gain the benefits of reduced password usage.
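A typical session might look like this sketch: start the agent, load your (passphrase-protected) key once, and subsequent logins use the cached key:

$ eval $(ssh-agent -s)        # start the agent and export its environment variables
$ ssh-add ~/.ssh/id_rsa       # prompts for the passphrase once
$ ssh mystery uptime          # no further passphrase prompt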

If you're interested read the documentation by running:
$ man ssh-agent

Sunday, August 15, 2010

Agile Case Study - Software marketing

Software Marketing: enter a world of countless project requests, numerous stakeholders, limited resources and rapidly changing market conditions.

Sound familiar? In fact, marketers face many of the same challenges as development teams, and Agile can be a powerful way to alleviate those common issues and intelligently plan our work.

10 Steps to successful Marketing using Agile and Lean Practices:

Steps 1-5 explain how our marketing team conducts its version of release planning.
Steps 6-10 explain how we run our iterations to meet those commitments. Our planning processes continue to evolve, though this method has worked for a while now.


PART 1
STEP 1: We recognize that Marketing has challenges that are different from development.
  • There is no unique product owner: for example, if we chose Sales, we would always rank lead generation over branding, customer programs or analyst relations, and that could ultimately hurt our company. Therefore we have to use some best-guessing to prioritize our backlog and determine what is most important.
  • We face hard event deadlines set far into the future. Sometimes we have no choice but to commit to an event or sign a contract months ahead of time.
  • Each team member has unique expertise, i.e. writing, event planning, PHP development and so forth. So one shared backlog is inefficient.
Now that we've reviewed the challenges, we give ourselves permission to do what we need to do, have patience and adjust anything that isn't working for us.

STEP 2: Conduct an ORID to learn from the past:
Before planning for the next quarter, we hold a retrospective in the form of an ORID, “a means to analyze facts and feelings, to ask about implications and to make decisions intelligently”, a process created by the Institute of Cultural Affairs. We gather as a team to share

  • Observations (O) – What do we know?
  • Reflections (R) – How do we feel about this?
  • Interpretations (I) – What does it mean for the organization?
  • Decisions (D) – What are we going to do?
This strategic overview helps set context for us to prioritize our focus for next quarter.

STEP 3: Align ORID decision with company strategy:
We conduct quarterly (as per company strategy) and annual planning using the Plan Do Check Adjust methodology as explained in Getting the Right Things Done.  As we look at the overall company direction and goals, we keep these in mind as we hold planning at our own level.   Ideally, our major commitments support and align with company strategy. This also helps inform our “stop doing” list.

STEP 4: Poll our stakeholders:
As part of determining quarterly commitments, we poll our major stakeholders for their top requests.  We use a Google survey to rank these requests by importance, size each request and bring these epics into our release planning meeting, to be included as part of our ranked backlog.

STEP 5: Conduct release planning to prioritize and agree on quarterly commitment:
Now that we have all of our inputs, we hold our quarterly Release Planning session.  We write each epic on a sticky note and look at all of the possible work we could do this quarter.  Then, we evaluate epics based on importance taking company goals, stakeholder wishes, market realities like conferences and our own passions into consideration. We decide what we can realistically commit to, and agree as a team.  We keep in mind that making and meeting commitments is a huge deal, and we try really hard not to over-commit.
PART 2
STEP 6: Create a task board:
Since our marketing team is mostly co-located, we pin up several large sheets of paper to use as a task board.  This is where we review our commitments on a daily basis as a sanity check that our stories are prioritized correctly and that we are tackling the right work as the quarter progresses.

As a team, we write our quarterly commitments on the task board with the definition of done assigned to each one.  We also include our “foundational” work – recurring work like website updates, online ad campaigns, field events, press releases and other important work that we need time to do.
We break into smaller project teams that do share a backlog; we often use AgileZen to manage this work.

STEP 7: Hold iteration planning every two weeks:
Every 2 weeks, we hold an Iteration Planning meeting.  Each team member has her own sticky note color, creates stories on those notes and manages her own prioritized backlog using T-shirt sizing to roughly estimate each story.  In this hour-long meeting, we begin by expressing appreciation for team members who gave exceptional assistance.  Then we hold a brief retrospective on what worked and what should change for the next iteration.  Finally, we each read out our prioritized stories for the iteration, putting them on the task board’s backlog.  This gives everyone visibility to what’s happening, identifies if someone is over-committed and lets the team swarm any epics with looming deadlines.

STEP 8: Conduct a daily Stand up meeting:
At the same time each day, we hold a stand-up meeting (with a consistent conference call #) that is at most 10-15 minutes long.  We form a semi-circle in front of our task board and share no more than 2 cross-cutting significant actions or take-aways from the day before, no more than 2 that we are planning to accomplish that day, and whether our work is blocked by any issues beyond our control.   As we start working on stories throughout the iteration, we move them from the backlog into their section of the task board to show what we are working on.  When the story is complete, then we move it to a place on the task board labeled “Done”.  Once the commitment’s Definition of Done is met, we check off that commitment and feel good about completing it.

STEP 9: Be patient as things change:
It would be lovely if nothing changed during the iteration, but that just doesn’t happen.  The goal is ultimately to respond to change rather than cling to an outdated plan.  As new opportunities arise, new time-sensitive information appears and new requests are made, so our iteration work changes and that’s ok.  We try to just document what we’re working on and create new stories so that we can make intelligent prioritization decisions during the course of the iteration.

STEP 10: Retrospect, Inspect and Adapt:
As we keep running our iterations and fulfilling our commitments, we are always looking for ways to improve them.   Ultimately, we’re using Agile to improve the quality of our work life by using objective, smart ways of planning how we spend our time. And we’re learning a lot from the journey.

Monday, August 2, 2010

Software Development Methodology

"A Software Development Methodology is the concept / best practices or guidelines for managers to streamline the process, instructing them how to plan, control, estimate, involving team members and quickly delivers quality software by minimizing risks, cost and failures."

A software / system development methodology in software engineering has various meanings to various audiences.
  • As a noun, a software development methodology is a framework that is used to structure, plan, and control the process of developing an information system.
    • Rational Unified Process: since 1998
    • Agile Unified Process: since 2005
    • Integrated Unified Process: since 2007
  • As a verb, the software development methodology is an approach used by an organization or project team to apply the software development methodology framework.
    • Waterfall Approach (linear): Waterfall approach is a sequential development approach, in which development is seen as flowing steadily downwards (like a waterfall) through the phases of requirement analysis, design, implementation, testing (validation), integration and maintenance.
    • Prototyping Approach (iterative): Software prototyping is a development approach in which prototypes, i.e. incomplete versions of the software program being developed, are created during development.
    • Incremental Approach (combination of linear and iterative): Various methods are acceptable for combining linear and iterative systems development methodology approaches, with the primary objective of each being to reduce inherent project risk by breaking a project into smaller segments and providing more ease-of-change during the development process.
    • Spiral Approach (combination of linear and iterative): The spiral model approach is a software development process combining elements of both design and prototyping-in-stages, in an effort to combine advantages of top-down and bottom-up concepts.
    • Rapid Application Development (RAD) Approach (Iterative): Rapid Application Development (RAD) is a software development methodology approach, which involves iterative development and the construction of prototypes.
    • Extreme Programming Approach:
  • Waterfall Approach: Basic principles of the Waterfall approach are:
    • Project is divided into sequential phases, with some overlap and splash back acceptable between phases.
    • Emphasis is on planning, time schedules, target dates, budgets and implementation of an entire system at one time.
    • Tight control is maintained over the life of the project through the use of extensive written documentation, as well as through formal reviews and approval/signoff by the user and information technology management occurring at the end of most phases before beginning the next phase.
  • Prototyping Approach: Basic principles of the Prototyping Approach are:
    • Not a standalone, complete development methodology approach, but rather an approach to handling selected portions of a larger, more traditional development methodology (i.e. incremental, Spiral or Rapid Application Development (RAD)) approaches.
    • Attempts to reduce inherent project risk by breaking a project into smaller segments and providing more ease-of-change during the development process.
    • User is involved throughout the development process, which increases the likelihood of user acceptance of the final implementation.
    • Small-scale mock-ups of the system are developed following an iterative modification process until the prototype evolves to meet the user’s requirements.
    • While most prototypes are developed with the expectation that they will be discarded, it is possible in some cases to evolve from prototype to working system.
    • A basic understanding of the fundamental business problem is necessary to avoid solving the wrong problem.
  • Incremental Approach: Basic principles of the incremental development approach are:
    • A series of mini-Waterfalls are performed, where all phases of the Waterfall development approach are completed for a small part of the system, before proceeding to the next increment, or
    • Overall requirements are defined before proceeding to evolutionary, mini-Waterfall development approaches of individual increments of the system, or
    • The initial software concept, requirements analysis, and design of architecture and system core are defined using the Waterfall approach, followed by iterative Prototyping approach, which culminates in installation of the final prototype (i.e., working system).
  • Spiral Approach: Basic principles of the Spiral approach are:
    • Focus is on risk assessment and on minimizing project risk by breaking a project into smaller segments and providing more ease-of-change during the development process, as well as providing the opportunity to evaluate risks and weigh consideration of project continuation throughout the life cycle.
    • Each cycle involves a progression through the same sequence of steps, for each portion of the product and for each of its levels of elaboration, from an overall concept-of-operation document down to the coding of each individual program.
    • Each trip around the spiral traverses four basic quadrants:
      • determine objectives, alternatives, and constraints of the iteration;
      • evaluate alternatives; identify and resolve risks;
      • develop and verify deliverables from the iteration;
      • plan the next iteration.
    • Begin each cycle with an identification of stakeholders and their win conditions, and end each cycle with review and commitment.
  • Rapid Application Development (RAD) Approach – Basic Principles.
    • Key objective is for fast development and delivery of a high quality system at a relatively low investment cost.
    • Attempts to reduce inherent project risk by breaking a project into smaller segments and providing more ease-of-change during the development process.
    • Aims to produce high quality systems quickly, primarily through the use of iterative Prototyping (at any stage of development), active user involvement, and computerized development tools. These tools may include Graphical User Interface (GUI) builders, Computer Aided Software Engineering (CASE) tools, Database Management Systems (DBMS), fourth-generation programming languages, code generators, and object-oriented techniques.
    • Key emphasis is on fulfilling the business need, while technological or engineering excellence is of lesser importance.
    • Project control involves prioritizing development and defining delivery deadlines or “timeboxes”. If the project starts to slip, emphasis is on reducing requirements to fit the timebox, not in increasing the deadline.
    • Generally includes Joint Application Development (JAD), where users are intensely involved in system design, either through consensus building in structured workshops, or through electronically facilitated interaction.
    • Active user involvement is imperative.
    • Iteratively produces production software, as opposed to a throwaway prototype.
    • Produces documentation necessary to facilitate future development and maintenance.
    • Standard systems analysis and design techniques can be fitted into this framework.
  • Agile Unified Process (AUP): The AUP describes a simple, easy to understand approach to developing business application software using agile techniques and concepts yet still remaining true to the RUP. The AUP applies agile techniques including test driven development (TDD), Agile Modelling, agile change management and database refactoring to improve productivity.
Agile SCRUM

    • Unlike RUP, the AUP has only seven disciplines.
      • Model. Understand the business of the organization, the problem domain being addressed by the project, and identify a viable solution to address the problem domain.
      • Implementation. Transform model(s) into executable code and perform a basic level of testing, in particular unit testing.
      • Test. Perform an objective evaluation to ensure quality. This includes finding defects, validating that the system works as designed, and verifying that the requirements are met.
      • Deployment. Plan for the delivery of the system and execute the plan to make the system available to end users.
      • Configuration Management. Manage access to project artifacts. This includes not only tracking artifact versions over time but also controlling and managing changes to them.
      • Project Management. Direct the activities that take place within the project. This includes managing risks, directing people (assigning tasks, tracking progress, etc.), and coordinating with people and systems outside the scope of the project to be sure that it is delivered on time and within budget.
      • Environment. Support the rest of the effort by ensuring that the proper process, guidance (standards and guidelines), and tools (hardware, software, etc.) are available for the team as needed.
    • Agile Philosophies: The Agile UP is based on the following philosophies:
      • Your staff knows what they're doing. People are not going to read detailed process documentation, but they will want some high-level guidance and/or training from time to time. The AUP product provides links to many of the details, if you are interested, but doesn't force them upon you.
      • Simplicity. Everything is described concisely using a handful of pages, not thousands of them.
      • Agility. The Agile UP conforms to the values and principles of the agile software development and the Agile Alliance.
      • Focus on high-value activities. The focus is on the activities which actually count, not every possible thing that could happen to you on a project.
      • Tool independence. You can use any toolset that you want with the Agile UP. The recommendation is that you use the tools which are best suited for the job, which are often simple tools.
      • You'll want to tailor the AUP to meet your own needs.
Agile Software Development LifeCycle

    • Releases: The Agile Unified Process distinguishes between two types of iterations. A Development Release Iteration results in a deployment to the Quality Assurance and/or Demo area. A Production Release Iteration results in a deployment to the Production area. This is a significant refinement to the Rational Unified Process.

Thursday, July 29, 2010

Java Server Faces (JSF)

JavaServer Faces (JSF) is a Java based Web application framework that simplifies the development of user interfaces for enterprise Java applications. JSF applications are implemented in Java on the server, and render as web pages back to clients based on their web requests. JSF provides Web application lifecycle management through a controller servlet; and like Swing, JSF provides a rich component model complete with event handling and component rendering. It is based on other Java standards such as Java Servlets and JavaServer Pages, but it provides a higher–level component layer for UI (user interface) development.

The Major Benefits of Java Server Faces Technology:
  • JavaServer Faces architecture makes it easy for the developers to use. In JavaServer Faces technology, user interfaces can be created easily with its built–in UI component library, which handles most of the complexities of user interface management.
  • JavaServer Faces technology offers a clean separation between behavior and presentation.
  • JavaServer Faces technology provides a rich architecture for managing component state, processing component data, validating user input, and handling events.
  • Robust event handling mechanism.
  • Render kit support for different clients.
  • Highly 'pluggable' – components, view handler, etc
JSF LifeCycle
In order for you to understand how the framework masks the underlying request processing nature of the Servlet API and to analyze how Faces processes each request, we’ll go through the JSF request processing lifecycle. This will allow you to build better applications.

A JavaServer Faces page is represented by a tree of UI components, called a view. During the lifecycle, the JavaServer Faces implementation must build the view while considering state saved from a previous submission of the page. When the client submits a page, the JavaServer Faces implementation performs several tasks, such as validating the data input of components in the view and converting input data to types specified on the server side. The JavaServer Faces implementation performs all these tasks as a series of steps in the JavaServer Faces request–response life cycle.

The phases of the JSF application lifecycle are as follows:

  • Restore view
  • Apply request values; process events
  • Process validations; process events
  • Update model values; process events
  • Invoke application; process events
  • Render response
The normal flow of control is shown with solid lines, whereas dashed lines show alternate flows depending on whether a component requests a page redisplay or validation or conversion errors occur.
JSF Life Cycle
Note: The life cycle handles two kinds of requests:
  • Initial request: A user requests the page for the first time.
  • Postback: A user submits the form contained on a page that was previously loaded into the browser as a result of executing an initial request.
Phase 1 : Restore view
In the RestoreView phase, JSF classes build the tree of UI components for the incoming request.
  • When a request for a JavaServer Faces page is made, such as when a link or a button is clicked, the JavaServer Faces implementation begins the restore view phase.
  • This is one of the trickiest parts of JSF: The JSF framework controller uses the view ID (typically JSP name) to look up the components for the current view. If the view isn’t available, the JSF controller creates a new one. If the view already exists, the JSF controller uses it. The view contains all the GUI components and there is a great deal of state management by JSF to track the status of the view – typically using HTML hidden fields.
  • If the request for the page is an initial request, the JavaServer Faces implementation creates an empty view during this phase. Lifecycle only executes the restore view and render response phases because there is no user input or actions to process.
  • If the request for the page is a postback, a view corresponding to this page already exists. During this phase, the JavaServer Faces implementation restores the view by using the state information saved on the client or the server. Lifecycle continues to execute the remaining phases.
  • Fortunately this is the phase that requires the least intervention by application code.
Phase 2 : Apply Request Values
During the Apply Request Values phase, the request parameters are read and their values are used to set the values of the corresponding UI components. This process is called decoding.
  • If the conversion of the value fails, an error message associated with the component is generated and queued on FacesContext. This message will be displayed during the render response phase, along with any validation errors resulting from the process validations phase.
  • If some components on the page have their immediate event handling property set to true, then the validation, conversion, and events associated with these components take place in this phase instead of the Process Validations phase. For example, you could have a Cancel button that ignores all values on a form (see the sketch below).
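A minimal sketch of such a Cancel button (the orderForm bean and its cancel method are hypothetical):

<h:commandButton value="Cancel"
                 action="#{orderForm.cancel}"
                 immediate="true" />  <!-- processed in Apply Request Values; form validation is skipped -->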
Phase 3 : Process validations
The Process Validations phase triggers calls to all registered validators.
  • The components validate the new values coming from the request against the application's validation rules.
  • Any input can be scanned by any number of validators.
  • These Validators can be pre-defined or defined by the developer.
  • Any validation errors will abort the request–handling process and skip to rendering the response with validation and conversion error messages.
Phase 4 : Update Model Values
The Update Model phase brings a transfer of state from the UI component tree to any and all backing beans, according to the value expressions defined for the components themselves.
  • It is in this phase that converters are invoked to parse string representations of various values to their proper primitive or object types. If the data cannot be converted to the types specified by the bean properties, the life cycle advances directly to the render response phase so that the page is re-rendered with errors displayed.
  • Note: The difference between this phase and Apply Request Values - that phase moves values from client–side HTML form controls to server–side UI components; while in this phase the information moves from the UI components to the backing beans.
Phase 5 : Invoke Application
The Invoke Application phase handles any application-level events. Typically this takes the form of a call to process the action event generated by the submit button that the user clicked.
  • Application level events handled
  • Application methods invoked
  • Navigation outcome calculated

Phase 6 : Render Response
Finally, Render Response brings several inverse behaviors together in one process:
  • Values are transferred back to the UI components from the bean. Including any modifications that may have been made by the bean itself or by the controller.
  • The UI components save their state – not just their values, but other attributes having to do with the presentation itself. This can happen server–side, but by default state is written into the HTML as hidden input fields and thus returns to the JSF implementation with the next request.
  • If the request is a postback and errors were encountered during the apply request values phase, process validations phase, or update model values phase, the original page is rendered during this phase. If the pages contain message or messages tags, any queued error messages are displayed on the page.
Process Events
In this phase, any events that occurred during the previous phase are handled.
  • Each Process Events phase gives the application a chance to handle any events (for example, validation failures) that occurred during the previous phase.
Note: Sometimes, an application might need to redirect to a different web application resource, such as a web service, or generate a response that does not contain JavaServer Faces components. In these situations, the developer must skip the rendering phase (Render Response Phase) by calling FacesContext.responseComplete. This situation is also shown in the diagram, with ProcessEvents pointing to the response arrow.
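As a sketch, an action method on a backing bean might bypass rendering like this (the class and method names are hypothetical; a Servlet environment is assumed):

import java.io.IOException;
import javax.faces.FacesException;
import javax.faces.context.FacesContext;
import javax.servlet.http.HttpServletResponse;

public class ReportBean {
    public String exportCsv() {
        FacesContext context = FacesContext.getCurrentInstance();
        HttpServletResponse response =
            (HttpServletResponse) context.getExternalContext().getResponse();
        try {
            response.setContentType("text/csv");
            response.getWriter().write("id,name\n1,example\n"); // write a non-JSF response
        } catch (IOException e) {
            throw new FacesException(e);
        }
        context.responseComplete(); // tell JSF to skip the Render Response phase
        return null;                // no navigation; nothing will be rendered
    }
}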

JSF Setup
Now that we have a general overview of JavaServer Faces and a basic understanding of the JSF lifecycle, let's get started with some code.
There is more than one JSF implementation available on the market. Some of them are:

  • Sun (RI) (default)
  • Apache MyFaces
  • IBM
  • Simplica (based on Apache MyFaces)
  • Additionally, there are several 3rd party UI components that should run with any implementation.
For our simple application we use the default Sun reference implementation (RI).

Before you can dive into a full-fledged example, you must lay some groundwork, i.e., configure your environment to work with JSF. First, you need to get the JSF library files.

  • jsf-api.jar
  • jsf-impl.jar
You should place these two JSF JAR files (jsf-api.jar and jsf-impl.jar) in the application's classpath, either in the Web app's lib directory or in the server's classpath. The next thing we'll need to do is download the dependencies our simple project will have. Here are the jar files (apart from above two jars) you will need in your WEB-INF/lib:
  • jstl.jar
  • standard.jar
  • commons-beanutils.jar
  • commons-collections.jar
  • commons-digester.jar
  • commons-logging.jar
Alternatively, use Ant or Maven to only include these jars when you are testing.

Note: Even though JSF applications typically use JSP tags implemented by the JSF implementation, there are no separate tag library descriptor (TLD) files, because that information is contained in the JAR files.



When working with JSF, you will have a minimum of two XML configuration files, and you will often have even more (ex: tiles.xml). It is important that you become familiar with these config files, as they are the key to the flexibility and loose coupling provided by this architecture.
  • Faces config (faces-config.xml) : JavaServer Faces configuration file. Place this file in the WEB-INF directory. This file lists bean resources and navigation rules.
  • Web config (web.xml): This is your standard Web configuration file. Minimal sketches of both files follow below.
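Here is a minimal sketch of the two files (the bean, view IDs and outcome are hypothetical; the elements follow standard JSF 1.x conventions). First faces-config.xml:

<faces-config xmlns="http://java.sun.com/xml/ns/javaee" version="1.2">
  <managed-bean>
    <managed-bean-name>orderForm</managed-bean-name>
    <managed-bean-class>com.example.OrderForm</managed-bean-class>
    <managed-bean-scope>request</managed-bean-scope>
  </managed-bean>
  <navigation-rule>
    <from-view-id>/order.jsp</from-view-id>
    <navigation-case>
      <from-outcome>success</from-outcome>
      <to-view-id>/confirmation.jsp</to-view-id>
    </navigation-case>
  </navigation-rule>
</faces-config>

And the FacesServlet declaration in web.xml:

<servlet>
  <servlet-name>Faces Servlet</servlet-name>
  <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
  <servlet-name>Faces Servlet</servlet-name>
  <url-pattern>*.faces</url-pattern>
</servlet-mapping>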

Wednesday, July 28, 2010

The Agile System Development Life Cycle (SDLC)

I'm often asked to overview the ideas presented in the Agile Manifesto and agile techniques such as Test-Driven Design (TDD), database refactoring, and agile change management. One issue that many people seem to struggle with is how all of these ideas fit together, and invariably I find myself sketching one or more pictures which overview the life cycle for agile software development projects.  I typically need one or more pictures because the scope of life cycles changes -- some life cycles address just the construction life cycle, some address the full development life cycle, and some even address the full IT life cycle.  Depending on your scope, and how disciplined your approach to agile software development is, you will get different life cycle diagrams.

This article covers:
# 1: The scope of life cycles
# 2: Iteration -1: Pre-project planning
# 3: Iteration 0: Project inception
# 4: Construction iterations
# 5: Release iterations
# 6: Production
# 7: Retirement


1. The Scope of Life Cycles:
In Enterprise Unified Process (EUP) the scope of life cycles can vary dramatically.  For example, Figure 1 depicts the Scrum construction life cycle whereas Figure 2 depicts an extended version of that diagram which covers the full system development life cycle (SDLC) and Figure 3 extends that further by addressing enterprise-level disciplines via the EUP life cycle.  The points that I'm trying to make are:
  • System development is complicated: Although it's comforting to think that development is as simple as Figure 1 makes it out to be, the fact is that we know that it's not.  If you adopt a development process that doesn't actually address the full development cycle then you've adopted little more than consultantware in the end.  My experience is that you need to go beyond the construction life cycle of Figure 1 to the full SDLC of Figure 2 (ok, Retirement may not be all that critical) if you're to be successful.

  • There's more to IT than development: To be successful at IT you must take a multi-system, multi-life cycle stage view as depicted in Figure 3.  The reality is that organizations have many potential projects in the planning stage (which I'll call Iteration -1 in this article), many in development, and many in production.
Figure 1 uses the terminology of the Scrum methodology.  The rest of this article uses the terminology popularized in the mid-1990s by the Unified Process (Sprint = Iteration, Backlog = Stack, Daily Scrum Meeting = Daily Meeting).  Figure 1 shows how agilists treat requirements like a prioritized stack, pulling just enough work off the stack for the current iteration (in Scrum iterations/sprints are often 30-days long, although this can vary).  At the end of the iteration the system is demoed to the stakeholders to verify that the work that the team promised to do at the beginning of the iteration was in fact accomplished.

The Scrum construction life cycle of Figure 1, although attractive proves to be a bit naive in practice. It's actually the result of initial requirements envisioning early in the project.  You don't only implement requirements during an iteration, you also fix defects (disciplined agile teams have a parallel testing effort during construction iterations where these defects are found), go on holiday, support other teams (perhaps as reviewers of their work), and so on.  So you really need to expand the product backlog into a full work items list.  You also release your system into production, often a complex endeavor.
A more realistic life cycle is captured in Figure 2, overviewing the full agile SDLC.  This SDLC is comprised of six phases: Iteration -1, Iteration 0/Warm Up, Construction, Release/End Game, Production, and Retirement.  Although many agile developers may balk at the idea of phases, it's been recognized that processes such as Extreme Programming (XP) and the Agile Unified Process (AUP) do in fact have phases.  The Disciplined Agile Delivery (DAD) lifecycle also includes phases (granted, I lead the development of DAD).
Figure 2
Figure 3
Figure 4

On the surface, the agile SDLC of Figure 4 looks very much like a traditional SDLC, but when you dive deeper you quickly discover that this isn't the case.  This is particularly true when you consider the detailed view of Figure 2.  Because the agile SDLC is highly collaborative, iterative, and incremental, the roles which people take are much more robust than on traditional projects.  In the traditional world a business analyst creates a requirements model that is handed off to an architect who creates design models that are handed off to a coder who writes programs which are handed off to a tester and so on.  On an agile project, developers work closely with their stakeholders to understand their needs, they pair together to implement and test their solution, and the solution is shown to the stakeholder for quick feedback.  Instead of specialists handing artifacts to one another, and thereby injecting defects at every step along the way, agile developers are generalizing specialists with full life cycle skills.

2. Iteration-1: Pre-Project Planning
Iteration -1, the “pre-Inception phase” in the Enterprise Unified Process (EUP), is the pre-project aspects of portfolio management.  During this phase you will:
  • Define the business opportunity:  You must consider the bigger business picture and focus on market concerns.  This includes exploring how the new functionality will improve your organization’s presence in the market, how it will impact profitability, and how it will impact the people within your organization.  This exploration effort should be brief, not all projects will make the initial cut so you only want to invest enough effort at this point to get a good “gut feel” for the business potential.  A good strategy is to follow Outside-In Development’s focus on identifying the potential stakeholders and their goals, key information to help identify the scope of the effort.
  • Identify a viable strategy for the project: There are several issues to consider when identifying a potential strategy for the project.  For example, do you build a new system or buy an existing package and modify it?  If you decide to build, do you do so onshore or offshore?  Will the work be solely done by your own development team, by a team from a system integrator (SI), or in partnership with the SI?  What development paradigm – traditional/waterfall, iterative, or agile – will you follow?  Will the team be co-located, near-located within the same geographic region, or far-located around the world?   As you can see there are many combinations of strategies available to you, and at this point in time you may only be able to narrow the range of the possibilities and be forced to leave the final decision to the project team in future iterations.
  • Assess the feasibility: During Iteration -1 you will want to do just enough feasibility analysis to determine if it makes sense to invest in the potential project.  Depending on the situation you may choose to invest very little effort in considering feasibility, for many systems just considering these issues for a few minutes is sufficient for now, and for some systems you may choose to invest days if not weeks exploring feasibility.  Many organizations choose to do just a little bit of feasibility analysis during Iteration -1, and then if they decide to fund the project they will invest more effort during Iteration 0.  In my experience you need to consider four issues when exploring feasibility: economic feasibility, technical feasibility, operational feasibility, and political feasibility.   Your feasibility analysis efforts should also produce a list of potential risks and criteria against which to make go/no-go decisions at key milestone points during your project.  Remember that agile teams only have a success rate of 72%, compared to 63% for traditional projects, implying that almost 30% of agile projects are considered failures.  Therefore you should question the feasibility of the project throughout the life cycle to reduce overall project risk.
Iteration -1 activities can and should be as agile as you can possibly make them – you should collaborate with stakeholders who are knowledgeable enough and motivated enough to consider this potential project, and invest just enough effort to decide whether to consider funding the effort further. 

3. Iteration 0/Warm up: Project Initiation
The first week or so of an agile project is often referred to as “Iteration 0” (or "Cycle 0") or in The Eclipse Way the "Warm Up" iteration.  Your goal during this period is to initiate the project by:
  • Garnering initial support and funding for the project: This may have been already achieved via your portfolio management efforts, but realistically at some point somebody is going to ask what are we going to get, how much is it going to cost, and how long is it going to take. You need to be able to provide reasonable, although potentially evolving, answers to these questions if you're going to get permission to work on the project.  You may need to justify your project via a feasibility study.
  • Actively working with stakeholders to initially model the scope of the system: As you see in Figure 5, during Iteration 0 agilists will do some initial requirements modeling with their stakeholders to identify the initial, albeit high-level, requirements for the system.  To promote active stakeholder participation you should use inclusive tools, such as index cards and white boards to do this modeling – our goal is to understand the problem and solution domain, not to create mounds of documentation.  The details of these requirements are modeled on a just in time (JIT) basis in model storming sessions during the development cycles.
  • Starting to build the team: Although your team will evolve over time, at the beginning of a development project you will need to start identifying key team members and start bringing them onto the team.  At this point you will want to have at least one or two senior developers, the project coach/manager, and one or more stakeholder representatives.
  • Modeling an initial architecture for the system: Early in the project you need to have at least a general idea of how you're going to build the system.  Is it a mainframe COBOL application?  A .Net application?  J2EE?  Something else?  As you see in Figure 5, the developers on the project will get together in a room, often around a whiteboard, discuss and then sketch out a potential architecture for the system.  This architecture will likely evolve over time, it will not be very detailed yet (it just needs to be good enough for now), and very little documentation (if any) needs to be written.  The goal is to identify an architectural strategy, not write mounds of documentation.  You will work through the design details later during development cycles in model storming sessions and via TDD.
  • Setting up the environment: You need workstations, development tools, a work area, .. for the team.  You don't need access to all of these resources right away, although at the start of the project you will need most of them.
  • Estimating the project: You'll need to put together an initial estimate for your agile project based on the initial requirements, the initial architecture, and the skills of your team.  This estimate will evolve throughout the project.
Figure 5
4. Construction Iterations:
During construction iterations agilists incrementally deliver high-quality working software which meets the changing needs of our stakeholders, as overviewed in Figure 6.
Figure 6

We achieve this by:
  • Collaborating closely with both our stakeholders and with other developers: We do this to reduce risk through tightening the feedback cycle and by improving communication via closer collaboration.
  • Implementing functionality in priority order: We allow our stakeholders to change the requirements to meet their exact needs as they see fit.  The stakeholders are given complete control over the scope, budget, and schedule – they get what they want and spend as much as they want for as long as they’re willing to do so.
  • Analyzing and designing: We analyze individual requirements by model storming on a just-in-time (JIT) basis for a few minutes before spending several hours or days implementing the requirement.  Guided by our architecture models, often hand-sketched diagrams, we take a highly-collaborative, test-driven design (TDD) approach to development (see Figure 7) where we iteratively write a test and then write just enough production code to fulfill that test.  Sometimes, particularly for complex requirements or for design issues requiring significant forethought, we will model just a bit ahead to ensure that the developers don't need to wait for information.
  • Ensuring quality: Agilists are firm believers in following guidance such as coding conventions and modeling style guidelines.  Furthermore, we refactor our application code and/or our database schema as required to ensure that we have the best design possible.
  • Regularly delivering working software: At the end of each development cycle/iteration you should have a partial, working system to show people.  Better yet, you should be able to deploy this software into a pre-production testing/QA sandbox for system integration testing.  The sooner, and more often, you can do such testing the better.  See Agile Testing and Quality Strategies: Discipline Over Rhetoric for more thoughts.
  • Testing, testing, and yes, testing: As you can see in Figure 8 agilists do a significant amount of testing throughout construction.  As part of construction we do confirmatory testing, a combination of developer testing at the design level and agile acceptance testing at the requirements level.  In many ways confirmatory testing is the agile equivalent of "testing against the specification" because it confirms that the software which we've built to date works according to the intent of our stakeholders as we understand it today. This isn't the complete testing picture: Because we are producing working software on a regular basis, at least at the end of each iteration although ideally more often, we're in a position to deliver that working software to an independent test team for investigative testing.  Investigative testing is done by test professionals who are good at finding defects which the developers have missed.  These defects might pertain to usability or integration problems, sometimes they pertain to requirements which we missed or simply haven't implemented yet, and sometimes they pertain to things we simply didn't think to test for.
Figure 7: Taking "test first" approach to construction
Figure 8: Testing during Construction Iteration
5. Release Iteration(s): The "End Game"
During the release iteration(s), also known as the "end game", we transition the system into production.  Note that for complex systems the end game may prove to be several iterations, although if you've done system and user testing during construction iterations (as indicated by Figure 6) this likely won't be the case.  As you can see in Figure 9, there are several important aspects to this effort:
  • Final testing of the system: Final system and acceptance testing should be performed at this point, although as I pointed out earlier the majority of testing should be done during construction iterations.  You may choose to pilot/beta test your system with a subset of the eventual end users.  Check Full Life Cycle Object-Oriented Testing (FLOOT) method for more thoughts on testing.
  • Rework: There is no value testing the system if you don't plan to act on the defects that you find.  You may not address all defects, but you should expect to fix some of them.
  • Finalization of any system and user documentation: Some documentation may have been written during construction iterations, but it typically isn't finalized until the system release itself has been finalized, to avoid unnecessary rework.  Note that documentation is treated like any other requirement: it should be costed, prioritized, and created only if stakeholders are willing to invest in it. Agilists believe that if stakeholders are smart enough to earn the money then they must also be smart enough to spend it appropriately.
  • Training.
  • Deploy the system.
Figure 9: The AUP Deployment discipline Workflow
6. Production:
The goal of the Production Phase is to keep systems useful and productive after they have been deployed to the user community. This process will differ from organization to organization and perhaps even from system to system, but the fundamental goal remains the same: keep the system running and help users to use it. Shrink-wrapped software, for example, will not require operational support but will typically require a help desk to assist users. Organizations that implement systems for internal use will usually require an operational staff to run and monitor systems.

This phase ends when the release of a system has been slated for retirement or when support for that release has ended. The latter may occur immediately upon the release of a newer version, some time after the release of a newer version, or simply on a date that the business has decided to end support.  This phase typically has one iteration because it applies to the operational lifetime of a single release of your software. There may be multiple iterations, however, if you defined multiple levels of support that your software will have over time.

7. Retirement:
The goal of the Retirement Phase is the removal of a system release from production, and occasionally even the complete system itself, an activity also known as system decommissioning or system sunsetting.  Retirement of systems is a serious issue faced by many organizations today as legacy systems are removed and replaced by new systems.  You must strive to complete this effort with minimal impact to business operations.  If you have tried this in the past, you know how complex it can be to execute successfully.  System releases are removed from production for several reasons, including:
  • The system is being completely replaced: It is not uncommon to see homegrown systems for human resource functions being replaced by COTS systems such as SAP or Oracle Financials.
  • The release is no longer to be supported: Sometimes organizations will have several releases in production at the same time, and over time older releases are dropped.
  • The system is no longer needed to support the current business model: An organization may explore a new business area by developing new systems only to discover that it is not cost effective.
  • The system is redundant: Organizations that grow by mergers and/or acquisitions often end up with redundant systems as they consolidate their operations.
  • The system has become obsolete.
In most cases, the retirement of older releases is handled during the deployment of a newer version of the system and is a relatively simple exercise.  Typically, the deployment of the new release includes steps to remove the previous release.  There are times, however, when you do not retire a release simply because you deploy a newer version.  This may happen if you cannot require users to migrate to the new release or if you must maintain an older system for backward compatibility.

New in Spring 3

Changes in Spring 3

The framework modules have been revised and are now managed separately with one source-tree per module jar:
1. org.springframework.aop
2. org.springframework.beans
3. org.springframework.context
4. org.springframework.context.support
5. org.springframework.expression
6. org.springframework.instrument
7. org.springframework.jdbc
8. org.springframework.jms
9. org.springframework.orm
10. org.springframework.oxm
11. org.springframework.test
12. org.springframework.transaction
13. org.springframework.web
14. org.springframework.web.portlet
15. org.springframework.web.servlet
16. org.springframework.web.struts
Note: The spring.jar artifact that contained almost the entire framework is no longer provided.


We are now using a new Spring build system, as first seen in Spring Web Flow 2.0. This gives us:
1. Ivy-based "Spring Build" system
2. consistent deployment procedure
3. consistent dependency management
4. consistent generation of OSGi manifests


Overview of new Features
This is a list of new features for Spring 3.0. We will cover these features in more detail later in this section.
• Spring Expression Language
• IoC enhancements/Java based bean metadata
• General-purpose type conversion system and field formatting system
• Object to XML mapping functionality (OXM) moved from Spring Web Services project
• Comprehensive REST support
• @MVC additions
• Declarative model validation
• Early support for Java EE 6
• Embedded database support


Core APIs updated for Java 5:
BeanFactory interface returns typed bean instances as far as possible:
• <T> T getBean(Class<T> requiredType)
• <T> T getBean(String name, Class<T> requiredType)
• <T> Map<String, T> getBeansOfType(Class<T> type)
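For illustration, a minimal lookup sketch; PaymentService and beans.xml are hypothetical placeholders, not from the Spring documentation:
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
import java.util.Map;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class TypedLookupDemo {

   // Hypothetical service interface, for illustration only.
   public interface PaymentService { void pay(); }

   public static void main(String[] args) {
      ApplicationContext ctx = new ClassPathXmlApplicationContext("beans.xml");
      // No casts needed with the Java 5 typed BeanFactory methods:
      PaymentService byType = ctx.getBean(PaymentService.class);
      PaymentService byName = ctx.getBean("paymentService", PaymentService.class);
      Map<String, PaymentService> all = ctx.getBeansOfType(PaymentService.class);
      System.out.println(all.keySet());
   }
}
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -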


Spring's TaskExecutor interface now extends java.util.concurrent.Executor.
• The new AsyncTaskExecutor subinterface supports standard Callables with Futures.
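A small sketch of submitting a Callable through an AsyncTaskExecutor, using ThreadPoolTaskExecutor (which implements it); the demo class name is ours:
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
import java.util.concurrent.Callable;
import java.util.concurrent.Future;
import org.springframework.core.task.AsyncTaskExecutor;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

public class SubmitDemo {
   public static void main(String[] args) throws Exception {
      ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
      executor.setCorePoolSize(2);
      executor.afterPropertiesSet();  // initialize manually, outside a container

      AsyncTaskExecutor async = executor;
      Future<String> result = async.submit(new Callable<String>() {
         public String call() { return "done"; }
      });
      System.out.println(result.get());  // blocks until the task completes
      executor.shutdown();
   }
}
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -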


New Java 5 based converter API and SPI:
• stateless ConversionService and Converters
• superseding standard JDK PropertyEditors
• typed ApplicationListener


Spring Expression Language:
Spring introduces an expression language which is similar to Unified EL in its syntax but offers significantly more features. The expression language can be used when defining XML and Annotation based bean definitions and also serves as the foundation for expression language support across the Spring portfolio.


The Spring Expression Language was created to provide the Spring community a single, well supported expression language that can be used across all the products in the Spring portfolio.


The following is an example of how the Expression Language can be used to configure some properties of a database setup:
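A sketch of the XML form, assuming the same RewardsTestDatabase bean used in the annotation example below (the org.example package is an assumption):
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
<bean class="org.example.RewardsTestDatabase">
   <property name="databaseName" value="#{systemProperties.databaseName}"/>
   <property name="keyGenerator" value="#{strategyBean.databaseKeyGenerator}"/>
</bean>
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -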
This functionality is also available if you prefer to configure your components using annotations:
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
@Repository
public class RewardsTestDatabase {

   @Value("#{systemProperties.databaseName}")
   public void setDatabaseName(String dbName) { … }

   @Value("#{strategyBean.databaseKeyGenerator}")
   public void setKeyGenerator(KeyGenerator kg) { … }
}
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
The Inversion of Control (IoC) container:
• Java based Bean metadata
Some core features from the JavaConfig project have now been added to the Spring Framework. This means that the following annotations are now directly supported:
@Configuration
@Bean
@DependsOn
@Primary
@Lazy
@Import
@ImportResource
@Value
Here is an example of a Java class providing basic configuration using the new JavaConfig features:
- - - - - - - - - - - - - - - - - - - - - - - - - -- - - - - - - - - - - - - - - - - - - - - -
package org.example.config;

import javax.sql.DataSource;
import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;
import org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean;

@Configuration
public class AppConfig {
   private @Value("#{jdbcProperties.url}") String jdbcUrl;
   private @Value("#{jdbcProperties.username}") String username;
   private @Value("#{jdbcProperties.password}") String password;


   @Bean
   public FooService fooService() {
      return new FooServiceImpl(fooRepository());
   }
   @Bean
   public FooRepository fooRepository() {
      return new HibernateFooRepository(sessionFactory());
   }
   @Bean
   public SessionFactory sessionFactory() {
      // wire up a session factory
      AnnotationSessionFactoryBean asFactoryBean = new AnnotationSessionFactoryBean();
      asFactoryBean.setDataSource(dataSource());
      // additional config
      return asFactoryBean.getObject();
   }
   @Bean
   public DataSource dataSource() {
      return new DriverManagerDataSource(jdbcUrl, username, password);
   }
}
- - - - - - - - - - - - - - - - - - - - - - - - - -- - - - - - - - - - - - - - - - - - - -
To get this to work you need to add a component-scanning entry to your minimal application context XML file, as sketched below.
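A minimal sketch of that entry, assuming the jdbcProperties referenced above come from a properties file on the classpath (the context and util XML namespaces must be declared on the beans element):
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
<context:component-scan base-package="org.example.config"/>
<util:properties id="jdbcProperties"
                 location="classpath:org/example/config/jdbc.properties"/>
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -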
Or you can bootstrap a @Configuration class directly using AnnotationConfigApplicationContext:
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
public static void main(String[] args) {
   ApplicationContext ctx = new AnnotationConfigApplicationContext(AppConfig.class);
   FooService fooService = ctx.getBean(FooService.class);
   fooService.doStuff();
}
- - - - - - - - - - - - - - - - - - - - - - - - - -- - - - - - - - - - - - - - - - - - -
• Defining bean metadata within components:
@Bean annotated methods are also supported inside Spring components. They contribute a factory bean definition to the container.
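A minimal sketch, with hypothetical FactoryComponent and PublicService names:
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Component;

@Component
public class FactoryComponent {

   // Contributes a bean definition to the container, just like a
   // @Bean method inside a @Configuration class would.
   @Bean
   public PublicService publicService() {
      return new PublicService();
   }

   // Hypothetical service type, for illustration only.
   public static class PublicService { }
}
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -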


• The Data Tier: Object-to-XML mapping functionality (OXM) from the Spring Web Services project has now been moved into the core Spring Framework.


• The Web Tier: The most exciting new feature for the Web Tier is the support for building RESTful web services and web applications. There are also some new annotations that can be used in any web application.
      o Comprehensive REST support:
Server-side support for building RESTful applications has been provided as an extension of the existing annotation-driven MVC web framework. Client-side support is provided by the RestTemplate class, in the spirit of other template classes such as JdbcTemplate and JmsTemplate. Both the server- and client-side REST functionality make use of HttpMessageConverters to facilitate the conversion between objects and their representation in HTTP requests and responses.
The MarshallingHttpMessageConverter uses the Object to XML mapping functionality mentioned earlier.
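A small client-side sketch using RestTemplate; the URL and template variables are illustrative only:
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
import org.springframework.web.client.RestTemplate;

public class RestClientDemo {
   public static void main(String[] args) {
      RestTemplate restTemplate = new RestTemplate();
      // URI template variables are expanded into the {placeholders};
      // the host and path here are hypothetical.
      String result = restTemplate.getForObject(
            "http://example.com/hotels/{hotel}/bookings/{booking}",
            String.class, "42", "21");
      System.out.println(result);
   }
}
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -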
    o @MVC additions:
An mvc namespace has been introduced that greatly simplifies Spring MVC configuration.
Additional annotations such as @CookieValue and @RequestHeader have been added.
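A small handler sketch using both annotations; the controller name and mapping path are illustrative:
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.CookieValue;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class HeaderDemoController {

   // Binds the Accept header and the JSESSIONID cookie directly
   // to method parameters.
   @RequestMapping("/echo")
   public @ResponseBody String echo(
         @RequestHeader("Accept") String accept,
         @CookieValue("JSESSIONID") String sessionId) {
      return accept + " / " + sessionId;
   }
}
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -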


• Declarative Model Validation: Several validation enhancements, including JSR 303 support that uses Hibernate Validator as the default provider.
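A minimal sketch of a JSR 303-annotated model class (Customer is a hypothetical example; the constraints are checked by the configured provider, Hibernate Validator by default):
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

public class Customer {

   // Declarative constraints on the model itself.
   @NotNull
   @Size(min = 2, max = 30)
   private String name;

   public String getName() { return name; }
   public void setName(String name) { this.name = name; }
}
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
In a Spring MVC handler you would then annotate the corresponding method parameter with @Valid to trigger validation.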


• Early Support for Java EE 6:
We provide support for asynchronous method invocations through the use of the new @Async annotation (or EJB 3.1's @Asynchronous annotation), along with early support for JSR 303, JSF 2.0, JPA 2.0, etc. A small @Async sketch follows.
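A minimal @Async sketch; MailService is a hypothetical example, and enabling the behavior with <task:annotation-driven/> in the XML configuration is assumed:
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
import java.util.concurrent.Future;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.AsyncResult;
import org.springframework.stereotype.Service;

@Service
public class MailService {

   // Runs on a task-executor thread; the caller gets a Future back
   // immediately instead of waiting for the slow work.
   @Async
   public Future<Boolean> send(String to) {
      // ... do the slow work here ...
      return new AsyncResult<Boolean>(true);
   }
}
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -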


• Support for embedded database: Convenient support for embedded Java database engines, including HSQL, H2, and Derby, is now provided.
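A minimal sketch using the embedded-database builder API; the SQL scripts are assumed to exist on the classpath:
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabase;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;

public class EmbeddedDbDemo {
   public static void main(String[] args) {
      // schema.sql / test-data.sql are assumed to be on the classpath.
      EmbeddedDatabase db = new EmbeddedDatabaseBuilder()
            .setType(EmbeddedDatabaseType.H2)
            .addScript("schema.sql")
            .addScript("test-data.sql")
            .build();
      // ... use db wherever a javax.sql.DataSource is needed ...
      db.shutdown();
   }
}
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -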

Tuesday, July 27, 2010

Oracle Weblogic 11/12


Install Java
Download the Java bin file from the Oracle Java web site, then run the following command to install it on a Linux system.
$ sudo ./jdk-6u32-linux-x64.bin

After installation, suppose Java is installed in the following directory, giving a JAVA_HOME of:
JAVA_HOME=/var/cemp/hsd/mediation/jdk1.6.0_32
Set the environment variables:
$ JAVA_HOME=/var/cemp/hsd/mediation/jdk1.6.0_32
$ export JAVA_HOME
$ PATH=/usr/local/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/opt/dell/srvadmin/bin:/home/cemp/bin:/opt/oracle/product/10.2.0/db_1/bin:.:
$ PATH=$JAVA_HOME/bin:$PATH
$ export PATH
$ echo $PATH

Install Weblogic
Download Weblogic installable from Oracle Weblogic Web Site.
$ sudo java -d64 -Djava.io.tmpdir=/logs -jar wls1036_generic.jar 
(Note: the temp location is changed with -Djava.io.tmpdir to work around a space restriction on the root partition.)

For example, we want to install WebLogic (the Middleware Home) at the following location:
(Middleware Home) MW_Home=/var/cemp/hsd/mediation/bea11
   1|WebLogic Server: [/var/cemp/hsd/mediation/bea11/wlserver_10.3]
   2|Oracle Coherence: [/var/cemp/hsd/mediation/bea11/coherence_3.7]

$ cd ${MW_Home}/utils/config/10.3
$ ./setHomeDirs.sh

- - - - - - - - - - - - - - 
MW_HOME="/var/cemp/hsd/mediation/bea11"
WL_HOME="/var/cemp/hsd/mediation/bea11/wlserver_10.3"

- - - - - - - - - - - - - -

Domain Creation
Go to following directory:
$ cd ${WL_HOME}/wlserver_10.3/common/bin
 $ ./config.sh

<------------------- Fusion Middleware Configuration Wizard ------------------>
Welcome:
--------
Choose between creating and extending a domain. Based on your selection,
the Configuration Wizard guides you through the steps to generate a new or
extend an existing domain.
*  1|Create a new WebLogic domain
    |    Create a WebLogic domain in your projects directory.
   2|Extend an existing WebLogic domain
    |    Use this option to add new components to an existing domain and modify
    |    configuration settings.
1. Select Domain Source: Choose Weblogic Platform components
2. Application Template Selection: Available Template
    Available Templates
    |_____Basic WebLogic Server Domain - 10.3.6.0 [wlserver_10.3]x
    |_____Basic WebLogic SIP Server Domain - 10.3.6.0 [wlserver_10.3] [2] x
    |_____WebLogic Advanced Web Services for JAX-RPC Extension - 10.3.6.0 [wlserver_10.3] [3] x
    |_____WebLogic Advanced Web Services for JAX-WS Extension - 10.3.6.0 [wlserver_10.3] [4] x

3. Edit Domain Information:
---------------------------
    |  Name   | Value |
  __|_________|_______|
   1| *Name:  | aspen |

4. Select the target domain directory for this domain:
Target Location: "/var/cemp/hsd/mediation/bea11/user_projects/domains"

5. Configure Administrator User Name and Password:
    |          Name            |    Value    |
  __|__________________________|_____________|
   1| *Name:                   | weblogic    |
   2| *User password:          | *********** |
   3| *Confirm user password:  | *********** |
   4| Description:             | asusual     |
-------------------------------------------------
6. Domain Mode Configuration:
*  1|Development Mode
    2|Production Mode

7. Java SDK Selection:
 * 1|Sun SDK 1.6.0_32 @ /var/cemp/hsd/mediation/jdk1.6.0_32
    2|Other Java SDK

8. Select Optional Configuration: [optional: these can be configured here or later]
   1|Administration Server [ ]
   2| [ ]
   3|Managed Servers, Clusters and Machines [ ]
   4|Deployments and Services [ ]
   5|JMS File Store [ ]
   6|RDBMS Security Store [ ]

Creating Domain...
0%          25%          50%          75%          100%
[------------|------------|------------|------------]
[***************************************************]
**** Domain Created Successfully! ****


This will create the domain aspen.

Start WebLogic & Node Manager
Domain Home: ${MW_Home}/user_projects/domains
Domain Name: aspen
Start Node Manager:
$ cd ${MW_Home}/wlserver_10.3/server/bin
bin]$ nohup ./startNodeManager.sh > nohup.out &
Start WebLogic Admin Server:
$ cd ${Domain_Home}/aspen/bin
$ . ./setDomainEnv.sh
bin]$ nohup ./startWebLogic.sh > nohup.out &




Enroll the machine with the Node Manager using WLST:

JAVA_HOME=/home/aupm/neps/jdk1.6.0_33
PATH=$JAVA_HOME/bin:$PATH
export PATH

export CLASSPATH=/home/aupm/neps/weblogic_10.3.6/wlserver_10.3/server/lib/weblogic.jar:

java weblogic.WLST
connect('weblogic','weblogic123','t3://vivpfmwc03.westchester.pa.bo.comcast.net:9001')
nmEnroll('/home/aupm/neps/weblogic_10.3.6/user_projects/domains/base_domain','/home/aupm/neps/weblogic_10.3.6/wlserver_10.3/common/nodemanager')
disconnect()


dumpStack()   # a very useful WLST command at times :)


Weblogic Console
http://AdminAddress:AdminPort/console 
Domain: aspen
Console User / Password: weblogic /weblogic123

0.1. Create a New Machine:
A. Name: Machine1 / OS: UNIX
B. Type: SSL, Listen Address: 147.191.113.124, Listen Port: 5556, Debug Enabled: Checked


1.1. Create a New Cluster:
A. Name: Cluster-1, Messaging Mode: Unicast, Multicast Address: 239.192.0.0, Multicast Port: 7001
1.2. Create a New Server:
A. Server Name: Server-1, Server Listen Address: 147.191.113.124, Server Listen Port: 7011, Make this server belong to an existing cluster: Yes (Cluster-1, from 1.1)
#1. Cluster-1 - Server-1 : Coherence
#2. Cluster-2 - Server-2 : DataSource
#3. Cluster-3 - Server-3 : Application Deployment

2.1. Create Coherence Cluster Configuration:
A. Name: Coherence-Cluster
B. Unicast Listen Address: 147.191.113.124, Unicast Listen Port: 8888, Multicast Listen Address: 231.1.1.1, Multicast Listen Port: 7777
C. Target: 1.1.Name (Cluster-1)
2.2. Create Coherence Server Configuration:

A. Name: Coherence-Server-1, Machine: 0.1.Name, Cluster: 2.1.Name, Unicast Listen Address: 147.191.113.124, Unicast Listen Port: 8888, Unicast Port Auto Adjust: Checked

Admin Port: 7001
Cluster-1-Server-1: Coherence: 7011, 7012 (https)
Cluster-2-Server-2: Coherence: 7021, 7022 (https)
Cluster-3-Server-3: Coherence: 7031, 7032(https)


${Domain_Home}/aspen/servers/Server-1
${Domain_Home}/aspen/servers/Server-2
${Domain_Home}/aspen/servers/Server-3
${Domain_Home}/aspen/servers/AdminServer
${Domain_Home}/aspen/servers/domain_bak


Common Errors
Problem 0: [Management:141266] Parsing Failure in config.xml: java.lang.AssertionError: java.lang.ClassNotFoundException: com.bea.wcp.sip.management.descriptor.beans.SipServerBean, while starting the Managed Server.
Solution: Update nodemanager.properties: ${MW_Home}/wlserver_10.3/common/nodemanager/nodemanager.properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
DomainsFile=/var/cemp/hsd/mediation/bea11/wlserver_10.3/common/nodemanager/nodemanager.domains
StartScriptEnabled=true
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Explanation:
You have to make sure this class is on the classpath: com.bea.wcp.sip.management.descriptor.beans.SipServerBean.
You have to edit the nodemanager.properties file and set StartScriptEnabled=true instead of the default false.
The nodemanager.properties file is usually located in ${MW_Home}/wlserver_10.3/common/nodemanager. The Node Manager then uses the start script (usually startWebLogic), which calls setDomainEnv, where the classpath for the SIP server is set. When you start a managed server with startManagedWebLogic, setDomainEnv is likewise called, so the classpath is set.


Problem 1: Exception in thread "main" java.lang.NoClassDefFoundError: weblogic/nodemanager/server/provider/WeblogicCacheServer
Solution 1: The process should be able to find the correct classpath by default, but we might hit this issue on some *nix systems where something is already set in the classpath (i.e. it is not empty). The solution is to add coherence.jar and the Coherence server module jar to the classpath, as follows:
${COHERENCE_HOME}/lib/coherence.jar:${COHERENCE_HOME}/modules/features/weblogic.server.modules.coherence.server_.jar: [along with any other entries we want on the classpath]
Once the server has started, the log file for the Coherence server exists on the file system at: $DOMAIN_HOME/servers_coherence/{COHERENCE_SERVER_NAME}/logs/{COHERENCE_SERVER_NAME}.out
The PID of the server is found here; I normally add it to the log file, but that is not possible when starting from the console:
$DOMAIN_HOME/servers_coherence/{COHERENCE_SERVER_NAME}/data/nodemanager/{COHERENCE_SERVER_NAME}.pid

Problem 2: Coherence*Web exception on RedHat Linux, but not on Windows or Debian: [weblogic.application.ModuleException: Missing Coherence jar or WebLogic Coherence Integration jar]
Solution 2: Looking at the ModuleException: coherence.jar is part of the counter.war application, and the WebLogic Coherence integration is declared via the active-cache-[version].jar library, which is referenced by the MANIFEST.MF of the counter application.
The entire issue was that the MANIFEST.MF in counter.war was named "manifest.mf", so on RedHat Linux (with its case-sensitive filesystem) it could not be recognized, simply due to the naming. So make sure that the manifest file is named "MANIFEST.MF" and that it references active-cache.jar, which should be deployed as a library on the servers (it can be bundled in Application/APP-INF/lib or deployed as a separate application library on the same cluster).
This was solved by executing the following steps:
a) Explode the counter.war application.
b) Go to the counter.war application.
c) Rename manifest.mf to MANIFEST.MF.
d) Deploy as an open directory, or compress back to a war file and deploy.

Problem 3: Sample configuration for WebLogic 12c
4 Managed Servers on 4 Machines:
Managed Server 1: Server 1 => Arguments:
-Dtangosol.coherence.override=/home/aupm/neps/usageCache/client/tangosol-coherence-p1-override.xml -Dtangosol.coherence.cacheconfig=/home/aupm/neps/usageCache/usage-cache-config.xml
-Desp.propDir=/home/aupm/neps/usageCache/props
-Desp.logDir=/home/aupm/neps/usageCache/logs
-Dtangosol.coherence.management=all
-Dcom.sun.management.jmxremote


Coherence 1: => Classpath:
/home/aupm/neps/usageCache/UsageCacheModel-12.08-SNAPSHOT.jar:/home/aupm/neps/weblogic_10.3.6/coherence_3.7/lib/coherence.jar:/home/aupm/neps/weblogic_10.3.6/modules/features/weblogic.server.modules.coherence.server_10.3.5.0.jar:
Coherence 1: => Arguments:
-Dtangosol.coherence.override=/home/aupm/neps/usageCache/server/tangosol-coherence-p1-override.xml
-Dtangosol.coherence.cacheconfig=/home/aupm/neps/usageCache/usage-cache-config.xml
-Dtangosol.coherence.cluster=aupm_usage_service
-Dtangosol.coherence.distributed.localstorage=true


Managed Server 2: Server 2 => Arguments:
-Dtangosol.coherence.override=/home/aupm/neps/usageCache/client/tangosol-coherence-p2-override.xml -Dtangosol.coherence.cacheconfig=/home/aupm/neps/usageCache/usage-cache-config.xml
-Desp.propDir=/home/aupm/neps/usageCache/props
-Desp.logDir=/home/aupm/neps/usageCache/logs
-Dtangosol.coherence.management=local-only
-Dtangosol.coherence.management.remote=true


Coherence 2: => Classpath:
/home/aupm/neps/usageCache/UsageCacheModel-12.08-SNAPSHOT.jar:/home/aupm/neps/weblogic_10.3.6/coherence_3.7/lib/coherence.jar:/home/aupm/neps/weblogic_10.3.6/modules/features/weblogic.server.modules.coherence.server_10.3.5.0.jar:

Coherence 2: => Arguments:
-Dtangosol.coherence.override=/home/aupm/neps/usageCache/server/tangosol-coherence-p2-override.xml
-Dtangosol.coherence.cacheconfig=/home/aupm/neps/usageCache/usage-cache-config.xml
-Dtangosol.coherence.cluster=aupm_usage_service
-Dtangosol.coherence.distributed.localstorage=true


Managed Server 3: Server 3 => Arguments:
-Dtangosol.coherence.override=/home/aupm/neps/usageCache/client/tangosol-coherence-p3-override.xml -Dtangosol.coherence.cacheconfig=/home/aupm/neps/usageCache/usage-cache-config.xml -Desp.propDir=/home/aupm/neps/usageCache/props -Desp.logDir=/home/aupm/neps/usageCache/logs
-Dtangosol.coherence.management=local-only
-Dtangosol.coherence.management.remote=true


Coherence 3: => Classpath:
/home/aupm/neps/usageCache/UsageCacheModel-12.08-SNAPSHOT.jar:/home/aupm/neps/weblogic_10.3.6/coherence_3.7/lib/coherence.jar:/home/aupm/neps/weblogic_10.3.6/modules/features/weblogic.server.modules.coherence.server_10.3.5.0.jar:

Coherence 3: => Arguments:
-Dtangosol.coherence.override=/home/aupm/neps/usageCache/server/tangosol-coherence-p3-override.xml
-Dtangosol.coherence.cacheconfig=/home/aupm/neps/usageCache/usage-cache-config.xml
-Dtangosol.coherence.cluster=aupm_usage_service
-Dtangosol.coherence.distributed.localstorage=true


Managed Server 4: Server 4 => Arguments:
-Dtangosol.coherence.override=/home/aupm/neps/usageCache/client/tangosol-coherence-p4-override.xml -Dtangosol.coherence.cacheconfig=/home/aupm/neps/usageCache/usage-cache-config.xml -Desp.propDir=/home/aupm/neps/usageCache/props -Desp.logDir=/home/aupm/neps/usageCache/logs
-Dtangosol.coherence.management=local-only
-Dtangosol.coherence.management.remote=true


Coherence 4:=> Classpath:
/home/aupm/neps/usageCache/UsageCacheModel-12.08-SNAPSHOT.jar:/home/aupm/neps/weblogic_10.3.6/coherence_3.7/lib/coherence.jar:/home/aupm/neps/weblogic_10.3.6/modules/features/weblogic.server.modules.coherence.server_10.3.5.0.jar:

Coherence 4: => Arguments:
-Dtangosol.coherence.override=/home/aupm/neps/usageCache/server/tangosol-coherence-p4-override.xml
-Dtangosol.coherence.cacheconfig=/home/aupm/neps/usageCache/usage-cache-config.xml
-Dtangosol.coherence.cluster=aupm_usage_service
-Dtangosol.coherence.distributed.localstorage=true


Problem 4: