Tuesday, July 29, 2008

2nd application, BOINC, works on my HPC


My 2nd application, BOINC, now works on my HPC.

Great!


I used this procedure: http://boinc.berkeley.edu/wiki/Installing_on_Linux

Monday, July 28, 2008

MySQL Administrator - my first package works!


Thanks to Mr. Lauro Atienza for providing useful info.

I've met a nice, beautiful lady who I think can help us with the backup solution for our HPC. Her name is Candy! Sweet name, great smile.

My first glance at Ganglia on my HPC

Thursday, July 24, 2008

My 1st Compute Node

I messed up my frontend server during my first attempt at installing my 1st compute node, but thanks to my diligence I was able to successfully install my first compute node.

Lessons learned:

1. Create a clone of the VM frontend in my ESX server.
2. Apply the service pack on my VM frontend server.
3. Use PXE on my first compute node.


WHAT'S NEXT?

Monday, July 21, 2008

My First HPC ROCKS Frontend Server

"It tooks the Romans in 5 years to build the Rome- It takes me 5 days to build My First HPC ROCKS Frontend Server"


After so many tries, I have finally installed my first HPC Rocks frontend server, running as a virtual machine under VMware ESX on a 4-CPU AMD Opteron 64 host.

Lessons learned:

I did not use the 4-CD Rocks Rolls; instead I downloaded the Rocks 5 ISO image from the rocksclusters.org FTP site.

During installation, do not choose automatic partitioning; it will hang on the 4th CD.

By default, after the installation the frontend server is running dhcpd on eth0. Be careful not to connect it to your production network, where my IP DHCP Commander is broadcasting, or you will end up with a rogue DHCP server.

Okay!!!! I need to go back and install my 1st compute node.

Thursday, July 3, 2008

Summary of the meeting between ITS and CRIL about IRRI's HPC:

Review of hardware, scheduled for Thursday 8:30 am with Carlos Ortiz
  1. Identify apps installed outside the standard HPC setup
    • LSF, version 6.0
    • Blast (original Paracel Blast, version 1.6.1)
    • Blast NCBI
    • R statistics (define desirable version with Ramil Mauleon)
    • Victor Ulat will ask users about any other apps missing from this list.
  2. Test new Rocks version with the current applications (3 weeks)
  3. Make backup of the HPC
    • Clean up unused accounts and data; Carlos Ortiz is working on this issue
    • Save config files
  4. Reinstallation of the HPC (1 week); this time it includes disaster recovery and troubleshooting
  5. Documentation, throughout the process
    • For administrators, it will be produced throughout the process.
    • For end users, include a tutorial in coordination with CRIL.
Work team:
ITS: Boyet, Nanie, Denis Diaz
CRIL: Carlos Ortiz

Monday, June 30, 2008

HPC Architectures

As an example for comparing the architectures, we’ll be looking at computing A = A x B.


The oldest HPC architecture is the Vector Supercomputer. These were first introduced by Cray Research in the 1970s. A vector computer has a pipelined arithmetic unit that streams data from memory into the unit and back out to memory. The primary problem with vector supercomputers is that they are very expensive.
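
To make the A = A x B example concrete, here is a small C sketch (my own illustration, assuming A and B are arrays and A = A x B means an element-wise product) of the kind of loop a vector pipeline streams through:

    /* Illustrative only: the kind of loop a vector pipeline streams
       through, assuming A = A x B means an element-wise product. */
    #include <stdio.h>

    #define N 8

    int main(void)
    {
        double A[N], B[N];
        int i;

        for (i = 0; i < N; i++) {   /* set up some sample data */
            A[i] = i + 1.0;
            B[i] = 2.0;
        }

        /* On a vector machine this whole loop is one streaming
           operation: A and B flow from memory through the pipelined
           arithmetic unit and the results flow back out to memory. */
        for (i = 0; i < N; i++)
            A[i] = A[i] * B[i];

        for (i = 0; i < N; i++)
            printf("A[%d] = %g\n", i, A[i]);
        return 0;
    }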


The next HPC architecture is the Symmetric Multi Processor or SMP. An SMP connects multiple processors to a large shared memory. The programming model for an SMP is threads, one for each processor. However, multiple threads may need synchronisation.


For example, if I’m computing A = A x B, but you’re computing B, I have to wait until you’re finished before I can do my computation.


The problem with SMPs is that they are expensive to scale. That is, it is expensive to connect a large number of processors to the same shared memory.
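
Here is a rough C/OpenMP sketch of the threads model (my own illustration, again assuming an element-wise product): thread 1 computes B, a barrier provides the synchronisation, and only then does thread 0 compute A = A x B from the shared memory.

    /* A sketch of the SMP/threads model with shared memory.
       Compile with: gcc -fopenmp smp.c
       Assumes A = A x B is an element-wise product. */
    #include <omp.h>
    #include <stdio.h>

    #define N 8

    int main(void)
    {
        double A[N], B[N];
        int i;

        for (i = 0; i < N; i++)
            A[i] = i + 1.0;                 /* A starts with some values */

        #pragma omp parallel num_threads(2) shared(A, B) private(i)
        {
            /* "You" compute B: thread 1 fills the shared array B. */
            if (omp_get_thread_num() == 1)
                for (i = 0; i < N; i++)
                    B[i] = 2.0;

            /* Synchronisation: nobody reads B until it is finished. */
            #pragma omp barrier

            /* "I" compute A = A x B: thread 0 reads the shared B. */
            if (omp_get_thread_num() == 0)
                for (i = 0; i < N; i++)
                    A[i] = A[i] * B[i];
        }

        for (i = 0; i < N; i++)
            printf("A[%d] = %g\n", i, A[i]);
        return 0;
    }

Because the memory is shared, no data ever has to be copied; the only extra code is the barrier that makes me wait for you.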


The newest HPC architecture is the Cluster. A cluster consists of a bunch of separate computers, or nodes, connected by a network. The programming model for clusters is processes, one for each node. Processes share data by passing messages. For example, if I want to compute A = A x B, and I have A and you have B, then you must explicitly send a message with B in it to me through the network, so that I also have a copy of B and can compute A = A x B.


Clusters are the most cost-effective HPC architecture. The challenge with clusters is the explicit changes needed in the code to implement the message passing.
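
For the cluster model, here is a minimal MPI sketch in C (my own illustration, assuming rank 0 owns A and rank 1 owns B) that shows the explicit message passing: B must be sent over the network before A = A x B can be computed.

    /* A minimal sketch of the cluster/message-passing model.
       Compile with: mpicc cluster.c   Run with: mpirun -np 2 ./a.out
       Assumes A = A x B is an element-wise product, rank 0 owns A,
       rank 1 owns B (my assumptions, not from the post). */
    #include <mpi.h>
    #include <stdio.h>

    #define N 8

    int main(int argc, char **argv)
    {
        int rank, i;
        double A[N], B[N];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 1) {
            /* "You" have B: fill it and explicitly send it to me. */
            for (i = 0; i < N; i++)
                B[i] = 2.0;
            MPI_Send(B, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        } else if (rank == 0) {
            /* "I" have A: receive your B over the network ... */
            for (i = 0; i < N; i++)
                A[i] = i + 1.0;
            MPI_Recv(B, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

            /* ... and only then can I compute A = A x B. */
            for (i = 0; i < N; i++)
                A[i] = A[i] * B[i];

            for (i = 0; i < N; i++)
                printf("A[%d] = %g\n", i, A[i]);
        }

        MPI_Finalize();
        return 0;
    }

The multiply loop itself is unchanged from the shared-memory version; all the extra code is the explicit send and receive, which is exactly the challenge mentioned above.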


Clusters are the most widely deployed HPC architecture in the world today. They recently passed the 50% mark, they are continuing to grow, and they will reach 75 or 80% in the near future.

My HPC links

1. Rocks Tutorial
2. The CGIAR Global Cluster Grid of HPCs for Bioinformatics
3. Demonstrations of GCP Bioinformatics Products
4. ICRISAT HPC
5. cropforge gcphpcstructure
6. www.rocksclusters.org/rocks-documentation
7. www.bestgrid.org/index.php/Rocks_5.0_Installation