Tips and Tricks for Middleware installation
Installing the operating system
gLite middleware can be installed on the Red Hat Enterprise Linux distribution and its derivatives: gLite 3.1 can be installed on version 4 of the OS, while gLite 3.0 requires version 3.
Note that only CentOS and Scientific Linux still provide security updates for version 3 of the OS, so we recommend using one of them if you want to install gLite 3.0.
Suggestions
- All the middleware software is installed in /opt, so if you plan to use a separate partition for /opt assign at least 2 GB to it (the use of LVM is highly recommended; see the kickstart sketch below).
- Perform a custom package installation and add the Development Tools group.
- Java is essential software for the grid middleware.
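As a minimal sketch of the partitioning suggestion above (the volume group name and the sizes are only examples, adapt them to your hardware), the relevant kickstart lines could look like this:
# kickstart partitioning sketch: LVM with a dedicated /opt
part /boot --fstype ext3 --size=100
part pv.01 --size=1 --grow
volgroup vg00 pv.01
logvol /    --vgname=vg00 --name=root --size=8192 --fstype ext3
logvol /opt --vgname=vg00 --name=opt  --size=2048 --fstype ext3
logvol swap --vgname=vg00 --name=swap --size=2048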
Local Repository
If you plan to install (and subsequently update) a grid site, it is a good idea to create a local repository of the OS.
We suggest using the mrepo tool to create the repository.
With mrepo you can build a local APT/YUM RPM repository from local ISO files, downloaded updates, and extra packages from third-party repositories. It takes care of setting up the ISO files, downloading the RPMs, configuring HTTP access and providing PXE/TFTP resources for remote network installations.
Install and Configure mrepo
To install it you can just use yum:
root@localhost> yum install mrepo
The /etc/mrepo.conf file contains the general settings for mrepo.
###
### Configuration file for mrepo
###
[main]
srcdir = /var/rep
wwwdir = /var/www/html/rep
confdir = /etc/mrepo.conf.d
arch = i386
metadata = apt yum repomd
The /etc/httpd/conf.d/mrepo.conf file contains the web server settings for the published directory.
The /etc/mrepo.conf.d/*.conf files contain the individual repository configurations. You should split your repository configuration into several files.
To create a repository use:
root@localhost> mrepo -guv
This command downloads the files and creates the web tree (the default location is http://site/mrepo).
To update the repository use:
root@localhost> mrepo -g
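To keep the mirror up to date you may want to run this periodically; a possible cron entry (the binary path and the schedule are just an example) is:
# /etc/cron.d/mrepo -- regenerate the local repository every night at 03:30
30 3 * * * root /usr/bin/mrepo -g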
Mirror a SL4 repository
Create an /etc/mrepo.conf.d/sl4.conf file:
### Scientific Linux 4
[sl4X]
name = Scientific Linux 4.X
release = 4x
arch = i386 x86_64
os = http://linuxsoft.cern.ch/scientific/$release/$arch/apt/RPMS.os/
updates = http://linuxsoft.cern.ch/scientific/$release/$arch/apt/RPMS.updates/
contrib = http://linuxsoft.cern.ch/scientific/$release/$arch/apt/RPMS.contrib/
localrpms = file:///mnt/rep/localrpms/
Set up the kernel and initrd files
The kernel and the initrd have to be available via TFTP to start the installation over the network.
Create the directory for the kernel and initrd and download them from the OS distribution mirror.
NOTE: the following example refers to the i386 version; adjust the path and URL if you use a different arch.
mkdir -p /tftpboot/sl4X-i386
cd /tftpboot/sl4X-i386
wget http://linuxsoft.cern.ch/scientific/4x/i386/images/SL/pxeboot/initrd.img
wget http://linuxsoft.cern.ch/scientific/4x/i386/images/SL/pxeboot/vmlinuz
chmod 0644 vmlinuz initrd.img
If you rename these files or change the path, remember to update the corresponding pxelinux configuration file as well.
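A minimal pxelinux configuration entry for this kernel/initrd pair might look like the following sketch (the kickstart URL and host name are placeholders, and the paths assume the directory created above):
# /tftpboot/pxelinux.cfg/default -- sketch for the SL4 network installation
default sl4X-i386
prompt 1
timeout 100

label sl4X-i386
  kernel sl4X-i386/vmlinuz
  append initrd=sl4X-i386/initrd.img ks=http://yourserver.example.org/ks/sl4X.cfg ksdevice=eth0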
Anaconda files
The RPMS directory on the web server has already been prepared by mrepo with references to all the packages included in the repository; it will be the source of all packages for the installation. The base directory is also already present, but some additional files are needed for the network boot and installation sequence. These files can be downloaded manually from the OS repository (for example the one mirrored with mrepo) and are:
* comps.xml
* pkgorder
* hdlist
* hdlist2
* *.img
To download them:
mkdir -p /var/www/html/rep/sl4X-i386/SL/base
cd /var/www/html/rep/sl4X-i386/SL/base
wget -l1 -nd -c -r -R '*.html,*.gif' http://linuxsoft.cern.ch/scientific/4x/i386/SL/base/
mkdir /var/www/html/rep/sl4X-i386/SL/images
cd /var/www/html/rep/sl4X-i386/SL/images
wget -l1 -nd -c -r -R '*.html,*.gif' http://linuxsoft.cern.ch/scientific/4x/i386/images/SL/boot.iso
In order to use a kickstart file with an HTTP installation it is better to create an additional directory structure slightly different from the existing one.
Taking the mrepo configuration above as an example, the existing directory tree on the web server is:
/var/www/html/rep/
/var/www/html/rep/sl4X-i386
/var/www/html/rep/sl4X-i386/SL/base
/var/www/html/rep/sl4X-i386/SL/images
/var/www/html/rep/sl4X-i386/SL/RPMS
/var/www/html/rep/sl4X-i386/RPMS.os
/var/www/html/rep/slX-i386/...
This tree serves the mrepo tool and should be maintained. In the same www area (/var/www/html/) a link between the install directory and the package directory has to be prepared. This is a symbolic link, placed inside the SL directory of the distribution (in this example sl4X-i386), pointing to the RPMS.os directory. For example:
ln -s /var/www/html/rep/sl4X-i386/RPMS.os /var/www/html/rep/sl4X-i386/SL/RPMS
This is required by the OS installer (Anaconda): it needs to find the package directory inside the SL (or RedHat/Fedora...) one.
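With this layout in place, a kickstart file for an HTTP installation can simply point at the mirrored tree; a sketch (the server name is a placeholder):
# kickstart install-source sketch pointing at the local mirror
install
url --url http://yourserver.example.org/rep/sl4X-i386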
SLC4 mirror
Edit repository file /etc/mrepo.conf.d/slc4.conf
### Scientific Linux CERN 4
[slc4X]
name = ScientificLinuxCERN ($release - $arch)
release = slc4X
arch = i386
# Repositories
os = http://linuxsoft.cern.ch/cern/$release/$arch/apt/RPMS.os/
extras = http://linuxsoft.cern.ch/cern/$release/$arch/apt/RPMS.extras/
updates = http://linuxsoft.cern.ch/cern/$release/$arch/apt/RPMS.updates/
localrpms = file:///var/rep/mirror/$release-$arch/localrpms
Analogously, it is possible to mirror the x86_64 distribution.
Note: after the installation, check that the .repo files in /etc/yum.repos.d/ point to the local repository.
Mirror the gLite middleware
CA mirror
Edit repository file /etc/mrepo.conf.d/ca.conf
###
### CA repository
###
[ca]
name = lcg-ca
arch = noarch
# lcg-ca repository (mirror)
current = http://linuxsoft.cern.ch/LCG-CAs/current/RPMS.production
gLite 3.0 mirror
Edit repository file /etc/mrepo.conf.d/glite30.conf
###
### gLite Middleware 3.0
###
[glite_sl3]
name = gLite Middleware ($release - $arch)
release = R3.0
arch = i386
### Official repositories (https://twiki.cern.ch/twiki/bin/view/LCG/GenericInstallGuide301)
# Generic Repositories
3_0 = http://glitesoft.cern.ch/EGEE/gLite/APT/$release/rhel30/RPMS.Release3.0/
3_0_externals = http://glitesoft.cern.ch/EGEE/gLite/APT/$release/rhel30/RPMS.externals/
3_0_updates = http://glitesoft.cern.ch/EGEE/gLite/APT/$release/rhel30/RPMS.updates/
# WMS/LB Repositories
3_0_wms = http://glitesoft.cern.ch/EGEE/gLite/APT/$release/glite-WMS/rhel30/RPMS.Release3.0
3_0_wms_externals = http://glitesoft.cern.ch/EGEE/gLite/APT/$release/glite-WMS/rhel30/RPMS.externals
3_0_wms_updates = http://glitesoft.cern.ch/EGEE/gLite/APT/$release/glite-WMS/rhel30/RPMS.updates
gLite 3.1 mirror
Edit repository file /etc/mrepo.conf.d/glite31.conf
###
### gLite Middleware 3.1
###
[glite_sl4]
name = gLite Middleware ($release - $arch)
release = R3.1
arch = i386
### Official repositories (https://twiki.cern.ch/twiki/bin/view/LCG/GenericInstallGuide310)
# Generic Repositories
generic-externals = http://linuxsoft.cern.ch/EGEE/gLite/$release/generic/sl4/$arch/RPMS.externals/
generic-release = http://linuxsoft.cern.ch/EGEE/gLite/$release/generic/sl4/$arch/RPMS.release/
generic-updates = http://linuxsoft.cern.ch/EGEE/gLite/$release/generic/sl4/$arch/RPMS.updates/
# AMGA Repositories
amga-externals = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-AMGA_postgres/sl4/$arch/RPMS.externals/
amga-release = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-AMGA_postgres/sl4/$arch/RPMS.release/
amga-updates = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-AMGA_postgres/sl4/$arch/RPMS.updates/
# BDII Repositories
bdii-externals = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-BDII/sl4/$arch/RPMS.externals/
bdii-release = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-BDII/sl4/$arch/RPMS.release/
bdii-updates = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-BDII/sl4/$arch/RPMS.updates/
# FTM Repositories
ftm-externals = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-FTM/sl4/$arch/RPMS.externals/
ftm-release = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-FTM/sl4/$arch/RPMS.release/
ftm-updates = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-FTM/sl4/$arch/RPMS.updates/
# LFC_mysql Repositories
lfc_mysql-externals = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-LFC_mysql/sl4/$arch/RPMS.externals/
lfc_mysql-release = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-LFC_mysql/sl4/$arch/RPMS.release/
lfc_mysql-updates = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-LFC_mysql/sl4/$arch/RPMS.updates/
# LFC_oracle Repositories
lfc_oracle-externals = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-LFC_oracle/sl4/$arch/RPMS.externals/
lfc_oracle-release = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-LFC_oracle/sl4/$arch/RPMS.release/
lfc_oracle-updates = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-LFC_oracle/sl4/$arch/RPMS.updates/
# MPI_utils Repositories
mpi_utils-externals = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-MPI_utils/sl4/$arch/RPMS.externals/
mpi_utils-release = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-MPI_utils/sl4/$arch/RPMS.release/
mpi_utils-updates = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-MPI_utils/sl4/$arch/RPMS.updates/
# PX Repositories
px-externals = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-PX/sl4/$arch/RPMS.externals/
px-release = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-PX/sl4/$arch/RPMS.release/
px-updates = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-PX/sl4/$arch/RPMS.updates/
# SE_dcache_admin_gdbm Repositories
se_dcache_admin_gdbm-externals = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-SE_dcache_admin_gdbm/sl4/$arch/RPMS.externals/
se_dcache_admin_gdbm-release = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-SE_dcache_admin_gdbm/sl4/$arch/RPMS.release/
se_dcache_admin_gdbm-updates = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-SE_dcache_admin_gdbm/sl4/$arch/RPMS.updates/
# SE_dcache_admin_postgres Repositories
se_dcache_admin_postgres-externals = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-SE_dcache_admin_postgres/sl4/$arch/RPMS.externals/
se_dcache_admin_postgres-release = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-SE_dcache_admin_postgres/sl4/$arch/RPMS.release/
se_dcache_admin_postgres-updates = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-SE_dcache_admin_postgres/sl4/$arch/RPMS.updates/
# SE_dcache_info Repositories
se_dcache_info-externals = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-SE_dcache_info/sl4/$arch/RPMS.externals/
se_dcache_info-release = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-SE_dcache_info/sl4/$arch/RPMS.release/
se_dcache_info-updates = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-SE_dcache_info/sl4/$arch/RPMS.updates/
# SE_dcache_pool Repositories
se_dcache_pool-externals = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-SE_dcache_pool/sl4/$arch/RPMS.externals/
se_dcache_pool-release = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-SE_dcache_pool/sl4/$arch/RPMS.release/
se_dcache_pool-updates = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-SE_dcache_pool/sl4/$arch/RPMS.updates/
# SE_dpm_disk Repositories
se_dpm_disk-externals = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-SE_dpm_disk/sl4/$arch/RPMS.externals/
se_dpm_disk-release = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-SE_dpm_disk/sl4/$arch/RPMS.release/
se_dpm_disk-updates = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-SE_dpm_disk/sl4/$arch/RPMS.updates/
# SE_dpm_mysql Repositories
se_dpm_mysql-externals = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-SE_dpm_mysql/sl4/$arch/RPMS.externals/
se_dpm_mysql-release = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-SE_dpm_mysql/sl4/$arch/RPMS.release/
se_dpm_mysql-updates = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-SE_dpm_mysql/sl4/$arch/RPMS.updates/
# SE_dpm_oracle Repositories
#se_dpm_oracle-externals = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-SE_dpm_oracle/sl4/$arch/RPMS.externals/
#se_dpm_oracle-release = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-SE_dpm_oracle/sl4/$arch/RPMS.release/
#se_dpm_oracle-updates = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-SE_dpm_oracle/sl4/$arch/RPMS.updates/
# TORQUE_client Repositories
torque_client-externals = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-TORQUE_client/sl4/$arch/RPMS.externals/
torque_client-release = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-TORQUE_client/sl4/$arch/RPMS.release/
torque_client-updates = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-TORQUE_client/sl4/$arch/RPMS.updates/
# TORQUE_server Repositories
torque_server-externals = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-TORQUE_server/sl4/$arch/RPMS.externals/
torque_server-release = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-TORQUE_server/sl4/$arch/RPMS.release/
torque_server-updates = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-TORQUE_server/sl4/$arch/RPMS.updates/
# TORQUE_utils Repositories
torque_utils-externals = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-TORQUE_utils/sl4/$arch/RPMS.externals/
torque_utils-release = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-TORQUE_utils/sl4/$arch/RPMS.release/
torque_utils-updates = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-TORQUE_utils/sl4/$arch/RPMS.updates/
# UI Repositories
ui-externals = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-UI/sl4/$arch/RPMS.externals/
ui-release = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-UI/sl4/$arch/RPMS.release/
ui-updates = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-UI/sl4/$arch/RPMS.updates/
# VOBOX Repositories
vobox-externals = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-VOBOX/sl4/$arch/RPMS.externals/
vobox-release = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-VOBOX/sl4/$arch/RPMS.release/
vobox-updates = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-VOBOX/sl4/$arch/RPMS.updates/
# VOMS_mysql Repositories
voms_mysql-externals = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-VOMS_mysql/sl4/$arch/RPMS.externals/
voms_mysql-release = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-VOMS_mysql/sl4/$arch/RPMS.release/
voms_mysql-updates = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-VOMS_mysql/sl4/$arch/RPMS.updates/
# VOMS_oracle Repositories
voms_oracle-externals = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-VOMS_oracle/sl4/$arch/RPMS.externals/
voms_oracle-release = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-VOMS_oracle/sl4/$arch/RPMS.release/
voms_oracle-updates = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-VOMS_oracle/sl4/$arch/RPMS.updates/
# WN Repositories
wn-externals = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-WN/sl4/$arch/RPMS.externals/
wn-release = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-WN/sl4/$arch/RPMS.release/
wn-updates = http://linuxsoft.cern.ch/EGEE/gLite/$release/glite-WN/sl4/$arch/RPMS.updates/
# lcg-CE Repositories
lcg-ce-externals = http://linuxsoft.cern.ch/EGEE/gLite/$release/lcg-CE/sl4/$arch/RPMS.externals/
lcg-ce-release = http://linuxsoft.cern.ch/EGEE/gLite/$release/lcg-CE/sl4/$arch/RPMS.release/
lcg-ce-updates = http://linuxsoft.cern.ch/EGEE/gLite/$release/lcg-CE/sl4/$arch/RPMS.updates/
Using a Xen virtual machine
You can use a Xen 3 virtual machine to install your system.
First you need to install an SL4 Linux server and install the Xen packages. Please use LVM to manage the disks of both the guest systems and the host. Then you can install the guest servers.
Select the partition dedicated to the Guest servers and initialize the physical volume
[root@grid006 ~]# pvcreate /dev/hdb1
create the volume group
[root@grid006 ~]# vgcreate guests /dev/hdb1
create a logical volume called test of size 21 GB on the volume group guests
[root@grid006 ~]# lvcreate -L21G -ntest guests
Create the Xen guest configuration file (here the guest is called amga and uses the logical volume test created above):
[root@grid006 ~]# cat /etc/xen/amga
name= "amga"
memory=1024
disk=[ 'phy:guests/test,xvda,w' ]
vif= ['mac=xx:xx:xx:xx:xx:xx' ]
#bootloader="/usr/bin/pygrub"
on_reboot = 'destroy'
on_crash = 'destroy'
on_poweroff = 'destroy'
Bootstrap the virtual machine for installation:
[root@grid006 xen]# xm create amga -c kernel=/etc/xen/vmlinuzSL ramdisk=/etc/xen/initrdSL.img extra="display=140.105.79.107:0 method=http://images.si.inaf.it/mrepo/sl4-i386/"
where display is the display of your local machine (remember to allow connections to your X server from the Xen host with xhost +).
You can download the ramdisk and the kernel image from the Scientific Linux site.
After the installation, uncomment the line
#bootloader="/usr/bin/pygrub"
and bootstrap the machine with the -c option:
[root@grid006 xen]# xm create amga -c
Check that GRUB boots the Xen kernel and, if needed, modify the /boot/grub/menu.lst file after the bootstrap.
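For reference, a GRUB entry booting the Xen hypervisor on the host typically has the structure sketched below (kernel/initrd names, root device and memory size are placeholders; check what is actually installed on your machine):
title Scientific Linux 4 with Xen (sketch)
    root (hd0,0)
    kernel /xen.gz dom0_mem=512M
    module /vmlinuz-2.6.x-xen ro root=/dev/VolGroup00/LogVol00
    module /initrd-2.6.x-xen.img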
Installing the middleware
To install the gLite 3.1 middleware on an SL4 Linux machine you can follow the instructions in the gLite generic installation guide.
The installation of a grid element is done using yum, while the configuration is done using yaim. Each middleware component (grid node) is associated with a metapackage; below is an updated list of the currently deployed profiles with the related metapackage and node type names:
| Node Type | Meta-package name | Repo file | What it is | Needed in a grid site |
| AMGA | glite_AMGA_oracle | glite_AMGA_oracle.repo | Metadata server | NO |
| AMGA | glite_AMGA_postgres | glite_AMGA_postgres.repo | Metadata server | NO |
| BDII | glite-BDII | glite-BDII.repo | Top level information system server | NO |
| site BDII | glite-BDII | glite-BDII.repo | Grid site information system server | YES |
| dCache Storage Element | glite-SE_dcache_admin_gdbm | glite-SE_dcache_admin_gdbm.repo | Storage Element | MAYBE |
| dCache Storage Element | glite-SE_dcache_admin_postgres | glite-SE_dcache_admin_postgres.repo | Storage Element | MAYBE |
| dCache Storage Element | glite-SE_dcache_info | glite-SE_dcache_info.repo | Storage Element | MAYBE |
| dCache Storage Element | glite-SE_dcache_pool | glite-SE_dcache_pool.repo | Storage Element | MAYBE |
| DPM disk | glite-SE_dpm_disk | glite-SE_dpm_disk.repo | Storage Element | YES |
| DPM Storage Element (mysql) | glite-SE_dpm_mysql | glite-SE_dpm_mysql.repo | Storage Element | YES |
| FTM | glite-FTM | glite-FTM.repo | File Transfer Monitor node | NO |
| LB | glite-LB | glite-LB.repo | Logging and bookkeeping service | NO |
| LCG CE | lcg-CE | lcg-CE.repo | Computing Element | YES |
| LCG File Catalog server with mysql | glite-LFC_mysql | glite-LFC_mysql.repo | File Catalog server | NO |
| LCG File Catalog server with oracle | glite-LFC_oracle | glite-LFC_oracle.repo | File Catalog server | NO |
| LSF batch server utils | glite-LSF_utils | glite-LSF_utils.repo | LSF scheduler | NO |
| MON-Box | glite-MON | glite-MON.repo | RGMA-based monitoring system collector server | YES |
| MPI utils | glite-MPI_utils | glite-MPI_utils.repo | Parallel programming utilities | YES |
| MyProxy | glite-PX | glite-PX.repo | MyProxy server | NO |
| TORQUE client | glite-TORQUE_client | glite-TORQUE_client.repo | Torque batch system client | YES |
| TORQUE server | glite-TORQUE_server | glite-TORQUE_server.repo | Torque batch system server | YES |
| TORQUE batch server utils | glite-TORQUE_utils | glite-TORQUE_utils.repo | Torque batch system | YES |
| SGE batch server utils | glite-SGE_utils | glite-SGE_utils.repo | Sun Grid Engine batch server | NO |
| User Interface | glite-UI | glite-UI.repo | User Interface | YES |
| VO agent box | glite-VOBOX | glite-VOBOX.repo | Virtual Organization agents | NO |
| VOMS server with mysql | glite-VOMS_mysql | glite-VOMS_mysql.repo | VO membership service | NO |
| VOMS server with oracle | glite-VOMS_oracle | glite-VOMS_oracle.repo | VO membership service | NO |
| WMS | glite-WMS | glite-WMS.repo | Workload management server | NO |
| Worker Node | glite-WN | glite-WN.repo | Worker Node | YES |
Generic suggestions
Ensure that the hostnames of your machines are correctly set. Run the command:
root@localhost> hostname -f
Check the proper date setting (ABSOLUTELY NECESSARY) and use NTP:
root@localhost> yum install ntp
root@localhost> service ntpd start
root@localhost> chkconfig --add ntpd
root@localhost> chkconfig ntpd on
Java installation must be done following the suggestions on the gLite site.
If you have a local repository, we suggest creating the RPM package according to those instructions and then adding the Java packages as localrpms.
Enable the DAG repository in /etc/yum.repos.d/dag.repo.
Download the repo file associated with the metapackage you want to install.
The available meta-packages and the associated repo file names can be downloaded here.
root@localhost> cd /etc/yum.repos.d/
root@localhost> wget http://grid-deployment.web.cern.ch/grid-deployment/glite/repos/metapkg.repo
root@localhost> yum update
root@localhost> yum install metapkg
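For example, for a Worker Node (assuming the same repository base URL as in the generic command above) this becomes:
root@localhost> cd /etc/yum.repos.d/
root@localhost> wget http://grid-deployment.web.cern.ch/grid-deployment/glite/repos/glite-WN.repo
root@localhost> yum update
root@localhost> yum install glite-WN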
Use yaim for the configuration of your host. A full yaim manual is available here; please read it carefully.
Redirect yaim standard output/error to a file to check the various stages of the installation. Use the yaim logging level variable YAIM_LOGGING_LEVEL in the yaim configuration file to set the verbosity. The default value is "INFO".
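As an example (the node type, the site-info.def location and the log file name are only illustrative), the configuration of a Worker Node with the output saved to a file could be run as:
root@localhost> /opt/glite/yaim/bin/yaim -c -s /root/siteinfo/site-info.def -n glite-WN 2>&1 | tee /root/yaim-WN.log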
Modify the repo files to match your local repository
If you used a local repository, after the installation of the operating system it is useful to modify the .repo files in /etc/yum.repos.d/ to point to your local installation server.
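A sketch of such a modified repo file, assuming a local mirror published as in the mrepo example above (the server name is a placeholder):
# /etc/yum.repos.d/sl-os-local.repo (sketch)
[sl-os-local]
name=Scientific Linux 4 os - local mirror
baseurl=http://yourserver.example.org/rep/sl4X-i386/RPMS.os/
enabled=1
gpgcheck=0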
Optionally, you can create an RPM package with the new setup to distribute as a localrpm.
Here is an example of how to do it:
[root@localhost ~]# cat .rpmmacros
%_topdir /root/redhat
%packager Giuliano Taffoni
[root@localhost ~]# mkdir -p ~/redhat/SRPMS ~/redhat/BUILD ~/redhat/SOURCES ~/redhat/SPECS ~/redhat/RPMS/noarch
Download and install the source package of the SL4 yum configuration:
[root@localhost ~]# rpm -ihv http://linuxsoft.cern.ch/scientific/4x/SRPMS/yum-conf-4x-1-7.SL.src.rpm
Modify the sources:
[root@localhost ~]# cd redhat/SOURCES/
[root@localhost SOURCES]# ls
yum-conf-4x-1.tar.gz
[root@localhost SOURCES]# tar zxvf yum-conf-4x-1.tar.gz
[root@localhost SOURCES]# cd yum-conf-4x-1/etc/yum.repos.d
[root@localhost SOURCES]# vim ...
Create a new source tarball:
[root@localhost ~]# cd ~/redhat/SOURCES/
[root@localhost ~]# mv yum-conf-4x-1 yum-yourconf-4x-1
[root@localhost ~]# tar zcvf yum-yourconf-4x-1.tar.gz yum-yourconf-4x-1
[root@localhost SOURCES]# cd ~/redhat/SPECS/
[root@localhost SPECS]# cp yum-conf-sl4x.spec yum-yourconf.spec
Modify the spec file:
[root@localhost SPECS]# vim yum-yourconf.spec
Summary: RPM installer/updater config files
Name: yum-yourconf
Version: 1
Release: 1.SL
License: GPL
Group: System Environment/Base
Source: %{name}-%{version}.tar.gz
URL: http://www.dulug.duke.edu/yum/
BuildRoot: %{_tmppath}/%{name}-%{version}root
BuildArchitectures: noarch
Prereq: /sbin/chkconfig, /sbin/service
Obsoletes: yum-conf
Provides: yum-conf, yumconf
Epoch: 4
[...]
Create the package:
[root@localhost SPECS]# rpmbuild -ba yum-yourconf.spec
Done.
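Once built, the package can be copied into the localrpms area of your mirror and the repository regenerated; a sketch (the localrpms path is the one used in the mrepo configuration above and may differ on your server):
[root@localhost SPECS]# cp /root/redhat/RPMS/noarch/yum-yourconf-1-1.SL.noarch.rpm /mnt/rep/localrpms/
[root@localhost SPECS]# mrepo -guv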
The yaim configuration file
The yaim configuration file is used to set up your grid site. A template is available in /opt/glite/yaim/examples/siteinfo/site-info.def; this file should be customized according to your site needs.
A complete list of the available variables can be found here.
In a standard grid site it is necessary to install a UI, CE, SE and site BDII, so you need to set up the variables related to your network and location, the batch system, and the fully qualified host names of your machines (not the IP addresses).
Notice that all the servers that need a certificate to access the grid must be registered in your DNS (forward and reverse).
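A short excerpt of the kind of variables involved (host names and values are placeholders; refer to the variable list linked above for the authoritative names):
# site-info.def excerpt (sketch)
SITE_NAME=MY-GRID-SITE
MY_DOMAIN=example.org
CE_HOST=ce01.example.org
SE_LIST="se01.example.org"
BDII_HOST=bdii.example.org
WN_LIST=/root/siteinfo/wn-list.conf
USERS_CONF=/root/siteinfo/users.conf
GROUPS_CONF=/root/siteinfo/groups.conf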
A part of the yaim configuration file concerns the VOs. To add the ASTRO VO you can add the following lines:
VO_ASTRO_SW_DIR=$VO_SW_DIR/astro
VO_ASTRO_DEFAULT_SE=$CLASSIC_HOST
VO_ASTRO_STORAGE_DIR=$CLASSIC_STORAGE_DIR/astro
VO_ASTRO_VOMS_SERVERS="vomss://grid12.lal.in2p3.fr:8443/voms/astro.vo.eu-egee.org?/astro.vo.eu-egee.org"
VO_ASTRO_VOMSES="astro.vo.eu-egee.org grid12.lal.in2p3.fr 20012 /O=GRID-FR/C=FR/O=CNRS/OU=LAL/CN=grid12.lal.in2p3.fr astro.vo.eu-egee.org"
If you want to support this VO in your site you also have to configure the site users and groups (pool accounts) for this VO in
/opt/glite/yaim/examples/users.conf
/opt/glite/yaim/examples/groups.conf
The safest way is to take the entries of another VO (one that you are not supporting in your site) and adapt them to the astro VO.
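A minimal sketch of what the astro entries could look like (UIDs, GIDs and account names are only examples and must match your site's conventions; the exact file formats are described in the yaim manual):
# users.conf (sketch)
40001:astro001:4000:astro:astro::
40002:astro002:4000:astro:astro::
40101:astrosgm001:4001,4000:astrosgm,astro:astro:sgm:
# groups.conf (sketch -- the FQAN must match the groups defined on the VO's VOMS server)
"/astro/ROLE=lcgadmin":::sgm:
"/astro"::::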
--
TaffoniGiuliano - 13 Aug 2008