LCFGng Server Installation Guide



Document identifier: LCG-02-GIS-0002
Date: July 20, 2004
Author: Jose Gabadinho, Emanuele Leonardi, Antonio Retico, Alessandro Usai
Abstract: Installation and set-up of a LCFGng installation server


Objectives of this Document

This document is addressed to Site Administrators in charge of LCG2 middleware installation and configuration.
Its purpose is to provide a complete guide to setting up an LCFGng server able to automatically handle the installation of LCG middleware releases.

Introduction

This procedure relies on a number of shell scripts in order to reduce the amount of manual work involved. Its aim is to obtain an LCFGng server which runs a Red Hat 7.3 OS and which is able to install Red Hat 7.3 clients via diskette or PXE. The main steps of the LCFGng server installation procedure are:

Prerequisites

Before starting the server installation, collect the following information in advance so that installation and testing proceed more smoothly:

Installation of Red Hat 7.3 release

Only a few guidelines are listed here for the installation of Red Hat 7.3, which each site may implement according to its own policies and practices. WARNING: in order to have mkxprof working properly after the installation, the node name reported by the /bin/hostname command must include the full domain name. Should the installation defaults differ, the configuration can be changed by editing the file /etc/sysconfig/network.

For example, if you are running at CERN the file /etc/sysconfig/network should contain the following line:

  
    HOSTNAME="<LCFGngserverHostName>.cern.ch"

Time Server configuration

A general requirement for the LCFGng server is to be synchronized to the other nodes in the farm.
This requirement may be fulfilled in several ways.

If your nodes run under AFS, most likely they are already synchronized. Otherwise, you can use the NTP protocol with a time server. Instructions and examples for an NTP client configuration can be found in [2].
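
For reference, a minimal NTP client set-up on the LCFGng server might look like the sketch below ([2] remains the authoritative guide; <NTP-time-server-hostname> is a placeholder for your site's time server):

> vi /etc/ntp.conf
----------------------------------------------
server <NTP-time-server-hostname>
driftfile /etc/ntp/drift
----------------------------------------------
> ntpdate <NTP-time-server-hostname>
> chkconfig ntpd on
> /etc/rc.d/init.d/ntpd start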

SSH configuration

If it is not already running, the SSH daemon has to be enabled.
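
On a standard Red Hat 7.3 installation (with the openssh-server package installed) this can be done with:

> chkconfig sshd on
> /etc/rc.d/init.d/sshd start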

Installation of LCFGng basic applications

Three packages have to be installed manually in order to bootstrap the complete set-up of the LCFGng server.


Download the LCG packages via the "updaterep" object

Files needed for LCG releases are available from a CVS server at CERN. This CVS server contains the list of rpms to install and the LCFGng configuration files for each node type. Furthermore, the release also provides templates of the site-specific configuration files (cfgdir-cfg.h, local-cfg.h, site-cfg.h, and private-cfg.h).

The CVS archive can be reached directly from the LCG Deployment homepage
http://www.cern.ch/grid-deployment

You can follow the CVS Deployment link on the left and then browse into the repository. The CVS area where the installation tools and documentation are kept is called lcg2 .

WARNING: at the same location there is another directory called lcg-release : this area is used for the integration and certification software, NOT for production. Just ignore it!

Documentation about access to this CVS repository can be found in [1].

Set the CVSROOT and then check out the software release(s) you want on your server.
For a single tag you can use


> cvs checkout -r <CURRENT_TAG> -d <TAG_DIRECTORY> lcg2
A good candidate for TAG_DIRECTORY is, for example, CURRENT_TAG.
In any case, notice that the directory path must be relative to the current directory, because CVS does not allow absolute paths to be used with the -d option.
Note: the -d TAG_DIRECTORY option will create a directory named TAG_DIRECTORY and copy all the files there. If you do not specify the -d parameter, the files will be checked out into a subdirectory of the current directory named lcg2.
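
For example, using the anonymous pserver access shown in the step-by-step example later in this document (bash):

> export CVSROOT=:pserver:anonymous@lcgdeploy.cvs.cern.ch:/cvs/lcgdeploy
> cvs checkout -r <CURRENT_TAG> -d <CURRENT_TAG> lcg2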

As the client nodes need to "see" the rpm lists, the content of the rpmlist subdirectory must be copied to /opt/local/linux/7.3/rpmcfg on the LCFG server: this directory is then NFS mounted by all client nodes and it is visible as /export/local/linux/7.3/rpmcfg
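
For example (the same command is used in the step-by-step example later in this document):

> cp <TAG_DIRECTORY>/rpmlist/* /opt/local/linux/7.3/rpmcfg/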

Once the rpm lists and LCFGng configuration files for all the node types are present, the rpms needed for the desired release can be downloaded and installed using the updaterep object.
In TAG_DIRECTORY/tools two configuration files for this script can be found:

The former one makes updaterep download only the rpms which are actually needed to install the current tag, whereas the latter one yields a full mirror of the LCG rpm repository.
For the first case copy updaterep.conf to /etc/updaterep.conf and run the updaterep command:

> cp <TAG_DIRECTORY>/tools/updaterep.conf /etc/
> updaterep -v
By default all rpms will be copied to the /opt/local/linux/7.3/RPMS area, that is visible from the client nodes as /export/local/linux/7.3/RPMS.
You can change the repository area by editing /etc/updaterep.conf and modifying the REPOSITORY_BASE variable.
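
For instance, to move the repository to a different disk you would change the line defining REPOSITORY_BASE (a hypothetical excerpt; check the actual syntax used in your /etc/updaterep.conf):

> vi /etc/updaterep.conf
-------------------------------------------
REPOSITORY_BASE=/data/local/linux
-------------------------------------------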

IMPORTANT NOTICE:
As the list and structure of the Certification Authorities (CAs) accepted by the LCG project can change independently of the middleware releases, the rpm list related to the CA certificates and URLs has been decoupled from the standard LCG2 release procedure.
This means that the version of the security-rpm.h file contained in the rpmlist directory associated with the current tag might be incomplete or obsolete. Please go to the URL

http://markusw.home.cern.ch/markusw/lcg2CAlist.html

and follow the relevant links to update all the CA-related settings.
Changes and updates of these settings will be announced on the LCG-Rollout mailing list.


Installation/upgrade of server packages

To make sure that all the needed object rpms are installed on your LCFG server, you should use the lcfgng_server_update.pl script, also located in TAG_DIRECTORY/tools . This script will report which rpms are missing or have the wrong version and will create the /tmp/lcfgng_server_update_script.sh script which you can then use to fix the server configuration.

  • Run the script lcfgng_server_update.pl in the following way:
    
     > lcfgng_server_update.pl <TAG_DIRECTORY>/rpmlist/lcfgng-common-rpm.h
     > /tmp/lcfgng_server_update_script.sh
     > lcfgng_server_update.pl <TAG_DIRECTORY>/rpmlist/lcfgng-server-rpm.h
     > /tmp/lcfgng_server_update_script.sh
    WARNING: please always check the script /tmp/lcfgng_server_update_script.sh, verifying that all the rpm update commands look reasonable, before running it.

    The script /tmp/lcfgng_server_update_script.sh may encounter some problems with missing dependencies. Usually these are not critical: they are due to the fact that the script launches independent rpm statements. Since dependencies are supposed to have been checked beforehand, an easy way to proceed is to run the script again until no missing dependencies remain.
    
Build LiveOS

The LiveOS is the operating system mounted by the client nodes during the initial boot phase; it is used to steer the whole node installation.

In order to install the LiveOS on the LCFGng server, you can use the released script

<TAG_DIRECTORY>/tools/lcfgng_installroot.pl

This script uses configuration parameters read from the file lcfgng_installroot.cfg, which must be in perl format.
What follows is a list of the configurable actions done by the script:

  • Set the root directory where all LiveOS versions will be installed (default: /opt/local/linux/nginstallroot).
    Notice that this directory needs to be created explicitly!
  • Create the file containing the list of RPMs to install on the LiveOS (the default list is LIVEOS-rpm, which should be part of the release itself; changing it is not recommended).
  • Set the directory where the rpm lists reside (default: /opt/local/linux/<RH-version>/rpmcfg).
  • Set the path of the rpm command to use (default: /bin/rpm).

The required steps to install and configure the LiveOS on a LCFGng server are outlined below:

  • Install the LiveOS on the LCFGng server
    
    > cd <TAG_DIRECTORY>/tools
    > lcfgng_installroot.pl
    
    WARNING: The script lcfgng_installroot.pl must be launched within the TAG_DIRECTORY/tools directory so that its configuration file can be found.
    It is advisable to save the output into a log file as the output size is considerable.

  • Set up the local installation parameters
    In order to set up local installation parameters (e.g. keyboard, time zone) edit the file

    /opt/local/linux/nginstallroot/7.3/etc/installparams

    The default installparams file assumes the US keyboard and the CET time zone.

    You can use the default values provided by executing:

    
    > cd /opt/local/linux/nginstallroot/7.3/etc
    > cp installparams.default installparams
    
    If you want to use a different set of defaults, you should edit the installparams file.

Set up LCFGng Interface Services (DHCP, WEB and NFS servers)

Configure the DHCP daemon

In order to set up the DHCP daemon, the file /etc/dhcpd.conf has to be configured.
You can find an example of /etc/dhcpd.conf in /etc/dhcpd.conf.ngexample.
The user-class option contains the URL of the installation server.

IMPORTANT NOTICE: in order to avoid problems with your network and security administration, NEVER omit (even in a test environment) the line

 
deny unknown-clients;
at the beginning of your dhcp configuration file.

An example of a dhcpd.conf file is


deny unknown-clients;
not authoritative;
option domain-name "cern.ch";

# These 3 lines are needed for the installation via PXE
option dhcp-class-identifier "PXEClient";
option vendor-encapsulated-options 01:04:00:00:00:00:ff;
filename "pxelinux.0";

subnet 137.138.0.0 netmask 255.255.0.0 {
  option routers 137.138.1.1;
  option domain-name-servers 137.138.16.5,137.138.17.5;
  host lxshare0403 {
      fixed-address lxshare0403.cern.ch;
      hardware ethernet 00:E0:81:04:E1:DE;
      option user-class "http://lxshare0402.cern.ch/profiles/";
  }
}


Configure HTTP server (Apache)

  • Stop the web server (Apache) if it is running:
    
    > /etc/rc.d/init.d/httpd stop
    
  • A default configuration of HTTP server can be obtained by replacing the file

    /etc/httpd/conf/httpd.conf

    with the file

    /etc/httpd/conf/httpd.conf.ngexample73

    
    > cp /etc/httpd/conf/httpd.conf.ngexample73 /etc/httpd/conf/httpd.conf
    
    In this default configuration, the Apache server running on your LCFG server will allow access from anywhere on the net. This is a potential security hole, as LCFG node configuration files contain sensitive information (e.g. encrypted root passwords). To limit access to nodes belonging to your domain you should edit the Apache configuration file /etc/httpd/conf/httpd.conf and apply the following changes:
    • In the section delimited by these two lines:
      
      <Directory "/var/obj/conf/server/web">
      ...
      </Directory>
      
      find the line "Allow from all" and replace it with "Allow from <domainname>", where <domainname> is your network domain (e.g. at CERN this gives "Allow from cern.ch").
    • Repeat the same operation in the section delimited by the lines:
      
      <Directory "/var/obj/conf/server/web/install/">
      ...
      </Directory>
      
  • The proposed default configuration allows the use of PXE installation.
    The install.cgi script provides a basic interactive interface to select which nodes should be installed, by changing the pxelinux configuration files to be used (install the machine/boot from the hard disk). The script is protected by user/password access. The defined user is lcfgng.
    The encrypted password has to be stored in the AuthUserFile (default: /etc/httpd/.htpasswd).

    In order to create the password file you need to issue the following command:
    
    > htpasswd -c /etc/httpd/.htpasswd lcfgng
    
    Of course you can change the location of the password file, the user name and password, or set up a better protection of this simple web interface. Another script (ack.cgi) is called by the clients at the end of the installation to restore the boot from the hard disk.

Configure NFS

To configure NFS add to the file /etc/exports the lines


/opt/local/linux/7.3 *(ro,no_root_squash)
/opt/local/linux/nginstallroot/7.3 *(ro,no_root_squash)
(the lines above can also be copied from /etc/exports.ngexample73 )
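
If the NFS service is already running when you edit /etc/exports (not the case in a fresh set-up, where the services are started in the next section), the export list can be re-read without a restart:

> exportfs -ra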

Start all LCFGng interface services

  • NFS
    
    > chkconfig nfs on
    > chkconfig nfslock on
    > /etc/rc.d/init.d/nfs start
    > /etc/rc.d/init.d/nfslock start
    

  • HTTPD
    
    > chkconfig httpd on
    > /etc/rc.d/init.d/httpd start
    

  • DHCP
    
    > chkconfig dhcpd on
    > /etc/rc.d/init.d/dhcpd start
    


PXE set-up

The installation via PXE is done with pxelinux. The first step is to put the pxelinux loader (provided by the package syslinux) in the /tftpboot directory:

> cp /usr/lib/syslinux/pxelinux.0 /tftpboot/
TFTP is used by the PXE NIC to download the pxelinux.0 loader; the loader then uses TFTP to download its configuration file (from /tftpboot/pxelinux.cfg) and a Linux kernel (from /tftpboot/kernel).

The edg-pxelinux-supp rpm provides two linux kernels usable for boot.

If you want any other kernel to be used on your nodes during the first phase of the installation, you can just copy your own Linux kernel into the directory

/tftpboot/kernel

Different node installation flavours, dealing with different parameters to be passed to the kernel are also possible.
Installation modes are defined by configuration files stored in the directory

/tftpboot/pxelinux.cfg

The edg-pxelinux-supp rpm provides, in that directory, the configuration file
/tftpboot/pxelinux.cfg/lcfg-install-nointeract-73

With respect to the default configuration file provided, you will probably need to increase the maximum size of the ram disk in order to have your OS start on a ram disk. That is done by setting the kernel parameter ramdisk_size.

Two examples of alternative pxelinux configurations follow, with a short description (the append lines in the examples have been broken with a "backslash" character for readability, but they have to be written on a single line).

  • file:lcfg-install-nointeract-nosmp-noserial-73.cfg
    
    default linux
    label linux
      kernel kernel/kernelboot-2.4.20-19.serial
      ipappend 1
      append root=/dev/nfs nfsroot=/opt/local/linux/nginstallroot/7.3 \ 
        init=/etc/rc_install nointeract=yes pxe_http_ack=/install/ack.cgi \
        ramdisk_size=32768
    

    In this example a different linux kernel is used. The kernel "kernelboot-2.4.20-19.serial" had previously been stored into the /tftpboot/kernel subdirectory, as it is referenced in the configuration file by the kernel parameter.
    Furthermore, the maximum size of the ram disk has been increased by setting the kernel parameter ramdisk_size up to 32768 kB.

  • file:lcfg-install-nointeract-nosmp-serial-73.cfg
    
    default linux
    serial 0,9600n8
    label linux
      kernel kernel/kernelboot-2.4.20-19.serial
      ipappend 1
      append root=/dev/nfs nfsroot=/opt/local/linux/nginstallroot/7.3 \
        init=/etc/rc_install nointeract=yes pxe_http_ack=/install/ack.cgi \
        console=tty0 console=ttyS0,9600 ramdisk_size=32768
    
    In this example, with respect to the previous one, the output of the node being installed is redirected to a serial line connected to a central node, which can be used to follow the installation process remotely.
    Note that this particular kernel version has been chosen with that aim in mind.

For each configuration file you put into the /tftpboot/pxelinux.cfg directory, the corresponding installation option will be shown by the installation GUI (see afterwards).

Check that the "disable" flag in the configuration file /etc/xinetd.d/tftp is set to "no", then run the command


> service xinetd restart
to make sure that tftp is up and running.
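
For reference, the relevant part of /etc/xinetd.d/tftp should look roughly like the excerpt below (the fields other than "disable" are the usual Red Hat 7.3 defaults and may differ slightly on your system):

service tftp
{
        disable         = no
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /tftpboot
}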


Firewall configuration

If you are using a firewall, note that the LCFGng server needs the following ports to be open.

Service          Port   Protocol   Open to
tftpd            69     tcp, udp   site
httpd            80     tcp        site
sunrpc/portmap   111    tcp, udp   site
nfsd             2049   tcp, udp   site


For example, if you are using ipchains for firewall configuration, you can do this by inserting the following lines into the file /etc/sysconfig/ipchains, before the first "REJECT" line:


-A input -s 0/0 -d 0/0 80 -p tcp -y -j ACCEPT
-A input -s 137.138.0.0/16 -d 0/0 69 -p tcp -y -j ACCEPT
-A input -s 137.138.0.0/16 -d 0/0 69 -p udp -j ACCEPT
-A input -s 137.138.0.0/16 -d 0/0 111 -p tcp -y -j ACCEPT
-A input -s 137.138.0.0/16 -d 0/0 111 -p udp -j ACCEPT
-A input -s 137.138.0.0/16 -d 0/0 2049 -p udp -j ACCEPT
-A input -s 137.138.0.0/16 -d 0/0 2049 -p tcp -y -j ACCEPT
After this, the ipchains service has to be restarted:

> service ipchains restart
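
If your server uses iptables instead of ipchains, a roughly equivalent set of rules is sketched below (an illustrative sketch only: adapt the 137.138.0.0/16 source network to your site and merge the rules with your existing policy):

> iptables -A INPUT -p tcp --dport 80 -j ACCEPT
> iptables -A INPUT -s 137.138.0.0/16 -p tcp --dport 69 -j ACCEPT
> iptables -A INPUT -s 137.138.0.0/16 -p udp --dport 69 -j ACCEPT
> iptables -A INPUT -s 137.138.0.0/16 -p tcp --dport 111 -j ACCEPT
> iptables -A INPUT -s 137.138.0.0/16 -p udp --dport 111 -j ACCEPT
> iptables -A INPUT -s 137.138.0.0/16 -p tcp --dport 2049 -j ACCEPT
> iptables -A INPUT -s 137.138.0.0/16 -p udp --dport 2049 -j ACCEPT
> service iptables save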

Reboot the node

Although a reboot of the node is not mandatory, you can do it in order to make sure that all the needed services come up and run correctly after a restart.


> shutdown -r now


How to start using the LCFGng server

This section is meant to be a very basic introduction to the use of the LCFGng server to set up client nodes. If you are already skilled in the use of LCFGng, there will likely be nothing new here for you.

The following introduction, of course, IS NOT a replacement for the LCG Release Notes [3].
Those notes are the official reference for LCG site installation, and must be read carefully before starting the installation of a production site.

A complete example of the installation of a Worker Node using the LCFGng server is provided as well. A summary of the values used in the example follows:


<CURRENT_TAG>=LCG-2_0_0
<TAG_DIRECTORY>=LCG-2_0_0
<FULL_TAG_DIRECTORY>=/root/tags/LCG-2_0_0
<LOCAL_DIR>=/root/source/LCG2_SIMPLE_SITE_040419
<LCFGngServer>=adc0013.cern.ch

The installation of LCG on client nodes using LCFGng can be done in six logical steps:

Download the current tag from CVS

WARNING: This step has already been described in section [*].
If you have just done it, there is no need to do it again unless a new LCG tag has been issued in the meantime.

STEP DESCRIPTION

  • Set the CVSROOT variable and check out from CVS [1] the current LCG2 tag into a local directory on the LCFGng server.
    Among other things, all the files needed for this step will be saved there.
    
    > cvs checkout -r <CURRENT_TAG> -d <TAG_DIRECTORY> lcg2
    
    Note: the cvs checkout command does not allow an absolute path to be used as a parameter of the -d option, so TAG_DIRECTORY is just a base name. From now on, each time we refer to the absolute path in the instructions we will use the label FULL_TAG_DIRECTORY.

EXAMPLE (bash)

 
> export CVSROOT=:pserver:anonymous@lcgdeploy.cvs.cern.ch:/cvs/lcgdeploy
> mkdir /root/tags
> cd /root/tags/
> cvs checkout -r LCG-2_0_0 -d LCG-2_0_0 lcg2


Update rpm lists and rpm repository

The aim of this step is to have the lists of rpms to install and the local rpm repository synchronized with the current tag of the LCG software.

WARNING: Actions needed to perform this logical step have already been described in section [*]. If you have just done them, there is no need to repeat them unless a new LCG tag has been issued in the meantime.
In any case, doing them twice is not harmful for the system.

STEP DESCRIPTION

  • copy the content of the FULL_TAG_DIRECTORY/rpmlist subdirectory to the directory

    /opt/local/linux/7.3/rpmcfg
    on the LCFG server. This directory is NFS-mounted by all client nodes and is visible as /export/local/linux/7.3/rpmcfg . It contains all the lists of rpms.

    
    > cp <FULL_TAG_DIRECTORY>/rpmlist/* /opt/local/linux/7.3/rpmcfg/
    

  • get from the tag the up-to-date version of the updaterep object configuration file
        
    > cp <FULL_TAG_DIRECTORY>/tools/updaterep.conf /etc/
    

  • run the updaterep object
     
    > updaterep -v
    

    By default all rpms have now been copied to the /opt/local/linux/7.3/RPMS directory, that is visible from the client nodes as /export/local/linux/7.3/RPMS .

    (You can change the repository area by editing /etc/updaterep.conf and modifying the REPOSITORY_BASE variable)

EXAMPLE (bash)


 > cp /root/tags/LCG-2_0_0/rpmlist/* /opt/local/linux/7.3/rpmcfg/
 > cp /root/tags/LCG-2_0_0/tools/updaterep.conf /etc/
 > updaterep -v

Installation/Upgrade of server packages

The aim of this step is to make sure that the LCFGng server itself is up to date with the current version.

WARNING: This step has already been described in section [*].
If you have just done it, there is no need to do it again unless a new LCG tag has been issued in the meantime.

STEP DESCRIPTION

  • Check (and, if necessary, update) your LCFGng server installation
            
    > cd <FULL_TAG_DIRECTORY>/tools
    > ./lcfgng_server_update.pl ../rpmlist/lcfgng-common-rpm.h
    > /tmp/lcfgng_server_update_script.sh
    > ./lcfgng_server_update.pl ../rpmlist/lcfgng-server-rpm.h
    > /tmp/lcfgng_server_update_script.sh
    
    NOTE: the file

    /tmp/lcfgng_server_update_script.sh

    is produced only if updates turn out to be needed. Before running it, please check the script to verify that all the rpm update commands look reasonable.

    The script /tmp/lcfgng_server_update_script.sh may encounter some problems with missing dependencies. Usually these are not critical: they are due to the fact that the script launches independent rpm statements. Since dependencies are supposed to have been checked beforehand, an easy way to proceed is to run the script again until no missing dependencies remain.

EXAMPLE


> cd /root/tags/LCG-2_0_0/tools
> ./lcfgng_server_update.pl ../rpmlist/lcfgng-common-rpm.h
> /tmp/lcfgng_server_update_script.sh
> ./lcfgng_server_update.pl ../rpmlist/lcfgng-server-rpm.h
> /tmp/lcfgng_server_update_script.sh

Customize your site configuration

The aim of this step is to set up your "site profile" (e.g. hostnames, directory paths, ...) in a set of general LCFGng configuration files to be included in the node profile definitions (see [*]).

Doing a complete site configuration requires you to have a clear view of how you want to organize your LCG site. Hints and release-specific guidelines for site configuration are provided from time to time in the LCG Installation Notes [3]. In this example we will limit ourselves to using the provided template files and doing the minimal configuration needed to have a WN installed by LCFGng. So, we will assume that you do not have previous versions of these files to re-use.

The files you will need to create in this case are:

cfgdir-cfg.h
It defines the directory from which all configuration files will be read.
local-cfg.h
It contains all site-wide settings to be done by LCFGng as additions to or replacement of standard settings of Red Hat 7.3 (defined in redhat73-cfg.h ).
private-cfg.h
It is used to define the site-wide root password and possibly other security-related parameters. This file should NOT be checked into CVS.
site-cfg.h
It contains the whole site-specific configuration.
Apart from the LCFGng proprietary configuration language, all the LCFGng configuration files look very similar to cpp-preprocessor-style files, with some important differences to be taken into account.

The LCFGng preprocessor, for instance, requires all string definitions to be enclosed in double quotes ("").
So, if you want to define a string as the juxtaposition of two macros (as is the case, shown later, for the CFGDIR macro), you have to leave the string open in the first macro and close it when the trailing part is appended.

E.g .


#define CFGDIR "/root/tags/LCG-2_0_0/source
....
....
#include CFGDIR/include-file.h"

In the directory FULL_TAG_DIRECTORY/examples , a set of templates of the files needed to configure a default, very basic, LCG site can be found.
If your site has a more complex configuration (e.g. you use disk servers or have more than one batch system) then node configuration files will have to be modified accordingly. Please refer to the LCFG objects man pages to find out how to do this.
It's worth noticing explicitly that the site configuration template site-cfg.h.template contains only example values for the various parameters. It needs extensive editing before being actually usable at your site. Of course, a general prerequisite for site configuration is to have a clear view of how you want to organize your LCG site.

STEP DESCRIPTION

  • Create a directory, referred to from now on as LOCAL_DIR, where your local configuration files will be kept.
      
    > mkdir <LOCAL_DIR> 
    
  • From the tag directory, make a copy of the up-to-date version of the templates for the configuration files and rename them from xxx.h.template to xxx.h in <LOCAL_DIR>
    
    > cp <FULL_TAG_DIRECTORY>/examples/* <LOCAL_DIR>
    > cp <LOCAL_DIR>/xxx.h.template <LOCAL_DIR>/xxx.h
    > ...
    
  • cd to LOCAL_DIR

  • Edit the file cfgdir-cfg.h
    Uncomment the define line of CFGDIR macro, after modifying the directory path in
    
    #define CFGDIR "<FULL_TAG_DIRECTORY>/source
    
    WARNING: CFGDIR has to be defined starting with a " but then the string must be left open and must be closed when the macro is used to define a full file name, e.g.
    
    #include CFGDIR/macros-cfg.h"
    
    Be very careful not to leave blank spaces after the last character in the definition of CFGDIR. If you open cfgdir-cfg.h with 'vi' and move the cursor to the end of the line, it must stop on the last character (the 'e' in the example).
  • Edit the file local-cfg.h
    Unless you have changed the position of the rpmlist directory (see [*] and [*]) you do not need to make any change in this file for this configuration example.

  • Edit the file private-cfg.h
    You must add your own root password here or you will not be able to log on to your nodes as root.
    Encode your desired root password in MD5 format with the command
    
    > openssl passwd -1
    
    You will be asked to enter the new password, and the encrypted value will be printed as output.
    Copy the encrypted password into the file private-cfg.h, modifying the +auth.rootpwd attribute as follows:
    
    +auth.rootpwd <Your_encrypted_root_password_here>
    

  • Edit the file site-cfg.h
    For this simple configuration example, you can leave the template unchanged and set up just the macros in the section 'COMMON SITE DEFINITIONS' of the file. The template assumes you want to run the PBS batch system without sharing the /home directory between the CE and all the WNs. This is the recommended setup.
    Namely, the macros to change are:
    
    #define GRID_TRUSTED_BROKERS "<Subject-of-the-RB-host-certificate>"
    
    Note: the host certificate is stored, by default, in the file 
          /etc/grid-security/hostcert.pem on the machine
          
    #define CE_HOSTNAME             <ComputingElement hostname>
    #define SE_HOSTNAME             <StorageElement hostname>
    #define SITE_LOCALDOMAIN        <local domain>
    #define SITE_MAILROOT           <address-to-send-root-mail>
    #define SITE_ALLOWED_NETWORKS   <list-of-comma-separated-network-addresses>
    #define SITE_GATEWAYS           <default-gateway>
    #define SITE_NAMESERVERS        <list-of-space-separated-dns-servers>
    #define SITE_NETMASK            <net-mask>
    #define SITE_NETWORK            <site-network-address>
    #define SITE_BROADCAST          <site-broadcast-address>
    #define SITE_NTP_HOSTNAME       <NTP-time-server-hostname>
    #define SITE_TIMEZONE           <time-zone>
    #define SITE_NAME_              <site-name-for-the-information-server>
    #define SITE_EDG_VERSION        <current-tag-name>
    #define SITE_INSTALLATION_DATE_ <installation-date-YYYYmmDDhhmissZ>
    #define SITE_LCFG_SERVER        <LCFGng-server-hostname>
    #define SITE_WN_HOSTS           <space-separated-list-of-WN-hostnames>
    #define SITE_GIIS               <site-giis-name-for-the-information-server>
    #define SITE_BDII_HOST          <BDII-hostname>
    #define SITE_BDII_PASSWD        \"<BDII-encrypted-root-passwd>\"
    #define SITE_BDII_PASSWD_PLAIN  <BDII-plain-passwd>
    #define SITE_BDII_URL           <url-of-the-new-BDII-configuration-file>
    
    EXAMPLE
       
    > mkdir /root/source/LCG2_SIMPLE_SITE_040419
    > cp /root/tags/LCG-2_0_0/examples/*.template \
                       /root/source/LCG2_SIMPLE_SITE_040419 
    > cd /root/source/LCG2_SIMPLE_SITE_040419 
    > cp cfgdir-cfg.h.template cfgdir-cfg.h
    > cp local-cfg.h.template local-cfg.h
    > cp private-cfg.h.template private-cfg.h
    > cp site-cfg.h.template site-cfg.h
    
    > vi cfgdir-cfg.h
    -------------------------------------------
    #define CFGDIR "/root/tags/LCG-2_0_0/source
    ------------------------------------------- 
    > openssl passwd -1
    Password: new_root_password
            $1$8eKiuqo2$eZI/zygW6chJ7zDkVYIDn/
    
    > vi private-cfg.h
    -------------------------------------------
    +auth.rootpwd $1$8eKiuqo2$eZI/zygW6chJ7zDkVYIDn/
    -------------------------------------------
    
    > vi site-cfg.h
    -------------------------------------------
    ...
    #define GRID_TRUSTED_BROKERS "/O=Grid/O=CERN/OU=cern.ch/CN=host/lxshare0410.cern.ch"
    ...
    #define CE_HOSTNAME             adc0025.cern.ch
    #define SE_HOSTNAME             adc0016.cern.ch
    #define SITE_LOCALDOMAIN        cern.ch
    #define SITE_MAILROOT           Antonio_Retico@cern.ch
    #define SITE_ALLOWED_NETWORKS   127.0.0.1, 137.138., 128.141.
    #define SITE_GATEWAYS           137.138.1.1
    #define SITE_NAMESERVERS        137.138.16.5 137.138.17.5
    #define SITE_NETMASK            255.255.0.0
    #define SITE_NETWORK            137.138.0.0
    #define SITE_BROADCAST          137.138.255.255
    #define SITE_NTP_HOSTNAME       ip-time-1.cern.ch
    #define SITE_TIMEZONE           Europe/Zurich
    #define SITE_NAME_              LCG2-SIMPLE-TEST-SITE
    #define SITE_EDG_VERSION        LCG-2_0_0beta
    #define SITE_INSTALLATION_DATE_ 20040419115700Z
    #define SITE_LCFG_SERVER        adc0013.cern.ch
    #define SITE_WN_HOSTS           adc*.cern.ch
    #define SITE_GIIS               lcg2manualtestlcfg
    ...
    #define SITE_BDII_HOST         adc0009.cern.ch
    #define SITE_BDII_PASSWD       \"{SSHA}z2q23+xm9n7MFj1l+T9nYUAO27TtCyUH\"
    #define SITE_BDII_PASSWD_PLAIN nevertotell
    #define SITE_BDII_URL          http://grid-deployment.web.cern.ch/ \
                                       grid-deployment/gis/lcg2-bdii-update.conf
    ...
    


Set-up node profiles

The aim of this step is to have a set of XML profiles, each one dealing with a single node to be handled by LCFGng.

Among the files you have copied into the directory LOCAL_DIR from the directory FULL_TAG_DIRECTORY/examples, you can find example configuration files for each node type you may want to install at your site.

STEP DESCRIPTION

  • For each node type you want to install, copy the example file for that node type to a file named after the hostname of the node you want to install. Then edit it, replacing the default definition of HOSTNAME with the hostname of the node. E.g., to install a Worker Node on a node named WN-hostname do:
    
    > cd <LOCAL_DIR>
    > cp WN_node <WN-hostname>
    
    and then edit the file WN-hostname replacing the line
    
    #define HOSTNAME WN_node
    
    with
    
    #define HOSTNAME <WN-hostname>
    
    Note: for this operation always use the node name WITHOUT its domain extension. The domain extension will be added, where needed, according to the definition of the macro 'SITE_LOCALDOMAIN' in site-cfg.h .

  • Create the XML profile for the node(s)
     
    > cd <FULL_TAG_DIRECTORY>/tools
    > ./do_mkxprof.sh <WN-hostname> [<list-of-nodes>]
    
    If you get an error status for one or more of the configurations, you can get a detailed report on the nature of the error by looking into the URL
    
    http://<Your_LCFGng_Server>/status/
    
    and clicking on the name of the node with a faulty configuration (a small red bug should be shown beside the node name).

EXAMPLE


 > cd /root/source/LCG2_SIMPLE_SITE_040419   
 > cp WN_node adc0004
 
> vi adc0004
---------------------------
.....
#define HOSTNAME adc0004
.....
---------------------------
 
> do_mkxprof.sh adc0004


Install the nodes

Once all node configurations are correctly published, you can proceed and install your nodes.
Two methods are available to install the LCFGng client on client nodes:
  • Automatic installation (via PXE)
  • Manual installation (via image floppy)

Automatic installation (via PXE)

In order to apply this method, the configuration described in [*] has to be in place.
If all the listed steps have been correctly performed, the installation of a node can be easily driven by a common web browser.
  • Go to the URL
         
    http://<LCFGngServer>/install/install.cgi
    
    The required username and password are the ones defined during the configuration of the HTTP server (see [*]).

  • Choose the installation type for your nodes:
    The choices proposed in the drop-down box correspond to the different PXE configurations done in section [*].

  • Enable the boot via PXE on the machines.

    Select the Apply Changes button

    The node will be installed as soon as it is rebooted.

  • To check if the process is working you can check the file

    /var/log/messages

    on the LCFGng server.

    After the initial installation is completed (expect two automatic reboots in the process), each node type requires a few manual steps, detailed in the LCG Release Notes [3].
    After completing these steps, some of the nodes need a final reboot which will bring them up with all the needed services active.
    When needed, this final reboot is explicitly stated in the release notes.

Manual installation (via image floppy)

This method is to be applied only if you are not using PXE Linux and you are installing your nodes via a floppy disk.

On the client node do the following steps:

  • Download a LCFGng image diskette
    
    > cd /tmp
    > wget http://grid-it.cnaf.infn.it/packages/gridit/wp03/ \ 
               bootdisk_rh73_11022004.img
    > dd if=bootdisk_rh73_11022004.img of=/dev/fd0 bs=1024
    
  • Mount the floppy
    
    > mount /dev/fd0 /floppy
    
  • Edit the GRUB configuration on the image to increase the ramdisk_size value to be passed to the kernel
    
    > vi /floppy/boot/grub/grub.conf
    ----------------------------------------------
    kernel (fd0)/kernelboot-2.4.20-20.7 root=/dev/nfs 
    nfsroot=/opt/local/linux/nginstallroot/7.3 init=/etc/rc_install 
    ip=dhcp 
    ramdisk_size=32768 
    ----------------------------------------------
    
    See [*] for further examples of configuration of the Linux Loader.
  • Re-boot the client node


Change History



Table: Change History
version  date       description
v1.0     30/Jan/04  First Release.
v1.1     02/Apr/04  CVS references updated.
v1.2     22/Apr/04  [*]: section with firewall configuration inserted and numbering shifted up.
                    [*]: introduction to use of LCFGng server added.
v2.0     7/May/04   [*]: Installation of client node via image diskette added.
                    Document format changed.
v2.1     14/Jul/04  [*]: tftp service re-start instruction changed.


Bibliography

1
"CVS User Guide"
Louis Poncet (Louis.Poncet@cern.ch)

http://grid-deployment.web.cern.ch/grid-deployment/cgi-bin/index.cgi?var=documentation

2
"NTP client Installation & Configuration"
Alessandro Usai, Antonio Retico

  CVS:
     module: "lcg2"
     directory: "manual-install"
     file: "NTP_install.txt"

3
"LCG-2 Installation notes"
Emanuele Leonardi, Markus Schulz
http://grid-deployment.web.cern.ch/grid-deployment/cgi-bin/index.cgi?var=releases
