LCG - SE_dcache - Generic Configuration Reference



Document identifier: LCG-GIS-CR-SE_dcache
Date: 16 January 2006
Author: LCG Deployment - GIS team; Retico, Antonio; Vidic, Valentin
Version:
Abstract: Configuration steps done by the YAIM script 'configure_SE_dcache'

Contents

Introduction

This document lists the manual steps for the installation and configuration of an LCG SE_dcache node.
Furthermore, it provides a specification of the YAIM functions used to configure the node with the script-based configuration.

The configuration has been tested on a standard Scientific Linux 3.0 installation.

Link to this document:
This document is available on the Grid Deployment web site

http://www.cern.ch/grid-deployment/gis/lcg-GCR/index.html

Variables

In order to set up an SE_dcache node, you need at least the following variables to be correctly configured in the site configuration file (site-info.def):
BDII_HOST :
BDII Hostname.
CE_BATCH_SYS :
Implementation of the site batch system. Available values are "torque", "lsf", "pbs", "condor", etc.
CE_CPU_MODEL :
Model of the CPU used by the WN (WN specification). This parameter is a string whose domain is not defined yet in the GLUE Schema. The value used for Pentium III is "PIII".
CE_CPU_SPEED :
Clock frequency in MHz (WN specification).
CE_CPU_VENDOR :
Vendor of the CPU used by the WN (WN specification). This parameter is a string whose domain is not defined yet in the GLUE Schema. The value used for Intel is "intel".
CE_HOST :
Computing Element Hostname.
CE_INBOUNDIP :
TRUE if inbound connectivity is enabled at your site, FALSE otherwise (WN specification).
CE_MINPHYSMEM :
RAM size in kblocks (WN specification).
CE_MINVIRTMEM :
Virtual Memory size in kblocks (WN specification).
CE_OS :
Operating System name (WN specification).
CE_OS_RELEASE :
Operating System release (WN specification).
CE_OUTBOUNDIP :
TRUE if outbound connectivity is enabled at your site, FALSE otherwise (WN specification).
CE_RUNTIMEENV :
List of software tags supported by the site. The list can include VO-specific software tags. In order to assure backward compatibility it should include the entry 'LCG-2', the current middleware version and the list of previous middleware tags.
CE_SF00 :
Performance index of your fabric in SpecFloat 2000 (WN specification). For some examples of Spec values see http://www.specbench.org/osg/cpu2000/results/cint2000.html.
CE_SI00 :
Performance index of your fabric in SpecInt 2000 (WN specification). For some examples of Spec values see http://www.specbench.org/osg/cpu2000/results/cint2000.html.
CE_SMPSIZE :
Number of CPUs in an SMP box (WN specification).
CLASSIC_HOST :
The name of your SE_classic host.
CLASSIC_STORAGE_DIR :
The root storage directory on CLASSIC_HOST.
CRON_DIR :
Yaim writes all cron jobs to this directory. Change it if you want to turn off Yaim's management of cron.
DCACHE_ADMIN :
Host name of the server node which manages the pool of nodes.
DCACHE_POOLS :
List of pool nodes managed by the DCACHE_ADMIN server node.
DCACHE_PORT_RANGE :
DCACHE Port Range. This variable is optional and the default value is "20000,25000" .
DPMDATA :
Directory where the data is stored (absolute path, e.g. /storage).
DPM_HOST :
Host name of the DPM host, used also as a default DPM for the lcg-stdout-mon.
GLOBUS_TCP_PORT_RANGE :
Port range for Globus IO.
GRIDICE_SERVER_HOST :
GridIce server host name (usually run on the MON node).
GRIDMAP_AUTH :
List of ldap servers in edg-mkgridmap.conf which authenticate users.
GRID_TRUSTED_BROKERS :
List of the DNs of the Resource Brokers host certificates which are trusted by the Proxy node (ex: /O=Grid/O=CERN/OU=cern.ch/CN=host/testbed013.cern.ch).
GROUPS_CONF :
Path to the groups.conf file which contains information on mapping VOMS groups and roles to local groups. An example of this configuration file is given in /opt/lcg/yaim/examples/groups.conf.
INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.
JAVA_LOCATION :
Path to Java VM installation. It can be used in order to run a different version of java installed locally.
JOB_MANAGER :
The name of the job manager used by the gatekeeper.
LFC_CENTRAL :
A list of VOs for which the LFC should be configured as a central catalogue.
LFC_HOST :
Set this if you are building an LFC_HOST, not if you're just using clients.
LFC_LOCAL :
Normally the LFC will support all VOs in the VOS variable. If you want to limit this list, add the ones you need to LFC_LOCAL. For each item listed in the VOS variable you need to create a set of new variables as follows:
VO_<VO-NAME>_QUEUES :
The queues that the VO can use on the CE.
VO_<VO-NAME>_SE :
Default SE used by the VO. WARNING: VO-NAME must be in upper case.
VO_<VO-NAME>_SGM :
ldap directory with the VO software managers list. WARNING: VO-NAME must be in upper case.
VO_<VO-NAME>_STORAGE_DIR :
Mount point on the Storage Element for the VO. WARNING: VO-NAME must be in upper case.
VO_<VO-NAME>_SW_DIR :
Area on the WN for the installation of the experiment software. If a predefined shared area has been mounted on the WNs where VO managers can pre-install software, then this variable should point to this area. If instead there is no shared area and each job must install the software, then this variable should contain a dot ( . ). In any case the mounting of shared areas, as well as the local installation of VO software, is not managed by yaim and should be handled locally by Site Administrators. WARNING: VO-NAME must be in upper case.
VO_<VO-NAME>_USERS :
ldap directory with the VO users list. WARNING: VO-NAME must be in upper case.
VO_<VO-NAME>_VOMS_POOL_PATH :
If necessary, append this to the VOMS_SERVER URL for the pool account list.
VO_<VO-NAME>_VOMS_SERVERS :
A list of VOMS servers for the VO.
MON_HOST :
MON Box Hostname.
PX_HOST :
PX hostname.
QUEUES :
The name of the queues for the CE. These are by default set as the VO names.
RB_HOST :
Resource Broker Hostname.
REG_HOST :
RGMA Registry hostname.
RESET_DCACHE_CONFIGURATION :
Set this to yes if you want YAIM to configure dCache for you - if unset (or 'no') yaim will only configure the grid front-end to dCache.
SE_LIST :
A list of hostnames of the SEs available at your site.
SITE_EMAIL :
The e-mail address as published by the information system.
SITE_LAT :
Site latitude.
SITE_LOC :
"City, Country".
SITE_LONG :
Site longitude.
SITE_NAME :
Your GIIS.
SITE_SUPPORT_SITE :
Support entry point; unique ID for the site in the GOC DB and information system.
SITE_TIER :
Site tier.
SITE_WEB :
Site web site URL.
TORQUE_SERVER :
Set this if your torque server is on a different host from the CE. It is ignored for other batch systems.
USERS_CONF :
Path to the file containing the list of Linux users (pool accounts) to be created. This file, which contains a plain list of the users and their IDs, should be created by the Site Administrator. An example of this configuration file is given in /opt/lcg/yaim/examples/users.conf.
VOBOX_HOST :
VOBOX hostname.
VOBOX_PORT :
The port the VOBOX gsisshd listens on.
VOS :
List of supported VOs.
VO_SW_DIR :
Directory for installation of experiment software.
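
As an illustration, the dCache-specific part of site-info.def might look like the excerpt below. All hostnames are placeholders, and the exact format of the DCACHE_POOLS entries should be checked against the YAIM version in use:

DCACHE_ADMIN="dcache-admin.example.org"
DCACHE_POOLS="pool1.example.org:/pool1 pool2.example.org:/pool2"
DCACHE_PORT_RANGE="20000,25000"
RESET_DCACHE_CONFIGURATION=no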

Configure Library Paths

Author(s): Retico,Antonio
Email : support-lcg-manual-install@cern.ch

This chapter describes the configuration steps done by the yaim function 'config_ldconf'.

In order to allow the middleware libraries to be looked up and dynamically linked, the relevant paths need to be configured.
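
For reference, these are the library directories that config_ldconf appends to /etc/ld.so.conf when they exist on the node (taken from the function source reproduced in 18.1., shown here with the default /opt installation root); /sbin/ldconfig is run afterwards:

/opt/globus/lib
/opt/edg/lib
/opt/edg/externals/lib/
/usr/local/lib
/opt/lcg/lib
/usr/kerberos/lib
/usr/X11R6/lib
/usr/lib/qt-3.1/lib
/opt/gcc-3.2.2/lib
/opt/glite/lib
/opt/glite/externals/lib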


Specification of function: config_ldconf

The function 'config_ldconf' needs the following variables to be set in the configuration file:

INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.

The original code of the function can be found in:

/opt/lcg/yaim/functions/config_ldconf

The code is also reproduced in 18.1.

Set-up EDG configuration variables

Author(s): Retico,Antonio
Email : support-lcg-manual-install@cern.ch

This chapter describes the configuration steps done by the yaim function 'config_sysconfig_edg'.

The EDG configuration file is parsed by EDG daemons to locate the EDG root directory and various other global properties.

Create and edit the file /etc/sysconfig/edg as follows:

EDG_LOCATION=<INSTALL_ROOT>/edg
EDG_LOCATION_VAR=<INSTALL_ROOT>/edg/var
EDG_TMP=/tmp
X509_USER_CERT=/etc/grid-security/hostcert.pem
X509_USER_KEY=/etc/grid-security/hostkey.pem
GRIDMAP=/etc/grid-security/grid-mapfile
GRIDMAPDIR=/etc/grid-security/gridmapdir/
where <INSTALL_ROOT> is the installation root of the lcg middleware (/opt by default).

NOTE: it might be observed that some of the variables listed above, which deal with GSI (Grid Security Interface), are needed only on some service nodes (e.g. CE, RB) and not on others. Nevertheless, for the sake of simplicity, yaim uses the same definitions on all node types, which has been proven not to hurt.


Specification of function: config_sysconfig_edg

The function 'config_sysconfig_edg' needs the following variables to be set in the configuration file:

INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.

The original code of the function can be found in:

/opt/lcg/yaim/functions/config_sysconfig_edg

The code is also reproduced in 18.2.

Set-up Globus configuration variables

Author(s): Retico,Antonio
Email : support-lcg-manual-install@cern.ch

This chapter describes the configuration steps done by the yaim function 'config_sysconfig_globus'.

Create and edit the file /etc/sysconfig/globus as follows:

GLOBUS_LOCATION=<INSTALL_ROOT>/globus
GLOBUS_CONFIG=/etc/globus.conf
GLOBUS_TCP_PORT_RANGE="20000 25000"
export LANG=C
where <INSTALL_ROOT> is the installation root of the lcg middleware (/opt by default).


Specification of function: config_sysconfig_globus

The function 'config_sysconfig_globus' needs the following variables to be set in the configuration file:

GLOBUS_TCP_PORT_RANGE :
Port range for Globus IO.
INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.

The original code of the function can be found in:

/opt/lcg/yaim/functions/config_sysconfig_globus

The code is also reproduced in 18.3.

Set-up LCG configuration variables

Author(s): Retico,Antonio
Email : support-lcg-manual-install@cern.ch

This chapter describes the configuration steps done by the yaim function 'config_sysconfig_lcg'.

Create and edit the file /etc/sysconfig/lcg as follows:

LCG_LOCATION=<INSTALL_ROOT>/lcg
LCG_LOCATION_VAR=<INSTALL_ROOT>/lcg/var
LCG_TMP=/tmp
where <INSTALL_ROOT> is the installation root of the lcg middleware (/opt by default).


Specification of function: config_sysconfig_lcg

The function 'config_sysconfig_lcg' needs the following variables to be set in the configuration file:

INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.
SITE_NAME :
Your GIIS.

The original code of the function can be found in:

/opt/lcg/yaim/functions/config_sysconfig_lcg

The code is also reproduced in 18.4.

Set-up updating of CRLs

Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch

This chapter describes the configuration steps done by the yaim function 'config_crl'.

A cron script is installed to fetch new versions of the CRLs four times a day. The time at which the script runs is randomized in order to distribute the load on the CRL servers. If the configuration is run as root, the cron entry is installed in /etc/cron.d/edg-fetch-crl; otherwise it is installed as a user cron entry.

CRLs are also updated immediately by running the update script (<INSTALL_ROOT>/edg/etc/cron/edg-fetch-crl-cron).

A logrotate script is installed as /etc/logrotate.d/edg-fetch to prevent the logs from growing indefinitely.
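
With the default /opt installation root, the resulting /etc/cron.d/edg-fetch-crl entry looks something like the line below (the minute and the hours are randomized by YAIM, so the actual values will differ):

26 1,7,13,19 * * * root /opt/edg/etc/cron/edg-fetch-crl-cron >> /var/log/edg-fetch-crl-cron.log 2>&1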


Specification of function: config_crl

The function 'config_crl' needs the following variables to be set in the configuration file:

INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.

The original code of the function can be found in:

/opt/lcg/yaim/functions/config_crl

The code is also reproduced in 18.5.

Set-up RFIO

Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch

This chapter describes the configuration steps done by the yaim function 'config_rfio'.

rfiod is configured on SE_classic nodes by adding the appropriate ports (5001 TCP and UDP) to /etc/services and restarting the daemon.
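
Concretely, the following lines are appended to /etc/services if they are not already present, after which rfiod is restarted:

rfio     5001/tcp
rfio     5001/udp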

For SE_dpm nodes, rfiod is configured by config_DPM_rfio so no configuration is done here.

All other node types do not run rfiod. However, rfiod might still be installed by the CASTOR-client RPM; if that is the case, the function makes sure it is stopped and disabled.


Specification of function: config_rfio

The function 'config_rfio' needs the following variables to be set in the configuration file:

INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.

The original code of the function can be found in:

/opt/lcg/yaim/functions/config_rfio

The code is also reproduced in 18.6.

Set-up Host Certificates

Author(s): Retico,Antonio
Email : support-lcg-manual-install@cern.ch

This chapter describes the configuration steps done by the yaim function 'config_host_certs'.

The SE_dcache node requires the host certificate/key files to be put in place before you start the installation.

Contact your national Certification Authority (CA) to understand how to obtain a host certificate if you do not have one already.

Instructions on how to obtain a list of CAs can be found at

http://markusw.home.cern.ch/markusw/lcg2CAlist.html

From the CA list so obtained you should choose a CA close to you.

Once you have obtained a valid certificate, i.e. a file

hostcert.pem

containing the machine public key and a file

hostkey.pem

containing the machine private key, make sure to place the two files into the directory

/etc/grid-security

with the following permissions

> chmod 400 /etc/grid-security/hostkey.pem

> chmod 644 /etc/grid-security/hostcert.pem
It is IMPORTANT that permissions be set as shown, as otherwise certification errors will occur.

If the certificates don't exist, the function exits with an error message and the calling process is interrupted.
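
A quick sanity check of the installed files can be done with openssl (this is not part of the YAIM function, just a suggested manual verification):

> openssl x509 -in /etc/grid-security/hostcert.pem -noout -subject -dates

> openssl rsa -in /etc/grid-security/hostkey.pem -noout -check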


Specification of function: config_host_certs

The function 'config_host_certs' needs no variables to be set in the configuration file.

The original code of the function can be found in:

/opt/lcg/yaim/functions/config_host_certs

The code is also reproduced in 18.7.

Create pool accounts

Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch

This chapter describes the configuration steps done by the yaim function 'config_users'.

config_users creates pool accounts for grid users defined in users.conf. Each line in this file describes one user:

UID:LOGIN:GID:GROUP:VO:SGM_FLAG:
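
For example, the entries for a hypothetical dteam pool account and for the corresponding software manager (SGM) account could look like this (all IDs and account names are placeholders):

18118:dteam001:2688:dteam:dteam::
18950:dteamsgm:2688:dteam:dteam:sgm: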

First, the format of the users.conf file is checked (VO and SGM fields were added recently).

Groups are then created for the supported VOs (listed in <VOS> variable) using groupadd.

For each of the lines in users.conf, a user account is created (with useradd) if that user's VO is supported.

Finally, grid users are denied access to cron and at by adding their usernames to /etc/at.deny and /etc/cron.deny.


Specification of function: config_users

The function 'config_users' needs the following variables to be set in the configuration file:

INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.
USERS_CONF :
Path to the file containing the list of Linux users (pool accounts) to be created. This file, which contains a plain list of the users and their IDs, should be created by the Site Administrator. An example of this configuration file is given in /opt/lcg/yaim/examples/users.conf.
VOS :
List of supported VOs.

The original code of the function can be found in:

/opt/lcg/yaim/functions/config_users

The code is also reproduced in 18.8.

Create EDG users

Author(s): Retico,Antonio
Email : support-lcg-manual-install@cern.ch

This chapter describes the configuration steps done by the yaim function 'config_edgusers'.

Many of the services running on LCG service nodes are owned by the user edguser. The user edguser belongs to the group edguser and has a home directory in /home.

The user edginfo is required on all nodes publishing information to the Information System. The user belongs to the group edginfo and has a home directory in /home.

No special requirements exist for the IDs of the above-mentioned users and groups.

The function creates both the edguser and edginfo groups and users.


Specification of function: config_edgusers

The function 'config_edgusers' needs the following variables to be set in the configuration file:

INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.
USERS_CONF :
Path to the file containing the list of Linux users (pool accounts) to be created. This file, which contains a plain list of the users and their IDs, should be created by the Site Administrator. An example of this configuration file is given in /opt/lcg/yaim/examples/users.conf.
VOS :
List of supported VOs.

The original code of the function can be found in:

/opt/lcg/yaim/functions/config_edgusers

The code is also reproduced in 18.9.

Set-up pool account mappings

Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch

This chapter describes the configuration steps done by the yaim function 'config_mkgridmap'.

The format of the users.conf file is checked first. This file should have six colon-separated fields. Using this file, the /etc/grid-security/gridmapdir pool directory is created and initialized with the pool accounts.

Next, configuration for edg-mkgridmap is generated in <INSTALL_ROOT>/edg/etc/edg-mkgridmap.conf. edg-mkgridmap generates /etc/grid-security/grid-mapfile using VO membership information in VOMS and/or LDAP. The following lines are generated for each of the supported VOs:

group <VO_<vo>_SERVER>/Role=lcgadmin sgmuser
group <VO_<vo>_SERVER>/<VO_<vo>_VOMS_EXTRA_MAPS>
group <VO_<vo>_SERVER><VO_<vo>_VOMS_POOL_PATH> .user_prefix

group <VO_<vo>_SGM> sgmuser
group <VO_<vo>_USERS> .user_prefix
where sgmuser is SGM for the <vo> and user_prefix is the prefix for <vo> pool accounts (both values are inferred from users.conf). Multiple VOMS servers and extra maps can be defined.

Authentication URLs and site specific mappings are appended to the end of the file:

auth <GRIDMAP_AUTH>

gmf_local <INSTALL_ROOT>/edg/etc/grid-mapfile-local

If authentication URLs are not defined in <GRIDMAP_AUTH>, ldap://lcg-registrar.cern.ch/ou=users,o=registrar,dc=lcg,dc=org is used.

Site specific grid user mappings can be defined in <INSTALL_ROOT>/edg/etc/grid-mapfile-local. Contents of this file are included verbatim in the output of edg-mkgridmap.

<INSTALL_ROOT>/edg/etc/lcmaps/gridmapfile is generated with the following contents for each supported VO:

/VO=<vo>/GROUP=/<vo>/ROLE=lcgadmin sgmuser
/VO=<vo>/GROUP=/<vo> .user_prefix
This file defines local account mappings for VOMS enabled proxy certificates.
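
For example, for a hypothetical VO dteam whose SGM account is dteamsgm and whose pool accounts use the prefix dteam (dteam001, dteam002, ...), the generated entries would look like:

/VO=dteam/GROUP=/dteam/ROLE=lcgadmin dteamsgm
/VO=dteam/GROUP=/dteam .dteam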

<INSTALL_ROOT>/edg/etc/lcmaps/groupmapfile is generated with the following contents for each supported VO:

/VO=<vo>/GROUP=/<vo>/ROLE=lcgadmin vo_group
/VO=<vo>/GROUP=/<vo> vo_group
This file defines local group mappings for VOMS enabled proxy certificates.

After the configuration is finished, edg-mkgridmap is run with the new configuration to generate /etc/grid-security/grid-mapfile. A cron job for regenerating the grid-mapfile is installed to run four times a day.

A cron job for expiring gridmapdir pool accounts is installed to run once a day on all nodes except those running a DPM. This is a temporary fix to avoid users losing access to their files after the mapping expires and they are mapped to a different local user. By default, pool accounts expire if they have not been used for more than 2 days, except on the RB where they expire after 10 days.


Specification of function: config_mkgridmap

The function 'config_mkgridmap' needs the following variables to be set in the configuration file:

CRON_DIR :
Yaim writes all cron jobs to this directory. Change it if you want to turn off Yaim's management of cron.
GRIDMAP_AUTH :
List of ldap servers in edg-mkgridmap.conf which authenticate users.
GROUPS_CONF :
Path to the groups.conf file which contains information on mapping VOMS groups and roles to local groups. An example of this configuration file is given in /opt/lcg/yaim/examples/groups.conf.
INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.
USERS_CONF :
Path to the file containing the list of Linux users (pool accounts) to be created. This file, which contains a plain list of the users and their IDs, should be created by the Site Administrator. An example of this configuration file is given in /opt/lcg/yaim/examples/users.conf.
VOS :
List of supported VOs. For each item listed in the VOS variable you need to create a set of new variables as follows:
VO_<VO-NAME>_SGM :
ldap directory with the VO software managers list. WARNING: VO-NAME must be in upper case.
VO_<VO-NAME>_USERS :
ldap directory with the VO users list. WARNING: VO-NAME must be in upper case.
VO_<VO-NAME>_VOMS_POOL_PATH :
If necessary, append this to the VOMS_SERVER URL for the pool account list.
VO_<VO-NAME>_VOMS_SERVERS :
A list of VOMS servers for the VO.

The original code of the function can be found in:

/opt/lcg/yaim/functions/config_mkgridmap

The code is also reproduced in 18.10.

Set-up Java location

Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch

This chapter describes the configuration steps done by the yaim function 'config_java'.

Since Java is not included in the LCG distribution, Java location needs to be configured with yaim.

If <JAVA_LOCATION> is not defined in site-info.def, it is determined from installed Java RPMs (if available).

In the relocatable distribution, the JAVA_HOME environment variable is defined in <INSTALL_ROOT>/etc/profile.d/grid_env.sh and <INSTALL_ROOT>/etc/profile.d/grid_env.csh.

Otherwise, JAVA_HOME is defined in /etc/java/java.conf and /etc/java.conf, and the Java binaries are added to PATH in <INSTALL_ROOT>/edg/etc/profile.d/j2.sh and <INSTALL_ROOT>/edg/etc/profile.d/j2.csh.
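
For example, with a locally installed Java SDK the variable might be set in site-info.def as follows (the path is purely illustrative and depends on the Java version actually installed):

JAVA_LOCATION="/usr/java/j2sdk1.4.2_12"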


Specification of function: config_java

The function 'config_java' needs the following variables to be set in the configuration file:

INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.
JAVA_LOCATION :
Path to Java VM installation. It can be used in order to run a different version of java installed locally.

The original code of the function can be found in:

/opt/lcg/yaim/functions/config_java

The code is also reproduced in 18.11.

Set-up R-GMA client

Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch

This chapter describes the configuration steps done by the yaim function 'config_rgma_client'.

R-GMA client configuration is generated in <INSTALL_ROOT>/glite/etc/rgma/rgma.conf by running:

<INSTALL_ROOT>/glite/share/rgma/scripts/rgma-setup.py --secure=no --server=<MON_HOST> --registry=<REG_HOST> --schema=<REG_HOST>

The environment scripts <INSTALL_ROOT>/edg/etc/profile.d/edg-rgma-env.sh and <INSTALL_ROOT>/edg/etc/profile.d/edg-rgma-env.csh are also generated.

These files are sourced into the users' environment from <INSTALL_ROOT>/etc/profile.d/z_edg_profile.sh and <INSTALL_ROOT>/etc/profile.d/z_edg_profile.csh.


Specification of function: config_rgma_client

The function 'config_rgma_client' needs the following variables to be set in the configuration file:

INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.
MON_HOST :
MON Box Hostname.
REG_HOST :
RGMA Registry hostname.

The original code of the function can be found in:

/opt/lcg/yaim/functions/config_rgma_client

The code is also reproduced in 18.12.

Set-up Generic Information Provider

Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch

This chapter describes the configuration steps done by the yaim function 'config_gip'.

Generic Information Provider (GIP) is configured through <INSTALL_ROOT>/lcg/var/gip/lcg-info-generic.conf. The start of this file is common for all types of nodes:

ldif_file=<INSTALL_ROOT>/lcg/var/gip/lcg-info-static.ldif
generic_script=<INSTALL_ROOT>/lcg/libexec/lcg-info-generic
wrapper_script=<INSTALL_ROOT>/lcg/libexec/lcg-info-wrapper
temp_path=<INSTALL_ROOT>/lcg/var/gip/tmp
template=<INSTALL_ROOT>/lcg/etc/GlueSite.template
template=<INSTALL_ROOT>/lcg/etc/GlueCE.template
template=<INSTALL_ROOT>/lcg/etc/GlueCESEBind.template
template=<INSTALL_ROOT>/lcg/etc/GlueSE.template
template=<INSTALL_ROOT>/lcg/etc/GlueService.template

# Common for all
GlueInformationServiceURL: ldap://<hostname>:2135/mds-vo-name=local,o=grid

<hostname> is determined by running hostname -f.

For CE the following is added:

dn: GlueSiteUniqueID=<SITE_NAME>,mds-vo-name=local,o=grid
GlueSiteName: <SITE_NAME>
GlueSiteDescription: LCG Site
GlueSiteUserSupportContact: mailto: <SITE_EMAIL>
GlueSiteSysAdminContact: mailto: <SITE_EMAIL>
GlueSiteSecurityContact: mailto: <SITE_EMAIL>
GlueSiteLocation: <SITE_LOC>
GlueSiteLatitude: <SITE_LAT>
GlueSiteLongitude: <SITE_LONG>
GlueSiteWeb: <SITE_WEB>
GlueSiteOtherInfo: <SITE_TIER>
GlueSiteOtherInfo: <SITE_SUPPORT_SITE>
GlueForeignKey: GlueSiteUniqueID=<SITE_NAME>
GlueForeignKey: GlueClusterUniqueID=<CE_HOST>
GlueForeignKey: GlueSEUniqueID=<SE_HOST>

dynamic_script=<INSTALL_ROOT>/lcg/libexec/lcg-info-dynamic-ce
dynamic_script=<INSTALL_ROOT>/lcg/libexec/lcg-info-dynamic-software <INSTALL_ROOT>/lcg/var/gip/lcg-info-generic.conf 

# CE Information Provider
GlueCEHostingCluster: <CE_HOST>
GlueCEInfoGatekeeperPort: 2119
GlueCEInfoHostName: <CE_HOST>
GlueCEInfoLRMSType: <CE_BATCH_SYS>
GlueCEInfoLRMSVersion: not defined
GlueCEInfoTotalCPUs: 0
GlueCEPolicyMaxCPUTime: 0
GlueCEPolicyMaxRunningJobs: 0
GlueCEPolicyMaxTotalJobs: 0
GlueCEPolicyMaxWallClockTime: 0
GlueCEPolicyPriority: 1
GlueCEStateEstimatedResponseTime: 0
GlueCEStateFreeCPUs: 0
GlueCEStateRunningJobs: 0
GlueCEStateStatus: Production
GlueCEStateTotalJobs: 0
GlueCEStateWaitingJobs: 0
GlueCEStateWorstResponseTime: 0
GlueHostApplicationSoftwareRunTimeEnvironment: <ce_runtimeenv>
GlueHostArchitectureSMPSize: <CE_SMPSIZE>
GlueHostBenchmarkSF00: <CE_SF00>
GlueHostBenchmarkSI00: <CE_SI00>
GlueHostMainMemoryRAMSize: <CE_MINPHYSMEM>
GlueHostMainMemoryVirtualSize: <CE_MINVIRTMEM>
GlueHostNetworkAdapterInboundIP: <CE_INBOUNDIP>
GlueHostNetworkAdapterOutboundIP: <CE_OUTBOUNDIP>
GlueHostOperatingSystemName: <CE_OS>
GlueHostOperatingSystemRelease: <CE_OS_RELEASE>
GlueHostOperatingSystemVersion: 3
GlueHostProcessorClockSpeed: <CE_CPU_SPEED>
GlueHostProcessorModel: <CE_CPU_MODEL>
GlueHostProcessorVendor: <CE_CPU_VENDOR>
GlueSubClusterPhysicalCPUs: 0
GlueSubClusterLogicalCPUs: 0
GlueSubClusterTmpDir: /tmp
GlueSubClusterWNTmpDir: /tmp
GlueCEInfoJobManager: <JOB_MANAGER>
GlueCEStateFreeJobSlots: 0
GlueCEPolicyAssignedJobSlots: 0
GlueCESEBindMountInfo: none
GlueCESEBindWeight: 0

dn: GlueClusterUniqueID=<CE_HOST>, mds-vo-name=local,o=grid
GlueClusterName: <CE_HOST>
GlueForeignKey: GlueSiteUniqueID=<SITE_NAME>
GlueClusterService: <CE_HOST>:2119/jobmanager-<JOB_MANAGER>-<queue>
GlueForeignKey: GlueCEUniqueID=<CE_HOST>:2119/jobmanager-<JOB_MANAGER>-<queue>

dn: GlueSubClusterUniqueID=<CE_HOST>, GlueClusterUniqueID=<CE_HOST>, mds-vo-name=local,o=grid
GlueChunkKey: GlueClusterUniqueID=<CE_HOST>
GlueSubClusterName: <CE_HOST>

dn: GlueCEUniqueID=<CE_HOST>:2119/jobmanager-<JOB_MANAGER>-<queue>, mds-vo-name=local,o=grid
GlueCEName: <queue>
GlueForeignKey: GlueClusterUniqueID=<CE_HOST>
GlueCEInfoContactString: <CE_HOST>:2119/jobmanager-<JOB_MANAGER>-<queue>
GlueCEAccessControlBaseRule: VO:<vo>

dn: GlueVOViewLocalID=<vo>,GlueCEUniqueID=<CE_HOST>:2119/jobmanager-<JOB_MANAGER>-<queue>,mds-vo-name=local,o=grid
GlueCEAccessControlBaseRule: VO:<vo>
GlueCEInfoDefaultSE: <VO_<vo>_DEFAULT_SE>
GlueCEInfoApplicationDir: <VO_<vo>_SW_DIR>
GlueCEInfoDataDir: <VO_<vo>_STORAGE_DIR>
GlueChunkKey: GlueCEUniqueID=<CE_HOST>:2119/jobmanager-<JOB_MANAGER>-<queue>

dn: GlueCESEBindGroupCEUniqueID=<CE_HOST>:2119/jobmanager-<JOB_MANAGER>-<queue>, mds-vo-name=local,o=grid
GlueCESEBindGroupSEUniqueID: <se_list>

dn: GlueCESEBindSEUniqueID=<se>, GlueCESEBindGroupCEUniqueID=<CE_HOST>:2119/jobmanager-<JOB_MANAGER>-<queue>, mds-vo-name=local,o=grid
GlueCESEBindCEAccesspoint: <accesspoint>
GlueCESEBindCEUniqueID: <CE_HOST>:2119/jobmanager-<JOB_MANAGER>-<queue>
where <accesspoint> is the access point published for the SE.

Some lines can be generated multiple times for different <vo>s, <queue>s, <se>s, etc.

For each of the supported VOs, a directory is created in <INSTALL_ROOT>/edg/var/info/<vo>. These are used by SGMs to publish information on experiment software installed on the cluster.

For the nodes running GridICE server (usually SE) the following is added:

dn: GlueServiceUniqueID=<GRIDICE_SERVER_HOST>:2136,Mds-vo-name=local,o=grid
GlueServiceName: <SITE_NAME>-gridice
GlueServiceType: gridice
GlueServiceVersion: 1.1.0
GlueServiceEndpoint: ldap://<GRIDICE_SERVER_HOST>:2136/mds-vo-name=local,o=grid
GlueServiceStatus: OK
GlueServiceStatusInfo: No Problems
GlueServiceStartTime: 2002-10-09T19:00:00Z
GlueServiceOwner: LCG
GlueForeignKey: GlueSiteUniqueID=<SITE_NAME>
GlueServiceAccessControlRule:<vo>

For PX nodes the following is added:

dn: GlueServiceUniqueID=<PX_HOST>:7512,Mds-vo-name=local,o=grid
GlueServiceName: <SITE_NAME>-myproxy
GlueServiceType: myproxy
GlueServiceVersion: 1.1.0
GlueServiceEndpoint: <PX_HOST>:7512
GlueServiceStatus: OK
GlueServiceStatusInfo: No Problems
GlueServiceStartTime: 2002-10-09T19:00:00Z
GlueServiceOwner: LCG
GlueForeignKey: GlueSiteUniqueID=<SITE_NAME>
GlueServiceAccessControlRule: <grid_trusted_broker>

For nodes running RB the following is added:

dn: GlueServiceUniqueID=<RB_HOST>:7772,Mds-vo-name=local,o=grid
GlueServiceName: <SITE_NAME>-rb
GlueServiceType: ResourceBroker
GlueServiceVersion: 1.2.0
GlueServiceEndpoint: <RB_HOST>:7772
GlueServiceStatus: OK
GlueServiceStatusInfo: No Problems
GlueServiceStartTime: 2002-10-09T19:00:00Z
GlueServiceOwner: LCG
GlueForeignKey: GlueSiteUniqueID=<SITE_NAME>
GlueServiceAccessControlRule: <vo>

dn: GlueServiceDataKey=HeldJobs,GlueServiceUniqueID=gram://<RB_HOST>:7772,Mds-vo-name=local,o=grid
GlueServiceDataKey: HeldJobs
GlueServiceDataValue: 0
GlueChunkKey: GlueServiceUniqueID=gram://<RB_HOST>:7772

dn: GlueServiceDataKey=IdleJobs,GlueServiceUniqueID=gram://<RB_HOST>:7772,Mds-vo-name=local,o=grid
GlueServiceDataKey: IdleJobs
GlueServiceDataValue: 0
GlueChunkKey: GlueServiceUniqueID=gram://<RB_HOST>:7772

dn: GlueServiceDataKey=JobController,GlueServiceUniqueID=gram://<RB_HOST>:7772,Mds-vo-name=local,o=grid
GlueServiceDataKey: JobController
GlueServiceDataValue: 0
GlueChunkKey: GlueServiceUniqueID=gram://<RB_HOST>:7772

dn: GlueServiceDataKey=Jobs,GlueServiceUniqueID=gram://<RB_HOST>:7772,Mds-vo-name=local,o=grid
GlueServiceDataKey: Jobs
GlueServiceDataValue: 0
GlueChunkKey: GlueServiceUniqueID=gram://<RB_HOST>:7772

dn: GlueServiceDataKey=LogMonitor,GlueServiceUniqueID=gram://<RB_HOST>:7772,Mds-vo-name=local,o=grid
GlueServiceDataKey: LogMonitor
GlueServiceDataValue: 0
GlueChunkKey: GlueServiceUniqueID=gram://<RB_HOST>:7772

dn: GlueServiceDataKey=RunningJobs,GlueServiceUniqueID=gram://<RB_HOST>:7772,Mds-vo-name=local,o=grid
GlueServiceDataKey: RunningJobs
GlueServiceDataValue: 14
GlueChunkKey: GlueServiceUniqueID=gram://<RB_HOST>:7772

dn: GlueServiceDataKey=WorkloadManager,GlueServiceUniqueID=gram://<RB_HOST>:7772,Mds-vo-name=local,o=grid
GlueServiceDataKey: WorkloadManager
GlueServiceDataValue: 0
GlueChunkKey: GlueServiceUniqueID=gram://<RB_HOST>:7772

For central LFC the following is added:

dn: GlueServiceUniqueID=http://<LFC_HOST>:8085/,mds-vo-name=local,o=grid
GlueServiceName: <SITE_NAME>-lfc-dli
GlueServiceType: data-location-interface
GlueServiceVersion: 1.0.0
GlueServiceEndpoint: http://<LFC_HOST>:8085/
GlueServiceURI: http://<LFC_HOST>:8085/
GlueServiceAccessPointURL: http://<LFC_HOST>:8085/
GlueServiceStatus: running
GlueForeignKey: GlueSiteUniqueID=<SITE_NAME>
GlueServiceOwner: <vo>
GlueServiceAccessControlRule: <vo>

dn: GlueServiceUniqueID=<LFC_HOST>,mds-vo-name=local,o=grid
GlueServiceName: <SITE_NAME>-lfc
GlueServiceType: lcg-file-catalog
GlueServiceVersion: 1.0.0
GlueServiceEndpoint: <LFC_HOST>
GlueServiceURI: <LFC_HOST>
GlueServiceAccessPointURL: <LFC_HOST>
GlueServiceStatus: running
GlueForeignKey: GlueSiteUniqueID=<SITE_NAME>
GlueServiceOwner: <vo>
GlueServiceAccessControlRule: <vo>

For local LFC the following is added:

dn: GlueServiceUniqueID=<LFC_HOST>,mds-vo-name=local,o=grid
GlueServiceName: <SITE_NAME>-lfc
GlueServiceType: lcg-local-file-catalog
GlueServiceVersion: 1.0.0
GlueServiceEndpoint: <LFC_HOST>
GlueServiceURI: <LFC_HOST>
GlueServiceAccessPointURL: <LFC_HOST>
GlueServiceStatus: running
GlueForeignKey: GlueSiteUniqueID=<SITE_NAME>
GlueServiceOwner: <vo>
GlueServiceAccessControlRule: <vo>

For dcache and dpm nodes the following is added:

dn: GlueServiceUniqueID=httpg://<SE_HOST>:8443/srm/managerv1,Mds-Vo-name=local,o=grid
GlueServiceAccessPointURL: httpg://<SE_HOST>:8443/srm/managerv1
GlueServiceEndpoint: httpg://<SE_HOST>:8443/srm/managerv1
GlueServiceType: srm_v1
GlueServiceURI: httpg://<SE_HOST>:8443/srm/managerv1
GlueServicePrimaryOwnerName: LCG
GlueServicePrimaryOwnerContact: mailto:<SITE_EMAIL>
GlueForeignKey: GlueSiteUniqueID=<SITE_NAME>
GlueServiceVersion: 1.0.0
GlueServiceAccessControlRule: <vo>
GlueServiceInformationServiceURL: MDS2GRIS:ldap://<BDII_HOST>:2170/mds-vo-name=local,mds-vo-name=<SITE_NAME>,mds-vo-name=local,o=grid
GlueServiceStatus: running

For all types of SE the following is added:

dynamic_script=<INSTALL_ROOT>/lcg/libexec/lcg-info-dynamic-se

GlueSEType: <se_type>
GlueSEPort: 2811
GlueSESizeTotal: 0
GlueSESizeFree: 0
GlueSEArchitecture: <se_type>
GlueSAType: permanent
GlueSAPolicyFileLifeTime: permanent
GlueSAPolicyMaxFileSize: 10000
GlueSAPolicyMinFileSize: 1
GlueSAPolicyMaxData: 100
GlueSAPolicyMaxNumFiles: 10
GlueSAPolicyMaxPinDuration: 10
GlueSAPolicyQuota: 0
GlueSAStateAvailableSpace: 1
GlueSAStateUsedSpace: 1

dn: GlueSEUniqueID=<SE_HOST>,mds-vo-name=local,o=grid
GlueSEName: <SITE_NAME>:<se_type>
GlueForeignKey: GlueSiteUniqueID=<SITE_NAME>

dn: GlueSEAccessProtocolLocalID=gsiftp, GlueSEUniqueID=<SE_HOST>,Mds-Vo-name=local,o=grid
GlueSEAccessProtocolType: gsiftp
GlueSEAccessProtocolPort: 2811
GlueSEAccessProtocolVersion: 1.0.0
GlueSEAccessProtocolSupportedSecurity: GSI
GlueChunkKey: GlueSEUniqueID=<SE_HOST>

dn: GlueSEAccessProtocolLocalID=rfio, GlueSEUniqueID=<SE_HOST>,Mds-Vo-name=local,o=grid
GlueSEAccessProtocolType: rfio
GlueSEAccessProtocolPort: 5001
GlueSEAccessProtocolVersion: 1.0.0
GlueSEAccessProtocolSupportedSecurity: RFIO
GlueChunkKey: GlueSEUniqueID=<SE_HOST>
where <se_type> is srm_v1 for DPM and dCache, and disk otherwise.

For SE_dpm the following is added:

dn: GlueSALocalID=<vo>,GlueSEUniqueID=<SE_HOST>,Mds-Vo-name=local,o=grid
GlueSARoot: <vo>:/dpm/<domain>/home/<vo>
GlueSAPath: <vo>:/dpm/<domain>/home/<vo>
GlueSAAccessControlBaseRule: <vo>
GlueChunkKey: GlueSEUniqueID=<SE_HOST>

For SE_dcache the following is added:

dn: GlueSALocalID=<vo>,GlueSEUniqueID=<SE_HOST>,Mds-Vo-name=local,o=grid
GlueSARoot: <vo>:/pnfs/<domain>/home/<vo>
GlueSAPath: <vo>:/pnfs/<domain>/home/<vo>
GlueSAAccessControlBaseRule: <vo>
GlueChunkKey: GlueSEUniqueID=<SE_HOST>

For other types of SE the following is used:

dn: GlueSALocalID=<vo>,GlueSEUniqueID=<SE_HOST>,Mds-Vo-name=local,o=grid
GlueSARoot: <vo>:<vo>
GlueSAPath: <VO_<vo>_STORAGE_DIR>
GlueSAAccessControlBaseRule: <vo>
GlueChunkKey: GlueSEUniqueID=<SE_HOST>

For VOBOX the following is added:

dn: GlueServiceUniqueID=gsissh://<VOBOX_HOST>:<VOBOX_PORT>,Mds-vo-name=local,o=grid
GlueServiceAccessPointURL: gsissh://<VOBOX_HOST>:<VOBOX_PORT>
GlueServiceName: <SITE_NAME>-vobox
GlueServiceType: VOBOX
GlueServiceEndpoint: gsissh://<VOBOX_HOST>:<VOBOX_PORT>
GlueServicePrimaryOwnerName: LCG
GlueServicePrimaryOwnerContact: <SITE_EMAIL>
GlueForeignKey: GlueSiteUniqueID=<SITE_NAME>
GlueServiceVersion: 1.0.0
GlueServiceInformationServiceURL: ldap://<VOBOX_HOST>:2135/mds-vo-name=local,o=grid
GlueServiceStatus: running
GlueServiceAccessControlRule: <vo>

The configuration script is then run:

<INSTALL_ROOT>/lcg/sbin/lcg-info-generic-config <INSTALL_ROOT>/lcg/var/gip/lcg-info-generic.conf
The configuration script generates an LDIF file (<INSTALL_ROOT>/lcg/var/gip/lcg-info-static.ldif) by merging the templates from <INSTALL_ROOT>/lcg/etc/ with the data from <INSTALL_ROOT>/lcg/var/gip/lcg-info-generic.conf. A wrapper script is also created in <INSTALL_ROOT>/lcg/libexec/lcg-info-wrapper.

<INSTALL_ROOT>/globus/libexec/edg.info is created:

#!/bin/bash
#
# info-globus-ldif.sh
#
#Configures information providers for MDS
#
cat << EOF

dn: Mds-Vo-name=local,o=grid
objectclass: GlobusTop
objectclass: GlobusActiveObject
objectclass: GlobusActiveSearch
type: exec
path: <INSTALL_ROOT>/lcg/libexec
base: lcg-info-wrapper
args:
cachetime: 60
timelimit: 20
sizelimit: 250

EOF

<INSTALL_ROOT>/globus/libexec/edg.schemalist is created:

#!/bin/bash

cat <<EOF
<INSTALL_ROOT>/globus/etc/openldap/schema/core.schema
<INSTALL_ROOT>/glue/schema/ldap/Glue-CORE.schema
<INSTALL_ROOT>/glue/schema/ldap/Glue-CE.schema
<INSTALL_ROOT>/glue/schema/ldap/Glue-CESEBind.schema
<INSTALL_ROOT>/glue/schema/ldap/Glue-SE.schema
EOF

These two scripts are used to generate slapd configuration for Globus MDS.

<INSTALL_ROOT>/lcg/libexec/lcg-info-dynamic-ce is generated to call the information provider appropriate for the LRMS. For Torque the file has these contents:

#!/bin/sh
<INSTALL_ROOT>/lcg/libexec/lcg-info-dynamic-pbs <INSTALL_ROOT>/lcg/var/gip/lcg-info-generic.conf <TORQUE_SERVER>

R-GMA GIN periodically queries MDS and inserts the data into R-GMA. GIN is configured on all nodes except UI and WN by copying the host certificate to <INSTALL_ROOT>/glite/var/rgma/.certs and updating the configuration file (<INSTALL_ROOT>/glite/etc/rgma/ClientAuthentication.props) accordingly. Finally, the GIN configuration script (<INSTALL_ROOT>/glite/bin/rgma-gin-config) is run to configure the mapping between the Glue schema in MDS and the Glue tables in R-GMA. The rgma-gin service is restarted and configured to start on boot.


Specification of function: config_gip

The function 'config_gip' needs the following variables to be set in the configuration file:

BDII_HOST :
BDII Hostname.
CE_BATCH_SYS :
Implementation of the site batch system. Available values are "torque", "lsf", "pbs", "condor", etc.
CE_CPU_MODEL :
Model of the CPU used by the WN (WN specification). This parameter is a string whose domain is not defined yet in the GLUE Schema. The value used for Pentium III is "PIII".
CE_CPU_SPEED :
Clock frequency in MHz (WN specification).
CE_CPU_VENDOR :
Vendor of the CPU used by the WN (WN specification). This parameter is a string whose domain is not defined yet in the GLUE Schema. The value used for Intel is "intel".
CE_HOST :
Computing Element Hostname.
CE_INBOUNDIP :
TRUE if inbound connectivity is enabled at your site, FALSE otherwise (WN specification).
CE_MINPHYSMEM :
RAM size in kblocks (WN specification).
CE_MINVIRTMEM :
Virtual Memory size in kblocks (WN specification).
CE_OS :
Operating System name (WN specification).
CE_OS_RELEASE :
Operating System release (WN specification).
CE_OUTBOUNDIP :
TRUE if outbound connectivity is enabled at your site, FALSE otherwise (WN specification).
CE_RUNTIMEENV :
List of software tags supported by the site. The list can include VO-specific software tags. In order to assure backward compatibility it should include the entry 'LCG-2', the current middleware version and the list of previous middleware tags.
CE_SF00 :
Performance index of your fabric in SpecFloat 2000 (WN specification). For some examples of Spec values see http://www.specbench.org/osg/cpu2000/results/cint2000.html.
CE_SI00 :
Performance index of your fabric in SpecInt 2000 (WN specification). For some examples of Spec values see http://www.specbench.org/osg/cpu2000/results/cint2000.html.
CE_SMPSIZE :
Number of CPUs in an SMP box (WN specification).
CLASSIC_HOST :
The name of your SE_classic host.
CLASSIC_STORAGE_DIR :
The root storage directory on CLASSIC_HOST.
DCACHE_ADMIN :
Host name of the server node which manages the pool of nodes.
DPMDATA :
Directory where the data is stored (absolute path, e.g. /storage).
DPM_HOST :
Host name of the DPM host, used also as a default DPM for the lcg-stdout-mon.
GRIDICE_SERVER_HOST :
GridIce server host name (usually run on the MON node).
GRID_TRUSTED_BROKERS :
List of the DNs of the Resource Brokers host certificates which are trusted by the Proxy node (ex: /O=Grid/O=CERN/OU=cern.ch/CN=host/testbed013.cern.ch).
INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.
JOB_MANAGER :
The name of the job manager used by the gatekeeper.
LFC_CENTRAL :
A list of VOs for which the LFC should be configured as a central catalogue.
LFC_HOST :
Set this if you are building an LFC_HOST, not if you're just using clients.
LFC_LOCAL :
Normally the LFC will support all VOs in the VOS variable. If you want to limit this list, add the ones you need to LFC_LOCAL. For each item listed in the VOS variable you need to create a set of new variables as follows:
VO_<VO-NAME>_QUEUES :
The queues that the VO can use on the CE.
VO_<VO-NAME>_SE :
Default SE used by the VO. WARNING: VO-NAME must be in upper case.
VO_<VO-NAME>_STORAGE_DIR :
Mount point on the Storage Element for the VO. WARNING: VO-NAME must be in upper case.
VO_<VO-NAME>_SW_DIR :
Area on the WN for the installation of the experiment software. If a predefined shared area has been mounted on the WNs where VO managers can pre-install software, then this variable should point to this area. If instead there is no shared area and each job must install the software, then this variable should contain a dot ( . ). In any case the mounting of shared areas, as well as the local installation of VO software, is not managed by yaim and should be handled locally by Site Administrators. WARNING: VO-NAME must be in upper case.
PX_HOST :
PX hostname.
QUEUES :
The name of the queues for the CE. These are by default set as the VO names.
RB_HOST :
Resource Broker Hostname.
SE_LIST :
A list of hostnames of the SEs available at your site.
SITE_EMAIL :
The e-mail address as published by the information system.
SITE_LAT :
Site latitude.
SITE_LOC :
"City, Country".
SITE_LONG :
Site longitude.
SITE_NAME :
Your GIIS.
SITE_SUPPORT_SITE :
Support entry point; unique ID for the site in the GOC DB and information system.
SITE_TIER :
Site tier.
SITE_WEB :
Site web site URL.
TORQUE_SERVER :
Set this if your torque server is on a different host from the CE. It is ignored for other batch systems.
VOBOX_HOST :
VOBOX hostname.
VOBOX_PORT :
The port the VOBOX gsisshd listens on.
VOS :
List of supported VOs.
VO_SW_DIR :
Directory for installation of experiment software.

The original code of the function can be found in:

/opt/lcg/yaim/functions/config_gip

The code is also reproduced in 18.13.

Set-up Globus daemons

Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch

This chapter describes the configuration steps done by the yaim function 'config_globus'.

The Globus configuration file /etc/globus.conf is parsed by Globus daemon startup scripts to locate the Globus root directory and other global/daemon specific properties. The contents of the configuration file depend on the type of the node. The following table contains information on daemon to node mapping:

node/daemon MDS GridFTP Gatekeeper
CE yes yes yes
VOBOX yes yes yes
SE_* yes yes no
SE_dpm yes no no
PX yes no no
RB yes no no
LFC yes no no
GridICE yes no no


Note that SE_dpm does not run standard GridFTP server, but a specialized DPM version.

The configuration file is divided into sections:

common
Defines Globus installation directory, host certificates, location of gridmap file etc.
mds
Defines information providers.
gridftp
Defines the location of the GridFTP log file.
gatekeeper
Defines jobmanagers and their parameters.

Logrotate scripts globus-gatekeeper and gridftp are installed in /etc/logrotate.d/.

The Globus initialization script (<INSTALL_ROOT>/globus/sbin/globus-initialization.sh) is run next.

Finally, the appropriate daemons (globus-mds, globus-gatekeeper, globus-gridftp, lcg-mon-gridftp) are started (and configured to start on boot).
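
The equivalent manual operations for one of these services look like this (shown for globus-mds; YAIM performs the corresponding steps for each of the daemons listed above):

> /sbin/chkconfig globus-mds on

> /sbin/service globus-mds restart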


Specification of function: config_globus

The function 'config_globus' needs the following variables to be set in the configuration file:

CE_HOST :
Computing Element Hostname.
GRIDICE_SERVER_HOST :
GridIce server host name (usually run on the MON node).
INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.
JOB_MANAGER :
The name of the job manager used by the gatekeeper.
PX_HOST :
PX hostname.
RB_HOST :
Resource Broker Hostname.
SITE_NAME :
Your GIIS.

The original code of the function can be found in:

/opt/lcg/yaim/functions/config_globus

The code is also reproduced in 18.14.

Set-up dCache SE

Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch

This chapter describes the configuration steps done by the yaim function 'config_sedcache'.

If dCache reconfiguration is not requested (<RESET_DCACHE_CONFIGURATION>), only grid-mapfile generation and cron jobs are configured. Otherwise all of the following is done.

Portmapper (portmap) is restarted if it's not running.

PostgreSQL is configured only on the admin node. First the postgresql daemon is restarted. If the dcache database already exists, the configuration is aborted. PostgreSQL is configured to accept TCP connections (/var/lib/pgsql/data/postgresql.conf) from the local host (/var/lib/pgsql/data/pg_hba.conf):

host    all         all         127.0.0.1   255.255.255.255   trust
host    all         all         <hostname>  255.255.255.255   trust
local   all         all                                       trust
The dcache database and the srmdcache user are created. The Resilience Manager database (replicas) is created and initialized from /opt/d-cache/etc/pd_dump-s-U_enstore.sql. Finally, PostgreSQL is configured to start on boot.

pnfs is configured only on the admin node. The pnfs configuration (<INSTALL_ROOT>/pnfs.3.1.10/pnfs/etc/pnfs_config) is copied from a template (<INSTALL_ROOT>/pnfs.3.1.10/pnfs/etc/pnfs_config.template). If it is not already installed (determined by the existence of <INSTALL_ROOT>/d-cache-lcg/install/.pnfs_done), the pnfs installation script (<INSTALL_ROOT>/pnfs.3.1.10/pnfs/install/pnfs-install.sh) is run.

<INSTALL_ROOT>/d-cache/config/dCacheSetup is generated from the template in <INSTALL_ROOT>/d-cache/config/dCacheSetup.template.

If the configuration is run on a pool node, the appropriate pool directory is created and the following line is added to <INSTALL_ROOT>/d-cache/etc/pool_path:

<pool_directory> <pool_size> no
If size is not specified in <DCACHE_POOLS>, it is determined by running df.
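
For example, a pool node contributing a single pool of roughly 100 GB might end up with a line like the following (the directory and size are placeholders):

/pool/pool1 100 no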

If not already installed (determined by existence of <INSTALL_ROOT>/d-cache-lcg/install/.dcache_done), dCache installation script (<INSTALL_ROOT>/d-cache/install/install.sh) is run.

On admin nodes pnfs, dcache-core and dcache-opt scripts are installed in /etc/init.d/. On pool nodes only dcache-pool is installed.

On admin nodes the following /opt/d-cache/etc/door_config is generated:

ADMIN_NODE      myAdminNode

door          active      
--------------------
GSIDCAP         no
GRIDFTP         no
SRM             yes

On pool nodes the following /opt/d-cache/etc/door_config is generated:

ADMIN_NODE      myAdminNode

door          active      
--------------------
GSIDCAP         no
GRIDFTP         yes
SRM             no

On pool nodes /opt/d-cache/install/install_doors.sh is run and the following line is appended to /etc/fstab:

<DCACHE_ADMIN>:/fs /pnfs/fs             nfs     hard,intr,rw,noac,auto 0 0

Installed dCache daemons (depending on the node type) are restarted and configured to start on boot.

The grid-mapfile is set up by running the config_mkgridmap yaim function. <INSTALL_ROOT>/d-cache/bin/grid-mapfile2dcache-kpwd is then run to generate /opt/d-cache/etc/dcache.kpwd from the contents of /etc/grid-security/grid-mapfile.
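
With the default /opt installation root, the same conversion can be repeated by hand at any time:

> /opt/d-cache/bin/grid-mapfile2dcache-kpwd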

A cron job for regenerating /etc/grid-security/grid-mapfile and /opt/d-cache/etc/dcache.kpwd is installed to run four times a day.

A dynamic dCache-specific information provider is installed in <INSTALL_ROOT>/lcg/libexec/lcg-info-dynamic-se:

#!/bin/sh
#
# Script dynamically generated by YAIM function config_sedcache
#

<INSTALL_ROOT>/lcg/libexec/lcg-info-dynamic-dcache <INSTALL_ROOT>/lcg/var/gip/lcg-info-generic.conf

A cron job for generating proxy certificates for the edginfo and rgma users is installed to run once an hour. The proxy certificates are generated from the host certificate and are placed in <INSTALL_ROOT>/lcg/hostproxy and <INSTALL_ROOT>/lcg/hostproxy.rgma, respectively.
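
A proxy of this kind can be created manually from the host credentials with grid-proxy-init, roughly as follows (the exact command and options used by the cron job may differ):

> grid-proxy-init -cert /etc/grid-security/hostcert.pem -key /etc/grid-security/hostkey.pem -out /opt/lcg/hostproxy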


Specification of function: config_sedcache

The function 'config_sedcache' needs the following variables to be set in the configuration file:

CRON_DIR :
Yaim writes all cron jobs to this directory. Change it if you want to turn off Yaim's management of cron.
DCACHE_ADMIN :
Host name of the server node which manages the pool of nodes.
DCACHE_POOLS :
List of pool nodes managed by the DCACHE_ADMIN server node.
DCACHE_PORT_RANGE :
DCACHE Port Range. This variable is optional and the default value is "20000,25000" .
INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.
RESET_DCACHE_CONFIGURATION :
Set this to yes if you want YAIM to configure dCache for you - if unset (or 'no') yaim will only configure the grid front-end to dCache.
USERS_CONF :
Path to the file containing the list of Linux users (pool accounts) to be created. This file, which contains a plain list of the users and their IDs, should be created by the Site Administrator. An example of this configuration file is given in /opt/lcg/yaim/examples/users.conf.
VOS :
List of supported VOs.

The original code of the function can be found in:

/opt/lcg/yaim/functions/config_sedcache

The code is also reproduced in 18.15.

Source code


config_ldconf

config_ldconf () {

    INSTALL_ROOT=${INSTALL_ROOT:-/opt} 
    
cp -p /etc/ld.so.conf /etc/ld.so.conf.orig

    LIBDIRS="${INSTALL_ROOT}/globus/lib \
	     ${INSTALL_ROOT}/edg/lib \
             ${INSTALL_ROOT}/edg/externals/lib/ \
	     /usr/local/lib \
             ${INSTALL_ROOT}/lcg/lib \
             /usr/kerberos/lib \
             /usr/X11R6/lib \
             /usr/lib/qt-3.1/lib \
             ${INSTALL_ROOT}/gcc-3.2.2/lib \
             ${INSTALL_ROOT}/glite/lib \
             ${INSTALL_ROOT}/glite/externals/lib"

    
    if [ -f /etc/ld.so.conf.add ]; then
	rm -f /etc/ld.so.conf.add
    fi

    for libdir in ${LIBDIRS}; do 
	if  ( ! grep -q $libdir /etc/ld.so.conf && [ -d $libdir ] ); then
	    echo $libdir >> /etc/ld.so.conf.add
	fi
    done

    if [ -f /etc/ld.so.conf.add ]; then    
	sort -u /etc/ld.so.conf.add >> /etc/ld.so.conf  
	rm -f  /etc/ld.so.conf.add
    fi

    /sbin/ldconfig

    return 0
}


config_sysconfig_edg

config_sysconfig_edg(){

INSTALL_ROOT=${INSTALL_ROOT:-/opt}

cat <<EOF > /etc/sysconfig/edg
EDG_LOCATION=$INSTALL_ROOT/edg
EDG_LOCATION_VAR=$INSTALL_ROOT/edg/var
EDG_TMP=/tmp
X509_USER_CERT=/etc/grid-security/hostcert.pem
X509_USER_KEY=/etc/grid-security/hostkey.pem
GRIDMAP=/etc/grid-security/grid-mapfile
GRIDMAPDIR=/etc/grid-security/gridmapdir/
EDG_WL_BKSERVERD_ADDOPTS=--rgmaexport 
EDG_WL_RGMA_FILE=/var/edgwl/logging/status.log
EOF

return 0
}


config_sysconfig_globus

config_sysconfig_globus() {

INSTALL_ROOT=${INSTALL_ROOT:-/opt}

# If GLOBUS_TCP_PORT_RANGE is unset, give it a good default
# Leave it alone if it is set but empty
GLOBUS_TCP_PORT_RANGE=${GLOBUS_TCP_PORT_RANGE-"20000 25000"}

cat <<EOF > /etc/sysconfig/globus
GLOBUS_LOCATION=$INSTALL_ROOT/globus
GLOBUS_CONFIG=/etc/globus.conf
export LANG=C
EOF

# Set GLOBUS_TCP_PORT_RANGE, but not for nodes which are only WNs
if [ "$GLOBUS_TCP_PORT_RANGE" ] && ( ! echo $NODE_TYPE_LIST | egrep -q '^ *WN_?[[:alpha:]]* *$' ); then
    echo "GLOBUS_TCP_PORT_RANGE=\"$GLOBUS_TCP_PORT_RANGE\"" >> /etc/sysconfig/globus
fi

(
    # HACK to avoid complaints from services that do not need it,
    # but get started via a login shell before the file is created...

    f=$INSTALL_ROOT/globus/libexec/globus-script-initializer
    echo '' > $f
    chmod 755 $f
)

return 0
}


config_sysconfig_lcg

config_sysconfig_lcg(){

INSTALL_ROOT=${INSTALL_ROOT:-/opt}

cat <<EOF > /etc/sysconfig/lcg
LCG_LOCATION=$INSTALL_ROOT/lcg
LCG_LOCATION_VAR=$INSTALL_ROOT/lcg/var
LCG_TMP=/tmp
export SITE_NAME=$SITE_NAME
EOF

return 0
}


config_crl

config_crl(){

INSTALL_ROOT=${INSTALL_ROOT:-/opt}

let minute="$RANDOM%60"

let h1="$RANDOM%24"
let h2="($h1+6)%24"
let h3="($h1+12)%24"
let h4="($h1+18)%24"

if !( echo "${NODE_TYPE_LIST}" | grep TAR > /dev/null ); then

    if [ ! -f /etc/cron.d/edg-fetch-crl ]; then
	echo "Now updating the CRLs - this may take a few minutes..."
	$INSTALL_ROOT/edg/etc/cron/edg-fetch-crl-cron >> /var/log/edg-fetch-crl-cron.log 2>&1
    fi

cron_job edg-fetch-crl root "$minute $h1,$h2,$h3,$h4 * * * $INSTALL_ROOT/edg/etc/cron/edg-fetch-crl-cron >> /var/log/edg-fetch-crl-cron.log 2>&1"

    cat <<EOF > /etc/logrotate.d/edg-fetch
/var/log/edg-fetch-crl-cron.log  {
    compress
    monthly
    rotate 12
    missingok
    ifempty
    create
}
EOF

else

    cron_job edg-fetch-crl `whoami` "$minute $h1,$h2,$h3,$h4 * * * $INSTALL_ROOT/edg/etc/cron/edg-fetch-crl-cron >> $INSTALL_ROOT/edg/var/log/edg-fetch-crl-cron.log 2>&1"
    if [ ! -d $INSTALL_ROOT/edg/var/log ]; then
	mkdir -p $INSTALL_ROOT/edg/var/log
    fi
    echo "Now updating the CRLs - this may take a few minutes..."
    $INSTALL_ROOT/edg/etc/cron/edg-fetch-crl-cron >> $INSTALL_ROOT/edg/var/log/edg-fetch-crl-cron.log 2>&1

fi

return 0
}


config_rfio

config_rfio() {

INSTALL_ROOT=${INSTALL_ROOT:-/opt}

# This function turns rfio on where necessary and
# just as important, turns it off where it isn't necessary

if ( echo "${NODE_TYPE_LIST}" | grep -q SE_classic ); then

    if [ "x`grep rfio /etc/services | grep tcp`" = "x"  ]; then
	echo "rfio     5001/tcp" >> /etc/services
    fi

    if [ "x`grep rfio /etc/services | grep udp`" = "x"  ]; then
	echo "rfio     5001/udp" >> /etc/services
    fi

    /sbin/service rfiod restart

elif ( echo "${NODE_TYPE_LIST}" | grep -q SE_dpm ); then

    return 0

elif (  rpm -qa | grep -q CASTOR-client ); then 

    /sbin/service rfiod stop
    /sbin/chkconfig --level 2345 rfiod off

fi

return 0

}


config_host_certs

config_host_certs(){

if [ -f /etc/grid-security/hostkey.pem ]; then
    chmod 400 /etc/grid-security/hostkey.pem
elif  [ -f /etc/grid-security/hostcert.pem ]; then
    chmod 644 /etc/grid-security/hostcert.pem
else
    echo "Please copy the hostkey.pem and hostcert.pem to /etc/grid-security"
    return 1
fi
return 0
}


config_users

config_users(){
#
# Creates the Pool Users.
#
# Takes the users, groups and ids from a configuration file (USERS_CONF).
# File format: 
#
# UserId:User:GroupId:Group
#

INSTALL_ROOT=${INSTALL_ROOT:-/opt}

requires USERS_CONF VOS

if [ ! -e $USERS_CONF ]; then
    echo "$USERS_CONF not found."
    return 1	
fi

check_users_conf_format

# Add each group required by $VOS
awk -F: '{print $3, $4, $5}' ${USERS_CONF} | sort -u | while read gid groupname virtorg; do
    if ( [ "$virtorg" ] && echo $VOS | grep -w "$virtorg" > /dev/null ); then
	groupadd -g $gid $groupname 2> /dev/null
    fi
done

grid_accounts=
newline='
'

# Add all the users for each VO in ${VOS}
for x in `cat $USERS_CONF`; do 
    # ensure that this VO is in the $VOS list
    virtorg=`echo $x | cut -d":" -f5`
    if ( [ "$virtorg" ] && echo $VOS | grep -w "$virtorg" > /dev/null ); then
	user=`echo $x | cut -d":" -f2`
	id=`echo $x | cut -d":" -f1`
	group=`echo $x | cut -d":" -f3`
	if ( ! id $user > /dev/null 2>&1 ); then
	    useradd -c "mapped user for group ID $group" -u $id -g $group $user
	fi

	# grid users shall not be able to submit at or cron jobs
	for deny in /etc/at.deny /etc/cron.deny; do
	    tmp=$deny.$$
	    touch $deny
	    (grep -v "^$user\$" $deny; echo "$user") > $tmp && mv $tmp $deny
	done

	grid_accounts="$grid_accounts$newline$user"
    fi
done

(
    cga=$INSTALL_ROOT/lcg/etc/cleanup-grid-accounts.conf
    cga_tmp=$cga.$$

    [ -r $cga ] || exit

    (
	sed '/YAIM/,$d' $cga
	echo "# next lines added by YAIM on `date`"
	echo "ACCOUNTS='$grid_accounts$newline'"
    ) > $cga_tmp

    mv $cga_tmp $cga
)

let minute="$RANDOM%60"
let h="$RANDOM%6"
f=/var/log/cleanup-grid-accounts.log

if ( echo "${NODE_TYPE_LIST}" | grep '\<CE' > /dev/null ); then

    cron_job cleanup-grid-accounts root "$minute $h * * * \
$INSTALL_ROOT/lcg/sbin/cleanup-grid-accounts.sh -v -F >> $f 2>&1"

    cat <<EOF > /etc/logrotate.d/cleanup-grid-accounts
$f {
    compress
    daily
    rotate 30
    missingok
}
EOF

elif ( echo "${NODE_TYPE_LIST}" | grep '\<WN' > /dev/null ); then

    cron_job cleanup-grid-accounts root "$minute $h * * * \
$INSTALL_ROOT/lcg/sbin/cleanup-grid-accounts.sh -v >> $f 2>&1"

    cat <<EOF > /etc/logrotate.d/cleanup-grid-accounts
$f {
    compress
    daily
    rotate 30
    missingok
}
EOF

fi

return 0
}


config_edgusers

config_edgusers(){

INSTALL_ROOT=${INSTALL_ROOT:-/opt}

check_users_conf_format

if ( ! id edguser > /dev/null 2>&1 ); then
    useradd -r -c "EDG User" edguser
    mkdir -p /home/edguser
    chown edguser:edguser /home/edguser
fi

if ( ! id edginfo > /dev/null 2>&1 ); then
    useradd -r -c "EDG Info user" edginfo
    mkdir -p /home/edginfo
    chown edginfo:edginfo /home/edginfo
fi

if ( ! id rgma > /dev/null 2>&1 ); then
    useradd -r -c "RGMA user" -m -d ${INSTALL_ROOT}/glite/etc/rgma rgma
fi

# Make sure edguser is a member of each group

awk -F: '{print $3, $4, $5}' ${USERS_CONF} | sort -u | while read gid groupname virtorg; do
    if ( [ "$virtorg" ] && echo $VOS | grep -w "$virtorg" > /dev/null ); then
	# On some nodes the users are not created, so the group may not exist.
	# Isn't there a better way to check for group existence?
	if ( grep "^${groupname}:" /etc/group > /dev/null ); then
	    gpasswd -a edguser $groupname  > /dev/null
	fi
    fi
done

return 0
}


config_mkgridmap

config_mkgridmap(){

requires USERS_CONF GROUPS_CONF VOS

INSTALL_ROOT=${INSTALL_ROOT:-/opt}

if [ ! -e $USERS_CONF ]; then
    echo "$USERS_CONF not found."
    return 1
fi

if [ ! -e $GROUPS_CONF ]; then
    echo "$GROUPS_CONF not found."
    return 1
fi

check_users_conf_format

gmc=$INSTALL_ROOT/edg/etc/edg-mkgridmap.conf
gmd=/etc/grid-security/gridmapdir

mkdir -p $gmd
chown root:edguser $gmd
chmod 775 $gmd

for user in `awk -F: '$6==""{print $2}' $USERS_CONF`; do
    f=$gmd/$user
    [ -f $f ] || touch $f
done

if ( echo "${NODE_TYPE_LIST}" | egrep -q 'dpm|LFC' ); then
    gmc_dm=$INSTALL_ROOT/lcg/etc/lcgdm-mkgridmap.conf
else
    gmc_dm=/dev/null
fi

cat << EOF > $gmc
##############################################################################
#
# edg-mkgridmap.conf generated by YAIM on `date`
#
##############################################################################

EOF

cat << EOF > $gmc_dm
##############################################################################
#
# lcgdm-mkgridmap.conf generated by YAIM on `date`
#
##############################################################################

EOF

lcmaps=${INSTALL_ROOT}/edg/etc/lcmaps
lcmaps_gridmapfile=$lcmaps/gridmapfile
lcmaps_groupmapfile=$lcmaps/groupmapfile

mkdir -p $lcmaps
rm -f $lcmaps_gridmapfile $lcmaps_groupmapfile

for VO in `echo $VOS | tr '[:lower:]' '[:upper:]'`; do

    # Set some variables

    VO_lower=`echo $VO | tr '[:upper:]' '[:lower:]'`

    vo_user_prefix=`users_getvoprefix $VO`
    [ -z "$vo_user_prefix" ] && vo_user_prefix=$VO_lower

    vo_group=`users_getvogroup $VO`

    sgmuser=`users_getsgmuser $VO`
    prduser=`users_getprduser $VO`

    eval voms_pool='$'VO_${VO}_VOMS_POOL_PATH
    test -z "$voms_pool" || voms_pool=/${voms_pool#/}

    eval voms_servers='$'VO_${VO}_VOMS_SERVERS

    vo_match=/VO=$VO_lower/GROUP=/$VO_lower
    role_match=$vo_match/ROLE=

    echo "# $VO" >> $gmc

    ### VOMS sgm
    if [ "$sgmuser" -a "$voms_servers" ]; then

	#
	# "/VO=dteam/GROUP=/dteam/ROLE=lcgadmin":::sgm:
	#

	role=`sed -n 's|^"'"$role_match"'\(.*\)":.*:sgm:* *$|\1|p' $GROUPS_CONF`

	echo "# Map VO members (Role) $sgmuser" >> $gmc

	split_quoted_variable $voms_servers | while read server; do
	    echo "group ${server%/}/Role=$role $sgmuser"  >> $gmc
	    echo "group ${server%/}/Role=$role $VO_lower" >> $gmc_dm
	done
	echo >> $gmc
    fi

    ### VOMS prd
    if [ "$prduser" -a "$voms_servers" ]; then

	#
	# "/VO=dteam/GROUP=/dteam/ROLE=production":::prd:
	#

	role=`sed -n 's|^"'"$role_match"'\(.*\)":.*:prd:* *$|\1|p' $GROUPS_CONF`

	echo "# Map VO members (Role) $prduser" >> $gmc

	split_quoted_variable $voms_servers | while read server; do
	    echo "group ${server%/}/Role=$role $prduser"  >> $gmc
	    echo "group ${server%/}/Role=$role $VO_lower" >> $gmc_dm
	done
	echo >> $gmc
    fi

    ### VOMS pool
    if [ "$voms_servers" ]; then
	echo "# Map VO members  (root Group) $VO_lower" >> $gmc

	split_quoted_variable $voms_servers | while read server; do
	    echo "group ${server%/}${voms_pool} .$vo_user_prefix" >> $gmc
	    echo "group ${server%/}${voms_pool} $VO_lower"        >> $gmc_dm
	done
	echo >> $gmc
    fi

    echo "# LDAP lines for ${VO}" >> $gmc

    ### LDAP sgm
    if [ "$sgmuser" ]; then
	eval ldap_sgm='$'VO_${VO}_SGM
	test -z "$ldap_sgm" || {
	    echo "group $ldap_sgm $sgmuser"  >> $gmc
	    echo "group $ldap_sgm $VO_lower" >> $gmc_dm
	}
    fi

    ### LDAP pool
    eval ldap_users='$'VO_${VO}_USERS
    test -z "$ldap_users" || {
	echo "group $ldap_users .$vo_user_prefix" >> $gmc
	echo "group $ldap_users $VO_lower"        >> $gmc_dm
    }

    echo >> $gmc
    echo >> $gmc

    ### VOMS gridmapfile and groupmapfile

    #
    # "/VO=cms/GROUP=/cms/ROLE=lcgadmin":::sgm:
    # "/VO=cms/GROUP=/cms/ROLE=production":::prd:
    # "/VO=cms/GROUP=/cms/GROUP=HeavyIons":cms01:1340::
    # "/VO=cms/GROUP=/cms/GROUP=Higgs":cms02:1341::
    # "/VO=cms/GROUP=/cms/GROUP=StandardModel":cms03:1342::
    # "/VO=cms/GROUP=/cms/GROUP=Susy":cms04:1343::
    # "/VO=cms/GROUP=/cms"::::
    #

    sed -n '/^"\/VO='"$VO_lower"'\//p' $GROUPS_CONF | while read line; do

	fqan=` echo "$line" | sed 's/":.*/"/' `
	line=` echo "$line" | sed 's/.*"://'  `
	group=`echo "$line" | sed 's/:.*//'   `
	line=` echo "$line" | sed 's/[^:]*://'`
	gid=`  echo "$line" | sed 's/:.*//'   `
	line=` echo "$line" | sed 's/[^:]*://'`
	flag=` echo "$line" | sed 's/:.*//'   `

	if [ "$flag" = sgm ]; then

	    u=$sgmuser
	    g=$vo_group

	elif [ "$flag" = prd ]; then

	    u=$prduser
	    g=$vo_group

	elif [ "$group" ]; then

	    groupadd ${gid:+"-g"} ${gid:+"$gid"} "$group" 2>&1 | grep -v exists

	    u=.$vo_user_prefix
	    g=$group

	else

	    u=.$vo_user_prefix
	    g=$vo_group
	fi

	echo "$fqan $u" >> $lcmaps_gridmapfile
	echo "$fqan $g" >> $lcmaps_groupmapfile

    done

done # End of VO loop

cat << EOF >> $gmc
#############################################################################
# List of auth URIs
# eg 'auth ldap://marianne.in2p3.fr/ou=People,o=testbed,dc=eu-datagrid,dc=org'
# If these are defined then users must be authorised in one of the following
# auth servers.


# A list of authorised users.
EOF

GRIDMAP_AUTH=${GRIDMAP_AUTH:-\
ldap://lcg-registrar.cern.ch/ou=users,o=registrar,dc=lcg,dc=org}

for i in $GRIDMAP_AUTH; do
    echo "auth $i" >> $gmc
    echo "auth $i" >> $gmc_dm
    echo >> $gmc
done

f=$INSTALL_ROOT/edg/etc/grid-mapfile-local

[ -f $f ] || touch $f

cat << EOF >> $gmc
#############################################################################
# DEFAULT_LCLUSER: default_lcluser lcluser
# default_lcuser .


#############################################################################
# ALLOW and DENY: deny|allow pattern_to_match
# allow *INFN*


#############################################################################
# Local grid-mapfile to import and override all the above information.
# eg, gmf_local $f

gmf_local $f
EOF

if [ ${gmc_dm:-/dev/null} != /dev/null ]; then
    f=${INSTALL_ROOT}/lcg/etc/lcgdm-mapfile-local

    [ -f $f ] || touch $f
fi

cat << EOF >> $gmc_dm
gmf_local $f
EOF

#
# bootstrap the grid-mapfile
#

cmd="$INSTALL_ROOT/edg/sbin/edg-mkgridmap \
--output=/etc/grid-security/grid-mapfile --safe"

echo "Now creating the grid-mapfile - this may take a few minutes..."

$cmd 2>> $YAIM_LOG

let minute="$RANDOM%60"

let h1="$RANDOM%6"
let h2="$h1+6"
let h3="$h2+6"
let h4="$h3+6"

cron_job edg-mkgridmap root "$minute $h1,$h2,$h3,$h4 * * * $cmd"

if ( echo "${NODE_TYPE_LIST}" | egrep -q 'dpm|LFC' ); then

    cmd="$INSTALL_ROOT/edg/libexec/edg-mkgridmap/edg-mkgridmap.pl \
--conf=$gmc_dm --output=$INSTALL_ROOT/lcg/etc/lcgdm-mapfile --safe"

    echo "Now creating the lcgdm-mapfile - this may take a few minutes..."

    $cmd 2>> $YAIM_LOG

    let minute="$RANDOM%60"

    let h1="$RANDOM%6"
    let h2="$h1+6"
    let h3="$h2+6"
    let h4="$h3+6"

    cron_job lcgdm-mkgridmap root "$minute $h1,$h2,$h3,$h4 * * * $cmd"

fi

if ( echo "${NODE_TYPE_LIST}" | grep -q '\<'RB ); then

    cron_job lcg-expiregridmapdir root  "5 * * * * \
${INSTALL_ROOT}/edg/sbin/lcg-expiregridmapdir.pl -e 240 -v >> \
/var/log/lcg-expiregridmapdir.log 2>&1"

elif ( echo "${NODE_TYPE_LIST}" | egrep -q 'dpm|LFC' ); then

    # No expiry
    rm -f ${CRON_DIR}/lcg-expiregridmapdir

else

    cron_job lcg-expiregridmapdir root  "5 * * * * \
${INSTALL_ROOT}/edg/sbin/lcg-expiregridmapdir.pl -v >> \
/var/log/lcg-expiregridmapdir.log 2>&1"

fi

return 0
}
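
As an illustration of the gridmapfile/groupmapfile generation above: for a VO dteam with an sgm account dteamsgm and pool accounts dteam001, dteam002, ... (account names are examples only), GROUPS_CONF entries like those shown in the comments would produce lines of the form

  "/VO=dteam/GROUP=/dteam/ROLE=lcgadmin" dteamsgm     (in lcmaps/gridmapfile)
  "/VO=dteam/GROUP=/dteam" .dteam                     (in lcmaps/gridmapfile)
  "/VO=dteam/GROUP=/dteam/ROLE=lcgadmin" dteam        (in lcmaps/groupmapfile)
  "/VO=dteam/GROUP=/dteam" dteam                      (in lcmaps/groupmapfile)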


config_java

function config_java () {

INSTALL_ROOT=${INSTALL_ROOT:-/opt} 

# If JAVA_LOCATION is not set by the admin, take a guess
if [ -z "$JAVA_LOCATION" ]; then
    java=`rpm -qa | grep j2sdk-` || java=`rpm -qa | grep j2re`
    if [ "$java" ]; then
	JAVA_LOCATION=`rpm -ql $java | egrep '/bin/java$' | sort | head -1 | sed 's#/bin/java##'`
    fi
fi

if [ ! "$JAVA_LOCATION" -o ! -d "$JAVA_LOCATION" ]; then
   echo "Please check your value for JAVA_LOCATION"
   return 1
fi

if ( echo "${NODE_TYPE_LIST}" | grep TAR > /dev/null ); then

# We're configuring a relocatable distro

    if [ ! -d ${INSTALL_ROOT}/edg/etc/profile.d ]; then
	mkdir -p ${INSTALL_ROOT}/edg/etc/profile.d/
    fi

    cat  > $INSTALL_ROOT/edg/etc/profile.d/j2.sh <<EOF

JAVA_HOME=$JAVA_LOCATION
export JAVA_HOME
EOF

    cat  > $INSTALL_ROOT/edg/etc/profile.d/j2.csh <<EOF

setenv JAVA_HOME $JAVA_LOCATION
EOF

    chmod a+rx $INSTALL_ROOT/edg/etc/profile.d/j2.sh
    chmod a+rx $INSTALL_ROOT/edg/etc/profile.d/j2.csh

    return 0

fi # end of relocatable stuff

# We're root and it's not a relocatable

if [ ! -d /etc/java ]; then
    mkdir /etc/java
fi

echo "export JAVA_HOME=$JAVA_LOCATION" > /etc/java/java.conf
echo "export JAVA_HOME=$JAVA_LOCATION" > /etc/java.conf
chmod +x /etc/java/java.conf 

#This hack is here due to SL and the java profile rpms, Laurence Field

if [ ! -d ${INSTALL_ROOT}/edg/etc/profile.d ]; then
    mkdir -p ${INSTALL_ROOT}/edg/etc/profile.d/
fi

cat << EOF > $INSTALL_ROOT/edg/etc/profile.d/j2.sh
if [ -z "\$PATH" ]; then
   export PATH=${JAVA_LOCATION}/bin
else
   export PATH=${JAVA_LOCATION}/bin:\${PATH}
fi
EOF

chmod a+rx $INSTALL_ROOT/edg/etc/profile.d/j2.sh

cat << EOF > $INSTALL_ROOT/edg/etc/profile.d/j2.csh
if ( \$?PATH ) then
    setenv PATH ${JAVA_LOCATION}/bin:\${PATH}
else
    setenv PATH ${JAVA_LOCATION}/bin
endif
EOF

chmod a+rx $INSTALL_ROOT/edg/etc/profile.d/j2.csh

return 0

}
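
If the rpm-based guess above is not appropriate for a site, JAVA_LOCATION can simply be set in site-info.def, for example (the path is an example only):

  JAVA_LOCATION="/usr/java/j2sdk1.4.2_08"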


config_rgma_client

config_rgma_client(){

requires MON_HOST REG_HOST

INSTALL_ROOT=${INSTALL_ROOT:-/opt} 

# NB java stuff now in config_java, which must be run before

export RGMA_HOME=${INSTALL_ROOT}/glite

# in order to use python from userdeps.tgz we need to source the env
if ( echo "${NODE_TYPE_LIST}" | grep TAR > /dev/null ); then
    . $INSTALL_ROOT/etc/profile.d/grid_env.sh
fi

${RGMA_HOME}/share/rgma/scripts/rgma-setup.py --secure=yes --server=${MON_HOST} --registry=${REG_HOST} --schema=${REG_HOST} 

cat << EOF > ${INSTALL_ROOT}/edg/etc/profile.d/edg-rgma-env.sh
export RGMA_HOME=${INSTALL_ROOT}/glite
export APEL_HOME=${INSTALL_ROOT}/glite

echo \$PYTHONPATH | grep -q ${INSTALL_ROOT}/glite/lib/python && isthere=1 || isthere=0
if [ \$isthere = 0 ]; then
    if [ -z \$PYTHONPATH ]; then
        export PYTHONPATH=${INSTALL_ROOT}/glite/lib/python
    else
        export PYTHONPATH=\$PYTHONPATH:${INSTALL_ROOT}/glite/lib/python
    fi
fi

echo \$LD_LIBRARY_PATH | grep -q ${INSTALL_ROOT}/glite/lib && isthere=1 || isthere=0
if [ \$isthere = 0 ]; then
    if [ -z \$LD_LIBRARY_PATH ]; then
        export LD_LIBRARY_PATH=${INSTALL_ROOT}/glite/lib
    else
        export LD_LIBRARY_PATH=\$LD_LIBRARY_PATH:${INSTALL_ROOT}/glite/lib
    fi
fi
EOF

chmod a+rx ${INSTALL_ROOT}/edg/etc/profile.d/edg-rgma-env.sh

cat << EOF > ${INSTALL_ROOT}/edg/etc/profile.d/edg-rgma-env.csh
setenv RGMA_HOME ${INSTALL_ROOT}/glite
setenv APEL_HOME ${INSTALL_ROOT}/glite

echo \$PYTHONPATH | grep -q ${INSTALL_ROOT}/glite/lib/python && set isthere=1 || set isthere=0
if ( \$isthere == 0 ) then
    if ( -z \$PYTHONPATH ) then
        setenv PYTHONPATH ${INSTALL_ROOT}/glite/lib/python
    else
        setenv PYTHONPATH \$PYTHONPATH\:${INSTALL_ROOT}/glite/lib/python
    endif
endif

echo \$LD_LIBRARY_PATH | grep -q ${INSTALL_ROOT}/glite/lib && set isthere=1 || set isthere=0
if ( \$isthere == 0 ) then
    if ( -z \$LD_LIBRARY_PATH ) then
        setenv LD_LIBRARY_PATH ${INSTALL_ROOT}/glite/lib
    else
        setenv LD_LIBRARY_PATH \$LD_LIBRARY_PATH\:${INSTALL_ROOT}/glite/lib
    endif
endif
EOF

chmod a+rx  ${INSTALL_ROOT}/edg/etc/profile.d/edg-rgma-env.csh

return 0
}


config_gip

config_gip () {

INSTALL_ROOT=${INSTALL_ROOT:-/opt}

requires CE_HOST RB_HOST PX_HOST 

#check_users_conf_format

#set some vars for storage elements
if ( echo "${NODE_TYPE_LIST}" | grep '\<SE' > /dev/null ); then
    requires VOS SITE_EMAIL SITE_NAME BDII_HOST
    if ( echo "${NODE_TYPE_LIST}" | grep SE_dpm > /dev/null ); then
	requires DPM_HOST
	se_host=$DPM_HOST
	se_type="srm_v1"
	control_protocol=srm_v1
	control_endpoint=httpg://${se_host}
    elif ( echo "${NODE_TYPE_LIST}" | grep SE_dcache > /dev/null ); then
	requires DCACHE_ADMIN
	se_host=$DCACHE_ADMIN
	se_type="srm_v1"
	control_protocol=srm_v1
	control_endpoint=httpg://${se_host}
    else
	requires CLASSIC_STORAGE_DIR CLASSIC_HOST VO__STORAGE_DIR
	se_host=$CLASSIC_HOST
	se_type="disk"
	control_protocol=classic
	control_endpoint=classic
    fi
fi

if ( echo "${NODE_TYPE_LIST}" | grep '\<CE' > /dev/null ); then

    # GlueSite

    requires SITE_EMAIL SITE_NAME SITE_LOC SITE_LAT SITE_LONG SITE_WEB \
	SITE_TIER SITE_SUPPORT_SITE SE_LIST

    outfile=$INSTALL_ROOT/lcg/var/gip/lcg-info-static-site.conf

    # set default SEs if they're currently undefined
    default_se=`set x $SE_LIST; echo "$2"`
    if [ "$default_se" ]; then
	for VO in `echo $VOS | tr '[:lower:]' '[:upper:]'`; do
	    if [ "x`eval echo '$'VO_${VO}_DEFAULT_SE`" = "x" ]; then
		eval VO_${VO}_DEFAULT_SE=$default_se
	    fi
	done
    fi

    cat << EOF > $outfile
dn: GlueSiteUniqueID=$SITE_NAME
GlueSiteUniqueID: $SITE_NAME
GlueSiteName: $SITE_NAME
GlueSiteDescription: LCG Site
GlueSiteUserSupportContact: mailto: $SITE_EMAIL
GlueSiteSysAdminContact: mailto: $SITE_EMAIL
GlueSiteSecurityContact: mailto: $SITE_EMAIL
GlueSiteLocation: $SITE_LOC
GlueSiteLatitude: $SITE_LAT
GlueSiteLongitude: $SITE_LONG
GlueSiteWeb: $SITE_WEB
GlueSiteSponsor: none
GlueSiteOtherInfo: $SITE_TIER
GlueSiteOtherInfo: $SITE_SUPPORT_SITE
GlueForeignKey: GlueSiteUniqueID=${SITE_NAME}
EOF

    $INSTALL_ROOT/lcg/sbin/lcg-info-static-create -c $outfile -t \
    $INSTALL_ROOT/lcg/etc/GlueSite.template > \
    $INSTALL_ROOT/lcg/var/gip/ldif/static-file-Site.ldif

    # GlueCluster

    requires JOB_MANAGER CE_BATCH_SYS VOS QUEUES CE_CPU_MODEL \
	CE_CPU_VENDOR CE_CPU_SPEED CE_OS CE_OS_RELEASE CE_MINPHYSMEM \
	CE_MINVIRTMEM CE_SMPSIZE CE_SI00 CE_SF00 CE_OUTBOUNDIP CE_INBOUNDIP \
	CE_RUNTIMEENV

    outfile=$INSTALL_ROOT/lcg/var/gip/lcg-info-static-cluster.conf

    for VO in $VOS; do
        dir=${INSTALL_ROOT}/edg/var/info/$VO
        mkdir -p $dir
	f=$dir/$VO.list
	[ -f $f ] || touch $f
        # work out the sgm user for this VO
        sgmuser=`users_getsgmuser $VO`
	sgmgroup=`id -g $sgmuser`
	chown -R ${sgmuser}:${sgmgroup} $dir
	chmod -R go-w $dir
    done

    cat <<EOF > $outfile

dn: GlueClusterUniqueID=${CE_HOST}
GlueClusterName: ${CE_HOST}
GlueForeignKey: GlueSiteUniqueID=${SITE_NAME}
GlueInformationServiceURL: ldap://`hostname -f`:2135/mds-vo-name=local,o=grid
EOF

    for QUEUE in $QUEUES; do
        echo "GlueClusterService: ${CE_HOST}:2119/jobmanager-$JOB_MANAGER-$QUEUE" >> $outfile
    done

    for QUEUE in $QUEUES; do
        echo "GlueForeignKey:" \
	    "GlueCEUniqueID=${CE_HOST}:2119/jobmanager-$JOB_MANAGER-$QUEUE" >> $outfile
    done

    cat << EOF >> $outfile

dn: GlueSubClusterUniqueID=${CE_HOST}, GlueClusterUniqueID=${CE_HOST}
GlueChunkKey: GlueClusterUniqueID=${CE_HOST}
GlueHostArchitectureSMPSize: $CE_SMPSIZE
GlueHostBenchmarkSF00: $CE_SF00
GlueHostBenchmarkSI00: $CE_SI00
GlueHostMainMemoryRAMSize: $CE_MINPHYSMEM
GlueHostMainMemoryVirtualSize: $CE_MINVIRTMEM
GlueHostNetworkAdapterInboundIP: $CE_INBOUNDIP
GlueHostNetworkAdapterOutboundIP: $CE_OUTBOUNDIP
GlueHostOperatingSystemName: $CE_OS
GlueHostOperatingSystemRelease: $CE_OS_RELEASE
GlueHostOperatingSystemVersion: 3
GlueHostProcessorClockSpeed: $CE_CPU_SPEED
GlueHostProcessorModel: $CE_CPU_MODEL
GlueHostProcessorVendor: $CE_CPU_VENDOR
GlueSubClusterName: ${CE_HOST}
GlueSubClusterPhysicalCPUs: 0
GlueSubClusterLogicalCPUs: 0
GlueSubClusterTmpDir: /tmp
GlueSubClusterWNTmpDir: /tmp
GlueInformationServiceURL: ldap://`hostname -f`:2135/mds-vo-name=local,o=grid
EOF

    for x in $CE_RUNTIMEENV; do
        echo "GlueHostApplicationSoftwareRunTimeEnvironment: $x" >>  $outfile
    done

    $INSTALL_ROOT/lcg/sbin/lcg-info-static-create -c $outfile -t \
    $INSTALL_ROOT/lcg/etc/GlueCluster.template > \
    $INSTALL_ROOT/lcg/var/gip/ldif/static-file-Cluster.ldif

    # GlueCE

    outfile=$INSTALL_ROOT/lcg/var/gip/lcg-info-static-ce.conf

    cat /dev/null > $outfile

    for QUEUE in $QUEUES; do
        cat <<EOF >> $outfile


dn: GlueCEUniqueID=${CE_HOST}:2119/jobmanager-$JOB_MANAGER-$QUEUE
GlueCEHostingCluster: ${CE_HOST}
GlueCEName: $QUEUE
GlueCEInfoGatekeeperPort: 2119
GlueCEInfoHostName: ${CE_HOST}
GlueCEInfoLRMSType: $CE_BATCH_SYS
GlueCEInfoLRMSVersion: not defined
GlueCEInfoTotalCPUs: 0
GlueCEInfoJobManager: ${JOB_MANAGER}
GlueCEInfoContactString: ${CE_HOST}:2119/jobmanager-${JOB_MANAGER}-${QUEUE}
GlueCEInfoApplicationDir: ${VO_SW_DIR}
GlueCEInfoDataDir: ${CE_DATADIR:-unset}
GlueCEInfoDefaultSE: $default_se
GlueCEStateEstimatedResponseTime: 0
GlueCEStateFreeCPUs: 0
GlueCEStateRunningJobs: 0
GlueCEStateStatus: Production
GlueCEStateTotalJobs: 0
GlueCEStateWaitingJobs: 0
GlueCEStateWorstResponseTime: 0
GlueCEStateFreeJobSlots: 0
GlueCEPolicyMaxCPUTime: 0
GlueCEPolicyMaxRunningJobs: 0
GlueCEPolicyMaxTotalJobs: 0
GlueCEPolicyMaxWallClockTime: 0
GlueCEPolicyPriority: 1
GlueCEPolicyAssignedJobSlots: 0
GlueForeignKey: GlueClusterUniqueID=${CE_HOST}
GlueInformationServiceURL: ldap://`hostname -f`:2135/mds-vo-name=local,o=grid
EOF

        for VO in `echo $VOS | tr '[:lower:]' '[:upper:]'`; do
            for VO_QUEUE in `eval echo '$'VO_${VO}_QUEUES`; do
                if [ "${QUEUE}" = "${VO_QUEUE}" ]; then
                    echo "GlueCEAccessControlBaseRule:" \
			"VO:`echo $VO | tr '[:upper:]' '[:lower:]'`" >> $outfile
                fi
            done
        done

	for VO in `echo $VOS | tr '[:lower:]' '[:upper:]'`; do
            for VO_QUEUE in `eval echo '$'VO_${VO}_QUEUES`; do
                if [ "${QUEUE}" = "${VO_QUEUE}" ]; then
		    cat << EOF >> $outfile

dn: GlueVOViewLocalID=`echo $VO | tr '[:upper:]' '[:lower:]'`,\
GlueCEUniqueID=${CE_HOST}:2119/jobmanager-${JOB_MANAGER}-${QUEUE}
GlueCEAccessControlBaseRule: VO:`echo $VO | tr '[:upper:]' '[:lower:]'`
GlueCEStateRunningJobs: 0
GlueCEStateWaitingJobs: 0
GlueCEStateTotalJobs: 0
GlueCEStateFreeJobSlots: 0
GlueCEStateEstimatedResponseTime: 0
GlueCEStateWorstResponseTime: 0
GlueCEInfoDefaultSE:  `eval echo '$'VO_${VO}_DEFAULT_SE`
GlueCEInfoApplicationDir:  `eval echo '$'VO_${VO}_SW_DIR`
GlueCEInfoDataDir: ${CE_DATADIR:-unset}
GlueChunkKey: GlueCEUniqueID=${CE_HOST}:2119/jobmanager-${JOB_MANAGER}-${QUEUE}
EOF
                fi
	    done
	done
    done

    $INSTALL_ROOT/lcg/sbin/lcg-info-static-create -c $outfile -t \
    $INSTALL_ROOT/lcg/etc/GlueCE.template > \
    $INSTALL_ROOT/lcg/var/gip/ldif/static-file-CE.ldif


    # GlueCESEBind

    outfile=$INSTALL_ROOT/lcg/var/gip/lcg-info-static-cesebind.conf
    echo "" > $outfile

    for QUEUE in $QUEUES
      do
      echo "dn: GlueCESEBindGroupCEUniqueID=${CE_HOST}:2119/jobmanager-$JOB_MANAGER-$QUEUE" \
	>> $outfile
      for se in $SE_LIST
        do
        echo "GlueCESEBindGroupSEUniqueID: $se" >> $outfile
        done
    done

    for se in $SE_LIST; do

	case "$se" in
	"$DPM_HOST") accesspoint=$DPMDATA;;
	"$DCACHE_ADMIN") accesspoint="/pnfs/`hostname -d`/data";;
	*) accesspoint=$CLASSIC_STORAGE_DIR ;;
	esac

        for QUEUE in $QUEUES; do

            cat <<EOF >> $outfile

dn: GlueCESEBindSEUniqueID=$se,\
GlueCESEBindGroupCEUniqueID=${CE_HOST}:2119/jobmanager-$JOB_MANAGER-$QUEUE
GlueCESEBindCEAccesspoint: $accesspoint
GlueCESEBindCEUniqueID: ${CE_HOST}:2119/jobmanager-$JOB_MANAGER-$QUEUE
GlueCESEBindMountInfo: $accesspoint
GlueCESEBindWeight: 0


EOF
        done
    done

    $INSTALL_ROOT/lcg/sbin/lcg-info-static-create -c $outfile -t \
    $INSTALL_ROOT/lcg/etc/GlueCESEBind.template > \
    $INSTALL_ROOT/lcg/var/gip/ldif/static-file-CESEBind.ldif

    # Set some vars based on the LRMS

    case "$CE_BATCH_SYS" in
    condor|CONDOR) plugin="${INSTALL_ROOT}/lcg/libexec/lcg-info-dynamic-condor /opt/condor/bin/ $INSTALL_ROOT/lcg/etc/lcg-info-generic.conf";;
    lsf|LSF)       plugin="${INSTALL_ROOT}/lcg/libexec/lcg-info-dynamic-lsf /usr/local/lsf/bin/ $INSTALL_ROOT/lcg/etc/lcg-info-generic.conf";;
    pbs|PBS)       plugin="${INSTALL_ROOT}/lcg/libexec/lcg-info-dynamic-pbs /opt/lcg/var/gip/ldif/static-file-CE.ldif ${TORQUE_SERVER}"
		   vo_max_jobs_cmd="";;
    *)             plugin="${INSTALL_ROOT}/lcg/libexec/lcg-info-dynamic-pbs /opt/lcg/var/gip/ldif/static-file-CE.ldif ${TORQUE_SERVER}"
		   vo_max_jobs_cmd="$INSTALL_ROOT/lcg/libexec/vomaxjobs-maui";;
    esac

    # Configure the dynamic plugin appropriate for the batch sys

    cat << EOF  > ${INSTALL_ROOT}/lcg/var/gip/plugin/lcg-info-dynamic-ce
#!/bin/sh
$plugin
EOF

    chmod +x ${INSTALL_ROOT}/lcg/var/gip/plugin/lcg-info-dynamic-ce

    # Configure the ERT plugin

    cat << EOF > ${INSTALL_ROOT}/lcg/var/gip/plugin/lcg-info-dynamic-scheduler-wrapper
#!/bin/sh
${INSTALL_ROOT}/lcg/libexec/lcg-info-dynamic-scheduler -c ${INSTALL_ROOT}/lcg/etc/lcg-info-dynamic-scheduler.conf
EOF

    chmod +x ${INSTALL_ROOT}/lcg/var/gip/plugin/lcg-info-dynamic-scheduler-wrapper

    if ( echo $CE_BATCH_SYS | egrep -qi 'pbs|torque' ); then

	cat <<EOF > $INSTALL_ROOT/lcg/etc/lcg-info-dynamic-scheduler.conf
[Main]
static_ldif_file: $INSTALL_ROOT/lcg/var/gip/ldif/static-file-CE.ldif
vomap :
EOF

	for vo in $VOS; do
	    vo_group=`users_getvogroup $vo`
	    if [ $vo_group ]; then
		echo "   $vo_group:$vo" >> $INSTALL_ROOT/lcg/etc/lcg-info-dynamic-scheduler.conf
	    fi
	done

	cat <<EOF >> $INSTALL_ROOT/lcg/etc/lcg-info-dynamic-scheduler.conf
module_search_path : ../lrms:../ett
[LRMS]
lrms_backend_cmd: $INSTALL_ROOT/lcg/libexec/lrmsinfo-pbs
[Scheduler]
vo_max_jobs_cmd: $vo_max_jobs_cmd
cycle_time : 0
EOF
    fi

    # Configure the provider for installed software

    if [ -f $INSTALL_ROOT/lcg/libexec/lcg-info-provider-software ]; then
	cat <<EOF > $INSTALL_ROOT/lcg/var/gip/provider/lcg-info-provider-software-wrapper
#!/bin/sh
$INSTALL_ROOT/lcg/libexec/lcg-info-provider-software -p $INSTALL_ROOT/edg/var/info  -c $CE_HOST
EOF
	chmod +x $INSTALL_ROOT/lcg/var/gip/provider/lcg-info-provider-software-wrapper
    fi

fi #endif for CE_HOST

if [ "$GRIDICE_SERVER_HOST" = "`hostname -f`" ]; then

    requires VOS SITE_NAME SITE_EMAIL

outfile=$INSTALL_ROOT/lcg/var/gip/lcg-info-static-gridice.conf

    cat <<EOF > $outfile

dn: GlueServiceUniqueID=${GRIDICE_SERVER_HOST}:2136
GlueServiceName: ${SITE_NAME}-gridice
GlueServiceType: gridice
GlueServiceVersion: 1.1.0
GlueServiceEndpoint: ldap://${GRIDICE_SERVER_HOST}:2136/mds-vo-name=local,o=grid
GlueServiceURI: unset
GlueServiceAccessPointURL: not_used
GlueServiceStatus: OK
GlueServiceStatusInfo: No Problems
GlueServiceWSDL: unset
GlueServiceSemantics: unset
GlueServiceStartTime: 1970-01-01T00:00:00Z
GlueForeignKey: GlueSiteUniqueID=${SITE_NAME}
EOF

    for VO in $VOS; do
        echo "GlueServiceAccessControlRule: $VO" >> $outfile
	echo "GlueServiceOwner: $VO" >> $outfile
    done


FMON='--fmon=yes'

    $INSTALL_ROOT/lcg/sbin/lcg-info-static-create -c $outfile -t \
	$INSTALL_ROOT/lcg/etc/GlueService.template > \
	$INSTALL_ROOT/lcg/var/gip/ldif/static-file-GRIDICE.ldif

fi #endif for GRIDICE_SERVER_HOST

if ( echo "${NODE_TYPE_LIST}" | grep -w PX > /dev/null ); then

    requires GRID_TRUSTED_BROKERS SITE_EMAIL SITE_NAME

outfile=$INSTALL_ROOT/lcg/var/gip/lcg-info-static-px.conf

    cat << EOF > $outfile

dn: GlueServiceUniqueID=${PX_HOST}:7512
GlueServiceName: ${SITE_NAME}-myproxy
GlueServiceType: myproxy
GlueServiceVersion: 1.1.0
GlueServiceEndpoint: ${PX_HOST}:7512
GlueServiceURI: unset
GlueServiceAccessPointURL: myproxy://${PX_HOST}
GlueServiceStatus: OK
GlueServiceStatusInfo: No Problems
GlueServiceWSDL: unset
GlueServiceSemantics: unset
GlueServiceStartTime: 1970-01-01T00:00:00Z
GlueServiceOwner: LCG
GlueForeignKey: GlueSiteUniqueID=${SITE_NAME}
EOF

    split_quoted_variable $GRID_TRUSTED_BROKERS | while read x; do
        echo "GlueServiceAccessControlRule: $x" >> $outfile
    done

    $INSTALL_ROOT/lcg/sbin/lcg-info-static-create -c $outfile -t \
	$INSTALL_ROOT/lcg/etc/GlueService.template > \
	$INSTALL_ROOT/lcg/var/gip/ldif/static-file-PX.ldif

fi #endif for PX_HOST

if ( echo "${NODE_TYPE_LIST}" | grep -w RB > /dev/null ); then

    requires VOS SITE_EMAIL SITE_NAME

outfile=$INSTALL_ROOT/lcg/var/gip/lcg-info-static-rb.conf


    cat <<EOF > $outfile

dn: GlueServiceUniqueID=${RB_HOST}:7772
GlueServiceName: ${SITE_NAME}-rb
GlueServiceType: ResourceBroker
GlueServiceVersion: 1.2.0
GlueServiceEndpoint: ${RB_HOST}:7772
GlueServiceURI: unset
GlueServiceAccessPointURL: not_used
GlueServiceStatus: OK
GlueServiceStatusInfo: No Problems
GlueServiceWSDL: unset
GlueServiceSemantics: unset
GlueServiceStartTime: 1970-01-01T00:00:00Z
GlueForeignKey: GlueSiteUniqueID=${SITE_NAME}
EOF

    for VO in $VOS; do
        echo "GlueServiceAccessControlRule: $VO" >> $outfile
	echo "GlueServiceOwner: $VO" >> $outfile
    done

    cat <<EOF >> $outfile
dn: GlueServiceDataKey=HeldJobs,GlueServiceUniqueID=gram://${RB_HOST}:7772
GlueServiceDataKey: HeldJobs
GlueServiceDataValue: 0
GlueChunkKey: GlueServiceUniqueID=gram://${RB_HOST}:7772

dn: GlueServiceDataKey=IdleJobs,GlueServiceUniqueID=gram://${RB_HOST}:7772
GlueServiceDataKey: IdleJobs
GlueServiceDataValue: 0
GlueChunkKey: GlueServiceUniqueID=gram://${RB_HOST}:7772

dn: GlueServiceDataKey=JobController,GlueServiceUniqueID=gram://${RB_HOST}:7772
GlueServiceDataKey: JobController
GlueServiceDataValue: 0
GlueChunkKey: GlueServiceUniqueID=gram://${RB_HOST}:7772

dn: GlueServiceDataKey=Jobs,GlueServiceUniqueID=gram://${RB_HOST}:7772
GlueServiceDataKey: Jobs
GlueServiceDataValue: 0
GlueChunkKey: GlueServiceUniqueID=gram://${RB_HOST}:7772

dn: GlueServiceDataKey=LogMonitor,GlueServiceUniqueID=gram://${RB_HOST}:7772
GlueServiceDataKey: LogMonitor
GlueServiceDataValue: 0
GlueChunkKey: GlueServiceUniqueID=gram://${RB_HOST}:7772

dn: GlueServiceDataKey=RunningJobs,GlueServiceUniqueID=gram://${RB_HOST}:7772
GlueServiceDataKey: RunningJobs
GlueServiceDataValue: 14
GlueChunkKey: GlueServiceUniqueID=gram://${RB_HOST}:7772

dn: GlueServiceDataKey=WorkloadManager,GlueServiceUniqueID=gram://${RB_HOST}:7772
GlueServiceDataKey: WorkloadManager
GlueServiceDataValue: 0
GlueChunkKey: GlueServiceUniqueID=gram://${RB_HOST}:7772

EOF

    $INSTALL_ROOT/lcg/sbin/lcg-info-static-create -c $outfile -t \
	$INSTALL_ROOT/lcg/etc/GlueService.template > \
	$INSTALL_ROOT/lcg/var/gip/ldif/static-file-RB.ldif

fi #endif for RB_HOST

if ( echo "${NODE_TYPE_LIST}" | grep '\<LFC' > /dev/null ); then

outfile=$INSTALL_ROOT/lcg/var/gip/lcg-info-static-lfc.conf
cat /dev/null > $outfile

    requires VOS SITE_EMAIL SITE_NAME BDII_HOST LFC_HOST

    if [ "$LFC_LOCAL" ]; then
	lfc_local=$LFC_LOCAL
    else
	# populate lfc_local with the VOs that are not listed in LFC_CENTRAL
	unset lfc_local
	for i in $VOS; do
	    if ( ! echo $LFC_CENTRAL | grep -qw $i ); then
		lfc_local="$lfc_local $i"
	    fi
	done
    fi

    if [ "$LFC_CENTRAL" ]; then

	cat <<EOF >> $outfile
dn: GlueServiceUniqueID=http://${LFC_HOST}:8085/
GlueServiceName: ${SITE_NAME}-lfc-dli
GlueServiceType: data-location-interface
GlueServiceVersion: 1.0.0
GlueServiceEndpoint: http://${LFC_HOST}:8085/
GlueServiceURI: http://${LFC_HOST}:8085/
GlueServiceAccessPointURL: http://${LFC_HOST}:8085/
GlueServiceStatus: OK
GlueServiceStatusInfo: No Problems
GlueServiceWSDL: unset
GlueServiceSemantics: unset
GlueServiceStartTime: 1970-01-01T00:00:00Z
GlueForeignKey: GlueSiteUniqueID=${SITE_NAME}
EOF

	for VO in $LFC_CENTRAL; do
	    echo "GlueServiceOwner: $VO" >> $outfile
	    echo "GlueServiceAccessControlRule: $VO" >> $outfile
	done

	echo >> $outfile

	cat <<EOF >> $outfile
dn: GlueServiceUniqueID=${LFC_HOST}
GlueServiceName: ${SITE_NAME}-lfc
GlueServiceType: lcg-file-catalog
GlueServiceVersion: 1.0.0
GlueServiceEndpoint: ${LFC_HOST}
GlueServiceURI: ${LFC_HOST}
GlueServiceAccessPointURL: ${LFC_HOST}
GlueServiceStatus: OK
GlueServiceStatusInfo: No Problems
GlueServiceWSDL: unset
GlueServiceSemantics: unset
GlueServiceStartTime: 1970-01-01T00:00:00Z
GlueForeignKey: GlueSiteUniqueID=${SITE_NAME}
EOF

	for VO in $LFC_CENTRAL; do
	    echo "GlueServiceOwner: $VO" >> $outfile
	    echo "GlueServiceAccessControlRule: $VO" >> $outfile
	done

        echo >> $outfile

    fi

    if [ "$lfc_local" ]; then

        cat <<EOF >> $outfile
dn: GlueServiceUniqueID=http://${LFC_HOST}:8085/,o=local
GlueServiceName: ${SITE_NAME}-lfc-dli
GlueServiceType: local-data-location-interface
GlueServiceVersion: 1.0.0
GlueServiceEndpoint: http://${LFC_HOST}:8085/
GlueServiceURI: http://${LFC_HOST}:8085/
GlueServiceAccessPointURL: http://${LFC_HOST}:8085/
GlueServiceStatus: OK
GlueServiceStatusInfo: No Problems
GlueServiceWSDL: unset
GlueServiceSemantics: unset
GlueServiceStartTime: 1970-01-01T00:00:00Z
GlueForeignKey: GlueSiteUniqueID=${SITE_NAME}
EOF

        for VO in $lfc_local; do
            echo "GlueServiceOwner: $VO" >> $outfile
            echo "GlueServiceAccessControlRule: $VO" >> $outfile
        done

        echo >> $outfile

	cat <<EOF >> $outfile
dn: GlueServiceUniqueID=${LFC_HOST},o=local
GlueServiceName: ${SITE_NAME}-lfc
GlueServiceType: lcg-local-file-catalog
GlueServiceVersion: 1.0.0
GlueServiceEndpoint: ${LFC_HOST}
GlueServiceURI: ${LFC_HOST}
GlueServiceAccessPointURL: ${LFC_HOST}
GlueServiceStatus: OK
GlueServiceStatusInfo: No Problems
GlueServiceWSDL: unset
GlueServiceSemantics: unset
GlueServiceStartTime: 1970-01-01T00:00:00Z
GlueForeignKey: GlueSiteUniqueID=${SITE_NAME}
EOF

	for VO in $lfc_local; do
	    echo "GlueServiceOwner: $VO" >> $outfile
	    echo "GlueServiceAccessControlRule: $VO" >> $outfile
	done

    fi

    $INSTALL_ROOT/lcg/sbin/lcg-info-static-create -c $outfile -t \
	$INSTALL_ROOT/lcg/etc/GlueService.template > \
	$INSTALL_ROOT/lcg/var/gip/ldif/static-file-LFC.ldif

fi # end of LFC

if ( echo "${NODE_TYPE_LIST}" | egrep -q 'dcache|dpm_(mysql|oracle)' ); then

    outfile=$INSTALL_ROOT/lcg/var/gip/lcg-info-static-dse.conf

    cat <<EOF > $outfile

dn: GlueServiceUniqueID=httpg://${se_host}:8443/srm/managerv1
GlueServiceName: ${SITE_NAME}-srm
GlueServiceType: srm_v1
GlueServiceVersion: 1.0.0
GlueServiceEndpoint: httpg://${se_host}:8443/srm/managerv1
GlueServiceURI: httpg://${se_host}:8443/srm/managerv1
GlueServiceAccessPointURL: httpg://${se_host}:8443/srm/managerv1
GlueServiceStatus: OK
GlueServiceStatusInfo: No Problems
GlueServiceWSDL: unset
GlueServiceSemantics: unset
GlueServiceStartTime: 1970-01-01T00:00:00Z
GlueServiceOwner: LCG
GlueForeignKey: GlueSiteUniqueID=${SITE_NAME}
EOF

    for VO in $VOS; do
	echo "GlueServiceAccessControlRule: $VO" >> $outfile
    done

    cat <<EOF >> $outfile
GlueServiceInformationServiceURL: \
MDS2GRIS:ldap://${BDII_HOST}:2170/mds-vo-name=${SITE_NAME},o=grid
GlueServiceStatus: OK
EOF

    $INSTALL_ROOT/lcg/sbin/lcg-info-static-create -c $outfile -t \
	$INSTALL_ROOT/lcg/etc/GlueService.template > \
	$INSTALL_ROOT/lcg/var/gip/ldif/static-file-dSE.ldif

fi # end of dcache,dpm

if ( echo "${NODE_TYPE_LIST}" | egrep -q 'SE_dpm_(mysql|oracle)' ); then

    # Install dynamic script pointing to gip plugin
    cat << EOF  > ${INSTALL_ROOT}/lcg/var/gip/plugin/lcg-info-dynamic-se
#! /bin/sh
${INSTALL_ROOT}/lcg/libexec/lcg-info-dynamic-dpm ${INSTALL_ROOT}/lcg/var/gip/ldif/static-file-SE.ldif
EOF

    chmod +x ${INSTALL_ROOT}/lcg/var/gip/plugin/lcg-info-dynamic-se

fi # end of dpm


if ( echo "${NODE_TYPE_LIST}" | grep '\<SE' > /dev/null ); then

outfile=$INSTALL_ROOT/lcg/var/gip/lcg-info-static-se.conf


    # dynamic_script points to the script generated by config_info_dynamic_se<se_type>
#    echo "">> $outfile
#    echo "dynamic_script=${INSTALL_ROOT}/lcg/libexec5A/lcg-info-dynamic-se" >> $outfile
#    echo >> $outfile        # Empty line to separate it from published info

    cat <<EOF > $outfile
dn: GlueSEUniqueID=${se_host}
GlueSEName: $SITE_NAME:${se_type}
GlueSEPort: 2811
GlueSESizeTotal: 0
GlueSESizeFree: 0
GlueSEArchitecture: multidisk
GlueInformationServiceURL: ldap://`hostname -f`:2135/mds-vo-name=local,o=grid
GlueForeignKey: GlueSiteUniqueID=${SITE_NAME}

dn: GlueSEAccessProtocolLocalID=gsiftp, GlueSEUniqueID=${se_host}
GlueSEAccessProtocolType: gsiftp
GlueSEAccessProtocolEndpoint: gsiftp://${se_host}
GlueSEAccessProtocolCapability: file transfer
GlueSEAccessProtocolVersion: 1.0.0
GlueSEAccessProtocolPort: 2811
GlueSEAccessProtocolSupportedSecurity: GSI
GlueChunkKey: GlueSEUniqueID=${se_host}

dn: GlueSEAccessProtocolLocalID=rfio, GlueSEUniqueID=${se_host}
GlueSEAccessProtocolType: rfio
GlueSEAccessProtocolEndpoint:
GlueSEAccessProtocolCapability:
GlueSEAccessProtocolVersion: 1.0.0
GlueSEAccessProtocolPort: 5001
GlueSEAccessProtocolSupportedSecurity: RFIO
GlueChunkKey: GlueSEUniqueID=${se_host}

dn: GlueSEControlProtocolLocalID=$control_protocol, GlueSEUniqueID=${se_host}
GlueSEControlProtocolType: $control_protocol
GlueSEControlProtocolEndpoint: $control_endpoint
GlueSEControlProtocolCapability:
GlueSEControlProtocolVersion: 1.0.0
GlueChunkKey: GlueSEUniqueID=${se_host}
EOF

for VO in $VOS; do

    if ( echo "${NODE_TYPE_LIST}" | grep SE_dpm > /dev/null ); then
	storage_path="/dpm/`hostname -d`/home/${VO}"
	storage_root="${VO}:${storage_path}"
    elif ( echo "${NODE_TYPE_LIST}" | grep SE_dcache > /dev/null ); then
	storage_path="/pnfs/`hostname -d`/data/${VO}"
	storage_root="${VO}:${storage_path}"
    else
	storage_path=$( eval echo '$'VO_`echo ${VO} | tr '[:lower:]' '[:upper:]'`_STORAGE_DIR )
	storage_root="${VO}:${storage_path#${CLASSIC_STORAGE_DIR}}"
    fi

    cat <<EOF >> $outfile

dn: GlueSALocalID=$VO,GlueSEUniqueID=${se_host}
GlueSARoot: $storage_root
GlueSAPath: $storage_path
GlueSAType: permanent
GlueSAPolicyMaxFileSize: 10000
GlueSAPolicyMinFileSize: 1
GlueSAPolicyMaxData: 100
GlueSAPolicyMaxNumFiles: 10
GlueSAPolicyMaxPinDuration: 10
GlueSAPolicyQuota: 0
GlueSAPolicyFileLifeTime: permanent
GlueSAStateAvailableSpace: 1
GlueSAStateUsedSpace: 1
GlueSAAccessControlBaseRule: $VO
GlueChunkKey: GlueSEUniqueID=${se_host}
EOF

done

    $INSTALL_ROOT/lcg/sbin/lcg-info-static-create -c $outfile -t \
	$INSTALL_ROOT/lcg/etc/GlueSE.template > \
	$INSTALL_ROOT/lcg/var/gip/ldif/static-file-SE.ldif

fi #endif for SE_HOST

if ( echo "${NODE_TYPE_LIST}" | grep -w VOBOX > /dev/null ); then

outfile=$INSTALL_ROOT/lcg/var/gip/lcg-info-static-vobox.conf


    for x in VOS SITE_EMAIL SITE_NAME VOBOX_PORT; do
        if [ "x`eval echo '$'$x`" = "x" ]; then
            echo "\$$x not set"
            return 1
        fi
    done

    for VO in $VOS; do
        dir=${INSTALL_ROOT}/edg/var/info/$VO
        mkdir -p $dir
	f=$dir/$VO.list
	[ -f $f ] || touch $f
        # work out the sgm user for this VO
        sgmuser=`users_getsgmuser $VO`
	sgmgroup=`id -g $sgmuser`
	chown -R ${sgmuser}:${sgmgroup} $dir
	chmod -R go-w $dir
    done

    cat <<EOF > $outfile
dn: GlueServiceUniqueID=gsissh://${VOBOX_HOST}:${VOBOX_PORT}
GlueServiceName: ${SITE_NAME}-vobox
GlueServiceType: VOBOX
GlueServiceVersion: 1.0.0
GlueServiceEndpoint: gsissh://${VOBOX_HOST}:${VOBOX_PORT}
GlueServiceURI: unset
GlueServiceAccessPointURL: gsissh://${VOBOX_HOST}:${VOBOX_PORT}
GlueServiceStatus: OK
GlueServiceStatusInfo: No Problems
GlueServiceWSDL: unset
GlueServiceSemantics: unset
GlueServiceStartTime: 1970-01-01T00:00:00Z
GlueServiceOwner: LCG
GlueForeignKey: GlueSiteUniqueID=${SITE_NAME}
EOF

    for VO in $VOS; do
        echo "GlueServiceAccessControlRule: $VO" >> $outfile
    done

    echo >> $outfile

    $INSTALL_ROOT/lcg/sbin/lcg-info-static-create -c $outfile -t \
	$INSTALL_ROOT/lcg/etc/GlueService.template > \
	$INSTALL_ROOT/lcg/var/gip/ldif/static-file-VOBOX.ldif

fi #endif for VOBOX_HOST


cat << EOT > $INSTALL_ROOT/globus/libexec/edg.info
#!/bin/bash
#
# info-globus-ldif.sh
#
#Configures information providers for MDS
#
cat << EOF

dn: Mds-Vo-name=local,o=grid
objectclass: GlobusTop
objectclass: GlobusActiveObject
objectclass: GlobusActiveSearch
type: exec
path: $INSTALL_ROOT/lcg/libexec/
base: lcg-info-wrapper
args:
cachetime: 60
timelimit: 20
sizelimit: 250

EOF

EOT

chmod a+x $INSTALL_ROOT/globus/libexec/edg.info

if [ ! -d "$INSTALL_ROOT/lcg/libexec" ]; then
    mkdir -p $INSTALL_ROOT/lcg/libexec
fi

cat << EOF > $INSTALL_ROOT/lcg/libexec/lcg-info-wrapper
#!/bin/sh

export LANG=C
$INSTALL_ROOT/lcg/bin/lcg-info-generic $INSTALL_ROOT/lcg/etc/lcg-info-generic.conf

EOF

chmod a+x $INSTALL_ROOT/lcg/libexec/lcg-info-wrapper

cat << EOT > $INSTALL_ROOT/globus/libexec/edg.schemalist
#!/bin/bash

cat <<EOF
${INSTALL_ROOT}/globus/etc/openldap/schema/core.schema
${INSTALL_ROOT}/glue/schema/ldap/Glue-CORE.schema
${INSTALL_ROOT}/glue/schema/ldap/Glue-CE.schema
${INSTALL_ROOT}/glue/schema/ldap/Glue-CESEBind.schema
${INSTALL_ROOT}/glue/schema/ldap/Glue-SE.schema
EOF

EOT

chmod a+x $INSTALL_ROOT/globus/libexec/edg.schemalist

# Configure gin
if ( ! echo "${NODE_TYPE_LIST}" | egrep -q '^UI$|^WN[A-Za-z_]*$'  ); then
    if [ ! -d ${INSTALL_ROOT}/glite/var/rgma/.certs ]; then
	mkdir -p ${INSTALL_ROOT}/glite/var/rgma/.certs
    fi

    cp -pf /etc/grid-security/hostcert.pem /etc/grid-security/hostkey.pem \
	${INSTALL_ROOT}/glite/var/rgma/.certs
    chown rgma:rgma ${INSTALL_ROOT}/glite/var/rgma/.certs/host*

    (
	egrep -v 'sslCertFile|sslKey' \
	    ${INSTALL_ROOT}/glite/etc/rgma/ClientAuthentication.props
	echo "sslCertFile=${INSTALL_ROOT}/glite/var/rgma/.certs/hostcert.pem"
	echo "sslKey=${INSTALL_ROOT}/glite/var/rgma/.certs/hostkey.pem"
    ) > /tmp/props.$$
    mv -f /tmp/props.$$ ${INSTALL_ROOT}/glite/etc/rgma/ClientAuthentication.props

    #Turn on Gin for the GIP and maybe FMON
    export RGMA_HOME=${INSTALL_ROOT}/glite
    ${RGMA_HOME}/bin/rgma-gin-config --gip=yes ${FMON}
    /sbin/chkconfig rgma-gin on
    /etc/rc.d/init.d/rgma-gin restart 2>${YAIM_LOG}
fi


return 0
}
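
Once the MDS service is running (see config_globus below), the information assembled by these plugins can be checked with a query against the local GRIS, for example (replace <hostname> with the full host name of the node):

  ldapsearch -x -h <hostname> -p 2135 -b "mds-vo-name=local,o=grid"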


config_globus

config_globus(){
# $Id: config_globus,v 1.34 2006/01/06 13:45:51 maart Exp $


requires CE_HOST PX_HOST RB_HOST SITE_NAME

GLOBUS_MDS=no
GLOBUS_GRIDFTP=no
GLOBUS_GATEKEEPER=no

if ( echo "${NODE_TYPE_LIST}" | grep '\<'CE > /dev/null ); then
    GLOBUS_MDS=yes
    GLOBUS_GRIDFTP=yes
    GLOBUS_GATEKEEPER=yes
fi
if ( echo "${NODE_TYPE_LIST}" | grep VOBOX > /dev/null ); then
    GLOBUS_MDS=yes
    if ! ( echo "${NODE_TYPE_LIST}" | grep '\<'RB > /dev/null ); then
	GLOBUS_GRIDFTP=yes
    fi
fi
if ( echo "${NODE_TYPE_LIST}" | grep '\<'SE > /dev/null ); then
    GLOBUS_MDS=yes
    GLOBUS_GRIDFTP=yes
fi
# DPM has its own ftp server
if ( echo "${NODE_TYPE_LIST}" | grep SE_dpm > /dev/null ); then
    GLOBUS_GRIDFTP=no
fi
if ( echo "${NODE_TYPE_LIST}" | grep '\<'PX > /dev/null ); then
    GLOBUS_MDS=yes
fi
if ( echo "${NODE_TYPE_LIST}" | grep '\<'RB > /dev/null ); then
    GLOBUS_MDS=yes
fi
if ( echo "${NODE_TYPE_LIST}" | grep '\<'LFC > /dev/null ); then
    GLOBUS_MDS=yes
fi
if ( echo "${NODE_TYPE_LIST}" | grep SE_dpm > /dev/null ); then
    X509_DPM1="x509_user_cert=/home/edginfo/.globus/usercert.pem"
    X509_DPM2="x509_user_key=/home/edginfo/.globus/userkey.pem"
else
    X509_DPM1=""
    X509_DPM2=""
fi
if [ "$GRIDICE_SERVER_HOST" = "`hostname -f`" ]; then
    GLOBUS_MDS=yes
fi

INSTALL_ROOT=${INSTALL_ROOT:-/opt}

cat <<EOF > /etc/globus.conf
########################################################################
#
# Globus configuration.
#
########################################################################
[common]
GLOBUS_LOCATION=${INSTALL_ROOT}/globus
globus_flavor_name=gcc32dbg
x509_user_cert=/etc/grid-security/hostcert.pem
x509_user_key=/etc/grid-security/hostkey.pem
gridmap=/etc/grid-security/grid-mapfile
gridmapdir=/etc/grid-security/gridmapdir/

EOF

if [ "$GLOBUS_MDS" = "yes" ]; then
cat <<EOF >> /etc/globus.conf

[mds]
globus_flavor_name=gcc32dbgpthr
user=edginfo
$X509_DPM1
$X509_DPM2

[mds/gris/provider/edg]

EOF

cat <<EOF >> /etc/globus.conf
[mds/gris/registration/site]
regname=$SITE_NAME
reghn=$CE_HOST

EOF
else
echo "[mds]"  >> /etc/globus.conf

fi

if [ "$GLOBUS_GRIDFTP" = "yes" ]; then

    cat <<EOF >> /etc/globus.conf
[gridftp]
log=/var/log/globus-gridftp.log
EOF

    cat <<EOF > /etc/logrotate.d/gridftp
/var/log/globus-gridftp.log /var/log/gridftp-lcas_lcmaps.log {
missingok
daily
compress
rotate 31
create 0644 root root
sharedscripts
} 
EOF

else
    echo "[gridftp]"  >> /etc/globus.conf
fi


if [ "$GLOBUS_GATEKEEPER" = "yes" ]; then

if [ "x`grep globus-gatekeeper /etc/services`" = "x" ]; then
    echo "globus-gatekeeper   2119/tcp" >> /etc/services
fi

cat <<EOF > /etc/logrotate.d/globus-gatekeeper
/var/log/globus-gatekeeper.log {
nocompress
copy
rotate 1
prerotate
killall -s USR1 -e /opt/edg/sbin/edg-gatekeeper
endscript
postrotate
find /var/log/globus-gatekeeper.log.20????????????.*[0-9] -mtime +7 -exec gzip {} \;
endscript
} 
EOF

cat <<EOF >> /etc/globus.conf
[gatekeeper]

default_jobmanager=fork
job_manager_path=\$GLOBUS_LOCATION/libexec
globus_gatekeeper=${INSTALL_ROOT}/edg/sbin/edg-gatekeeper
extra_options=\"-lcas_db_file lcas.db -lcas_etc_dir ${INSTALL_ROOT}/edg/etc/lcas/ -lcasmod_dir \
${INSTALL_ROOT}/edg/lib/lcas/ -lcmaps_db_file lcmaps.db -lcmaps_etc_dir ${INSTALL_ROOT}/edg/etc/lcmaps -lcmapsmod_dir ${INSTALL_ROOT}/edg/lib/lcmaps\"
logfile=/var/log/globus-gatekeeper.log
jobmanagers="fork ${JOB_MANAGER}"

[gatekeeper/fork]
type=fork
job_manager=globus-job-manager

[gatekeeper/${JOB_MANAGER}]
type=${JOB_MANAGER}

EOF
else
cat <<EOF >> /etc/globus.conf
[gatekeeper]
default_jobmanager=fork
job_manager_path=${GLOBUS_LOCATION}/libexec
 
jobmanagers="fork "
 
[gatekeeper/fork]
type=fork
job_manager=globus-job-manager
EOF
fi

$INSTALL_ROOT/globus/sbin/globus-initialization.sh 2>> $YAIM_LOG

if [ "$GLOBUS_MDS" = "yes" ]; then
    /sbin/chkconfig globus-mds on
    /sbin/service globus-mds stop
    /sbin/service globus-mds start
fi
if [ "$GLOBUS_GATEKEEPER" = "yes" ]; then
    /sbin/chkconfig globus-gatekeeper on
    /sbin/service globus-gatekeeper stop
    /sbin/service globus-gatekeeper start
fi
if [ "$GLOBUS_GRIDFTP" = "yes" ]; then
    /sbin/chkconfig globus-gridftp on
    /sbin/service globus-gridftp stop
    /sbin/service globus-gridftp start
    /sbin/chkconfig lcg-mon-gridftp on
    /etc/rc.d/init.d/lcg-mon-gridftp restart
fi

return 0
}


config_sedcache

# config_sedcache defined at the end!

backup_file()
{
  test -f "$1" && cp -p "$1" "$1".`date +%y%m%d_%H%M%S`.old
}

check_portmap()
{
  PPORT=111

  if test `netstat -tapn | grep portmap | grep :${PPORT} | wc -l` -eq 0 ; then
    return 1
  fi
  
  return 0
}

config_sedcache_check_sanity () {
  # Start portmapper
  check_portmap || {
    echo "Portmapper is not running, trying to start portmap service." >&2
    /sbin/service portmap start
  }
  check_portmap || {
    echo "Portmapper is not running, please start it before running this script." >&2
    return $?
  }
}

check_postgresql() {

  # check the postgres user exists
  if ( ! id postgres > /dev/null ); then
    echo "postgres user not present"
    exit 1
  fi

  su - postgres -c "
    psql -l >/dev/null 2>&1
    if test \$? -ne 0 ; then
      echo \"PostgreSQL database is not running, please start it before running this script!\" >&2
      exit 1
    fi

    echo \"\\q\" | psql -U postgres dcache >/dev/null 2>&1
    if test \$? -eq 0 ; then
      echo \"PostgreSQL dcache database already exists!  Skipping...\" >&2
      exit 2
    fi
  "

  return $?
}

check_set_java()
{
  javaconf=/etc/java/java.conf
  test -f $javaconf && . $javaconf
  
  java=$JAVA_HOME/bin/java
  test -x "$java" || java=
  
  [ -z "$java" ] && java=`ls /usr/java/j2re*/bin/java  2> /dev/null | tail -n 1`
  [ -z "$java" ] && java=`ls /usr/java/j2sdk*/bin/java 2> /dev/null | tail -n 1`
  [ -z "$java" ] && return 1

  return 0
}

config_sedcache_psql () {
######################################################################
# Postgres configuration script from d-cache-lcg
######################################################################


db_dir=/var/lib/pgsql
data_dir=$db_dir/data

# Enable network access in postgres config file (default port 5432 is used)

conf1=$data_dir/postgresql.conf
tmp1=$conf1.tmp

conf2=$db_dir/data/pg_hba.conf
tmp2=$conf2.tmp

# Allow hosts to connect to the DB

hostname=`hostname`
ip0='               '
ip1='127.0.0.1      '
ip2=`host $hostname | awk '{ printf "%-15s", $NF }'`
local_trust="
local   all         all         $ip0                     trust
"
hosts_trust="
host    all         all         $ip1   255.255.255.255   trust
host    all         all         $ip2   255.255.255.255   trust
"

AWK='
    /^#?local .*$/ {
	print local_trust
	next
    }
    /^# *host .* trust *$/ {
	print hosts_trust
	next
    }
    /^host .* 127.0.0.1/ {
	print "host    all         all         127.0.0.1/32   trust"
	next
    }
    /^host .* ::1.128/ {
	print "host    all         all         ::1/128        trust"
	next
    }
    {
	print
    }
'

# Start PostgreSQL
/sbin/service postgresql condstop
/sbin/service postgresql start

if ! /sbin/service postgresql status > /dev/null
then
    # version 8 will not start with version 7 DB format
    mv $data_dir $data_dir.`date +%y%m%d_%H%M%S`.old
    /sbin/service postgresql start
fi

# Check that PostgreSQL is running and that dcache database doesn't exist
check_postgresql
status=$?
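# check_postgresql return codes: 0 = PostgreSQL is running and no dcache DB exists yet,
# 1 = PostgreSQL is not running, 2 = the dcache DB already exists (creation is skipped)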
test $status = 1 && return 1

test $status = 0 && su - postgres -c "
  cd  # ensure cwd is accessible

  sed 's/^# *tcpip_socket *=.*/tcpip_socket = true/' $conf1 > $tmp1 &&
      mv $tmp1 $conf1

  awk '$AWK' local_trust='$local_trust' hosts_trust='$hosts_trust' $conf2 > \
      $tmp2 && mv $tmp2 $conf2

  # Make PostgreSQL re-read 'pg_hba.conf' (Client Authentication Configuration File)
  killall -HUP postmaster

  # Command to create DB
  createdb dcache
  if test \$? -ne 0 ; then
    echo \"Couldn't create dcache database!\" >2&

    return 1
  fi

  # Create DB user 'srmdcache'
  echo \"create user srmdcache password 'srmdcache'; \\q\" |
      psql -U postgres dcache
  if test \$? -ne 0 ; then
    echo \"Couldn't create dcache user!\" >2&

    return 1
  fi

  # Command to create the DB for the Resilience Manager
  createdb -O srmdcache replicas
  if test \$? -ne 0 ; then
    echo \"Couldn't create the DB for the Resilience Manager!\" >2&

    return 1
  fi

  # Initialize db tables for the Resilience Manager
  psql -d replicas -U srmdcache -f /opt/d-cache/etc/pd_dump-s-U_enstore.sql
"

# Automatic startup on reboot
chkconfig --level 2345 postgresql on

}

config_sedcache_lcg() {
######################################################################
# d-cache configuration adapted from package d-cache-lcg
######################################################################

# Used variables
# Obligatory: DCACHE_ADMIN=admin_host
#             DCACHE_POOLS="pool_node1:[size:]/pool_path1 pool_node2:[size:]/pool_path2"
#             <size> ::= [0-9]*
# Optional:   DCACHE_PORT_RANGE="20000,25000"
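# Illustrative values (host names, size and paths are examples only):
#   DCACHE_ADMIN="dcache-admin.example.org"
#   DCACHE_POOLS="pool1.example.org:500:/storage/pool1 pool2.example.org:/storage/pool2"
#   DCACHE_PORT_RANGE="20000,25000"
# The optional size field is in GB; if it is omitted or 0, the free space on the
# pool partition is used (see the pool_path loop below).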

admin=$DCACHE_ADMIN
node=
pool=no
port_range=20000,25000
thishost=`hostname -f`
poolspec=`echo "$DCACHE_POOLS" | grep -o "$thishost:[^ ]*"`
gridftp_thru_admin=no

if [ "x${DCACHE_ADMIN#$thishost}" = "x" ]; then
    node=admin
elif [ "x$poolspec" != "x" ]; then
    node=pool
else
    echo "[ERROR]: This host was not recognized as admin or pool"
    echo "[ERROR]: Please check variables DCACHE_ADMIN and DCACHE_POOL"
    return 1
fi

if [ "x$DCACHE_PORT_RANGE" != "x" ]; then
    lower=`echo "$DCACHE_PORT_RANGE" | sed -e 's/^\([0-9]*\),.*$/\1/' -e 's/.*,.*//'`
    upper=`echo "$DCACHE_PORT_RANGE" | sed -e 's/^.*,\([0-9]*\)$/\1/' -e 's/.*,.*//'`

    case $lower,$upper in
    ????*,????*)
        port_range=$lower,$upper
        ;;
    *)
        echo "[ERROR]: \$DCACHE_PORT_RANGE has illegal format:"
        echo "[ERROR]: $DCACHE_PORT_RANGE"
        return 1
    esac
fi

if test X$node = Xadmin
then
    config_sedcache_psql || return $?

    pnfs=${INSTALL_ROOT}/pnfs.3.1.10/pnfs
    dest=$pnfs/etc/pnfs_config
    template=$dest.template

    cp $template $dest

    pnfs_done=${INSTALL_ROOT}/d-cache-lcg/install/.pnfs_done
    d=`dirname $pnfs_done`

    mkdir -p $d || {
	echo "[ERROR]: cannot mkdir $d"
	return 1
    }

    test -f $pnfs_done || {
	install=$pnfs/install/pnfs-install.sh

	$install || {
	    echo "[ERROR]: $install failed (exit code $?) -- abort"
	    return 1
	}
	touch $pnfs_done
    }

    # PNFS needed for dCache configuration
    $pnfs/bin/pnfs restart
fi

if test X$node = Xadmin
then
    GSIDCAP=no
    if test X$gridftp_thru_admin = Xno && test "X$poolspec" = X
    then
        GRIDFTP=no
    else
        GRIDFTP=yes
    fi
    SRM=yes
else
    # pool node
    GSIDCAP=no
    if test X$gridftp_thru_admin = Xno
    then
        GRIDFTP=yes
    else
        GRIDFTP=no
    fi
    SRM=no
fi

dir=${INSTALL_ROOT}/d-cache
dest=$dir/etc/node_config
template=$dest.template

backup_file $dest
sed "
    s/^ *\(NODE_TYPE\) *=.*/\1=$node/
    s/^ *\(SRM\) *=.*/\1=$SRM/
    s/^ *\(GRIDFTP\) *=.*/\1=$GRIDFTP/
    s/^ *\(GSIDCAP\) *=.*/\1=$GSIDCAP/
" $template > $dest

dest=$dir/config/dCacheSetup
template=$dir/etc/dCacheSetup.template

check_set_java

tab=`echo I | tr I '\11'`
s=" $tab\\n"
cont='\\\
'
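# The sed program below edits the dCacheSetup template: it points java= at the JVM
# found by check_set_java, rebuilds java_options with the configured TCP port range
# and memory settings, and points serviceLocatorHost, defaultPnfsServer and
# srmDbHost at the admin node.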
SED="
    s|^\(java=\"\)[^ ]*\(.*\)|\1$java\2|
    /^java_options=/{
	: join
	    /\\\\$/{
		N
		b join
	    }
	s|\([\"$s]\)-Dorg.globus.tcp.port.range=[0-9]*,[0-9]*|\1|
	s|\([\"$s]\)-XX:MaxDirectMemorySize=[0-9]*m|\1|
	s|\([\"$s]\)-Xmx[0-9]*m|\1|
	s|\([\"$s]\)-server|\1|
	s|[^=]*=\"|&-Dorg.globus.tcp.port.range=$port_range $cont|
	s|[^=]*=\"|&-XX:MaxDirectMemorySize=256m $cont|
	s|[^=]*=\"|&-Xmx256m $cont|
	s|[^=]*=\"|&-server $cont|
	: loop
	    s|\\\\\n[ $tab]*\"[ $tab]*$|\"|
	    t loop
    }
    s|^\(serviceLocatorHost=\).*|\1$admin|
    s|^\(defaultPnfsServer=\).*|\1$admin|
    s|^\(srmDbHost=\).*|\1$admin|
    s|^\(pnfsSrmPath=\).*|\1/|
    s|^\(maxActiveGet=\).*|\11000|
    s|^\(maxActivePut=\).*|\11000|
    s|^\(maxActiveCopy=\).*|\11000|
    s|^\(RecursiveDirectoryCreation=\).*|\1true|
    s|^\(AdvisoryDelete=\).*|\1true|
    s|^\(clientDataPortRange=\).*|\1$port_range|
    s|^\(cacheInfo=\).*|\1pnfs|
"

backup_file $dest
sed "$SED" $template > $dest

backup_file $dir/etc/pool_path
rm -f $dir/etc/pool_path
for poolspec in `echo "$DCACHE_POOLS" | grep -o "$thishost:[^ ]*"`
do
    pool=${poolspec##*:}
    size=$(echo $poolspec | grep -o ":[0-9]*:" | cut -d: -f 2)

    if [ "x${pool#/}" = "x$pool" ]; then
        echo "[ERROR]: pool path must be absolute:"
        echo "[ERROR]: $poolspec"
        return 1
    fi
    
    mkdir -p $pool
    (cd "$pool") || {
        echo "[ERROR]: Could not 'cd' to \`$pool'"
        return 1
    }
    
    if test "x$size" = x || test $size -eq 0 ; then  
        size=`df -k $pool | awk 'NR == 2 { print int($4 / 1024 / 1024) }'`
    fi

    echo "$pool $size no" >> $dir/etc/pool_path
done
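# A resulting pool_path line looks like (illustrative values): /storage/pool1 500 no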

dcache_done=${INSTALL_ROOT}/d-cache-lcg/install/.dcache_done
d=`dirname $dcache_done`

mkdir -p $d || {
    echo "[ERROR]: cannot mkdir $d"
    return 1
}

test -f $dcache_done || true && {
    #
    # HACK
    #
    export EDITOR=true

    install=$dir/install/install.sh

    $install || {
	echo "$install failed (exit code $?) -- abort"
	return 1
    }
    touch $dcache_done
}

# dcache 1.6.6 has opt merged with core

rm -f /etc/init.d/dcache-opt /etc/rc.d/rc?.d/???dcache-opt

test X$node = Xadmin && {
    ln -fs $pnfs/bin/pnfs       /etc/init.d
    ln -fs $dir/bin/dcache-core /etc/init.d
}

test "X$pool" != Xno && {
    ln -fs $dir/bin/dcache-pool /etc/init.d
}

dest=$dir/etc/door_config
template=$dest.template

if test -f $template
then
    tab=`echo I | tr I '\11'`
    SED="
	s|^\(ADMIN_NODE[ $tab]*\).*|\1$admin|
	s|^\(GSIDCAP[ $tab]*\).*|\1$GSIDCAP|
	s|^\(GRIDFTP[ $tab]*\).*|\1$GRIDFTP|
	s|^\(SRM[ $tab]*\).*|\1$SRM|
    "

    backup_file $dest
    sed "$SED" $template > $dest
fi

if test X$node = Xpool ; then
    # For some reason running this script on the admin node doesn't work
    # (all doors are down as a result of this script being executed)
    install=$dir/install/install_doors.sh
    $install || {
        echo "$install failed (exit code $?) -- abort"
        return 1
    }

    if test `cat /etc/fstab | grep ^$admin:/fs | wc -l` -eq 0 ; then
      # admin's pnfs not in fstab, add it
      cat <<EOF>>/etc/fstab
$admin:/fs /pnfs/fs             nfs     hard,intr,rw,noac,auto 0 0
EOF
    fi
fi

cd /etc/rc.d

for i in 0 1 6
do
    (
	cd rc$i.d || exit 1

	test X$node = Xadmin && {
	    ln -fs ../init.d/dcache-core K13dcache-core
	    ln -fs ../init.d/pnfs        K14pnfs
	}

	test "X$pool" != Xno && {
	    ln -fs ../init.d/dcache-pool K12dcache-pool
	}
    )
done

for i in 2 3 4 5
do
    (
	cd rc$i.d || exit 1

	test X$node = Xadmin && {
	    ln -fs ../init.d/pnfs        S86pnfs
	    ln -fs ../init.d/dcache-core S87dcache-core
	}

	test "X$pool" != Xno && {
	    ln -fs ../init.d/dcache-pool S88dcache-pool
	}
    )
done

}

config_sedcache_start() {

# Added postgresql in order to solve problems with
# postmaster not listening after initial install

/sbin/service dcache-pool stop
/sbin/service dcache-core stop
/sbin/service postgresql stop

/sbin/service postgresql start
/sbin/service dcache-core start
/sbin/service dcache-pool start

}

config_sedcache_gridmap() {
## Gridmap

# Setting up dCache specific crontab entry for grid-mapfile conversion

DCACHE_KPWD_CONV=${INSTALL_ROOT}/d-cache/bin/grid-mapfile2dcache-kpwd

if [ ! -x $DCACHE_KPWD_CONV ]; then
   echo "[ERROR]: $DCACHE_KPWD_CONV is missing"
   return 1
fi

${DCACHE_KPWD_CONV}

}

config_sedcache_cron() {

# Now remove and replace its cron job
rm -f ${CRON_DIR:-/etc/cron.d}/edg-mkgridmap

let minute="$RANDOM%60"

let h1="$RANDOM%6"
let h2="$h1+6"
let h3="$h1+12"
let h4="$h1+18"

job="$minute $h1,$h2,$h3,$h4 * * * ${INSTALL_ROOT}/edg/sbin/edg-mkgridmap \
--output=/etc/grid-security/grid-mapfile --safe && \
${INSTALL_ROOT}/d-cache/bin/grid-mapfile2dcache-kpwd"

cron_job edg-mkgridmap root "$job"

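# Strip the five cron time fields (everything before the first '/') and run the
# command once immediately; the same trick is used for the proxy cron jobs below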
echo "$job" | sed 's|[^/]*||' | sh

## Information Provider

# Creating script for dynamic info generation

outfile=$INSTALL_ROOT/lcg/var/gip/plugin/lcg-info-dynamic-se
configfile=$INSTALL_ROOT/lcg/var/gip/ldif/static-file-SE.ldif

if [ ! -f $configfile ]; then
   echo "[ERROR]: $configfile does not exist"
   return 1
fi

cat <<EOF>$outfile
#!/bin/sh
#
# Script dynamically generated by YAIM function ${FUNCNAME[0]}
#

${INSTALL_ROOT}/lcg/libexec/lcg-info-dynamic-dcache $configfile

EOF

chmod +x $outfile


# Crontab entry to have up-to-date proxy for dynamic info query

job="7 * * * * ${INSTALL_ROOT}/globus/bin/grid-proxy-init -q \
-cert /etc/grid-security/hostcert.pem \
-key /etc/grid-security/hostkey.pem -out ${INSTALL_ROOT}/lcg/hostproxy.\$$ && \
chown edginfo ${INSTALL_ROOT}/lcg/hostproxy.\$$ && \
mv -f ${INSTALL_ROOT}/lcg/hostproxy.\$$ ${INSTALL_ROOT}/lcg/hostproxy"

cron_job edginfo-proxy root "$job"

echo "$job" | sed 's|[^/]*||' | sh

rgma_user=rgma

job="8 * * * * ${INSTALL_ROOT}/globus/bin/grid-proxy-init -q \
-cert /etc/grid-security/hostcert.pem \
-key /etc/grid-security/hostkey.pem -out ${INSTALL_ROOT}/lcg/hostproxy.\$$ && \
chown $rgma_user ${INSTALL_ROOT}/lcg/hostproxy.\$$ && \
mv -f ${INSTALL_ROOT}/lcg/hostproxy.\$$ ${INSTALL_ROOT}/lcg/hostproxy.$rgma_user"

cron_job rgma-proxy root "$job"

echo "$job" | sed 's|[^/]*||' | sh


}

######################################################################

config_sedcache() {

  INSTALL_ROOT=${INSTALL_ROOT:-/opt}

  if test "X$RESET_DCACHE_CONFIGURATION" = Xyes
  then
    requires DCACHE_ADMIN DCACHE_POOLS
    requires USERS_CONF VOS

    if [ ! -e $USERS_CONF ]; then
	echo "$USERS_CONF not found."
	return 1
    fi

    check_users_conf_format

    config_sedcache_check_sanity || return $?
    config_sedcache_lcg
    config_sedcache_start

    domain=`hostname -d`

    awk -F: '{print $4, $5}' ${USERS_CONF} | sort -u | while read groupname virtorg; do
	if ( [ "$virtorg" ] && echo $VOS | grep -w "$virtorg" > /dev/null ); then
	    dir=/pnfs/$domain/data/$virtorg
	    if ! test -d $dir; then
		mkdir -p $dir
		chgrp $groupname $dir
		chmod g+w,o-rwx $dir
	    fi
	fi
    done

  fi

  config_sedcache_gridmap
  config_sedcache_cron

  return 0

}
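
Note that config_sedcache_lcg and config_sedcache_start (the PostgreSQL, PNFS and dCache bootstrap) only run when RESET_DCACHE_CONFIGURATION is set to "yes"; on an already configured node only the gridmap conversion and the cron jobs are refreshed. An illustrative site-info.def fragment for a first configuration (host name and path are examples only):

  RESET_DCACHE_CONFIGURATION=yes
  DCACHE_ADMIN="dcache-admin.example.org"
  DCACHE_POOLS="pool1.example.org:/storage/pool1"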

About this document ...

SE_dcache Generic Configuration Reference

This document was generated using the LaTeX2HTML translator Version 2002 (1.62)

Copyright © 1993, 1994, 1995, 1996, Nikos Drakos, Computer Based Learning Unit, University of Leeds.
Copyright © 1997, 1998, 1999, Ross Moore, Mathematics Department, Macquarie University, Sydney.

The command line arguments were:
latex2html -split 0 -html_version 4.0 -no_navigation -address 'GRID deployment' SE_dcache.drv_html

The translation was initiated by Oliver KEEBLE on 2006-01-16


GRID deployment