LCG - TAR_WN - Generic Configuration Reference



Document identifier: LCG-GIS-CR-TAR_WN
Date: 16 January 2006
Author: LCG Deployment - GIS team; Retico, Antonio; Vidic, Valentin
Version:
Abstract: Configuration steps done by the YAIM script 'configure_TAR_WN'

Introduction

This document lists the manual steps for the installation and configuration of an LCG TAR_WN node.
Furthermore, it provides a specification of the YAIM functions used to configure the node with the script-based configuration.

The configuration has been tested on a standard Scientific Linux 3.0 Installation.

Link to this document:
This document is available on the Grid Deployment web site

http://www.cern.ch/grid-deployment/gis/lcg-GCR/index.html

Variables

In order to set up a TAR_WN node, you need at least the following variables to be correctly configured in the site configuration file (site-info.def); an example site-info.def sketch follows the list:
BDII_HOST :
BDII Hostname.
CE_HOST :
Computing Element Hostname.
DPM_HOST :
Host name of the DPM host, also used as the default DPM for lcg-stdout-mon.
EDG_WL_SCRATCH :
Optional scratch directory for jobs.
FTS_SERVER_URL :
URL of the File Transfer Service server.
GLOBUS_TCP_PORT_RANGE :
Port range for Globus IO.
GSSKLOG :
yes or no, indicating whether the site provides an AFS authentication server which maps GSI credentials into Kerberos tokens.
GSSKLOG_SERVER :
If GSSKLOG is yes, the name of the AFS authentication server host.
INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.
JAVA_LOCATION :
Path to Java VM installation. It can be used in order to run a different version of java installed locally.
MON_HOST :
MON Box Hostname.
PX_HOST :
PX hostname.
REG_HOST :
RGMA Registry hostname.
SE_LIST :
A list of hostnames of the SEs available at your site.
SITE_NAME :
Your site name, used as the GIIS name.
VOBOX_HOST :
VOBOX hostname.
VOBOX_PORT :
The port the VOBOX gsisshd listens on.
VOS :
List of supported VOs. For each item listed in the VOS variable you need to create a set of new variables as follows:
VO_<VO-NAME>_SE :
Default SE used by the VO. WARNING: VO-NAME must be in capital letters.
VO_<VO-NAME>_SW_DIR :
Area on the WN for the installation of the experiment software. If a predefined shared area has been mounted on the WNs where VO managers can pre-install software, then this variable should point to that area. If instead there is no shared area and each job must install the software, then this variable should contain a dot ( . ). Note that the mounting of shared areas, as well as the local installation of VO software, is not managed by yaim and should be handled locally by Site Administrators. WARNING: VO-NAME must be in capital letters.
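
A minimal sketch of how these variables could look in site-info.def is given below; all host names, the site name and the dteam VO are hypothetical placeholders to be replaced with your site's values:

INSTALL_ROOT=/opt
SITE_NAME=my-site
BDII_HOST=bdii.example.org
CE_HOST=ce.example.org
SE_LIST="se.example.org"
DPM_HOST=se.example.org
MON_HOST=mon.example.org
REG_HOST=mon.example.org
PX_HOST=px.example.org
GLOBUS_TCP_PORT_RANGE="20000 25000"
VOS="dteam"
VO_DTEAM_SE=se.example.org
VO_DTEAM_SW_DIR=.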

Check TAR installation

Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch

This chapter describes the configuration steps done by the yaim function 'config_check_tar'.

Check if the <INSTALL_ROOT> directory exists.

Check if the dependency software is installed (test if <INSTALL_ROOT>/edg/share/java/log4j.jar exists).
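
The same checks can be reproduced by hand; a minimal sketch, assuming INSTALL_ROOT=/opt:

> test -d /opt || echo "distribution not available under /opt"
> test -f /opt/edg/share/java/log4j.jar || echo "dependency software not untarred into /opt/edg"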


Specification of function: config_check_tar

The function 'config_check_tar' needs the following variables to be set in the configuration file:

INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.

The original code of the function can be found in:

/opt/lcg/yaim/functions/config_check_tar

The code is also reproduced in 14.1.

Install CA certificates

Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch

This chapter describes the configuration steps done by the yaim function 'install_certs_userland'.

If CA certificates are not already installed in /etc/grid-security/certificates, certificate RPMs are downloaded from http://grid-deployment.web.cern.ch/grid-deployment/download/RpmDir/security/index.html and extracted to <INSTALL_ROOT>/etc/grid-security/certificates.
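
The equivalent manual steps are sketched below; the exact set of RPMs changes as CAs are added or updated, and the commands are run from an empty working directory:

> wget -l1 -nd -r --quiet http://grid-deployment.web.cern.ch/grid-deployment/download/RpmDir/security/index.html
> for i in *.rpm; do rpm2cpio $i | cpio -i --make-directories --quiet; done
> mkdir -p <INSTALL_ROOT>/etc/grid-security/certificates
> mv -f etc/grid-security/certificates/* <INSTALL_ROOT>/etc/grid-security/certificates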


Specification of function: install_certs_userland

The function 'install_certs_userland' needs the following variables to be set in the configuration file:

INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.

The original code of the function can be found in:

/opt/lcg/yaim/functions/install_certs_userland

The code is also reproduced in 14.2.

Fix CRL update script

Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch

This chapter describes the configuration steps done by the yaim function 'config_fix_edg-fetch-crl-cron'.

The CRL update script (<INSTALL_ROOT>/edg/etc/cron/edg-fetch-crl-cron) is modified to use <INSTALL_ROOT>/edg as EDG_LOCATION and <INSTALL_ROOT>/etc/grid-security/certificates as X509_CERT_DIR.
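
After the substitution the relevant lines of the CRL update script read as follows (with <INSTALL_ROOT> expanded to the actual installation root):

$para{EDG_LOCATION}="<INSTALL_ROOT>/edg";
$para{X509_CERT_DIR}="<INSTALL_ROOT>/etc/grid-security/certificates";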


Specification of function: config_fix_edg-fetch-crl-cron

The function 'config_fix_edg-fetch-crl-cron' needs the following variables to be set in the configuration file:

INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.

The original code of the function can be found in:

/opt/lcg/yaim/functions/config_fix_edg-fetch-crl-cron

The code is also reproduced in 14.3.

Set-up updating of CRLs

Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch

This chapter describes the configuration steps done by the yaim function 'config_crl'.

A cron script is installed to fetch new versions of the CRLs four times a day. The time when the script is run is randomized in order to distribute the load on the CRL servers. If the configuration is run as root, the cron entry is installed in /etc/cron.d/edg-fetch-crl, otherwise it is installed as a user cron entry.

The CRLs are also updated immediately by running the update script (<INSTALL_ROOT>/edg/etc/cron/edg-fetch-crl-cron).

A logrotate script is installed as /etc/logrotate.d/edg-fetch to prevent the logs from growing indefinitely.
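
As an illustration, the resulting root cron entry looks similar to the following; the minute and the four hours are randomized at configuration time and INSTALL_ROOT=/opt is assumed:

53 5,11,17,23 * * * root /opt/edg/etc/cron/edg-fetch-crl-cron >> /var/log/edg-fetch-crl-cron.log 2>&1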


Specification of function: config_crl

The function 'config_crl' needs the following variables to be set in the configuration file:

INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.

The original code of the function can be found in:

/opt/lcg/yaim/functions/config_crl

The code is also reproduced in 14.4.

Set-up Replica Manager

Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch

This chapter describes the configuration steps done by the yaim function 'config_replica_manager'.

Variable substitutions are generated in <INSTALL_ROOT>/edg/etc/edg-replica-manager/edg-replica-manager.conf.values_local:

@EDG.LOCATION@|<INSTALL_ROOT>/edg|location of edg middleware
@LOCALDOMAIN@|<domain>|the local domain
@DEFAULT.SE@|<SE_HOST>|the host of the close SE
@DEFAULT.CE@|<CE_HOST>|the host of the close CE
@INFOSERVICE@|MDS|The info provider to use. It can be Stub, MDS or RGMA
@RLS.MODE@|LrcOnly|The mode the RLS should be run in. LrcOnly or WithRli
@STUBFILE@||The properties file for the static file - only needed in Stub mode
@MDS.HOST@|<BDII_HOST>|The host of the MDS info provider
@MDS.PORT@|2170|The port of the MDS info provider
@ROS.FAILURE@|false|Fail if no ROS is available
@CONF.GCC@|_gcc3_2_2|The gcc suffix as used on the build box (empty for 2.95, _gcc3_2_2 for 3.2.)
@IGNORE.PREFIX@|true|Whether the RM will ignore the lfn and guid prefix.
@GRIDFTP.DCAU@|false|Does GridFTP use Data Channel Authentication (DCAU)
@GRIDFTP.STREAMS.SMALL@|1|The default number of stream to use for a small file
@GRIDFTP.STREAMS.BIG@|3|The default number of stream to use for a big file
@GRIDFTP.FILESIZE.THRESHOLD@|100|The Threshold (in MB) above which a file to transfer is considered "big"

The value of <domain> is determined by running hostname -d. Using these substitutions and the templates in <INSTALL_ROOT>/edg/etc/edg-replica-manager/, the Replica Manager is configured by generating files in <EDG_LOCATION>/var/etc/edg-replica-manager:

<INSTALL_ROOT>/edg/sbin/edg-replica-manager-configure <INSTALL_ROOT>/edg/etc/edg-replica-manager/edg-replica-manager.conf.values_local
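
The local domain and the location of the generated files can be checked by hand; a small sketch, assuming INSTALL_ROOT=/opt (the domain shown is a placeholder):

> hostname -d
example.org
> ls /opt/edg/var/etc/edg-replica-manager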


Specification of function: config_replica_manager

The function 'config_replica_manager' needs the following variables to be set in the configuration file:

BDII_HOST :
BDII Hostname.
CE_HOST :
Computing Element Hostname.
INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.
SE_LIST :
A list of hostnames of the SEs available at your site.

The original code of the function can be found in:

/opt/lcg/yaim/functions/config_replica_manager

The code is also reproduced in 14.5.

Set-up user environment

Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch

This chapter describes the configuration steps done by the yaim function 'config_tar_user_env'.

<INSTALL_ROOT>/etc/profile.d/grid_env.sh and <INSTALL_ROOT>/etc/profile.d/grid_env.csh are fixed to use the correct location of LCG software:

INSTALL_ROOT=<INSTALL_ROOT>

If CA certificates are installed in <INSTALL_ROOT>/etc/grid-security/certificates, X509_CERT_DIR is also modified appropriately.
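
In a userland (non-root) installation these scripts are not linked into /etc/profile.d, so each user has to source the appropriate one at login; a minimal sketch, assuming INSTALL_ROOT=/opt:

> . /opt/etc/profile.d/grid_env.sh       # bash-like shells
> source /opt/etc/profile.d/grid_env.csh # csh-like shells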


Specification of function: config_tar_user_env

The function 'config_tar_user_env' needs the following variables to be set in the configuration file:

GLOBUS_TCP_PORT_RANGE :
Port range for Globus IO.
INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.

The original code of the function can be found in:

/opt/lcg/yaim/functions/config_tar_user_env

The code is also reproduced in 14.6.

Set-up LCG environment variables

Author(s): Retico,Antonio
Email : support-lcg-manual-install@cern.ch

This chapter describes the configuration steps done by the yaim function 'config_lcgenv'.

The LCG middleware needs some environment variables to be set up at boot time. These variables should be available both in 'bash-like' shells and in 'csh-like' shells.

This can be obtained in different ways:

The simplest way, if you have 'root' permissions, is to put a shell script for each of the supported shell 'families' in the directory /etc/profile.d. The scripts will be automatically sourced at start-up.

If instead you are not a superuser and you are doing the installation in a private directory (e.g. you are installing a Re-locatable Distribution of a Worker Node or a User Interface in the <INSTALL_ROOT> directory), you could create the scripts in the directory <INSTALL_ROOT>/etc/profile.d, in order to have the variables automatically set up by LCG tools.

The list of the environment variables to be set up follows (an example of the generated lcgenv.sh content is shown under Examples below):

LCG_GFAL_INFOSYS:
Hostname of the BDII node.
MYPROXY_SERVER:
Hostname of the Proxy server.
PATH:
Add to the PATH variable the paths <INSTALL_ROOT>/d-cache/srm/bin and <INSTALL_ROOT>/d-cache/dcap/bin (if present).
LD_LIBRARY_PATH:
Add to the LD_LIBRARY_PATH variable the path <INSTALL_ROOT>/d-cache/dcap/lib (if present).
SRM_PATH:
Installation directory of the srm client; the function sets it to <INSTALL_ROOT>/d-cache/srm (if present).
VO_<VO-NAME>_SW_DIR:
For each virtual organization <VO-NAME>, an environment variable VO_<VO-NAME>_SW_DIR is needed. This variable points to the installation directory of the VO software.
VO_<VO-NAME>_DEFAULT_SE:
For each virtual organization <VO-NAME>, an environment variable VO_<VO-NAME>_DEFAULT_SE is needed. This variable points to the default Storage Element for that VO.

The examples given hereafter refer to the simple configuration method described above. In the following description we will refer to either of the two possible locations as <LCG_ENV_LOC>.
So, according to the cases described above, either

<LCG_ENV_LOC>=/etc/profile.d

or

<LCG_ENV_LOC>=<INSTALL_ROOT>/etc/profile.d

Examples:

WARNING: The two scripts must be executable by all users.

> chmod a+x ${LCG_ENV_LOC}/lcgenv.csh
> chmod a+x ${LCG_ENV_LOC}/lcgenv.sh
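
As an illustration of the content, the generated lcgenv.sh for a single hypothetical VO dteam could contain lines like the following (all host names and the software directory are placeholders):

export LCG_GFAL_INFOSYS=bdii.example.org:2170
export MYPROXY_SERVER=px.example.org
export SITE_NAME=my-site
export SITE_GIIS_URL=ce.example.org
export VO_DTEAM_DEFAULT_SE=se.example.org
export VO_DTEAM_SW_DIR=/opt/exp_soft/dteam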


Specification of function: config_lcgenv

The function 'config_lcgenv' needs the following variables to be set in the configuration file:

BDII_HOST :
BDII Hostname.
CE_HOST :
Computing Element Hostname.
DPM_HOST :
Host name of the DPM host, also used as the default DPM for lcg-stdout-mon.
EDG_WL_SCRATCH :
Set this if you want jobs to use a particular scratch area.
GLOBUS_TCP_PORT_RANGE :
Port range for Globus IO.
GSSKLOG :
yes or no, indicating whether the site provides an AFS authentication server which maps GSI credentials into Kerberos tokens.
GSSKLOG_SERVER :
If GSSKLOG is yes, the name of the AFS authentication server host.
INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.
PX_HOST :
PX hostname.
SE_LIST :
A list of hostnames of the SEs available at your site.
SITE_NAME :
Your site name, used as the GIIS name.
VOBOX_HOST :
VOBOX hostname.
VOS :
List of supported VOs. For each item listed in the VOS variable you need to create a set of new variables as follows:
VO_<VO-NAME>_SE :
Default SE used by the VO. WARNING: VO-NAME must be in capital letters.
VO_<VO-NAME>_SW_DIR :
Area on the WN for the installation of the experiment software. If a predefined shared area has been mounted on the WNs where VO managers can pre-install software, then this variable should point to that area. If instead there is no shared area and each job must install the software, then this variable should contain a dot ( . ). Note that the mounting of shared areas, as well as the local installation of VO software, is not managed by yaim and should be handled locally by Site Administrators. WARNING: VO-NAME must be in capital letters.

The function exits with return code 1 if they are not set.

The original code of the function can be found in:

/opt/lcg/yaim/functions/config_lcgenv

The code is also reproduced in 14.7.

Set-up Java location

Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch

This chapter describes the configuration steps done by the yaim function 'config_java'.

Since Java is not included in the LCG distribution, the Java location needs to be configured with yaim.

If <JAVA_LOCATION> is not defined in site-info.def, it is determined from installed Java RPMs (if available).

In the relocatable distribution, the JAVA_HOME environment variable is defined in <INSTALL_ROOT>/edg/etc/profile.d/j2.sh and <INSTALL_ROOT>/edg/etc/profile.d/j2.csh.

Otherwise, JAVA_HOME is defined in /etc/java/java.conf and /etc/java.conf, and the Java binaries are added to PATH in <INSTALL_ROOT>/edg/etc/profile.d/j2.sh and <INSTALL_ROOT>/edg/etc/profile.d/j2.csh.
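
In the relocatable case the result can be verified by sourcing the generated script; a small sketch, assuming INSTALL_ROOT=/opt (the path printed is only an example):

> . /opt/edg/etc/profile.d/j2.sh
> echo $JAVA_HOME
/usr/java/j2sdk1.4.2_08
> $JAVA_HOME/bin/java -version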


Specification of function: config_java

The function 'config_java' needs the following variables to be set in the configuration file:

INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.
JAVA_LOCATION :
Path to Java VM installation. It can be used in order to run a different version of java installed locally.

The original code of the function can be found in:

/opt/lcg/yaim/functions/config_java

The code is also reproduced in 14.8.

Set-up R-GMA client

Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch

This chapter describes the configuration steps done by the yaim function 'config_rgma_client'.

The R-GMA client configuration is generated in <INSTALL_ROOT>/glite/etc/rgma/rgma.conf by running:

<INSTALL_ROOT>/glite/share/rgma/scripts/rgma-setup.py --secure=yes --server=<MON_HOST> --registry=<REG_HOST> --schema=<REG_HOST>

<INSTALL_ROOT>/edg/etc/profile.d/edg-rgma-env.sh and <INSTALL_ROOT>/edg/etc/profile.d/edg-rgma-env.csh are generated with the following functionality: they set RGMA_HOME and APEL_HOME to <INSTALL_ROOT>/glite, add <INSTALL_ROOT>/glite/lib/python to PYTHONPATH, and add <INSTALL_ROOT>/glite/lib to LD_LIBRARY_PATH.

These files are sourced into the users' environment from <INSTALL_ROOT>/etc/profile.d/z_edg_profile.sh and <INSTALL_ROOT>/etc/profile.d/z_edg_profile.csh.
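
A minimal check of the resulting R-GMA client setup (a sketch, assuming INSTALL_ROOT=/opt):

> . /opt/edg/etc/profile.d/edg-rgma-env.sh
> echo $RGMA_HOME
/opt/glite
> cat /opt/glite/etc/rgma/rgma.conf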


Specification of function: config_rgma_client

The function 'config_rgma_client' needs the following variables to be set in the configuration file:

INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.
MON_HOST :
MON Box Hostname.
REG_HOST :
RGMA Registry hostname.

The original code of the function can be found in:

/opt/lcg/yaim/functions/config_rgma_client

The code is also reproduced in 14.9.

Set-up File Transfer Service client

Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch

This chapter describes the configuration steps done by the yaim function 'config_fts_client'.

If <FTS_SERVER_URL> is set, gLite FTS client is configured by creating <INSTALL_ROOT>/glite/etc/services.xml:

<services>

  <service name="EGEEfts">
    <parameters>
      <endpoint><FTS_SERVER_URL>/services/FileTransfer</endpoint>
      <type>org.glite.FileTransfer</type>
      <version>3.0.0</version>
    </parameters>
  </service>

  <service name="EGEEchannel">
    <parameters>
      <endpoint><FTS_SERVER_URL>/services/ChannelManagement</endpoint>
      <type>org.glite.ChannelManagement</type>
      <version>3.0.0</version>
    </parameters>
  </service>

</services>
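
Once the file is in place, a basic sanity check is to verify that it is well-formed XML; a sketch, assuming INSTALL_ROOT=/opt and that xmllint is available on the node:

> xmllint --noout /opt/glite/etc/services.xml && echo "services.xml is well-formed"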


Specification of function: config_fts_client

The function 'config_fts_client' needs the following variables to be set in the configuration file:

FTS_SERVER_URL :
URL of the File Transfer Service server.
INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.

The original code of the function can be found in:

/opt/lcg/yaim/functions/config_fts_client

The code is also reproduced in 14.10.

Set-up GSI enabled SSH

Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch

This chapter describes the configuration steps done by the yaim function 'config_gsissh'.

First, the generic configuration script (<INSTALL_ROOT>/globus/setup/gsi_openssh_setup/setup-openssh) is run; it generates host keys and a default configuration. If the script is not found, the function fails with an error.

At the moment, further configuration is only done for VOBOX nodes.

Configuration files for SSH daemon and client are modified to use GSI authentication. All other authentication types and root login are disabled for the daemon.

The startup script is modified to make gsisshd listen on a non-default port (<VOBOX_PORT>). It is then installed as /etc/init.d/gsisshd. The gsisshd daemon is restarted and configured to start on boot.
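
After the restart, the VOBOX can be reached with the GSI-enabled ssh client on the configured port, provided the user holds a valid proxy and is authorized on the node; a sketch using the document's placeholders:

> gsissh -p <VOBOX_PORT> <VOBOX_HOST>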


Specification of function: config_gsissh

The function 'config_gsissh' needs the following variables to be set in the configuration file:

INSTALL_ROOT :
Installation root - change if using the re-locatable distribution.
VOBOX_PORT :
The port the VOBOX gsisshd listens on.

The original code of the function can be found in:

/opt/lcg/yaim/functions/config_gsissh

The code is also reproduced in 14.11.

Source code


config_check_tar

function config_check_tar () {

INSTALL_ROOT=${INSTALL_ROOT:-/opt}

if [ ! -d ${INSTALL_ROOT} ]; then
        echo "please make the distribution available under ${INSTALL_ROOT}"
        echo "or update your site-info.def file to reflect the correct path"
        return 1
fi

if [ $UID -ne 0 ]; then
        if [ ! -f ${INSTALL_ROOT}/edg/share/java/log4j.jar ]; then
                echo "You are attempting a userland installation"
                echo "Have you untarred the dependency software into"
                echo "${INSTALL_ROOT}/edg yet?"
                return 1
        fi
fi

return 0

}


install_certs_userland

function install_certs_userland () {

if central_certs; then
    echo "Certificates found in /etc/grid-security/certificates"
    echo "Not installing relocated versions"
    return 0
fi

CA_WGET="http://grid-deployment.web.cern.ch/grid-deployment/download/RpmDir/security/index.html"

INSTALL_ROOT=${INSTALL_ROOT:-/opt}

if ! ( which rpm2cpio > /dev/null 2>&1 ); then
	echo "rpm2cpio is required to install the CA rpms"
	return 1
fi

mkdir tmp.$$ || return 1

cd tmp.$$


wget -l1 -nd -r --quiet $CA_WGET

if ! (ls *.rpm > /dev/null 2>&1); then
	echo "Couldn't download the CA rpms"
	return 1
fi

for i in *.rpm; do
	rpm2cpio $i | cpio -i --make-directories --quiet
done

if [ ! -d ${INSTALL_ROOT}/etc/grid-security/certificates ]; then
	mkdir -p ${INSTALL_ROOT}/etc/grid-security/certificates
fi

mv -f etc/grid-security/certificates/* ${INSTALL_ROOT}/etc/grid-security/certificates

cd - > /dev/null

rm -rf tmp.$$

return 0

}


config_fix_edg-fetch-crl-cron

function config_fix_edg-fetch-crl-cron () {

INSTALL_ROOT=${INSTALL_ROOT:-/opt}

ex $INSTALL_ROOT/edg/etc/cron/edg-fetch-crl-cron <<EOF
%s#^\$para{EDG_LOCATION}=".*";#\$para{EDG_LOCATION}="$INSTALL_ROOT/edg";#
w
q
EOF

ex $INSTALL_ROOT/edg/etc/cron/edg-fetch-crl-cron <<EOF
%s#^\$para{X509_CERT_DIR}=".*";#\$para{X509_CERT_DIR}="$INSTALL_ROOT/etc/grid-security/certificates";#
w
q
EOF

return 0

}


config_crl

config_crl(){

INSTALL_ROOT=${INSTALL_ROOT:-/opt}

let minute="$RANDOM%60"

let h1="$RANDOM%24"
let h2="($h1+6)%24"
let h3="($h1+12)%24"
let h4="($h1+18)%24"

if !( echo "${NODE_TYPE_LIST}" | grep TAR > /dev/null ); then

    if [ ! -f /etc/cron.d/edg-fetch-crl ]; then
	echo "Now updating the CRLs - this may take a few minutes..."
	$INSTALL_ROOT/edg/etc/cron/edg-fetch-crl-cron >> /var/log/edg-fetch-crl-cron.log 2>&1
    fi

cron_job edg-fetch-crl root "$minute $h1,$h2,$h3,$h4 * * * $INSTALL_ROOT/edg/etc/cron/edg-fetch-crl-cron >> /var/log/edg-fetch-crl-cron.log 2>&1"

    cat <<EOF > /etc/logrotate.d/edg-fetch
/var/log/edg-fetch-crl-cron.log  {
    compress
    monthly
    rotate 12
    missingok
    ifempty
    create
}
EOF

else

    cron_job edg-fetch-crl `whoami` "$minute $h1,$h2,$h3,$h4 * * * $INSTALL_ROOT/edg/etc/cron/edg-fetch-crl-cron >> $INSTALL_ROOT/edg/var/log/edg-fetch-crl-cron.log 2>&1"
    if [ ! -d $INSTALL_ROOT/edg/var/log ]; then
	mkdir -p $INSTALL_ROOT/edg/var/log
    fi
    echo "Now updating the CRLs - this may take a few minutes..."
    $INSTALL_ROOT/edg/etc/cron/edg-fetch-crl-cron >> $INSTALL_ROOT/edg/var/log/edg-fetch-crl-cron.log 2>&1

fi

return 0
}


config_replica_manager

config_replica_manager(){

# SE_HOST and CE_HOST are not strictly required
requires BDII_HOST

se_host="${SE_LIST%% *}"

INSTALL_ROOT=${INSTALL_ROOT:-/opt}

if [ -f ${INSTALL_ROOT}/edg/etc/edg-replica-manager/edg-replica-manager.conf.values_local ]; then
        mv -f ${INSTALL_ROOT}/edg/etc/edg-replica-manager/edg-replica-manager.conf.values_local /tmp/edg-replica-manager.conf.values_local.orig
fi

cat <<EOF > ${INSTALL_ROOT}/edg/etc/edg-replica-manager/edg-replica-manager.conf.values_local
@EDG.LOCATION@|$INSTALL_ROOT/edg|location of edg middleware
@LOCALDOMAIN@|`hostname -d`|the local domain
@DEFAULT.SE@|$se_host|the host of the close SE
@DEFAULT.CE@|$CE_HOST|the host of the close CE
@INFOSERVICE@|MDS|The info provider to use. It can be Stub, MDS or RGMA
@RLS.MODE@|LrcOnly|The mode the RLS should be run in. LrcOnly or WithRli
@STUBFILE@||The properties file for the static file - only needed in Stub mode
@MDS.HOST@|$BDII_HOST|The host of the MDS info provider
@MDS.PORT@|2170|The port of the MDS info provider
@ROS.FAILURE@|false|Fail if no ROS is available
@CONF.GCC@|_gcc3_2_2|The gcc suffix as used on the build box (empty for 2.95, _gcc3_2_2 for 3.2.)
@IGNORE.PREFIX@|true|Whether the RM will ignore the lfn and guid prefix.
@GRIDFTP.DCAU@|false|Does GridFTP use Data Channel Authentication (DCAU)
@GRIDFTP.STREAMS.SMALL@|1|The default number of stream to use for a small file
@GRIDFTP.STREAMS.BIG@|3|The default number of stream to use for a big file
@GRIDFTP.FILESIZE.THRESHOLD@|100|The Threshold (in MB) above which a file to transfer is considered "big"
EOF

oldEDG_LOCATION=$EDG_LOCATION
oldEDG_LOCATION_VAR=$EDG_LOCATION_VAR
export EDG_LOCATION=${INSTALL_ROOT}/edg
export EDG_LOCATION_VAR=${INSTALL_ROOT}/edg/var

${INSTALL_ROOT}/edg/sbin/edg-replica-manager-configure \
${INSTALL_ROOT}/edg/etc/edg-replica-manager/edg-replica-manager.conf.values_local >> $YAIM_LOG

export EDG_LOCATION=$oldEDG_LOCATION
export EDG_LOCATION_VAR=$oldEDG_LOCATION_VAR

return 0
}


config_tar_user_env

function config_tar_user_env () {

INSTALL_ROOT=${INSTALL_ROOT:-/opt}

if ( ! central_certs ); then
    x509_cert_dir=${INSTALL_ROOT}/etc/grid-security/certificates
else
    x509_cert_dir=${X509_CERT_DIR:-/etc/grid-security/certificates}
fi

if [ ! -d /etc/grid-security/vomsdir -a -d $INSTALL_ROOT/etc/grid-security/vomsdir ]; then
    x509_voms_dir=${INSTALL_ROOT}/etc/grid-security/vomsdir
else
    x509_voms_dir=${X509_VOMS_DIR:-/etc/grid-security/vomsdir}
fi

# If GLOBUS_TCP_PORT_RANGE is unset, give it a good default
GLOBUS_TCP_PORT_RANGE=${GLOBUS_TCP_PORT_RANGE:-"20000 25000"}

####################################
# sh
####################################

cat > $INSTALL_ROOT/etc/profile.d/grid_env.sh <<EOF

# Edit this line to reflect the mountpoint of your middleware
INSTALL_ROOT="$INSTALL_ROOT"
EOF

cat >> $INSTALL_ROOT/etc/profile.d/grid_env.sh <<'EOF'

# Leave the rest...

if [ "${LCG_ENV_SET}" ]; then return 0; fi

# Root directory for EDG software. (mandatory)
# Usual value: /opt/edg
EDG_LOCATION=${INSTALL_ROOT}/edg
# Directory for machine-specific files.
# Usual value: $EDG_LOCATION/var
EDG_LOCATION_VAR=$EDG_LOCATION/var
# World writable directory for temporary files. (mandatory)
# Usual value: /tmp
EDG_TMP=/tmp
EDG_WL_LOCATION=$EDG_LOCATION


# Usual value: /opt/lcg
LCG_LOCATION=${INSTALL_ROOT}/lcg
# Directory for machine-specific files.
# Usual value: $LCG_LOCATION/var
LCG_LOCATION_VAR=$LCG_LOCATION/var
# World writable directory for temporary files. (mandatory)
# Usual value: /tmp
LCG_TMP=/tmp

GLOBUS_LOCATION=${INSTALL_ROOT}/globus
#GLOBUS_CONFIG=/etc/globus.conf
GLOBUS_CONFIG=/dev/null

GPT_LOCATION=${INSTALL_ROOT}/gpt

GLITE_LOCATION=${INSTALL_ROOT}/glite
GLITE_LOCATION_VAR=${GLITE_LOCATION}/var

for i in ${INSTALL_ROOT}/etc/env.d/*.sh; do
       if [ -x $i ]; then
           . $i
       fi
done

LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${INSTALL_ROOT}/edg/externals/lib

if [ -d ${INSTALL_ROOT}/gcc-3.2.2/lib ]; then
        LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${INSTALL_ROOT}/gcc-3.2.2/lib
fi

RGMA_HOME="${INSTALL_ROOT}/glite"

export EDG_LOCATION EDG_LOCATION_VAR EDG_TMP EDG_WL_LOCATION LCG_LOCATION LCG_LOCATION_VAR LCG_TMP JAVA_HOME GLOBUS_LOCATION GLOBUS_CONFIG GPT_LOCATION GLITE_LOCATION GLITE_LOCATION_VAR LD_LIBRARY_PATH RGMA_HOME

EOF

cat >> $INSTALL_ROOT/etc/profile.d/grid_env.sh <<EOF

GLOBUS_TCP_PORT_RANGE="${GLOBUS_TCP_PORT_RANGE}"
export GLOBUS_TCP_PORT_RANGE

X509_CERT_DIR=$x509_cert_dir
X509_VOMS_DIR=$x509_voms_dir
export X509_CERT_DIR X509_VOMS_DIR
EOF

####################################
# csh
####################################

cat > $INSTALL_ROOT/etc/profile.d/grid_env.csh <<EOF

# Edit this line to reflect the mountpoint of your middleware
setenv INSTALL_ROOT "$INSTALL_ROOT"
EOF

cat >> $INSTALL_ROOT/etc/profile.d/grid_env.csh <<'EOF'

# Leave the rest...

if ( $?LCG_ENV_SET ) then
    return 0
endif

# Root directory for EDG software. (mandatory)
# Usual value: /opt/edg
setenv EDG_LOCATION ${INSTALL_ROOT}/edg
# Directory for machine-specific files.
# Usual value: $EDG_LOCATION/var
setenv EDG_LOCATION_VAR $EDG_LOCATION/var
# World writable directory for temporary files. (mandatory)
# Usual value: /tmp
setenv EDG_TMP /tmp
setenv EDG_WL_LOCATION $EDG_LOCATION


# Usual value: /opt/lcg
setenv LCG_LOCATION ${INSTALL_ROOT}/lcg
# Directory for machine-specific files.
# Usual value: $LCG_LOCATION/var
setenv LCG_LOCATION_VAR $LCG_LOCATION/var
# World writable directory for temporary files. (mandatory)
# Usual value: /tmp
setenv LCG_TMP /tmp

setenv GLOBUS_LOCATION ${INSTALL_ROOT}/globus
#setenv GLOBUS_CONFIG /etc/globus.conf
setenv GLOBUS_CONFIG /dev/null

setenv GPT_LOCATION ${INSTALL_ROOT}/gpt

setenv GLITE_LOCATION ${INSTALL_ROOT}/glite
setenv GLITE_LOCATION_VAR ${GLITE_LOCATION}/var

foreach i (${INSTALL_ROOT}/etc/env.d/*.csh)
       if ( -x $i ) then
           source $i
       endif
end

setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:${INSTALL_ROOT}/edg/externals/lib

if ( -d ${INSTALL_ROOT}/gcc-3.2.2/lib ) then
        setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:${INSTALL_ROOT}/gcc-3.2.2/lib
endif

setenv RGMA_HOME "${INSTALL_ROOT}/glite"

EOF

cat >>$INSTALL_ROOT/etc/profile.d/grid_env.csh <<EOF
setenv GLOBUS_TCP_PORT_RANGE "$GLOBUS_TCP_PORT_RANGE"

setenv X509_CERT_DIR $x509_cert_dir
setenv X509_VOMS_DIR $x509_voms_dir
EOF

if [ $UID -eq 0 ]; then
	echo "If you want this to be your default middleware installation"
	echo "please run the following command"
	echo "ln -s $INSTALL_ROOT/etc/profile.d/grid_env.sh $INSTALL_ROOT/etc/profile.d/grid_env.csh /etc/profile.d"
else
	echo "IMPORTANT"
	echo "you need to make sure that the correct grid_env.*sh file in"
	echo "$INSTALL_ROOT/etc/profile.d/"
	echo "is sourced when you log in so that your"
	echo "environment is set up correctly"
fi


}


config_lcgenv

config_lcgenv() {

INSTALL_ROOT=${INSTALL_ROOT:-/opt}

if !( echo "${NODE_TYPE_LIST}" | grep TAR > /dev/null ); then
	LCG_ENV_LOC=/etc/profile.d
else
	LCG_ENV_LOC=${INSTALL_ROOT}/etc/env.d
fi

requires BDII_HOST SITE_NAME CE_HOST

if ( ! echo "${NODE_TYPE_LIST}" | grep -q UI ); then
    requires VOS VO__SW_DIR SE_LIST
fi

default_se="${SE_LIST%% *}"

if [ "$default_se" ]; then
    for VO in `echo $VOS | tr '[:lower:]' '[:upper:]'`; do
	if [ "x`eval echo '$'VO_${VO}_DEFAULT_SE`" = "x" ]; then
	    eval VO_${VO}_DEFAULT_SE=$default_se
	fi
    done
fi

########## sh ##########
cat << EOF > ${LCG_ENV_LOC}/lcgenv.sh 
#!/bin/sh
if test "x\${LCG_ENV_SET+x}" = x; then
export LCG_GFAL_INFOSYS=$BDII_HOST:2170
EOF

if [ "$PX_HOST" ]; then
echo "export MYPROXY_SERVER=$PX_HOST" >> ${LCG_ENV_LOC}/lcgenv.sh
fi

if ( echo "${NODE_TYPE_LIST}" | egrep -q 'WN|VOBOX' ); then
    if [ "$SITE_NAME" ]; then
	echo "export SITE_NAME=$SITE_NAME" >> ${LCG_ENV_LOC}/lcgenv.sh
    fi

    if [ "$CE_HOST" ]; then
	echo "export SITE_GIIS_URL=$CE_HOST" >> ${LCG_ENV_LOC}/lcgenv.sh
    fi
fi

if [ -d ${INSTALL_ROOT}/d-cache/srm/bin ]; then
    echo export PATH="\${PATH}:${INSTALL_ROOT}/d-cache/srm/bin:${INSTALL_ROOT}/d-cache/dcap/bin" >> ${LCG_ENV_LOC}/lcgenv.sh
fi

if [ -d ${INSTALL_ROOT}/d-cache/dcap/lib ]; then
    echo export LD_LIBRARY_PATH=\${LD_LIBRARY_PATH}:${INSTALL_ROOT}/d-cache/dcap/lib >> ${LCG_ENV_LOC}/lcgenv.sh
fi

if [ -d ${INSTALL_ROOT}/d-cache/srm ]; then
    echo export SRM_PATH=${INSTALL_ROOT}/d-cache/srm >> ${LCG_ENV_LOC}/lcgenv.sh
fi

if [ "$EDG_WL_SCRATCH" ]; then
    echo "export EDG_WL_SCRATCH=$EDG_WL_SCRATCH"  >> ${LCG_ENV_LOC}/lcgenv.sh
fi 

for VO in `echo $VOS | tr '[:lower:]' '[:upper:]'`; do

    default_se=`eval echo '$'VO_${VO}_DEFAULT_SE`
    if [ "$default_se" ]; then
	echo "export VO_${VO}_DEFAULT_SE=$default_se" >> ${LCG_ENV_LOC}/lcgenv.sh
    fi

    if ( ! echo "${NODE_TYPE_LIST}" | grep -q UI ); then
	sw_dir=`eval echo '$'VO_${VO}_SW_DIR`
	if [ "$sw_dir" ]; then
	    echo "export VO_${VO}_SW_DIR=$sw_dir" >> ${LCG_ENV_LOC}/lcgenv.sh
	fi
    fi

done

if [ "$VOBOX_HOST" ]; then
   requires GSSKLOG 	
   if  [ "${GSSKLOG}x" == "yesx" ]; then	
      requires GSSKLOG_SERVER	
      echo "export GSSKLOG_SERVER=$GSSKLOG_SERVER" >> ${LCG_ENV_LOC}/lcgenv.sh
   fi	
fi

if [ "${DPM_HOST}" ]; then
    echo "export DPNS_HOST=${DPM_HOST}" >> ${LCG_ENV_LOC}/lcgenv.sh
    echo "export DPM_HOST=${DPM_HOST}" >> ${LCG_ENV_LOC}/lcgenv.sh
fi

if [ "$GLOBUS_TCP_PORT_RANGE" ]; then
    echo "export MYPROXY_TCP_PORT_RANGE=\"${GLOBUS_TCP_PORT_RANGE/ /,}\"" >> ${LCG_ENV_LOC}/lcgenv.sh
fi

if ( echo $NODE_TYPE_LIST | egrep -q UI ); then
    cat << EOF >> ${LCG_ENV_LOC}/lcgenv.sh
if [ "x\$X509_USER_PROXY" = "x" ]; then
    export X509_USER_PROXY=/tmp/x509up_u\$(id -u)
fi
EOF
fi

echo fi >> ${LCG_ENV_LOC}/lcgenv.sh
########## sh ##########


########## csh ##########
cat << EOF > ${LCG_ENV_LOC}/lcgenv.csh
#!/bin/csh
if ( ! \$?LCG_ENV_SET ) then
setenv LCG_GFAL_INFOSYS $BDII_HOST:2170
EOF

if [ "$PX_HOST" ]; then
echo "setenv MYPROXY_SERVER $PX_HOST" >>  ${LCG_ENV_LOC}/lcgenv.csh
fi

if ( echo "${NODE_TYPE_LIST}" | egrep -q 'WN|VOBOX' ); then
    if [ "$SITE_NAME" ]; then
	echo "setenv SITE_NAME $SITE_NAME" >>  ${LCG_ENV_LOC}/lcgenv.csh
    fi

    if [ "$CE_HOST" ]; then
	echo "setenv SITE_GIIS_URL $CE_HOST" >>  ${LCG_ENV_LOC}/lcgenv.csh
    fi
fi

if [ -d ${INSTALL_ROOT}/d-cache/srm/bin ]; then
    echo setenv PATH "\${PATH}:${INSTALL_ROOT}/d-cache/srm/bin:${INSTALL_ROOT}/d-cache/dcap/bin" >> ${LCG_ENV_LOC}/lcgenv.csh
fi

if [ -d ${INSTALL_ROOT}/d-cache/dcap/lib ]; then
    echo setenv LD_LIBRARY_PATH \${LD_LIBRARY_PATH}:${INSTALL_ROOT}/d-cache/dcap/lib >> ${LCG_ENV_LOC}/lcgenv.csh
fi

if [ -d ${INSTALL_ROOT}/d-cache/srm ]; then
    echo setenv SRM_PATH ${INSTALL_ROOT}/d-cache/srm >> ${LCG_ENV_LOC}/lcgenv.csh
fi

if [ "$EDG_WL_SCRATCH" ]; then
    echo "setenv EDG_WL_SCRATCH $EDG_WL_SCRATCH"  >> ${LCG_ENV_LOC}/lcgenv.csh
fi                                                                                

for VO in `echo $VOS | tr '[:lower:]' '[:upper:]'`; do

    default_se=`eval echo '$'VO_${VO}_DEFAULT_SE`
    if [ "$default_se" ]; then
	echo "setenv VO_${VO}_DEFAULT_SE $default_se" >> ${LCG_ENV_LOC}/lcgenv.csh
    fi

    if ( ! echo "${NODE_TYPE_LIST}" | grep -q UI ); then
	sw_dir=`eval echo '$'VO_${VO}_SW_DIR`
	if [ "$sw_dir" ]; then
	    echo "setenv VO_${VO}_SW_DIR $sw_dir" >> ${LCG_ENV_LOC}/lcgenv.csh
	fi
    fi

done

if [ "$VOBOX_HOST" ]; then
   requires GSSKLOG	
   if [ "${GSSKLOG}x" == "yesx" ]; then	
   requires GSSKLOG_SERVER
      echo "setenv GSSKLOG_SERVER $GSSKLOG_SERVER" >> ${LCG_ENV_LOC}/lcgenv.csh
   fi	
fi


if [ "${DPM_HOST}" ]; then
    echo "setenv DPNS_HOST ${DPM_HOST}" >> ${LCG_ENV_LOC}/lcgenv.csh
    echo "setenv DPM_HOST ${DPM_HOST}" >> ${LCG_ENV_LOC}/lcgenv.csh
fi

if [ "$GLOBUS_TCP_PORT_RANGE" ]; then
    echo "setenv MYPROXY_TCP_PORT_RANGE \"${GLOBUS_TCP_PORT_RANGE/ /,}\"" >> ${LCG_ENV_LOC}/lcgenv.csh
fi

if ( echo $NODE_TYPE_LIST | egrep -q UI ); then
    cat << EOF >> ${LCG_ENV_LOC}/lcgenv.csh
if ( ! \$?X509_USER_PROXY ) then
   setenv X509_USER_PROXY /tmp/x509up_u\`id -u\`
endif
EOF
fi

echo endif >> ${LCG_ENV_LOC}/lcgenv.csh
########## csh ##########

chmod a+xr ${LCG_ENV_LOC}/lcgenv.csh
chmod a+xr ${LCG_ENV_LOC}/lcgenv.sh

return 0
}


config_java

function config_java () {

INSTALL_ROOT=${INSTALL_ROOT:-/opt} 

# If JAVA_LOCATION is not set by the admin, take a guess
if [ -z "$JAVA_LOCATION" ]; then
    java=`rpm -qa | grep j2sdk-` || java=`rpm -qa | grep j2re`
    if [ "$java" ]; then
	JAVA_LOCATION=`rpm -ql $java | egrep '/bin/java$' | sort | head -1 | sed 's#/bin/java##'`
    fi
fi

if [ ! "$JAVA_LOCATION" -o ! -d "$JAVA_LOCATION" ]; then
   echo "Please check your value for JAVA_LOCATION"
   return 1
fi

if ( echo "${NODE_TYPE_LIST}" | grep TAR > /dev/null ); then

# We're configuring a relocatable distro

    if [ ! -d ${INSTALL_ROOT}/edg/etc/profile.d ]; then
	mkdir -p ${INSTALL_ROOT}/edg/etc/profile.d/
    fi

    cat  > $INSTALL_ROOT/edg/etc/profile.d/j2.sh <<EOF

JAVA_HOME=$JAVA_LOCATION
export JAVA_HOME
EOF

    cat  > $INSTALL_ROOT/edg/etc/profile.d/j2.csh <<EOF

setenv JAVA_HOME $JAVA_LOCATION
EOF

    chmod a+rx $INSTALL_ROOT/edg/etc/profile.d/j2.sh
    chmod a+rx $INSTALL_ROOT/edg/etc/profile.d/j2.csh

    return 0

fi # end of relocatable stuff

# We're root and it's not a relocatable

if [ ! -d /etc/java ]; then
    mkdir /etc/java
fi

echo "export JAVA_HOME=$JAVA_LOCATION" > /etc/java/java.conf
echo "export JAVA_HOME=$JAVA_LOCATION" > /etc/java.conf
chmod +x /etc/java/java.conf 

#This hack is here due to SL and the java profile rpms, Laurence Field

if [ ! -d ${INSTALL_ROOT}/edg/etc/profile.d ]; then
    mkdir -p ${INSTALL_ROOT}/edg/etc/profile.d/
fi

cat << EOF > $INSTALL_ROOT/edg/etc/profile.d/j2.sh
if [ -z "\$PATH" ]; then
   export PATH=${JAVA_LOCATION}/bin
else
   export PATH=${JAVA_LOCATION}/bin:\${PATH}
fi
EOF

chmod a+rx $INSTALL_ROOT/edg/etc/profile.d/j2.sh

cat << EOF > $INSTALL_ROOT/edg/etc/profile.d/j2.csh
if ( \$?PATH ) then
    setenv PATH ${JAVA_LOCATION}/bin:\${PATH}
else
    setenv PATH ${JAVA_LOCATION}/bin
endif
EOF

chmod a+rx $INSTALL_ROOT/edg/etc/profile.d/j2.csh

return 0

}


config_rgma_client

config_rgma_client(){

requires MON_HOST REG_HOST

INSTALL_ROOT=${INSTALL_ROOT:-/opt} 

# NB java stuff now in config_java, which must be run before

export RGMA_HOME=${INSTALL_ROOT}/glite

# in order to use python from userdeps.tgz we need to source the env
if ( echo "${NODE_TYPE_LIST}" | grep TAR > /dev/null ); then
    . $INSTALL_ROOT/etc/profile.d/grid_env.sh
fi

${RGMA_HOME}/share/rgma/scripts/rgma-setup.py --secure=yes --server=${MON_HOST} --registry=${REG_HOST} --schema=${REG_HOST} 

cat << EOF > ${INSTALL_ROOT}/edg/etc/profile.d/edg-rgma-env.sh
export RGMA_HOME=${INSTALL_ROOT}/glite
export APEL_HOME=${INSTALL_ROOT}/glite

echo \$PYTHONPATH | grep -q ${INSTALL_ROOT}/glite/lib/python && isthere=1 || isthere=0
if [ \$isthere = 0 ]; then
    if [ -z \$PYTHONPATH ]; then
        export PYTHONPATH=${INSTALL_ROOT}/glite/lib/python
    else
        export PYTHONPATH=\$PYTHONPATH:${INSTALL_ROOT}/glite/lib/python
    fi
fi

echo \$LD_LIBRARY_PATH | grep -q ${INSTALL_ROOT}/glite/lib && isthere=1 || isthere=0
if [ \$isthere = 0 ]; then
    if [ -z \$LD_LIBRARY_PATH ]; then
        export LD_LIBRARY_PATH=${INSTALL_ROOT}/glite/lib
    else
        export LD_LIBRARY_PATH=\$LD_LIBRARY_PATH:${INSTALL_ROOT}/glite/lib
    fi
fi
EOF

chmod a+rx ${INSTALL_ROOT}/edg/etc/profile.d/edg-rgma-env.sh

cat << EOF > ${INSTALL_ROOT}/edg/etc/profile.d/edg-rgma-env.csh
setenv RGMA_HOME ${INSTALL_ROOT}/glite
setenv APEL_HOME ${INSTALL_ROOT}/glite

echo \$PYTHONPATH | grep -q ${INSTALL_ROOT}/glite/lib/python && set isthere=1 || set isthere=0
if ( \$isthere == 0 ) then
    if ( -z \$PYTHONPATH ) then
        setenv PYTHONPATH ${INSTALL_ROOT}/glite/lib/python
    else
        setenv PYTHONPATH \$PYTHONPATH\:${INSTALL_ROOT}/glite/lib/python
    endif
endif

echo \$LD_LIBRARY_PATH | grep -q ${INSTALL_ROOT}/glite/lib && set isthere=1 || set isthere=0
if ( \$isthere == 0 ) then
    if ( -z \$LD_LIBRARY_PATH ) then
        setenv LD_LIBRARY_PATH ${INSTALL_ROOT}/glite/lib
    else
        setenv LD_LIBRARY_PATH \$LD_LIBRARY_PATH\:${INSTALL_ROOT}/glite/lib
    endif
endif
EOF

chmod a+rx  ${INSTALL_ROOT}/edg/etc/profile.d/edg-rgma-env.csh

return 0
}


config_fts_client

function config_fts_client () {

INSTALL_ROOT=${INSTALL_ROOT:-/opt} 

if [ -z "$FTS_SERVER_URL" ]; then
    return 0
fi

cat > ${INSTALL_ROOT}/glite/etc/services.xml <<EOF
<?xml version="1.0" encoding="UTF-8"?>

<services>

  <service name="EGEEfts">
    <parameters>
      <endpoint>${FTS_SERVER_URL}/services/FileTransfer</endpoint>
      <type>org.glite.FileTransfer</type>
      <version>3.0.0</version>
    </parameters>
  </service>

  <service name="EGEEchannel">
    <parameters>
      <endpoint>${FTS_SERVER_URL}/services/ChannelManagement</endpoint>
      <type>org.glite.ChannelManagement</type>
      <version>3.0.0</version>
    </parameters>
  </service>

</services>
EOF

return 0

}


config_gsissh

config_gsissh(){

INSTALL_ROOT=${INSTALL_ROOT:-/opt}
export GLOBUS_LOCATION=${INSTALL_ROOT}/globus
export PERLLIB=${PERLLIB}:${INSTALL_ROOT}/gpt/lib/perl


#configuring gsissh server

if [ -f ${INSTALL_ROOT}/globus/setup/gsi_openssh_setup/setup-openssh ]; then
        ${INSTALL_ROOT}/globus/setup/gsi_openssh_setup/setup-openssh
else
        echo "Impossible to configure gsissh: setup-openssh configuration script not found"
        exit 1
fi

if ( echo "${NODE_TYPE_LIST}" | grep VOBOX > /dev/null ); then

requires VOBOX_PORT 

cat ${INSTALL_ROOT}/globus/sbin/SXXsshd | sed "s/SSHD_ARGS=\"\"/SSHD_ARGS=\"-p ${VOBOX_PORT}\"/" > ${INSTALL_ROOT}/globus/sbin/SXXsshd.TMP

if [ $? == 0 ]; then
   mv ${INSTALL_ROOT}/globus/sbin/SXXsshd.TMP ${INSTALL_ROOT}/globus/sbin/SXXsshd
   chmod +x ${INSTALL_ROOT}/globus/sbin/SXXsshd
else
   echo "Unable to modify GSISSH startup script."
   exit 1
fi


if [ "x`grep LCG ${INSTALL_ROOT}/globus/etc/ssh/sshd_config`" = "x"  ]; then
cat <<EOF >>${INSTALL_ROOT}/globus/etc/ssh/sshd_config
#####################################
#ADDs ON from LCG

PermitRootLogin no
RSAAuthentication no
PubkeyAuthentication no
PasswordAuthentication no
ChallengeResponseAuthentication no
#####################################

EOF

fi 


# install the service under /etc/init.d and configure it


if  [ ! -f /etc/init.d/gsisshd ]; then 
	ln -s ${INSTALL_ROOT}/globus/sbin/SXXsshd /etc/init.d/gsisshd
fi 

/sbin/chkconfig --add gsisshd
/sbin/chkconfig gsisshd on
/sbin/service gsisshd restart

fi


#configure the client

if [ ! -d ${INSTALL_ROOT}/globus/etc/ssh ]; then
    mkdir -p ${INSTALL_ROOT}/globus/etc/ssh
fi


if [ "x`grep LCG ${INSTALL_ROOT}/globus/etc/ssh/ssh_config`" = "x"  ]; then

cat <<EOF >>${INSTALL_ROOT}/globus/etc/ssh/ssh_config
#####################################
#ADDs ON from LCG

GssapiAuthentication yes
GssapiKeyExchange yes
GssapiDelegateCredentials yes

#####################################
EOF

fi 

}
