#
# =================================================================
-# Installation instructions, up to date for 2.2.0 version
+# This is SALOME2 version 3.1 alpha
# =================================================================
#
-
-You'll find here generic instructions for installing the SALOME2 platform.
-
-
-1. Quick Overview
- --------------
-
-First of all, you have to check (or install if needed) the dependant
-software programs on your system. These programs are:
-
-- common development tools as gcc, automake, autoconf and libtools.
-- third party softwares used in SALOME building or runtime process
- (python, OCC, VTK, ...)
-
-Further details can be found in sections [2] and [3].
-
-If the dependencies are installed on your system, then you have to set
-your shell environment to get access to the software components
-(cf. [4]. "Preparing the shell environment").
-
-The next step is to install the KERNEL (cf. [5] "Installing KERNEL"):
-$ mkdir <kernel_build>
-$ mkdir <kernel_install>
-$ cd <kernel_src>
-$ ./build_configure
-$ cd <kernel_build>
-$ <kernel_src>/configure --prefix=<kernel_install>
-$ make
-$ make install
-
-Then, the SALOME components GEOM, MED, VISU, ... can be installed
-with a similar procedure (cf. [6]).
-
-Eventually, the platform can be run by executing the shell script
-runSalome (cf. [7]). Here, somme additionnal variables have to be set
-to describe the SALOME runtime configuration (<COMPONENT>_ROOT_DIR,
-OMNIORB_CONFIG)
-
-The following provides you with specific instructions for each step.
-
-
-2. System configuration
- --------------------
-
-We suggest the following configuration for building process:
-- gcc-3.2 (3.2.3 if gcc3-2 is not the native compiler of your distribution)
-- automake-1.9 (only aclocal is used)
-- autoconf-2.59
-- libtool-1.5.6
-
-remarks :
-- This is the minimum level of automake,autoconf & libtool, if you need
- to compile all the third party softwares (included OpenCascade 5.2).
-- SALOME2 is compiled and tested on a RedHat 8.0 version, with
- - gcc-3.2
- and
- - automake-1.6.3
- - autoconf-2.53
- - libtool-1.4.2
- This level of automake,autoconf & libtool is OK if you do not need to
- recompile OpenCascade 5.2.
-
-3. Third-party dependencies
- ------------------------
-
-The SALOME platform relies on a set of third-party softwares. The
-current version depends on:
-
- CAS-5.2 OpenCascade (try binaries,a source patch is needed)
- PyQt-3.3.2 Python-Qt Wrapper
- Python-2.2.2 Python interpreter
- SWIG-1.3.17 SWIG library
- VTK-4.2.2 VTK 3D-viewer
- boost-1_31_0 C++ library (only include templates are used)
- hdf5-1.4.4 Files Database library
- med-2.1.6 MED Data Format support for file records
- omniORB-3.0.5 ORB used in SALOME
- qt-x11-free-3.0.5 Qt library
- qwt-0.4.1 Graph components for Qt
- sip-3.3.2 langage binding software
- tcl8.3 Tcl interpreter
- tk8.3 Tk widgets
-
-And, in order to build the documentation:
-
- doxygen-1.3-rc2
- graphviz-1.9
-
-Additionnal software may be installed for optional features:
-
- netgen4.3_calibre3_gcc3.2.3.tgz
- tix8.1.4_calibre3_gcc3.2.3.tgz
- openpbs-2.3.16
- lsf-???
-
-######
-###### To Do -------------------------
-######
-- Instructions for installing these software programs can be found in a
- special note doc/configuration_examples/install-prerequis.
-- Installation shell scripts are also provided.
- These scripts have to be adapted to your own configuration.
-
-- See doc/configuration_examples/*
-######
-###### To Do -------------------------
-######
-
-In the following, we assume that all the third-party softwares are
-installed in the same root directory, named <salomeroot>/prerequis.
-Then, your file system should probably look like:
-
-<salomeroot>/prerequis/Python-2.2.2
-<salomeroot>/prerequis/omniORB-3.0.5
-<salomeroot>/prerequis/qt-x11-free-3.0.5
-...
-
-
-4. Preparing the shell environment
- -------------------------------
-
-Some variables have to be set to get acces to third-party software
-components (include files, executable, library, ...) during building
-process and runtime.
-
-The shell file prerequis.sh, embedded in the KERNEL source package,
-provides a template for setting those variables. In this example, all the
-softwares are supposed to be installed in the same root directory,
-named here INSTALLROOT.
-
-Copy the prerequis.sh in a working directory and adjust the settings
-to your own configuration. To get the shell prepared, just
-execute the following command in the building shell:
-
-$ source prerequis.sh
-
-(we assume here a ksh or bash mode)
-
-
-5. Installing the KERNEL component
- -------------------------------
-
-We use here the notation <kernel_src> to specify the source directory
-of the KERNEL component. The shell environment is supposed to have
-been set (cf. 4).
-
-Installing the KERNEL from a source package needs three directories:
-
-- the source directory, denoted here by <kernel_src>.
-
-- the build directory, denoted by <kernel_build> in the following. This
- directory can't be the same directory as <kernel_src>.
-
-- the install directory, denoted by <kernel_install> in the following. This
- directory can't be the same directory as <kernel_src> or
- <kernel_build>.
-
-The installing process is:
-
-STEP 1: preparing directories
-create the <kernel_build> and the <kernel_install> directories.
-$ mkdir <kernel_build>
-$ mkdir <kernel_install>
-
-STEP 2: build configure script
-go to <kernel_src> directory and generate the "configure" script.
-$ cd <kernel_src>
-$ ./build_configure
-
-If it doesn't work, check your system automake tools as specified in
-section [2].
-
-STEP 3: configure the building process
-go to the build directory and execute the configuration process:
-$ cd <kernel_build>
-$ <kernel_src>/configure --prefix=<kernel_install>
-
-Note that <kernel_install> must be an absolute path.
-
-When the configure process is complete, check the status of
-third-party softwares detection. You should have a status like:
-
----------------------------------------------
-Summary
----------------------------------------------
-
-Configure
- cc : yes
- boost : yes
- lex_yacc : yes
- python : yes
- swig : yes
- threads : yes
- OpenGL : yes
- qt : yes
- vtk : yes
- hdf5 : yes
- med2 : yes
- omniORB : yes
- occ : yes
- sip : yes
- pyqt : yes
- qwt : yes
- doxygen : yes
- graphviz : no
- openpbs : no
- lsf : no
-
-Default ORB : omniORB
-
----------------------------------------------
-
-If a software get a status "no", then it's not "seen" in the system:
-- the software is not installed, or
-- the shell environment is not set correctly.
-
-In this example, the software programs graphviz, openpbs and lsf are not
-installed (optional for most usages).
-
-
-STEP 4 : Building the binary files
-Execute make in the <kernel_build> directory:
-$ make
-
-
-STEP 5: Installing binary files, scripts and documentation
-Execute install target in the <kernel_build> directory:
-$ make install
-
-
-6. Installing the SALOME components
- --------------------------------
-
-Installing a component <COMPONENT> is done by following the same
-instructions as given for the KERNEL, replacing KERNEL by
-<COMPONENT> (build_configure, configure, make, make install).
-
-You just have to be aware of the dependencies between components:
-
-- MED depends on KERNEL
-- GEOM depends on KERNEL
-- SMESH depends on KERNEL, MED, GEOM
-- VISU depends on KERNEL, MED
-- SUPERV depends on KERNEL
-
-For example, installing the component SMESH needs the previous
-installation of the KERNEL component, and then the GEOM and MED components.
-
-The building process uses the variables <COMPONENT>_ROOT_DIR to
-localize the dependant components. The variables must be set to the
-install path directory of the components <COMPONENT> (ex:
-KERNEL_ROOT_DIR=<kernel_install>).
-
-In the above example, the three variables KERNEL_ROOT_DIR,
-GEOM_ROOT_DIR and MED_ROOT_DIR have to be set before configuring the
-building process of the SMESH component (STEP 3).
-
-
-7. Runtime
- -------
-
-To run the SALOME platform, the procedure is:
-- set the shell environment to get acces to third-party softwares
-
-$ source prerequis.sh
-
-- define the SALOME configuration by setting the whole set of
- variables <COMPONENT>_ROOT_DIR. Here, you just have to set the
- kernel and the components you need.
-
-$ export KERNEL_ROOT_DIR=<kernel_install>
-$ export MED_ROOT_DIR=<med_install>
-$ ...
-
-- define the CORBA configuration file by setting the variable
- OMNIORB_CONFIG. This variable must be set to a writable file
- path. The file may be arbitrary chosen and doesn't need to exist
- before running. We suggest:
-
-$ export OMNIORB_CONFIG=$HOME/.omniORB.cfg
-
-- run the SALOME platform by executing the script runSalome
-$ $KERNEL_ROOT_DIR/bin/salome/runSalome
-
-
-8. Suggestions and advices
- ----------------------
-
-For convenience or customization, we suggest the following organisation:
-
-- chose and create a root directory for the SALOME platform, say
- <salomeroot>.
-
-- install the third-party softwares in a sub-directory "prerequis"
-
-- install the SALOME components in a sub-directory "SALOME2"
-
-- make personnal copies of the files prerequis.sh and runSalome in
- <salomeroot>.
-
-$ cp <kernel_src>/prerequis.sh <rundir>/.
-$ cp <kernel_install>/bin/salome/runSalome <rundir>/.
-
-Edit the file prerequis.sh and adjust it to your own configuration.
-
-- define the SALOME2 configuration
-This step consists in setting the KERNEL_ROOT_DIR, the whole set of
-variables <COMPONENT>_ROOT_DIR you need, and the OMNIORB_CONFIG
-variable.
-We suggest to create a shell file envSalome.sh containing those
-settings. Then the configuration consists in loading envSalome.sh in
-the runtime shell:
-
-$ source envSalome.sh
-
-- When installed with this file organisation, running SALOME is done
- with the following shell commands:
-
-$ source <salomeroot>/prerequis.sh
-$ source <salomeroot>/envSalome.sh
-$ ./runSalome
-
-
-# =================================================================
-# End of file
=================================================================
-Installation instructions, up to date for 3.0 version
+General information, for developers and users
=================================================================
+*The html version of this document is produced with docutils*::
-You'll find here generic instructions for installing the SALOME2 platform.
+ rest2html < doc.txt > doc.html
-Summary
--------
+This document corresponds to SALOME2 version 3.1 (alpha).
-`1. Quick Overview`_
+How to install SALOME
+---------------------
-`2. System configuration`_
+See INSTALL_ for general information on the required configuration and
+prerequisites, the compilation procedure, setting up the environment, and the first launch.
-`3. Third-party dependencies`_
+.. _INSTALL: ./doc/INSTALL.html
-`4. Preparing the shell environment`_
+Source code structure and Unit Tests
+----------------------------------------
-`5. Installing the KERNEL component`_
+See UnitTests_ for general information on the structure of the code
+directories, the unit tests associated with the different kinds of classes,
+and how to run the unit tests.
-`6. Installing the SALOME components`_
+.. _UnitTests: ./doc/UnitTests.html
-`7. Runtime`_
+End User documentation
+----------------------
-`8. Suggestions and advices`_
+Link to the end user documentation.
-1. Quick Overview
------------------
-
-First of all, you have to check (or install if needed) the dependant
-software programs on your system. These programs are:
-
-- common development tools as gcc, automake, autoconf and libtools.
-- third party softwares used in SALOME building or runtime process
- (python, OCC, VTK, ...)
-
-Further details can be found in sections [2] and [3].
-
-If the dependencies are installed on your system, then you have to set
-your shell environment to get access to the software components
-(cf. [4]. "Preparing the shell environment").
-
-The next step is to install the KERNEL (cf. [5] "Installing KERNEL"):
-
-::
-
-$ mkdir <kernel_build>
-$ mkdir <kernel_install>
-$ cd <kernel_src>
-$ ./build_configure
-$ cd <kernel_build>
-$ <kernel_src>/configure --prefix=<kernel_install>
-$ make
-$ make install
-
-Then, the SALOME components GEOM, MED, VISU, ... can be installed
-with a similar procedure (cf. [6]).
-
-Eventually, the platform can be run by executing the shell script
-runSalome (cf. [7]). Here, somme additionnal variables have to be set
-to describe the SALOME runtime configuration (<COMPONENT>_ROOT_DIR,
-OMNIORB_CONFIG)
-
-The following provides you with specific instructions for each step.
-
-
-2. System configuration
------------------------
-
-SALOME is compiled and tested on differents platforms with native packages:
-- Debian sarge
-- Mandrake 10.1
-- ...
-
-If you have another platform, we suggest the following configuration
-for building process:
-
-- gcc-3.3.x or 3.4.x
-- automake-1.7 or more (only aclocal is used)
-- autoconf-2.59
-- libtool-1.5.6
-
-remarks:
-
-- This is the minimum level of automake, autoconf and libtool, if you need
- to compile all the third party softwares (included OpenCascade 5.2.x).
-
-3. Third-party dependencies
----------------------------
-
-The SALOME platform relies on a set of third-party softwares. The
-current version depends on the following list
-(versions given here are from Debian Sarge, except OpenCascade, VTK and MED,
-which are not Debian packages):
-
-=================== ===================================================
-CAS-5.2.4 OpenCascade (try binaries,a source patch is needed)
-VTK-4.2.6 VTK 3D-viewer
-PyQt-3.13 Python-Qt Wrapper
-Python-2.3.5 Python interpreter
-SWIG-1.3.24 SWIG library
-boost-1_32_0 C++ library (only include templates are used)
-hdf5-1.6.2 Files Database library
-med-2.2.2 MED Data Format support for file records
-omniORB-4.0.5 ORB used in SALOME
-qt-x11-free-3.3.3 Qt library
-qwt-4.2 Graph components for Qt
-sip4-4.1.1 langage binding software
-=================== ===================================================
-
-And, in order to build the documentation:
-
-=================== ===================================================
-doxygen-1.4.2
-graphviz-2.2.1
-=================== ===================================================
-
-
-Additionnal software may be installed for optional features:
-
-=================== ===================================================
-netgen4.3 + patch
-tix8.1.4
-openpbs-2.3.16
-lsf-???
-=================== ===================================================
-
-
-
-3.1 To Do
-~~~~~~~~~
-- Instructions for installing these software programs can be found in a
- special note doc/configuration_examples/install-prerequis.
-- Installation shell scripts are also provided.
- These scripts have to be adapted to your own configuration.
-
-- See doc/configuration_examples/*
-
-In the following, we assume that all the third-party softwares are
-installed in the same root directory, named <salomeroot>/prerequis.
-Then, your file system should probably look like::
-
- <salomeroot>/prerequis/Python-2.2.2
- <salomeroot>/prerequis/omniORB-3.0.5
- <salomeroot>/prerequis/qt-x11-free-3.0.5
- ...
-
-
-4. Preparing the shell environment
-----------------------------------
-
-Some variables have to be set to get acces to third-party software
-components (include files, executable, library, ...) during building
-process and runtime.
-
-The shell file prerequis.sh, embedded in the KERNEL source package,
-provides a template for setting those variables. In this example, all the
-softwares are supposed to be installed in the same root directory,
-named here INSTALLROOT.
-
-Copy the prerequis.sh in a working directory and adjust the settings
-to your own configuration. To get the shell prepared, just
-execute the following command in the building shell:
-
-$ source prerequis.sh
-
-(we assume here a ksh or bash mode)
-
-
-5. Installing the KERNEL component
-----------------------------------
-
-We use here the notation <kernel_src> to specify the source directory
-of the KERNEL component. The shell environment is supposed to have
-been set (cf. 4).
-
-Installing the KERNEL from a source package needs three directories:
-
-- the source directory, denoted here by <kernel_src>.
-
-- the build directory, denoted by <kernel_build> in the following. This
- directory can't be the same directory as <kernel_src>.
-
-- the install directory, denoted by <kernel_install> in the following. This
- directory can't be the same directory as <kernel_src> or
- <kernel_build>.
-
-The installing process is:
-
-STEP 1:
- preparing directories
-
- create the <kernel_build> and the <kernel_install> directories::
-
- $ mkdir <kernel_build>
- $ mkdir <kernel_install>
-
-STEP 2:
- build configure script
-
- go to <kernel_src> directory and generate the "configure" script::
-
- $ cd <kernel_src>
- $ ./build_configure
-
- If it doesn't work, check your system automake tools as specified in
- section [2].
-
-STEP 3:
- configure the building process
- go to the build directory and execute the configuration process::
-
- $ cd <kernel_build>
- $ <kernel_src>/configure --prefix=<kernel_install>
-
- Note that <kernel_install> must be an absolute path.
-
- When the configure process is complete, check the status of
- third-party softwares detection. You should have a status like::
-
- ---------------------------------------------
- Summary
- ---------------------------------------------
- Configure
- cc : yes
- boost : yes
- lex_yacc : yes
- python : yes
- swig : yes
- threads : yes
- OpenGL : yes
- qt : yes
- vtk : yes
- hdf5 : yes
- med2 : yes
- omniORB : yes
- occ : yes
- sip : yes
- pyqt : yes
- qwt : yes
- doxygen : yes
- graphviz : no
- openpbs : no
- lsf : no
- Default ORB : omniORB
- ----------------------------------------------
-
-If a software get a status "no", then it's not "seen" in the system:
-
-- the software is not installed, or
-- the shell environment is not set correctly.
-
-In this example, the software programs graphviz, openpbs and lsf are not
-installed (optional for most usages).
-
-
-STEP 4 :
- Building the binary files
-
- Execute make in the <kernel_build> directory::
-
- $ make
-
-
-STEP 5:
- Installing binary files, scripts and documentation
-
- Execute install target in the <kernel_build> directory::
-
- $ make install
-
-
-6. Installing the SALOME components
------------------------------------
-
-Installing a component <COMPONENT> is done by following the same
-instructions as given for the KERNEL, replacing KERNEL by
-<COMPONENT> (build_configure, configure, make, make install).
-
-You just have to be aware of the dependencies between components:
-
-- MED depends on KERNEL
-- GEOM depends on KERNEL
-- SMESH depends on KERNEL, MED, GEOM
-- VISU depends on KERNEL, MED
-- SUPERV depends on KERNEL
-
-For example, installing the component SMESH needs the previous
-installation of the KERNEL component, and then the GEOM and MED components.
-
-The building process uses the variables <COMPONENT>_ROOT_DIR to
-localize the dependant components. The variables must be set to the
-install path directory of the components <COMPONENT> (ex:
-KERNEL_ROOT_DIR=<kernel_install>).
-
-In the above example, the three variables KERNEL_ROOT_DIR,
-GEOM_ROOT_DIR and MED_ROOT_DIR have to be set before configuring the
-building process of the SMESH component (STEP 3).
-
-
-7. Runtime
-----------
-
-To run the SALOME platform, the procedure is:
-
-- set the shell environment to get acces to third-party softwares::
-
- $ source prerequis.sh
-
-- define the SALOME configuration by setting the whole set of
- variables <COMPONENT>_ROOT_DIR. Here, you just have to set the
- kernel and the components you need::
-
- $ export KERNEL_ROOT_DIR=<kernel_install>
- $ export MED_ROOT_DIR=<med_install>
- $ ...
-
-- define the CORBA configuration file by setting the variable
- OMNIORB_CONFIG. This variable must be set to a writable file
- path. The file may be arbitrary chosen and doesn't need to exist
- before running. We suggest::
-
- $ export OMNIORB_CONFIG=$HOME/.omniORB.cfg
-
-- run the SALOME platform by executing the script runSalome::
-
- $KERNEL_ROOT_DIR/bin/salome/runSalome
-
-
-8. Suggestions and advices
---------------------------
-
-For convenience or customization, we suggest the following organisation:
-
-- chose and create a root directory for the SALOME platform, say
- <salomeroot>.
-
-- install the third-party softwares in a sub-directory "prerequis"
-
-- install the SALOME components in a sub-directory "SALOME2"
-
-- make personnal copies of the files prerequis.sh and runSalome in
- <salomeroot>::
-
- $ cp <kernel_src>/prerequis.sh <rundir>/.
- $ cp <kernel_install>/bin/salome/runSalome <rundir>/.
-
- Edit the file prerequis.sh and adjust it to your own configuration.
-
-- define the SALOME2 configuration
-
- This step consists in setting the KERNEL_ROOT_DIR, the whole set of
- variables <COMPONENT>_ROOT_DIR you need, and the OMNIORB_CONFIG
- variable.
-
- We suggest to create a shell file envSalome.sh containing those
- settings. Then the configuration consists in loading envSalome.sh in
- the runtime shell::
-
- $ source envSalome.sh
-
-- When installed with this file organisation, running SALOME is done
- with the following shell commands::
-
- $ source <salomeroot>/prerequis.sh
- $ source <salomeroot>/envSalome.sh
- $ ./runSalome
+Developer documentation
+------------------------
+How to generate the developer documentation.
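+
+As a sketch only, based on the INSTALL_ notes above: the reference
+documentation of the KERNEL module is built and installed by the standard
+build procedure (provided doxygen and graphviz were detected at configure
+time), and the reST pages of the doc directory can be converted to html
+with docutils. The file names below are illustrative examples::
+
+  $ cd <kernel_build>
+  $ make && make install     # also builds and installs the reference doc
+  $ rest2html < INSTALL.txt > INSTALL.html   # docutils, as for this file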
--- /dev/null
+
+=================================================================
+Installation instructions, up to date for 3.0 version
+=================================================================
+
+You'll find here generic instructions for installing the SALOME2 platform.
+
+Summary
+-------
+
+`1. Quick Overview`_
+
+`2. System configuration`_
+
+`3. Third-party dependencies`_
+
+`4. Preparing the shell environment`_
+
+`5. Installing the KERNEL component`_
+
+`6. Installing the SALOME components`_
+
+`7. Runtime`_
+
+`8. Suggestions and advice`_
+
+
+1. Quick Overview
+-----------------
+
+First of all, you have to check (or install if needed) the software
+programs that SALOME depends on. These programs are:
+
+- common development tools such as gcc, automake, autoconf and libtool;
+- third-party software used by SALOME during the build process or at
+  runtime (Python, OCC, VTK, ...).
+
+Further details can be found in sections [2] and [3].
+
+If the dependencies are installed on your system, then you have to set
+your shell environment to get access to the software components
+(cf. [4] "Preparing the shell environment").
+
+The next step is to install the KERNEL (cf. [5] "Installing KERNEL"):
+
+::
+
+  $ mkdir <kernel_build>
+  $ mkdir <kernel_install>
+  $ cd <kernel_src>
+  $ ./build_configure
+  $ cd <kernel_build>
+  $ <kernel_src>/configure --prefix=<kernel_install>
+  $ make
+  $ make install
+
+Then, the SALOME components GEOM, MED, VISU, ... can be installed
+with a similar procedure (cf. [6]).
+
+Finally, the platform can be run by executing the shell script
+runSalome (cf. [7]). At this point, some additional variables have to be
+set to describe the SALOME runtime configuration (<COMPONENT>_ROOT_DIR,
+OMNIORB_CONFIG).
+
+The following provides you with specific instructions for each step.
+
+
+2. System configuration
+-----------------------
+
+SALOME is compiled and tested on different platforms with native packages:
+
+- Debian Sarge
+- Mandrake 10.1
+- ...
+
+If you have another platform, we suggest the following configuration for
+the build process (you can check the versions installed on your system as
+shown at the end of this section):
+
+- gcc-3.3.x or 3.4.x
+- automake-1.7 or later (only aclocal is used)
+- autoconf-2.59
+- libtool-1.5.6
+
+Remarks:
+
+- These are the minimum versions of automake, autoconf and libtool needed
+  to compile all the third-party software (including OpenCascade 5.2.x).
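+
+You can check the versions installed on your system with the usual version
+options of these tools, for instance::
+
+  $ gcc --version
+  $ automake --version
+  $ autoconf --version
+  $ libtool --version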
+
+3. Third-party dependencies
+---------------------------
+
+The SALOME platform relies on a set of third-party software packages. The
+current version depends on the following list
+(versions given here are from Debian Sarge, except OpenCascade, VTK and MED,
+which are not Debian packages):
+
+=================== ===================================================
+CAS-5.2.4 OpenCascade (try binaries,a source patch is needed)
+VTK-4.2.6 VTK 3D-viewer
+PyQt-3.13 Python-Qt Wrapper
+Python-2.3.5 Python interpreter
+SWIG-1.3.24 SWIG library
+boost-1_32_0 C++ library (only include templates are used)
+hdf5-1.6.2 Files Database library
+med-2.2.2 MED Data Format support for file records
+omniORB-4.0.5 ORB used in SALOME
+qt-x11-free-3.3.3 Qt library
+qwt-4.2 Graph components for Qt
+sip4-4.1.1          language binding software
+=================== ===================================================
+
+And, in order to build the documentation:
+
+=================== ===================================================
+doxygen-1.4.2
+graphviz-2.2.1
+=================== ===================================================
+
+
+Additional software may be installed for optional features:
+
+=================== ===================================================
+netgen4.3 + patch
+tix8.1.4
+openpbs-2.3.16
+lsf-???
+=================== ===================================================
+
+
+
+3.1 To Do
+~~~~~~~~~
+- Instructions for installing these software programs can be found in a
+ special note doc/configuration_examples/install-prerequis.
+- Installation shell scripts are also provided.
+ These scripts have to be adapted to your own configuration.
+
+- See doc/configuration_examples/*
+
+In the following, we assume that all the third-party software is
+installed in the same root directory, named <salomeroot>/prerequis.
+Then, your file system should probably look like::
+
+  <salomeroot>/prerequis/Python-2.3.5
+  <salomeroot>/prerequis/omniORB-4.0.5
+  <salomeroot>/prerequis/qt-x11-free-3.3.3
+  ...
+
+
+4. Preparing the shell environment
+----------------------------------
+
+Some variables have to be set to give access to the third-party software
+components (include files, executables, libraries, ...) during the build
+process and at runtime.
+
+The shell file prerequis.sh, embedded in the KERNEL source package,
+provides a template for setting those variables. In this example, all the
+software is assumed to be installed in the same root directory,
+named here INSTALLROOT.
+
+Copy prerequis.sh into a working directory and adjust the settings to
+your own configuration. To get the shell prepared, just execute the
+following command in the build shell::
+
+  $ source prerequis.sh
+
+(we assume here a ksh or bash shell)
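+
+As an illustration, the settings in prerequis.sh are of the following kind
+(a hypothetical excerpt only; the actual variables and paths depend on the
+template shipped with the KERNEL sources and on your installation)::
+
+  # hypothetical excerpt - adapt the names and paths to your configuration
+  export INSTALLROOT=/home/user/salome/prerequis
+  export PATH=${INSTALLROOT}/Python-2.3.5/bin:${PATH}
+  export LD_LIBRARY_PATH=${INSTALLROOT}/qt-x11-free-3.3.3/lib:${LD_LIBRARY_PATH}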
+
+
+5. Installing the KERNEL component
+----------------------------------
+
+We use here the notation <kernel_src> to specify the source directory
+of the KERNEL component. The shell environment is assumed to have
+been set (cf. [4]).
+
+Installing the KERNEL from a source package requires three directories:
+
+- the source directory, denoted here by <kernel_src>.
+
+- the build directory, denoted by <kernel_build> in the following. This
+  directory must not be the same as <kernel_src>.
+
+- the install directory, denoted by <kernel_install> in the following. This
+  directory must not be the same as <kernel_src> or <kernel_build>.
+
+The installation process is as follows:
+
+STEP 1:
+ preparing directories
+
+ create the <kernel_build> and the <kernel_install> directories::
+
+ $ mkdir <kernel_build>
+ $ mkdir <kernel_install>
+
+STEP 2:
+ build configure script
+
+ go to <kernel_src> directory and generate the "configure" script::
+
+ $ cd <kernel_src>
+ $ ./build_configure
+
+ If it doesn't work, check your system automake tools as specified in
+ section [2].
+
+STEP 3:
+ configure the building process
+ go to the build directory and execute the configuration process::
+
+ $ cd <kernel_build>
+ $ <kernel_src>/configure --prefix=<kernel_install>
+
+ Note that <kernel_install> must be an absolute path.
+
+ When the configure process is complete, check the status of
+ third-party softwares detection. You should have a status like::
+
+ ---------------------------------------------
+ Summary
+ ---------------------------------------------
+ Configure
+ cc : yes
+ boost : yes
+ lex_yacc : yes
+ python : yes
+ swig : yes
+ threads : yes
+ OpenGL : yes
+ qt : yes
+ vtk : yes
+ hdf5 : yes
+ med2 : yes
+ omniORB : yes
+ occ : yes
+ sip : yes
+ pyqt : yes
+ qwt : yes
+ doxygen : yes
+ graphviz : no
+ openpbs : no
+ lsf : no
+ Default ORB : omniORB
+ ----------------------------------------------
+
+If a software package gets the status "no", it has not been detected on the system:
+
+- the software is not installed, or
+- the shell environment is not set correctly.
+
+In this example, the software packages graphviz, openpbs and lsf are not
+installed (they are optional for most uses).
+
+
+STEP 4:
+ Building the binary files
+
+ Execute make in the <kernel_build> directory::
+
+ $ make
+
+
+STEP 5:
+ Installing binary files, scripts and documentation
+
+ Execute install target in the <kernel_build> directory::
+
+ $ make install
+
+
+6. Installing the SALOME components
+-----------------------------------
+
+Installing a component <COMPONENT> is done by following the same
+instructions as given for the KERNEL, replacing KERNEL by
+<COMPONENT> (build_configure, configure, make, make install).
+
+You just have to be aware of the dependencies between components:
+
+- MED depends on KERNEL
+- GEOM depends on KERNEL
+- SMESH depends on KERNEL, MED, GEOM
+- VISU depends on KERNEL, MED
+- SUPERV depends on KERNEL
+
+For example, installing the SMESH component requires that the KERNEL
+component be installed first, followed by the GEOM and MED components.
+
+The build process uses the variables <COMPONENT>_ROOT_DIR to locate the
+components it depends on. Each variable must be set to the install
+directory of the corresponding component <COMPONENT> (e.g.
+KERNEL_ROOT_DIR=<kernel_install>).
+
+In the above example, the three variables KERNEL_ROOT_DIR,
+GEOM_ROOT_DIR and MED_ROOT_DIR have to be set before configuring the
+build process of the SMESH component (STEP 3), as sketched below.
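+
+For illustration, a minimal sketch of such a configuration step, using the
+same placeholder notation as above (the SMESH and component install
+directories are examples to be adapted)::
+
+  $ export KERNEL_ROOT_DIR=<kernel_install>
+  $ export GEOM_ROOT_DIR=<geom_install>
+  $ export MED_ROOT_DIR=<med_install>
+  $ cd <smesh_build>
+  $ <smesh_src>/configure --prefix=<smesh_install>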
+
+
+7. Runtime
+----------
+
+To run the SALOME platform, the procedure is:
+
+- set the shell environment to get access to the third-party software::
+
+ $ source prerequis.sh
+
+- define the SALOME configuration by setting the whole set of
+ variables <COMPONENT>_ROOT_DIR. Here, you just have to set the
+ kernel and the components you need::
+
+ $ export KERNEL_ROOT_DIR=<kernel_install>
+ $ export MED_ROOT_DIR=<med_install>
+ $ ...
+
+- define the CORBA configuration file by setting the variable
+ OMNIORB_CONFIG. This variable must be set to a writable file
+ path. The file may be arbitrary chosen and doesn't need to exist
+ before running. We suggest::
+
+ $ export OMNIORB_CONFIG=$HOME/.omniORB.cfg
+
+- run the SALOME platform by executing the script runSalome::
+
+ $KERNEL_ROOT_DIR/bin/salome/runSalome
+
+
+8. Suggestions and advice
+--------------------------
+
+For convenience or customization, we suggest the following organisation:
+
+- choose and create a root directory for the SALOME platform, say
+ <salomeroot>.
+
+- install the third-party software in a sub-directory "prerequis"
+
+- install the SALOME components in a sub-directory "SALOME2"
+
+- make personal copies of the files prerequis.sh and runSalome in
+ <salomeroot>::
+
+ $ cp <kernel_src>/prerequis.sh <rundir>/.
+ $ cp <kernel_install>/bin/salome/runSalome <rundir>/.
+
+ Edit the file prerequis.sh and adjust it to your own configuration.
+
+- define the SALOME2 configuration
+
+  This step consists of setting the KERNEL_ROOT_DIR variable, the whole
+  set of <COMPONENT>_ROOT_DIR variables you need, and the OMNIORB_CONFIG
+  variable.
+
+  We suggest creating a shell file envSalome.sh containing those settings
+  (a sketch is given at the end of this section). Then the configuration
+  consists of loading envSalome.sh in the runtime shell::
+
+ $ source envSalome.sh
+
+- When installed with this file organisation, running SALOME is done
+ with the following shell commands::
+
+ $ source <salomeroot>/prerequis.sh
+ $ source <salomeroot>/envSalome.sh
+ $ ./runSalome
+
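+As an illustration, an envSalome.sh file of the kind suggested above could
+look like the following (a hypothetical sketch; adapt the install paths and
+the list of components to your own configuration)::
+
+  # hypothetical sketch of envSalome.sh
+  export KERNEL_ROOT_DIR=<kernel_install>
+  export GEOM_ROOT_DIR=<geom_install>
+  export MED_ROOT_DIR=<med_install>
+  export OMNIORB_CONFIG=${HOME}/.omniORB.cfg
+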
--- /dev/null
+
+=================================================================
+Source code structure and Unit Tests
+=================================================================
+
+You will find here general information on the structure of the code
+directories, the unit tests associated with the different kinds of
+classes, and how to run the unit tests.
+
+1. SALOME KERNEL source code structure
+==========================================
+
+1.1 General structure of KERNEL_SRC
+-----------------------------------
+
+KERNEL_SRC
+
+KERNEL_SRC/adm_local
+
+KERNEL_SRC/bin
+
+KERNEL_SRC/doc
+
+KERNEL_SRC/examples
+
+KERNEL_SRC/idl
+
+KERNEL_SRC/resources
+
+KERNEL_SRC/salome_adm
+
+KERNEL_SRC/share
+
+KERNEL_SRC/src
+
+KERNEL_SRC/tests
+
+
+1.2 Directory src: C++ and Python source code
+---------------------------------------------
+
+1.2.1 Basic services not related to CORBA
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Basics
+ A set of general purpose C++ services.
+
+SALOMELocalTrace
+  A multithreaded trace system that allows tracing messages to standard
+  error or to a file.
+
+CASCatch
+ Exceptions and signal handler.
+
+HDFPersist
+ A C++ interface to HDF.
+
+1.2.2 Basic CORBA services
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Logger
+  A CORBA server that collects the trace messages from the different
+  CORBA processes.
+
+SALOMETraceCollector
+  A multithreaded trace system derived from SALOMELocalTrace that sends
+  messages to the Logger server via CORBA.
+
+Utils
+  A set of general purpose services related to CORBA, such as a basic
+  CORBA exception system.
+
+NamingService
+  C++ and Python interfaces to name, store and retrieve CORBA objects.
+
+GenericObj
+  A generic CORBA interface for CORBA objects, to count distributed
+  references and to allow destruction by the client.
+
+1.2.3 Miscellaneous CORBA servers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Registry
+
+ModuleCatalog
+
+DataTypeCatalog
+
+RessourcesCatalog
+
+ResourcesManager
+
+Notification
+
+NOTIFICATION_SWIG
+
+1.2.4 CORBA Containers for SALOME Modules
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Container
+
+TestContainer
+
+LifeCycleCORBA
+
+LifeCycleCORBA_SWIG
+
+1.2.5 STUDY server and related interfaces and tools
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+SALOMEDSClient
+
+TOOLSDS
+
+SALOMEDSImpl
+
+SALOMEDS
+