From: André Date: Wed, 21 Apr 2010 12:36:13 +0000 (+0200) Subject: Initial Commit X-Git-Tag: V6_4_0rc3~179 X-Git-Url: http://git.salome-platform.org/gitweb/?a=commitdiff_plain;h=5b0506dcc6d9336fc869443fa70975d27c8f71c1;p=modules%2Fadao.git Initial Commit --- 5b0506dcc6d9336fc869443fa70975d27c8f71c1 diff --git a/AUTHORS b/AUTHORS new file mode 100644 index 0000000..e69de29 diff --git a/COPYING b/COPYING new file mode 100644 index 0000000..b1e3f5a --- /dev/null +++ b/COPYING @@ -0,0 +1,504 @@ + GNU LESSER GENERAL PUBLIC LICENSE + Version 2.1, February 1999 + + Copyright (C) 1991, 1999 Free Software Foundation, Inc. + 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + Everyone is permitted to copy and distribute verbatim copies + of this license document, but changing it is not allowed. + +[This is the first released version of the Lesser GPL. It also counts + as the successor of the GNU Library Public License, version 2, hence + the version number 2.1.] + + Preamble + + The licenses for most software are designed to take away your +freedom to share and change it. By contrast, the GNU General Public +Licenses are intended to guarantee your freedom to share and change +free software--to make sure the software is free for all its users. + + This license, the Lesser General Public License, applies to some +specially designated software packages--typically libraries--of the +Free Software Foundation and other authors who decide to use it. You +can use it too, but we suggest you first think carefully about whether +this license or the ordinary General Public License is the better +strategy to use in any particular case, based on the explanations below. + + When we speak of free software, we are referring to freedom of use, +not price. Our General Public Licenses are designed to make sure that +you have the freedom to distribute copies of free software (and charge +for this service if you wish); that you receive source code or can get +it if you want it; that you can change the software and use pieces of +it in new free programs; and that you are informed that you can do +these things. + + To protect your rights, we need to make restrictions that forbid +distributors to deny you these rights or to ask you to surrender these +rights. These restrictions translate to certain responsibilities for +you if you distribute copies of the library or if you modify it. + + For example, if you distribute copies of the library, whether gratis +or for a fee, you must give the recipients all the rights that we gave +you. You must make sure that they, too, receive or can get the source +code. If you link other code with the library, you must provide +complete object files to the recipients, so that they can relink them +with the library after making changes to the library and recompiling +it. And you must show them these terms so they know their rights. + + We protect your rights with a two-step method: (1) we copyright the +library, and (2) we offer you this license, which gives you legal +permission to copy, distribute and/or modify the library. + + To protect each distributor, we want to make it very clear that +there is no warranty for the free library. Also, if the library is +modified by someone else and passed on, the recipients should know +that what they have is not the original version, so that the original +author's reputation will not be affected by problems that might be +introduced by others. + + Finally, software patents pose a constant threat to the existence of +any free program. 
We wish to make sure that a company cannot +effectively restrict the users of a free program by obtaining a +restrictive license from a patent holder. Therefore, we insist that +any patent license obtained for a version of the library must be +consistent with the full freedom of use specified in this license. + + Most GNU software, including some libraries, is covered by the +ordinary GNU General Public License. This license, the GNU Lesser +General Public License, applies to certain designated libraries, and +is quite different from the ordinary General Public License. We use +this license for certain libraries in order to permit linking those +libraries into non-free programs. + + When a program is linked with a library, whether statically or using +a shared library, the combination of the two is legally speaking a +combined work, a derivative of the original library. The ordinary +General Public License therefore permits such linking only if the +entire combination fits its criteria of freedom. The Lesser General +Public License permits more lax criteria for linking other code with +the library. + + We call this license the "Lesser" General Public License because it +does Less to protect the user's freedom than the ordinary General +Public License. It also provides other free software developers Less +of an advantage over competing non-free programs. These disadvantages +are the reason we use the ordinary General Public License for many +libraries. However, the Lesser license provides advantages in certain +special circumstances. + + For example, on rare occasions, there may be a special need to +encourage the widest possible use of a certain library, so that it becomes +a de-facto standard. To achieve this, non-free programs must be +allowed to use the library. A more frequent case is that a free +library does the same job as widely used non-free libraries. In this +case, there is little to gain by limiting the free library to free +software only, so we use the Lesser General Public License. + + In other cases, permission to use a particular library in non-free +programs enables a greater number of people to use a large body of +free software. For example, permission to use the GNU C Library in +non-free programs enables many more people to use the whole GNU +operating system, as well as its variant, the GNU/Linux operating +system. + + Although the Lesser General Public License is Less protective of the +users' freedom, it does ensure that the user of a program that is +linked with the Library has the freedom and the wherewithal to run +that program using a modified version of the Library. + + The precise terms and conditions for copying, distribution and +modification follow. Pay close attention to the difference between a +"work based on the library" and a "work that uses the library". The +former contains code derived from the library, whereas the latter must +be combined with the library in order to run. + + GNU LESSER GENERAL PUBLIC LICENSE + TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION + + 0. This License Agreement applies to any software library or other +program which contains a notice placed by the copyright holder or +other authorized party saying it may be distributed under the terms of +this Lesser General Public License (also called "this License"). +Each licensee is addressed as "you". 
+ + A "library" means a collection of software functions and/or data +prepared so as to be conveniently linked with application programs +(which use some of those functions and data) to form executables. + + The "Library", below, refers to any such software library or work +which has been distributed under these terms. A "work based on the +Library" means either the Library or any derivative work under +copyright law: that is to say, a work containing the Library or a +portion of it, either verbatim or with modifications and/or translated +straightforwardly into another language. (Hereinafter, translation is +included without limitation in the term "modification".) + + "Source code" for a work means the preferred form of the work for +making modifications to it. For a library, complete source code means +all the source code for all modules it contains, plus any associated +interface definition files, plus the scripts used to control compilation +and installation of the library. + + Activities other than copying, distribution and modification are not +covered by this License; they are outside its scope. The act of +running a program using the Library is not restricted, and output from +such a program is covered only if its contents constitute a work based +on the Library (independent of the use of the Library in a tool for +writing it). Whether that is true depends on what the Library does +and what the program that uses the Library does. + + 1. You may copy and distribute verbatim copies of the Library's +complete source code as you receive it, in any medium, provided that +you conspicuously and appropriately publish on each copy an +appropriate copyright notice and disclaimer of warranty; keep intact +all the notices that refer to this License and to the absence of any +warranty; and distribute a copy of this License along with the +Library. + + You may charge a fee for the physical act of transferring a copy, +and you may at your option offer warranty protection in exchange for a +fee. + + 2. You may modify your copy or copies of the Library or any portion +of it, thus forming a work based on the Library, and copy and +distribute such modifications or work under the terms of Section 1 +above, provided that you also meet all of these conditions: + + a) The modified work must itself be a software library. + + b) You must cause the files modified to carry prominent notices + stating that you changed the files and the date of any change. + + c) You must cause the whole of the work to be licensed at no + charge to all third parties under the terms of this License. + + d) If a facility in the modified Library refers to a function or a + table of data to be supplied by an application program that uses + the facility, other than as an argument passed when the facility + is invoked, then you must make a good faith effort to ensure that, + in the event an application does not supply such function or + table, the facility still operates, and performs whatever part of + its purpose remains meaningful. + + (For example, a function in a library to compute square roots has + a purpose that is entirely well-defined independent of the + application. Therefore, Subsection 2d requires that any + application-supplied function or table used by this function must + be optional: if the application does not supply it, the square + root function must still compute square roots.) + +These requirements apply to the modified work as a whole. 
If +identifiable sections of that work are not derived from the Library, +and can be reasonably considered independent and separate works in +themselves, then this License, and its terms, do not apply to those +sections when you distribute them as separate works. But when you +distribute the same sections as part of a whole which is a work based +on the Library, the distribution of the whole must be on the terms of +this License, whose permissions for other licensees extend to the +entire whole, and thus to each and every part regardless of who wrote +it. + +Thus, it is not the intent of this section to claim rights or contest +your rights to work written entirely by you; rather, the intent is to +exercise the right to control the distribution of derivative or +collective works based on the Library. + +In addition, mere aggregation of another work not based on the Library +with the Library (or with a work based on the Library) on a volume of +a storage or distribution medium does not bring the other work under +the scope of this License. + + 3. You may opt to apply the terms of the ordinary GNU General Public +License instead of this License to a given copy of the Library. To do +this, you must alter all the notices that refer to this License, so +that they refer to the ordinary GNU General Public License, version 2, +instead of to this License. (If a newer version than version 2 of the +ordinary GNU General Public License has appeared, then you can specify +that version instead if you wish.) Do not make any other change in +these notices. + + Once this change is made in a given copy, it is irreversible for +that copy, so the ordinary GNU General Public License applies to all +subsequent copies and derivative works made from that copy. + + This option is useful when you wish to copy part of the code of +the Library into a program that is not a library. + + 4. You may copy and distribute the Library (or a portion or +derivative of it, under Section 2) in object code or executable form +under the terms of Sections 1 and 2 above provided that you accompany +it with the complete corresponding machine-readable source code, which +must be distributed under the terms of Sections 1 and 2 above on a +medium customarily used for software interchange. + + If distribution of object code is made by offering access to copy +from a designated place, then offering equivalent access to copy the +source code from the same place satisfies the requirement to +distribute the source code, even though third parties are not +compelled to copy the source along with the object code. + + 5. A program that contains no derivative of any portion of the +Library, but is designed to work with the Library by being compiled or +linked with it, is called a "work that uses the Library". Such a +work, in isolation, is not a derivative work of the Library, and +therefore falls outside the scope of this License. + + However, linking a "work that uses the Library" with the Library +creates an executable that is a derivative of the Library (because it +contains portions of the Library), rather than a "work that uses the +library". The executable is therefore covered by this License. +Section 6 states terms for distribution of such executables. + + When a "work that uses the Library" uses material from a header file +that is part of the Library, the object code for the work may be a +derivative work of the Library even though the source code is not. 
+Whether this is true is especially significant if the work can be +linked without the Library, or if the work is itself a library. The +threshold for this to be true is not precisely defined by law. + + If such an object file uses only numerical parameters, data +structure layouts and accessors, and small macros and small inline +functions (ten lines or less in length), then the use of the object +file is unrestricted, regardless of whether it is legally a derivative +work. (Executables containing this object code plus portions of the +Library will still fall under Section 6.) + + Otherwise, if the work is a derivative of the Library, you may +distribute the object code for the work under the terms of Section 6. +Any executables containing that work also fall under Section 6, +whether or not they are linked directly with the Library itself. + + 6. As an exception to the Sections above, you may also combine or +link a "work that uses the Library" with the Library to produce a +work containing portions of the Library, and distribute that work +under terms of your choice, provided that the terms permit +modification of the work for the customer's own use and reverse +engineering for debugging such modifications. + + You must give prominent notice with each copy of the work that the +Library is used in it and that the Library and its use are covered by +this License. You must supply a copy of this License. If the work +during execution displays copyright notices, you must include the +copyright notice for the Library among them, as well as a reference +directing the user to the copy of this License. Also, you must do one +of these things: + + a) Accompany the work with the complete corresponding + machine-readable source code for the Library including whatever + changes were used in the work (which must be distributed under + Sections 1 and 2 above); and, if the work is an executable linked + with the Library, with the complete machine-readable "work that + uses the Library", as object code and/or source code, so that the + user can modify the Library and then relink to produce a modified + executable containing the modified Library. (It is understood + that the user who changes the contents of definitions files in the + Library will not necessarily be able to recompile the application + to use the modified definitions.) + + b) Use a suitable shared library mechanism for linking with the + Library. A suitable mechanism is one that (1) uses at run time a + copy of the library already present on the user's computer system, + rather than copying library functions into the executable, and (2) + will operate properly with a modified version of the library, if + the user installs one, as long as the modified version is + interface-compatible with the version that the work was made with. + + c) Accompany the work with a written offer, valid for at + least three years, to give the same user the materials + specified in Subsection 6a, above, for a charge no more + than the cost of performing this distribution. + + d) If distribution of the work is made by offering access to copy + from a designated place, offer equivalent access to copy the above + specified materials from the same place. + + e) Verify that the user has already received a copy of these + materials or that you have already sent this user a copy. + + For an executable, the required form of the "work that uses the +Library" must include any data and utility programs needed for +reproducing the executable from it. 
However, as a special exception, +the materials to be distributed need not include anything that is +normally distributed (in either source or binary form) with the major +components (compiler, kernel, and so on) of the operating system on +which the executable runs, unless that component itself accompanies +the executable. + + It may happen that this requirement contradicts the license +restrictions of other proprietary libraries that do not normally +accompany the operating system. Such a contradiction means you cannot +use both them and the Library together in an executable that you +distribute. + + 7. You may place library facilities that are a work based on the +Library side-by-side in a single library together with other library +facilities not covered by this License, and distribute such a combined +library, provided that the separate distribution of the work based on +the Library and of the other library facilities is otherwise +permitted, and provided that you do these two things: + + a) Accompany the combined library with a copy of the same work + based on the Library, uncombined with any other library + facilities. This must be distributed under the terms of the + Sections above. + + b) Give prominent notice with the combined library of the fact + that part of it is a work based on the Library, and explaining + where to find the accompanying uncombined form of the same work. + + 8. You may not copy, modify, sublicense, link with, or distribute +the Library except as expressly provided under this License. Any +attempt otherwise to copy, modify, sublicense, link with, or +distribute the Library is void, and will automatically terminate your +rights under this License. However, parties who have received copies, +or rights, from you under this License will not have their licenses +terminated so long as such parties remain in full compliance. + + 9. You are not required to accept this License, since you have not +signed it. However, nothing else grants you permission to modify or +distribute the Library or its derivative works. These actions are +prohibited by law if you do not accept this License. Therefore, by +modifying or distributing the Library (or any work based on the +Library), you indicate your acceptance of this License to do so, and +all its terms and conditions for copying, distributing or modifying +the Library or works based on it. + + 10. Each time you redistribute the Library (or any work based on the +Library), the recipient automatically receives a license from the +original licensor to copy, distribute, link with or modify the Library +subject to these terms and conditions. You may not impose any further +restrictions on the recipients' exercise of the rights granted herein. +You are not responsible for enforcing compliance by third parties with +this License. + + 11. If, as a consequence of a court judgment or allegation of patent +infringement or for any other reason (not limited to patent issues), +conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. If you cannot +distribute so as to satisfy simultaneously your obligations under this +License and any other pertinent obligations, then as a consequence you +may not distribute the Library at all. 
For example, if a patent +license would not permit royalty-free redistribution of the Library by +all those who receive copies directly or indirectly through you, then +the only way you could satisfy both it and this License would be to +refrain entirely from distribution of the Library. + +If any portion of this section is held invalid or unenforceable under any +particular circumstance, the balance of the section is intended to apply, +and the section as a whole is intended to apply in other circumstances. + +It is not the purpose of this section to induce you to infringe any +patents or other property right claims or to contest validity of any +such claims; this section has the sole purpose of protecting the +integrity of the free software distribution system which is +implemented by public license practices. Many people have made +generous contributions to the wide range of software distributed +through that system in reliance on consistent application of that +system; it is up to the author/donor to decide if he or she is willing +to distribute software through any other system and a licensee cannot +impose that choice. + +This section is intended to make thoroughly clear what is believed to +be a consequence of the rest of this License. + + 12. If the distribution and/or use of the Library is restricted in +certain countries either by patents or by copyrighted interfaces, the +original copyright holder who places the Library under this License may add +an explicit geographical distribution limitation excluding those countries, +so that distribution is permitted only in or among countries not thus +excluded. In such case, this License incorporates the limitation as if +written in the body of this License. + + 13. The Free Software Foundation may publish revised and/or new +versions of the Lesser General Public License from time to time. +Such new versions will be similar in spirit to the present version, +but may differ in detail to address new problems or concerns. + +Each version is given a distinguishing version number. If the Library +specifies a version number of this License which applies to it and +"any later version", you have the option of following the terms and +conditions either of that version or of any later version published by +the Free Software Foundation. If the Library does not specify a +license version number, you may choose any version ever published by +the Free Software Foundation. + + 14. If you wish to incorporate parts of the Library into other free +programs whose distribution conditions are incompatible with these, +write to the author to ask for permission. For software which is +copyrighted by the Free Software Foundation, write to the Free +Software Foundation; we sometimes make exceptions for this. Our +decision will be guided by the two goals of preserving the free status +of all derivatives of our free software and of promoting the sharing +and reuse of software generally. + + NO WARRANTY + + 15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO +WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW. +EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR +OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY +KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR +PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE +LIBRARY IS WITH YOU. 
SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME +THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. + + 16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN +WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY +AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU +FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR +CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE +LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING +RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A +FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF +SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH +DAMAGES. + + END OF TERMS AND CONDITIONS + + How to Apply These Terms to Your New Libraries + + If you develop a new library, and you want it to be of the greatest +possible use to the public, we recommend making it free software that +everyone can redistribute and change. You can do so by permitting +redistribution under these terms (or, alternatively, under the terms of the +ordinary General Public License). + + To apply these terms, attach the following notices to the library. It is +safest to attach them to the start of each source file to most effectively +convey the exclusion of warranty; and each file should have at least the +"copyright" line and a pointer to where the full notice is found. + + <one line to give the library's name and a brief idea of what it does.> + Copyright (C) <year> <name of author> + + This library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + This library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with this library; if not, write to the Free Software + Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + +Also add information on how to contact you by electronic and paper mail. + +You should also get your employer (if you work as a programmer) or your +school, if any, to sign a "copyright disclaimer" for the library, if +necessary. Here is a sample; alter the names: + + Yoyodyne, Inc., hereby disclaims all copyright interest in the + library `Frob' (a library for tweaking knobs) written by James Random Hacker. + + <signature of Ty Coon>, 1 April 1990 + Ty Coon, President of Vice + +That's all there is to it! + + diff --git a/ChangeLog b/ChangeLog new file mode 100644 index 0000000..e69de29 diff --git a/INSTALL b/INSTALL new file mode 100644 index 0000000..e490384 --- /dev/null +++ b/INSTALL @@ -0,0 +1 @@ +SALOME2 : DATASSIM module (data assimilation) diff --git a/Makefile.am b/Makefile.am new file mode 100644 index 0000000..2035ab4 --- /dev/null +++ b/Makefile.am @@ -0,0 +1,34 @@ +# Copyright (C) 2010 CEA/DEN, EDF R&D, OPEN CASCADE +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# + +include $(top_srcdir)/adm_local/unix/make_common_starter.am + +ACLOCAL_AMFLAGS = -I adm_local/unix \ + -I ${KERNEL_ROOT_DIR}/salome_adm/unix/config_files \ + -I ${GUI_ROOT_DIR}/adm_local/unix/config_files + +SUBDIRS = adm_local src + +DISTCLEANFILES = a.out aclocal.m4 configure local-install.sh + +EXTRA_DIST += \ + build_configure \ + clean_configure + +dist-hook: + rm -rf `find $(distdir) -name CVS` + diff --git a/NEWS b/NEWS new file mode 100644 index 0000000..e69de29 diff --git a/README b/README new file mode 100644 index 0000000..e69de29 diff --git a/adm_local/Makefile.am b/adm_local/Makefile.am new file mode 100644 index 0000000..f35273f --- /dev/null +++ b/adm_local/Makefile.am @@ -0,0 +1,24 @@ +# Copyright (C) 2007-2008 CEA/DEN, EDF R&D, OPEN CASCADE +# +# Copyright (C) 2003-2007 OPEN CASCADE, EADS/CCR, LIP6, CEA/DEN, +# CEDRAT, EDF R&D, LEG, PRINCIPIA R&D, BUREAU VERITAS +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +include $(top_srcdir)/adm_local/unix/make_common_starter.am + +SUBDIRS = unix diff --git a/adm_local/unix/make_common_starter.am b/adm_local/unix/make_common_starter.am new file mode 100644 index 0000000..cd85151 --- /dev/null +++ b/adm_local/unix/make_common_starter.am @@ -0,0 +1,96 @@ +# Copyright (C) 2007-2008 CEA/DEN, EDF R&D, OPEN CASCADE +# +# Copyright (C) 2003-2007 OPEN CASCADE, EADS/CCR, LIP6, CEA/DEN, +# CEDRAT, EDF R&D, LEG, PRINCIPIA R&D, BUREAU VERITAS +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# + +# ============================================================ +# The following is to avoid PACKAGE_... 
env variable +# redefinition compilation warnings +# ============================================================ +AM_CXXFLAGS = @KERNEL_CXXFLAGS@ -include SALOMEconfig.h +AM_CPPFLAGS = @KERNEL_CXXFLAGS@ -include SALOMEconfig.h + +# ============================================================ +# This file defines the common definitions used in several +# Makefiles. It must be included, when needed, by each +# Makefile.am. +# ============================================================ +# Standard directory for installation +# +salomeincludedir = $(includedir)/salome +libdir = $(prefix)/lib@LIB_LOCATION_SUFFIX@/salome +bindir = $(prefix)/bin/salome +salomescriptdir = $(bindir) +salomepythondir = $(pythondir)/salome +salomepyexecdir = $(pyexecdir)/salome + +# Directory for installing idl files +salomeidldir = $(prefix)/idl/salome + +# Directory for installing resource files +salomeresdir = $(prefix)/share/salome/resources/@MODULE_NAME@ + +# Directories for installing admin files +admlocaldir = $(prefix)/adm_local +admlocalunixdir = $(admlocaldir)/unix +admlocalm4dir = $(admlocaldir)/unix/config_files + +# Shared modules installation directory +sharedpkgpythondir = $(salomepythondir)/shared_modules + +# Documentation directory +docdir = $(datadir)/doc/salome + +# common rules + +# meta object implementation files generation (moc) +%_moc.cxx: %.h + $(MOC) $< -o $@ + +# translation (*.qm) files generation (lrelease) +%.qm: %.ts + $(LRELEASE) $< -qm $@ + +# resource files generation (qrcc) +qrc_%.cxx: %.qrc + $(QRCC) $< -o $@ -name $(*F) + +# Qt form files generation (uic) +ui_%.h: %.ui + $(UIC) -o $@ $< + +# extra distributed files +EXTRA_DIST = $(MOC_FILES:%_moc.cxx=%.h) $(QRC_FILES:qrc_%.cxx=%.qrc) \ + $(UIC_FILES:ui_%.h=%.ui) $(nodist_salomeres_DATA:%.qm=%.ts) + +# customize clean operation +mostlyclean-local: + rm -f @builddir@/*_moc.cxx + rm -f @builddir@/*.qm + rm -f @builddir@/ui_*.h + rm -f @builddir@/qrc_*.cxx + +# tests +tests: unittest + +unittest: $(UNIT_TEST_PROG) + @if test "x$(UNIT_TEST_PROG)" != "x"; then \ + $(UNIT_TEST_PROG); \ + fi; diff --git a/build_configure b/build_configure new file mode 100755 index 0000000..299c6f7 --- /dev/null +++ b/build_configure @@ -0,0 +1,104 @@ +#!/bin/bash +# Copyright (C) 2007-2008 CEA/DEN, EDF R&D, OPEN CASCADE +# +# Copyright (C) 2003-2007 OPEN CASCADE, EADS/CCR, LIP6, CEA/DEN, +# CEDRAT, EDF R&D, LEG, PRINCIPIA R&D, BUREAU VERITAS +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details.
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +# Tool for updating the list of .in files for the SALOME project +# and regenerating the configure script +# Author : +# Modified by : Alexander BORODIN (OCN) - autotools usage +# Date : 10/10/2002 +# +ORIG_DIR=`pwd` +CONF_DIR=`echo $0 | sed -e "s,[^/]*$,,;s,/$,,;s,^$,.,"` + +######################################################################## +# Test if the KERNEL_ROOT_DIR is set correctly + +if test ! -d "${KERNEL_ROOT_DIR}"; then + echo "failed : KERNEL_ROOT_DIR variable is not correct !" + exit +fi + + +######################################################################## +# Test if the GUI_ROOT_DIR is set correctly + +if test ! -d "${GUI_ROOT_DIR}"; then + echo "failed : GUI_ROOT_DIR variable is not correct !" + exit +fi + +cd ${CONF_DIR} +ABS_CONF_DIR=`pwd` + +######################################################################## + +# ____________________________________________________________________ +# aclocal creates the aclocal.m4 file from the standard macros and the +# custom macros embedded in the directory adm_local/unix/config_files +# and the KERNEL config_files directory. +# output: +# aclocal.m4 +# autom4te.cache (directory) +echo "======================================================= aclocal" + +aclocal -I adm_local/unix/config_files \ + -I ${KERNEL_ROOT_DIR}/salome_adm/unix/config_files \ + -I ${GUI_ROOT_DIR}/adm_local/unix/config_files || exit 1 + +# ____________________________________________________________________ +# libtoolize creates some configuration files (ltmain.sh, +# config.guess and config.sub). It only depends on the libtool +# version. The files are created in the directory specified with the +# AC_CONFIG_AUX_DIR() tag (see configure.ac). +# output: +# adm_local/unix/config_files/config.guess +# adm_local/unix/config_files/config.sub +# adm_local/unix/config_files/ltmain.sh +echo "==================================================== libtoolize" + +libtoolize --force --copy --automake || exit 1 + +# ____________________________________________________________________ +# autoconf creates the configure script from the file configure.ac (or +# configure.in if configure.ac doesn't exist) +# output: +# configure +echo "====================================================== autoconf" + +autoconf + +# ____________________________________________________________________ +# automake creates some scripts used in the build process +# (install-sh, missing, ...). It only depends on the automake +# version. The files are created in the directory specified with the +# AC_CONFIG_AUX_DIR() tag (see configure.ac). This step also +# creates the Makefile.in files from the Makefile.am files.
+# output: +# adm_local/unix/config_files/compile +# adm_local/unix/config_files/depcomp +# adm_local/unix/config_files/install-sh +# adm_local/unix/config_files/missing +# adm_local/unix/config_files/py-compile +# Makefile.in (from Makefile.am) +echo "====================================================== automake" + +automake --copy --gnu --add-missing diff --git a/clean_configure b/clean_configure new file mode 100755 index 0000000..f57f7b3 --- /dev/null +++ b/clean_configure @@ -0,0 +1,35 @@ +#!/bin/sh +# Copyright (C) 2007-2008 CEA/DEN, EDF R&D, OPEN CASCADE +# +# Copyright (C) 2003-2007 OPEN CASCADE, EADS/CCR, LIP6, CEA/DEN, +# CEDRAT, EDF R&D, LEG, PRINCIPIA R&D, BUREAU VERITAS +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +rm -rf autom4te.cache aclocal.m4 configure make_config +find . -name "*~" -print -exec rm {} \; +find . -name "*.pyc" -print -exec rm {} \; +#exit +# ==================== ON SORT AVANT + +find bin -name Makefile.in | xargs rm -f +find doc -name Makefile.in | xargs rm -f +find idl -name Makefile.in | xargs rm -f +find resources -name Makefile.in | xargs rm -f +find salome_adm -name Makefile.in | xargs rm -f +find src -name Makefile.in | xargs rm -f +rm -f Makefile.in diff --git a/configure.ac b/configure.ac new file mode 100644 index 0000000..fdfb80f --- /dev/null +++ b/configure.ac @@ -0,0 +1,184 @@ +# Copyright (C) 2010 CEA/DEN, EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# + +AC_INIT([DATASSIM module], [0.1], [andre.ribes@edf.fr], [SalomeDATASSIM]) +AC_CONFIG_AUX_DIR(adm_local/unix/config_files) +AC_CANONICAL_HOST +AC_CANONICAL_TARGET +AM_INIT_AUTOMAKE([-Wno-portability]) + +XVERSION=`echo $VERSION | awk -F. '{printf("0x%02x%02x%02x",$1,$2,$3)}'` +AC_SUBST(XVERSION) + +# set up MODULE_NAME variable for dynamic construction of directories (resources, etc.) 
+MODULE_NAME=datassim +AC_SUBST(MODULE_NAME) + +dnl +dnl Initialize source and build root directories +dnl + +ROOT_BUILDDIR=`pwd` +ROOT_SRCDIR=`echo $0 | sed -e "s,[[^/]]*$,,;s,/$,,;s,^$,.,"` +cd $ROOT_SRCDIR +ROOT_SRCDIR=`pwd` +cd $ROOT_BUILDDIR + +AC_SUBST(ROOT_SRCDIR) +AC_SUBST(ROOT_BUILDDIR) + +echo +echo Source root directory : $ROOT_SRCDIR +echo Build root directory : $ROOT_BUILDDIR +echo +echo + +AC_CHECK_PROG(SHELL,sh) +AC_SUBST(SHELL) + +if test -z "$AR"; then + AC_CHECK_PROGS(AR,ar xar,:,$PATH) +fi +AC_SUBST(AR) + +dnl Export the AR macro so that it will be placed in the libtool file +dnl correctly. +export AR + +echo +echo --------------------------------------------- +echo testing make +echo --------------------------------------------- +echo + +AC_PROG_MAKE_SET +AC_PROG_INSTALL +AC_LOCAL_INSTALL +dnl +dnl libtool macro check for CC, LD, NM, LN_S, RANLIB, STRIP + for shared libraries + +AC_ENABLE_DEBUG(yes) +AC_DISABLE_PRODUCTION + +echo --------------------------------------------- +echo testing libtool +echo --------------------------------------------- + +dnl first, we set static to no! +dnl if we want it, use --enable-static +AC_ENABLE_STATIC(no) + +AC_LIBTOOL_DLOPEN +AC_PROG_LIBTOOL + +dnl Fix up the INSTALL macro if it's a relative path. We want the +dnl full path to the binary instead. +case "$INSTALL" in + *install-sh*) + INSTALL='\${KERNEL_ROOT_DIR}'/adm_local/unix/config_files/install-sh + ;; +esac + +echo +echo --------------------------------------------- +echo testing python +echo --------------------------------------------- +echo + +CHECK_PYTHON + +AM_PATH_PYTHON(2.4) + +echo +echo --------------------------------------------- +echo testing QT +echo --------------------------------------------- +echo + +CHECK_QT + +echo +echo --------------------------------------------- +echo Testing html generators +echo --------------------------------------------- +echo + +CHECK_HTML_GENERATORS + +echo +echo --------------------------------------------- +echo Testing Kernel +echo --------------------------------------------- +echo + +CHECK_KERNEL + +echo +echo --------------------------------------------- +echo Testing GUI +echo --------------------------------------------- +echo + +CHECK_SALOME_GUI + +echo +echo --------------------------------------------- +echo Summary +echo --------------------------------------------- +echo + +echo Configure +variables="python_ok qt_ok doxygen_ok Kernel_ok" + +for var in $variables +do + printf " %10s : " `echo \$var | sed -e "s,_ok,,"` + eval echo \$$var +done + +dnl We don't need to say when we're entering directories if we're using +dnl GNU make because make does it for us. +if test "X$GMAKE" = "Xyes"; then + AC_SUBST(SETX) SETX=":" +else + AC_SUBST(SETX) SETX="set -x" +fi +echo +echo --------------------------------------------- +echo generating Makefiles and configure files +echo --------------------------------------------- +echo + +AC_OUTPUT_COMMANDS([ \ + chmod +x ./bin/*; \ +]) + +# This list is initiated using autoscan and must be updated manually +# when adding a new .in file to manage. When you execute +# autoscan, the Makefile list is generated in the output file configure.scan. +# This could be helpful to update the configuration.
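+# As an illustration only (hypothetical commands, assuming a standard +# autotools installation), the list below could be refreshed with: +# autoscan +# diff configure.scan configure.ac +# and any new Makefile paths reported in configure.scan would then be +# merged by hand into the AC_OUTPUT list below.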
+AC_OUTPUT([ \ + adm_local/Makefile \ + adm_local/unix/Makefile \ + src/Makefile \ + src/DATASSIM/Makefile \ + src/DATASSIMGUI/Makefile \ + resources/Makefile \ + Makefile \ +]) diff --git a/doc/ComposantAD.pdf b/doc/ComposantAD.pdf new file mode 100644 index 0000000..19a5fe8 Binary files /dev/null and b/doc/ComposantAD.pdf differ diff --git a/resources/DATASSIMCatalog.xml b/resources/DATASSIMCatalog.xml new file mode 100644 index 0000000..e69de29 diff --git a/resources/Makefile.am b/resources/Makefile.am new file mode 100644 index 0000000..987593a --- /dev/null +++ b/resources/Makefile.am @@ -0,0 +1,21 @@ +# Copyright (C) 2010 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# + +include $(top_srcdir)/adm_local/unix/make_common_starter.am + +dist_salomeres_DATA = DATASSIMCatalog.xml diff --git a/src/Makefile.am b/src/Makefile.am new file mode 100644 index 0000000..77d5101 --- /dev/null +++ b/src/Makefile.am @@ -0,0 +1,21 @@ +# Copyright (C) 2010 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com + +include $(top_srcdir)/adm_local/unix/make_common_starter.am + +SUBDIRS = daComposant diff --git a/src/daComposant/Makefile.am b/src/daComposant/Makefile.am new file mode 100644 index 0000000..522eda7 --- /dev/null +++ b/src/daComposant/Makefile.am @@ -0,0 +1,24 @@ +# Copyright (C) 2010 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# + +include $(top_srcdir)/adm_local/unix/make_common_starter.am + +# Scripts to be installed +dist_salomescript_SCRIPTS = daCore + diff --git a/src/daComposant/daAlgorithms/3DVAR.py b/src/daComposant/daAlgorithms/3DVAR.py new file mode 100644 index 0000000..d1ad427 --- /dev/null +++ b/src/daComposant/daAlgorithms/3DVAR.py @@ -0,0 +1,216 @@ +#-*-coding:iso-8859-1-*- +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +__doc__ = """ + Algorithme variationnel statique (3D-VAR) +""" +__author__ = "Jean-Philippe ARGAUD - Mars 2009" + +import sys ; sys.path.insert(0, "../daCore") +import logging +import Persistence +from BasicObjects import Algorithm +import PlatformInfo ; m = PlatformInfo.SystemUsage() + +import numpy +import scipy.optimize + +if logging.getLogger().level < 30: + iprint = 1 + message = scipy.optimize.tnc.MSG_ALL + disp = 1 +else: + iprint = -1 + message = scipy.optimize.tnc.MSG_NONE + disp = 0 + +# ============================================================================== +class ElementaryAlgorithm(Algorithm): + def __init__(self): + Algorithm.__init__(self) + self._name = "3DVAR" + logging.debug("%s Initialisation"%self._name) + + def run(self, Xb=None, Y=None, H=None, M=None, R=None, B=None, Q=None, Par=None): + """ + Calcul de l'estimateur 3D-VAR + """ + logging.debug("%s Lancement"%self._name) + logging.debug("%s Taille mémoire utilisée de %.1f Mo"%(self._name, m.getUsedMemory("Mo"))) + # + Hm = H["Direct"].appliedTo + Ht = H["Adjoint"].appliedInXTo + # + # Utilisation éventuelle d'un vecteur H(Xb) précalculé + # ---------------------------------------------------- + if H["AppliedToX"] is not None and H["AppliedToX"].has_key("HXb"): + logging.debug("%s Utilisation de HXb"%self._name) + HXb = H["AppliedToX"]["HXb"] + else: + logging.debug("%s Calcul de Hm(Xb)"%self._name) + HXb = Hm( Xb ) + # + # Calcul du préconditionnement + # ---------------------------- + # Bdemi = numpy.linalg.cholesky(B) + # + # Calcul de l'innovation + # ---------------------- + d = Y - HXb + logging.debug("%s Innovation d = %s"%(self._name, d)) + # + # Précalcul des inversion appellée dans les fonction-coût et gradient + # ------------------------------------------------------------------- + BI = B.I + RI = R.I + # + # Définition de la fonction-coût + # ------------------------------ + def CostFunction(x): + _X = numpy.asmatrix(x).flatten().T + logging.info("%s CostFunction X = %s"%(self._name, numpy.asmatrix( _X ).flatten())) + _HX = Hm( 
_X ) + _HX = numpy.asmatrix(_HX).flatten().T + Jb = 0.5 * (_X - Xb).T * BI * (_X - Xb) + Jo = 0.5 * (Y - _HX).T * RI * (Y - _HX) + J = float( Jb ) + float( Jo ) + logging.info("%s CostFunction Jb = %s"%(self._name, Jb)) + logging.info("%s CostFunction Jo = %s"%(self._name, Jo)) + logging.info("%s CostFunction J = %s"%(self._name, J)) + self.StoredVariables["CostFunctionJb"].store( Jb ) + self.StoredVariables["CostFunctionJo"].store( Jo ) + self.StoredVariables["CostFunctionJ" ].store( J ) + return float( J ) + # + def GradientOfCostFunction(x): + _X = numpy.asmatrix(x).flatten().T + logging.info("%s GradientOfCostFunction X = %s"%(self._name, numpy.asmatrix( _X ).flatten())) + _HX = Hm( _X ) + _HX = numpy.asmatrix(_HX).flatten().T + GradJb = BI * (_X - Xb) + GradJo = - Ht( (_X, RI * (Y - _HX)) ) + GradJ = numpy.asmatrix( GradJb ).flatten().T + numpy.asmatrix( GradJo ).flatten().T + logging.debug("%s GradientOfCostFunction GradJb = %s"%(self._name, numpy.asmatrix( GradJb ).flatten())) + logging.debug("%s GradientOfCostFunction GradJo = %s"%(self._name, numpy.asmatrix( GradJo ).flatten())) + logging.debug("%s GradientOfCostFunction GradJ = %s"%(self._name, numpy.asmatrix( GradJ ).flatten())) + # self.StoredVariables["GradientOfCostFunctionJb"].store( Jb ) + # self.StoredVariables["GradientOfCostFunctionJo"].store( Jo ) + # self.StoredVariables["GradientOfCostFunctionJ" ].store( J ) + return GradJ.A1 + # + # Point de démarrage de l'optimisation : Xini = Xb + # ------------------------------------ + if type(Xb) is type(numpy.matrix([])): + Xini = Xb.A1.tolist() + else: + Xini = list(Xb) + logging.debug("%s Point de démarrage Xini = %s"%(self._name, Xini)) + # + # Paramètres de pilotage + # ---------------------- + if Par.has_key("Bounds") and (type(Par["Bounds"]) is type([]) or type(Par["Bounds"]) is type(())) and (len(Par["Bounds"]) > 0): + Bounds = Par["Bounds"] + else: + Bounds = None + MinimizerList = ["LBFGSB","TNC", "CG", "BFGS"] + if Par.has_key("Minimizer") and (Par["Minimizer"] in MinimizerList): + Minimizer = str( Par["Minimizer"] ) + else: + Minimizer = "LBFGSB" + logging.debug("%s Minimiseur utilisé = %s"%(self._name, Minimizer)) + if Par.has_key("MaximumNumberOfSteps") and (Par["MaximumNumberOfSteps"] > -1): + maxiter = int( Par["MaximumNumberOfSteps"] ) + else: + maxiter = 15000 + logging.debug("%s Nombre maximal de pas d'optimisation = %s"%(self._name, maxiter)) + # + # Minimisation de la fonctionnelle + # -------------------------------- + if Minimizer == "LBFGSB": + Minimum, J_optimal, Informations = scipy.optimize.fmin_l_bfgs_b( + func = CostFunction, + x0 = Xini, + fprime = GradientOfCostFunction, + args = (), + bounds = Bounds, + maxfun = maxiter, + iprint = iprint, + ) + logging.debug("%s %s Minimum = %s"%(self._name, Minimizer, Minimum)) + logging.debug("%s %s Nb of F = %s"%(self._name, Minimizer, Informations['funcalls'])) + logging.debug("%s %s RetCode = %s"%(self._name, Minimizer, Informations['warnflag'])) + elif Minimizer == "TNC": + Minimum, nfeval, rc = scipy.optimize.fmin_tnc( + func = CostFunction, + x0 = Xini, + fprime = GradientOfCostFunction, + args = (), + bounds = Bounds, + maxfun = maxiter, + messages = message, + ) + logging.debug("%s %s Minimum = %s"%(self._name, Minimizer, Minimum)) + logging.debug("%s %s Nb of F = %s"%(self._name, Minimizer, nfeval)) + logging.debug("%s %s RetCode = %s"%(self._name, Minimizer, rc)) + elif Minimizer == "CG": + Minimum, fopt, nfeval, grad_calls, rc = scipy.optimize.fmin_cg( + f = CostFunction, + x0 = Xini, + fprime = 
GradientOfCostFunction, + args = (), + maxiter = maxiter, + disp = disp, + full_output = True, + ) + logging.debug("%s %s Minimum = %s"%(self._name, Minimizer, Minimum)) + logging.debug("%s %s Nb of F = %s"%(self._name, Minimizer, nfeval)) + logging.debug("%s %s RetCode = %s"%(self._name, Minimizer, rc)) + elif Minimizer == "BFGS": + Minimum, fopt, gopt, Hopt, nfeval, grad_calls, rc = scipy.optimize.fmin_bfgs( + f = CostFunction, + x0 = Xini, + fprime = GradientOfCostFunction, + args = (), + maxiter = maxiter, + disp = disp, + full_output = True, + ) + logging.debug("%s %s Minimum = %s"%(self._name, Minimizer, Minimum)) + logging.debug("%s %s Nb of F = %s"%(self._name, Minimizer, nfeval)) + logging.debug("%s %s RetCode = %s"%(self._name, Minimizer, rc)) + else: + raise ValueError("Error in Minimizer name: %s"%Minimizer) + # + # Calcul de l'analyse + # -------------------- + Xa = numpy.asmatrix(Minimum).T + logging.debug("%s Analyse Xa = %s"%(self._name, Xa)) + # + self.StoredVariables["Analysis"].store( Xa.A1 ) + self.StoredVariables["Innovation"].store( d.A1 ) + # + logging.debug("%s Taille mémoire utilisée de %.1f Mo"%(self._name, m.getUsedMemory("Mo"))) + logging.debug("%s Terminé"%self._name) + # + return 0 + +# ============================================================================== +if __name__ == "__main__": + print '\n AUTODIAGNOSTIC \n' diff --git a/src/daComposant/daAlgorithms/Blue.py b/src/daComposant/daAlgorithms/Blue.py new file mode 100644 index 0000000..3e5704d --- /dev/null +++ b/src/daComposant/daAlgorithms/Blue.py @@ -0,0 +1,83 @@ +#-*-coding:iso-8859-1-*- +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details.
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +__doc__ = """ + Algorithme de Kalman simple (BLUE) +""" +__author__ = "Jean-Philippe ARGAUD - Mars 2008" + +import sys ; sys.path.insert(0, "../daCore") +import logging +import Persistence +from BasicObjects import Algorithm +import PlatformInfo ; m = PlatformInfo.SystemUsage() + +# ============================================================================== +class ElementaryAlgorithm(Algorithm): + def __init__(self): + Algorithm.__init__(self) + self._name = "BLUE" + logging.debug("%s Initialisation"%self._name) + + def run(self, Xb=None, Y=None, H=None, M=None, R=None, B=None, Q=None, Par=None): + """ + Calcul de l'estimateur BLUE (ou Kalman simple, ou Interpolation Optimale) + """ + logging.debug("%s Lancement"%self._name) + logging.debug("%s Taille mémoire utilisée de %.1f Mo"%(self._name, m.getUsedMemory("Mo"))) + # + Hm = H["Direct"].asMatrix() + Ht = H["Adjoint"].asMatrix() + # + # Utilisation éventuelle d'un vecteur H(Xb) précalculé + # ---------------------------------------------------- + if H["AppliedToX"] is not None and H["AppliedToX"].has_key("HXb"): + logging.debug("%s Utilisation de HXb"%self._name) + HXb = H["AppliedToX"]["HXb"] + else: + logging.debug("%s Calcul de Hm * Xb"%self._name) + HXb = Hm * Xb + + # Calcul de la matrice de gain dans l'espace le plus petit + if Y.size <= Xb.size: + logging.debug("%s Calcul de K dans l'espace des observations"%self._name) + K = B * Ht * (Hm * B * Ht + R).I + else: + logging.debug("%s Calcul de K dans l'espace d'ébauche"%self._name) + K = (Ht * R.I * Hm + B.I).I * Ht * R.I + # + # Calcul de l'innovation et de l'analyse + # -------------------------------------- + d = Y - HXb + logging.debug("%s Innovation d = %s"%(self._name, d)) + Xa = Xb + K*d + logging.debug("%s Analyse Xa = %s"%(self._name, Xa)) + # + self.StoredVariables["Analysis"].store( Xa.A1 ) + self.StoredVariables["Innovation"].store( d.A1 ) + # + logging.debug("%s Taille mémoire utilisée de %.1f Mo"%(self._name, m.getUsedMemory("Mo"))) + logging.debug("%s Terminé"%self._name) + # + return 0 + +# ============================================================================== +if __name__ == "__main__": + print '\n AUTODIAGNOSTIC \n' diff --git a/src/daComposant/daAlgorithms/EnsembleBlue.py b/src/daComposant/daAlgorithms/EnsembleBlue.py new file mode 100644 index 0000000..287e81a --- /dev/null +++ b/src/daComposant/daAlgorithms/EnsembleBlue.py @@ -0,0 +1,88 @@ +#-*-coding:iso-8859-1-*- +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+#
+__doc__ = """
+    Algorithme de méthode d'ensemble simple
+"""
+__author__ = "Sebastien MASSART, Jean-Philippe ARGAUD - Novembre 2008"
+
+import sys ; sys.path.insert(0, "../daCore")
+import logging
+import numpy
+import Persistence
+from BasicObjects import Algorithm
+import PlatformInfo ; m = PlatformInfo.SystemUsage()
+
+# ==============================================================================
+class ElementaryAlgorithm(Algorithm):
+    def __init__(self):
+        Algorithm.__init__(self)
+        self._name = "ENSEMBLEBLUE"
+        logging.debug("%s Initialisation"%self._name)
+
+    def run(self, Xb=None, Y=None, H=None, M=None, R=None, B=None, Q=None, Par=None ):
+        """
+        Calcul d'une estimation BLUE d'ensemble :
+            - génération d'un ensemble d'observations, de même taille que le
+              nombre d'ébauches
+            - calcul de l'estimateur BLUE pour chaque membre de l'ensemble
+        """
+        logging.debug("%s Lancement"%self._name)
+        logging.debug("%s Taille mémoire utilisée de %.1f Mo"%(self._name, m.getUsedMemory("Mo")))
+        #
+        # Taille de l'ensemble d'ébauche
+        # ------------------------------
+        nb_ens = Xb.stepnumber()
+        #
+        # Construction de l'ensemble des observations, par génération à partir
+        # de la diagonale de R
+        # --------------------------------------------------------------------
+        DiagonaleR = numpy.diag(R)
+        EnsembleY = numpy.zeros([len(Y),nb_ens])
+        for npar in range(len(DiagonaleR)) :
+            # diag(R) contains variances, whereas numpy.random.normal expects
+            # a standard deviation, hence the square root
+            bruit = numpy.random.normal(0,numpy.sqrt(DiagonaleR[npar]),nb_ens)
+            EnsembleY[npar,:] = Y[npar] + bruit
+        EnsembleY = numpy.matrix(EnsembleY)
+        #
+        # Initialisation des opérateurs d'observation et de la matrice gain
+        # ------------------------------------------------------------------
+        Hm = H["Direct"].asMatrix()
+        Ht = H["Adjoint"].asMatrix()
+
+        K = B * Ht * (Hm * B * Ht + R).I
+
+        # Calcul du BLUE pour chaque membre de l'ensemble
+        # -----------------------------------------------
+        for iens in range(nb_ens):
+            d  = EnsembleY[:,iens] - Hm * Xb.valueserie(iens)
+            Xa = Xb.valueserie(iens) + K*d
+
+            self.StoredVariables["Analysis"].store( Xa.A1 )
+            self.StoredVariables["Innovation"].store( d.A1 )
+        #
+        logging.debug("%s Taille mémoire utilisée de %.1f Mo"%(self._name, m.getUsedMemory("Mo")))
+        logging.debug("%s Terminé"%self._name)
+        return 0
+
+# ==============================================================================
+if __name__ == "__main__":
+    print '\n AUTODIAGNOSTIC \n'
+
+
diff --git a/src/daComposant/daAlgorithms/Kalman.py b/src/daComposant/daAlgorithms/Kalman.py
new file mode 100644
index 0000000..d4c817f
--- /dev/null
+++ b/src/daComposant/daAlgorithms/Kalman.py
@@ -0,0 +1,96 @@
+#-*-coding:iso-8859-1-*-
+#
+# Copyright (C) 2008-2009 EDF R&D
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +__doc__ = """ + Algorithme de Kalman pour un système discret + + Remarque : les observations sont exploitées à partir du pas de temps 1, et + sont utilisées dans Yo comme rangées selon ces indices. Donc le pas 0 n'est + pas utilisé puisque la première étape de Kalman passe de 0 à 1 avec + l'observation du pas 1. +""" +__author__ = "Jean-Philippe ARGAUD - Septembre 2008" + +import sys ; sys.path.insert(0, "../daCore") +import logging +import Persistence +from BasicObjects import Algorithm +import PlatformInfo ; m = PlatformInfo.SystemUsage() + +# ============================================================================== +class ElementaryAlgorithm(Algorithm): + def __init__(self): + Algorithm.__init__(self) + self._name = "KALMAN" + logging.debug("%s Initialisation"%self._name) + + def run(self, Xb=None, Y=None, H=None, M=None, R=None, B=None, Q=None, Par=None): + """ + Calcul de l'estimateur de Kalman + """ + logging.debug("%s Lancement"%self._name) + logging.debug("%s Taille mémoire utilisée de %.1f Mo"%(self._name, m.getUsedMemory("Mo"))) + # + # Opérateur d'observation + # ----------------------- + Hm = H["Direct"].asMatrix() + Ht = H["Adjoint"].asMatrix() + # + # Opérateur d'évolution + # --------------------- + Mm = M["Direct"].asMatrix() + Mt = M["Adjoint"].asMatrix() + # + duration = Y.stepnumber() + # + # Initialisation + # -------------- + Xn = Xb + Pn = B + self.StoredVariables["Analysis"].store( Xn.A1 ) + self.StoredVariables["CovarianceAPosteriori"].store( Pn ) + # + for step in range(duration-1): + logging.debug("%s Etape de Kalman %i (i.e. %i->%i) sur un total de %i"%(self._name, step+1, step,step+1, duration-1)) + # + # Etape de prédiction + # ------------------- + Xn_predicted = Mm * Xn + Pn_predicted = Mm * Pn * Mt + Q + # + # Etape de correction + # ------------------- + d = Y.valueserie(step+1) - Hm * Xn_predicted + K = Pn_predicted * Ht * (Hm * Pn_predicted * Ht + R).I + Xn = Xn_predicted + K * d + Pn = Pn_predicted - K * Hm * Pn_predicted + # + self.StoredVariables["Analysis"].store( Xn.A1 ) + self.StoredVariables["CovarianceAPosteriori"].store( Pn ) + self.StoredVariables["Innovation"].store( d.A1 ) + # + logging.debug("%s Taille mémoire utilisée de %.1f Mo"%(self._name, m.getUsedMemory("Mo"))) + logging.debug("%s Terminé"%self._name) + # + return 0 + +# ============================================================================== +if __name__ == "__main__": + print '\n AUTODIAGNOSTIC \n' diff --git a/src/daComposant/daAlgorithms/LinearLeastSquares.py b/src/daComposant/daAlgorithms/LinearLeastSquares.py new file mode 100644 index 0000000..855d8a1 --- /dev/null +++ b/src/daComposant/daAlgorithms/LinearLeastSquares.py @@ -0,0 +1,62 @@ +#-*-coding:iso-8859-1-*- +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +__doc__ = """ + Algorithme de moindre carres pondérés (analyse sans ebauche) +""" +__author__ = "Sophie RICCI, Jean-Philippe ARGAUD - Septembre 2008" + +import sys ; sys.path.insert(0, "../daCore") +import logging +import Persistence +from BasicObjects import Algorithm +import PlatformInfo ; m = PlatformInfo.SystemUsage() + +# ============================================================================== +class ElementaryAlgorithm(Algorithm): + def __init__(self): + Algorithm.__init__(self) + self._name = "LINEARLEASTSQUARES" + + def run(self, Xb=None, Y=None, H=None, M=None, R=None, B=None, Q=None, Par=None): + """ + Calcul de l'estimateur au sens des moindres carres sans ebauche + """ + logging.debug("%s Lancement"%self._name) + logging.debug("%s Taille mémoire utilisée de %.1f Mo"%(self._name, m.getUsedMemory("Mo"))) + # + Hm = H["Direct"].asMatrix() + Ht = H["Adjoint"].asMatrix() + # + K = (Ht * R.I * Hm ).I * Ht * R.I + Xa = K * Y + # + self.StoredVariables["Analysis"].store( Xa.A1 ) + # + logging.debug("%s Taille mémoire utilisée de %.1f Mo"%(self._name, m.getUsedMemory("Mo"))) + logging.debug("%s Terminé"%self._name) + # + return 0 + +# ============================================================================== +if __name__ == "__main__": + print '\n AUTODIAGNOSTIC \n' + + diff --git a/src/daComposant/daAlgorithms/__init__.py b/src/daComposant/daAlgorithms/__init__.py new file mode 100644 index 0000000..6bcb582 --- /dev/null +++ b/src/daComposant/daAlgorithms/__init__.py @@ -0,0 +1,19 @@ +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# diff --git a/src/daComposant/daCore/AssimilationStudy.py b/src/daComposant/daCore/AssimilationStudy.py new file mode 100644 index 0000000..83b4813 --- /dev/null +++ b/src/daComposant/daCore/AssimilationStudy.py @@ -0,0 +1,598 @@ +#-*-coding:iso-8859-1-*- +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+#
+__doc__ = """
+    Définit la classe d'interface générale pour la construction et le
+    lancement d'une étude d'assimilation de données.
+"""
+__author__ = "Jean-Philippe ARGAUD - Mars 2008"
+
+import os, sys
+import numpy
+import Logging ; Logging.Logging() # A importer en premier
+import Persistence
+from BasicObjects import Operator
+
+# ==============================================================================
+class AssimilationStudy:
+    """
+    Cette classe sert d'interface pour l'utilisation de l'assimilation de
+    données. Elle contient les méthodes ou accesseurs nécessaires à la
+    construction d'un calcul d'assimilation.
+    """
+    def __init__(self, name=""):
+        """
+        Prévoit de conserver l'ensemble des variables nécessaires à un
+        algorithme élémentaire. Ces variables sont ensuite disponibles pour
+        implémenter un algorithme élémentaire particulier.
+
+        Background............: vecteur Xb
+        Observation...........: vecteur Y (potentiellement temporel)
+            d'observations
+        State.................: vecteur d'état dont une partie est le vecteur de
+            contrôle. Cette information n'est utile que si l'on veut faire des
+            calculs sur l'état complet, mais elle n'est pas indispensable pour
+            l'assimilation.
+        Control...............: vecteur X contenant toutes les variables de
+            contrôle, i.e. les paramètres ou l'état dont on veut estimer la
+            valeur pour obtenir les observations
+        ObservationOperator...: opérateur d'observation H
+
+        Les observations présentent une erreur dont la matrice de covariance est
+        R. L'ébauche du vecteur de contrôle présente une erreur dont la matrice
+        de covariance est B.
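+
+        Minimal usage sketch (mirroring the self-test at the end of this
+        module; the numerical values are purely illustrative):
+            ADD = AssimilationStudy("Ma premiere etude BLUE")
+            ADD.setBackground         (asVector = [0, 1, 2])
+            ADD.setBackgroundError    (asCovariance = "1 0 0;0 1 0;0 0 1")
+            ADD.setObservation        (asVector = [0.5, 1.5, 2.5])
+            ADD.setObservationError   (asCovariance = "1 0 0;0 1 0;0 0 1")
+            ADD.setObservationOperator(asMatrix = "1 0 0;0 1 0;0 0 1")
+            ADD.setAlgorithm(choice="Blue")
+            ADD.analyze()
+            Xa = ADD.get("Analysis").valueserie(0)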
+ """ + self.__name = str(name) + self.__Xb = None + self.__Y = None + self.__B = None + self.__R = None + self.__Q = None + self.__H = {} + self.__M = {} + # + self.__X = Persistence.OneVector() + self.__Parameters = {} + self.__StoredDiagnostics = {} + # + # Variables temporaires + self.__algorithm = {} + self.__algorithmFile = None + self.__algorithmName = None + self.__diagnosticFile = None + # + # Récupère le chemin du répertoire parent et l'ajoute au path + # (Cela complète l'action de la classe PathManagement dans PlatformInfo, + # qui est activée dans Persistence) + self.__parent = os.path.abspath(os.path.join(os.path.dirname(__file__),"..")) + sys.path.insert(0, self.__parent) + sys.path = list(set(sys.path)) # Conserve en unique exemplaire chaque chemin + + # --------------------------------------------------------- + def setBackground(self, + asVector = None, + asPersistentVector = None, + Scheduler = None, + ): + """ + Permet de définir l'estimation a priori : + - asVector : entrée des données, comme un vecteur compatible avec le + constructeur de numpy.matrix + - asPersistentVector : entrée des données, comme un vecteur de type + persistent contruit avec la classe ad-hoc "Persistence" + - Scheduler est le contrôle temporel des données + """ + if asVector is not None: + if type( asVector ) is type( numpy.matrix([]) ): + self.__Xb = numpy.matrix( asVector.A1, numpy.float ).T + else: + self.__Xb = numpy.matrix( asVector, numpy.float ).T + elif asPersistentVector is not None: + self.__Xb = asPersistentVector + else: + raise ValueError("Error: improperly defined background") + return 0 + + def setBackgroundError(self, asCovariance=None): + """ + Permet de définir la covariance des erreurs d'ébauche : + - asCovariance : entrée des données, comme une matrice compatible avec + le constructeur de numpy.matrix + """ + self.__B = numpy.matrix( asCovariance, numpy.float ) + return 0 + + # ----------------------------------------------------------- + def setObservation(self, + asVector = None, + asPersistentVector = None, + Scheduler = None, + ): + """ + Permet de définir les observations : + - asVector : entrée des données, comme un vecteur compatible avec le + constructeur de numpy.matrix + - asPersistentVector : entrée des données, comme un vecteur de type + persistent contruit avec la classe ad-hoc "Persistence" + - Scheduler est le contrôle temporel des données disponibles + """ + if asVector is not None: + if type( asVector ) is type( numpy.matrix([]) ): + self.__Y = numpy.matrix( asVector.A1, numpy.float ).T + else: + self.__Y = numpy.matrix( asVector, numpy.float ).T + elif asPersistentVector is not None: + self.__Y = asPersistentVector + else: + raise ValueError("Error: improperly defined observations") + return 0 + + def setObservationError(self, asCovariance=None): + """ + Permet de définir la covariance des erreurs d'observations : + - asCovariance : entrée des données, comme une matrice compatible avec + le constructeur de numpy.matrix + """ + self.__R = numpy.matrix( asCovariance, numpy.float ) + return 0 + + def setObservationOperator(self, + asFunction = {"Direct":None, "Tangent":None, "Adjoint":None}, + asMatrix = None, + appliedToX = None, + ): + """ + Permet de définir un opérateur d'observation H. L'ordre de priorité des + définitions et leur sens sont les suivants : + - si asFunction["Tangent"] et asFunction["Adjoint"] ne sont pas None + alors on définit l'opérateur à l'aide de fonctions. Si la fonction + "Direct" n'est pas définie, on prend la fonction "Tangent". 
+ - si les fonctions ne sont pas disponibles et si asMatrix n'est pas + None, alors on définit l'opérateur "Direct" et "Tangent" à l'aide de + la matrice, et l'opérateur "Adjoint" à l'aide de la transposée. La + matrice fournie doit être sous une forme compatible avec le + constructeur de numpy.matrix. + - si l'argument "appliedToX" n'est pas None, alors on définit, pour des + X divers, l'opérateur par sa valeur appliquée à cet X particulier, + sous la forme d'un dictionnaire appliedToX[NAME] avec NAME un nom. + L'opérateur doit néanmoins déjà avoir été défini comme d'habitude. + """ + if (type(asFunction) is type({})) and (asFunction["Tangent"] is not None) and (asFunction["Adjoint"] is not None): + if not asFunction.has_key("Direct") or (asFunction["Direct"] is None): + self.__H["Direct"] = Operator( fromMethod = asFunction["Tangent"] ) + else: + self.__H["Direct"] = Operator( fromMethod = asFunction["Direct"] ) + self.__H["Tangent"] = Operator( fromMethod = asFunction["Tangent"] ) + self.__H["Adjoint"] = Operator( fromMethod = asFunction["Adjoint"] ) + elif asMatrix is not None: + mat = numpy.matrix( asMatrix, numpy.float ) + self.__H["Direct"] = Operator( fromMatrix = mat ) + self.__H["Tangent"] = Operator( fromMatrix = mat ) + self.__H["Adjoint"] = Operator( fromMatrix = mat.T ) + else: + raise ValueError("Error: improperly defined observation operator") + # + if appliedToX is not None: + self.__H["AppliedToX"] = {} + if type(appliedToX) is not dict: + raise ValueError("Error: observation operator defined by \"appliedToX\" need a dictionary as argument.") + for key in appliedToX.keys(): + if type( appliedToX[key] ) is type( numpy.matrix([]) ): + # Pour le cas où l'on a une vraie matrice + self.__H["AppliedToX"][key] = numpy.matrix( appliedToX[key].A1, numpy.float ).T + elif type( appliedToX[key] ) is type( numpy.array([]) ) and len(appliedToX[key].shape) > 1: + # Pour le cas où l'on a un vecteur représenté en array avec 2 dimensions + self.__H["AppliedToX"][key] = numpy.matrix( appliedToX[key].reshape(len(appliedToX[key]),), numpy.float ).T + else: + self.__H["AppliedToX"][key] = numpy.matrix( appliedToX[key], numpy.float ).T + else: + self.__H["AppliedToX"] = None + # + return 0 + + # ----------------------------------------------------------- + def setEvolutionModel(self, + asFunction = {"Direct":None, "Tangent":None, "Adjoint":None}, + asMatrix = None, + Scheduler = None, + ): + """ + Permet de définir un opérateur d'évolution M. L'ordre de priorité des + définitions et leur sens sont les suivants : + - si asFunction["Tangent"] et asFunction["Adjoint"] ne sont pas None + alors on définit l'opérateur à l'aide de fonctions. Si la fonction + "Direct" n'est pas définie, on prend la fonction "Tangent". + - si les fonctions ne sont pas disponibles et si asMatrix n'est pas + None, alors on définit l'opérateur "Direct" et "Tangent" à l'aide de + la matrice, et l'opérateur "Adjoint" à l'aide de la transposée. La + matrice fournie doit être sous une forme compatible avec le + constructeur de numpy.matrix. 
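+
+        Example (a sketch; "study" stands for an AssimilationStudy instance
+        and the 2x2 matrix is illustrative only):
+            study.setEvolutionModel(asMatrix = "1 1;0 1")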
+ """ + if (type(asFunction) is type({})) and (asFunction["Tangent"] is not None) and (asFunction["Adjoint"] is not None): + if not asFunction.has_key("Direct") or (asFunction["Direct"] is None): + self.__M["Direct"] = Operator( fromMethod = asFunction["Tangent"] ) + else: + self.__M["Direct"] = Operator( fromMethod = asFunction["Direct"] ) + self.__M["Tangent"] = Operator( fromMethod = asFunction["Tangent"] ) + self.__M["Adjoint"] = Operator( fromMethod = asFunction["Adjoint"] ) + elif asMatrix is not None: + matrice = numpy.matrix( asMatrix, numpy.float ) + self.__M["Direct"] = Operator( fromMatrix = matrice ) + self.__M["Tangent"] = Operator( fromMatrix = matrice ) + self.__M["Adjoint"] = Operator( fromMatrix = matrice.T ) + else: + raise ValueError("Error: improperly defined evolution operator") + return 0 + + def setEvolutionError(self, asCovariance=None): + """ + Permet de définir la covariance des erreurs de modèle : + - asCovariance : entrée des données, comme une matrice compatible avec + le constructeur de numpy.matrix + """ + self.__Q = numpy.matrix( asCovariance, numpy.float ) + return 0 + + # ----------------------------------------------------------- + def setControls (self, asVector = None ): + """ + Permet de définir la valeur initiale du vecteur X contenant toutes les + variables de contrôle, i.e. les paramètres ou l'état dont on veut + estimer la valeur pour obtenir les observations. C'est utile pour un + algorithme itératif/incrémental + - asVector : entrée des données, comme un vecteur compatible avec le + constructeur de numpy.matrix. + """ + if asVector is not None: + self.__X.store( asVector ) + return 0 + + # ----------------------------------------------------------- + def setAlgorithm(self, choice = None ): + """ + Permet de sélectionner l'algorithme à utiliser pour mener à bien l'étude + d'assimilation. L'argument est un champ caractère se rapportant au nom + d'un fichier contenu dans "../daAlgorithms" et réalisant l'opération + d'assimilation sur les arguments (Xb,Y,H,R,B,Xa). 
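+
+        Example (sketch): study.setAlgorithm(choice="Blue") looks for a file
+        "Blue.py" in a "daAlgorithms" subdirectory of the search path, then
+        instantiates its ElementaryAlgorithm class.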
+ """ + if choice is None: + raise ValueError("Error: algorithm choice has to be given") + if self.__algorithmName is not None: + raise ValueError("Error: algorithm choice has already been done as \"%s\", it can't be changed."%self.__algorithmName) + daDirectory = "daAlgorithms" + # + # Recherche explicitement le fichier complet + # ------------------------------------------ + module_path = None + for directory in sys.path: + if os.path.isfile(os.path.join(directory, daDirectory, str(choice)+'.py')): + module_path = os.path.abspath(os.path.join(directory, daDirectory)) + if module_path is None: + raise ImportError("No algorithm module named \"%s\" was found in a \"%s\" subdirectory\n The search path is %s"%(choice, daDirectory, sys.path)) + # + # Importe le fichier complet comme un module + # ------------------------------------------ + try: + sys_path_tmp = sys.path ; sys.path.insert(0,module_path) + self.__algorithmFile = __import__(str(choice), globals(), locals(), []) + self.__algorithmName = str(choice) + sys.path = sys_path_tmp ; del sys_path_tmp + except ImportError, e: + raise ImportError("The module named \"%s\" was found, but is incorrect at the import stage.\n The import error message is: %s"%(choice,e)) + # + # Instancie un objet du type élémentaire du fichier + # ------------------------------------------------- + self.__algorithm = self.__algorithmFile.ElementaryAlgorithm() + return 0 + + def setAlgorithmParameters(self, asDico=None): + """ + Permet de définir les paramètres de l'algorithme, sous la forme d'un + dictionnaire. + """ + self.__Parameters = dict( asDico ) + return 0 + + # ----------------------------------------------------------- + def setDiagnostic(self, choice = None, name = "", unit = "", basetype = None, parameters = {} ): + """ + Permet de sélectionner un diagnostic a effectuer. + """ + if choice is None: + raise ValueError("Error: diagnostic choice has to be given") + daDirectory = "daDiagnostics" + # + # Recherche explicitement le fichier complet + # ------------------------------------------ + module_path = None + for directory in sys.path: + if os.path.isfile(os.path.join(directory, daDirectory, str(choice)+'.py')): + module_path = os.path.abspath(os.path.join(directory, daDirectory)) + if module_path is None: + raise ImportError("No diagnostic module named \"%s\" was found in a \"%s\" subdirectory\n The search path is %s"%(choice, daDirectory, sys.path)) + # + # Importe le fichier complet comme un module + # ------------------------------------------ + try: + sys_path_tmp = sys.path ; sys.path.insert(0,module_path) + self.__diagnosticFile = __import__(str(choice), globals(), locals(), []) + sys.path = sys_path_tmp ; del sys_path_tmp + except ImportError, e: + raise ImportError("The module named \"%s\" was found, but is incorrect at the import stage.\n The import error message is: %s"%(choice,e)) + # + # Instancie un objet du type élémentaire du fichier + # ------------------------------------------------- + if self.__StoredDiagnostics.has_key(name): + raise ValueError("A diagnostic with the same name already exists") + else: + self.__StoredDiagnostics[name] = self.__diagnosticFile.ElementaryDiagnostic( + name = name, + unit = unit, + basetype = basetype, + parameters = parameters ) + return 0 + + # ----------------------------------------------------------- + def shape_validate(self): + """ + Validation de la correspondance correcte des tailles des variables et + des matrices s'il y en a. 
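+
+        For example (sketch), a 2x2 covariance B combined with a 3-component
+        Xb raises ValueError("Shape characteristic of B ... incompatible").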
+ """ + if self.__Xb is None: __Xb_shape = (0,) + elif hasattr(self.__Xb,"shape"): + if type(self.__Xb.shape) is tuple: __Xb_shape = self.__Xb.shape + else: __Xb_shape = self.__Xb.shape() + else: raise TypeError("Xb has no attribute of shape: problem !") + # + if self.__Y is None: __Y_shape = (0,) + elif hasattr(self.__Y,"shape"): + if type(self.__Y.shape) is tuple: __Y_shape = self.__Y.shape + else: __Y_shape = self.__Y.shape() + else: raise TypeError("Y has no attribute of shape: problem !") + # + if self.__B is None: __B_shape = (0,0) + elif hasattr(self.__B,"shape"): + if type(self.__B.shape) is tuple: __B_shape = self.__B.shape + else: __B_shape = self.__B.shape() + else: raise TypeError("B has no attribute of shape: problem !") + # + if self.__R is None: __R_shape = (0,0) + elif hasattr(self.__R,"shape"): + if type(self.__R.shape) is tuple: __R_shape = self.__R.shape + else: __R_shape = self.__R.shape() + else: raise TypeError("R has no attribute of shape: problem !") + # + if self.__Q is None: __Q_shape = (0,0) + elif hasattr(self.__Q,"shape"): + if type(self.__Q.shape) is tuple: __Q_shape = self.__Q.shape + else: __Q_shape = self.__Q.shape() + else: raise TypeError("Q has no attribute of shape: problem !") + # + if len(self.__H) == 0: __H_shape = (0,0) + elif type(self.__H) is type({}): __H_shape = (0,0) + elif hasattr(self.__H["Direct"],"shape"): + if type(self.__H["Direct"].shape) is tuple: __H_shape = self.__H["Direct"].shape + else: __H_shape = self.__H["Direct"].shape() + else: raise TypeError("H has no attribute of shape: problem !") + # + if len(self.__M) == 0: __M_shape = (0,0) + elif type(self.__M) is type({}): __M_shape = (0,0) + elif hasattr(self.__M["Direct"],"shape"): + if type(self.__M["Direct"].shape) is tuple: __M_shape = self.__M["Direct"].shape + else: __M_shape = self.__M["Direct"].shape() + else: raise TypeError("M has no attribute of shape: problem !") + # + # Vérification des conditions + # --------------------------- + if not( len(__Xb_shape) == 1 or min(__Xb_shape) == 1 ): + raise ValueError("Shape characteristic of Xb is incorrect: \"%s\""%(__Xb_shape,)) + if not( len(__Y_shape) == 1 or min(__Y_shape) == 1 ): + raise ValueError("Shape characteristic of Y is incorrect: \"%s\""%(__Y_shape,)) + # + if not( min(__B_shape) == max(__B_shape) ): + raise ValueError("Shape characteristic of B is incorrect: \"%s\""%(__B_shape,)) + if not( min(__R_shape) == max(__R_shape) ): + raise ValueError("Shape characteristic of R is incorrect: \"%s\""%(__R_shape,)) + if not( min(__Q_shape) == max(__Q_shape) ): + raise ValueError("Shape characteristic of Q is incorrect: \"%s\""%(__Q_shape,)) + if not( min(__M_shape) == max(__M_shape) ): + raise ValueError("Shape characteristic of M is incorrect: \"%s\""%(__M_shape,)) + # + if len(self.__H) > 0 and not(type(self.__H) is type({})) and not( __H_shape[1] == max(__Xb_shape) ): + raise ValueError("Shape characteristic of H \"%s\" and X \"%s\" are incompatible"%(__H_shape,__Xb_shape)) + if len(self.__H) > 0 and not(type(self.__H) is type({})) and not( __H_shape[0] == max(__Y_shape) ): + raise ValueError("Shape characteristic of H \"%s\" and Y \"%s\" are incompatible"%(__H_shape,__Y_shape)) + if len(self.__H) > 0 and not(type(self.__H) is type({})) and len(self.__B) > 0 and not( __H_shape[1] == __B_shape[0] ): + raise ValueError("Shape characteristic of H \"%s\" and B \"%s\" are incompatible"%(__H_shape,__B_shape)) + if len(self.__H) > 0 and not(type(self.__H) is type({})) and len(self.__R) > 0 and not( __H_shape[0] == __R_shape[1] ): + 
raise ValueError("Shape characteristic of H \"%s\" and R \"%s\" are incompatible"%(__H_shape,__R_shape)) + # + if len(self.__B) > 0 and not( __B_shape[1] == max(__Xb_shape) ): + raise ValueError("Shape characteristic of B \"%s\" and Xb \"%s\" are incompatible"%(__B_shape,__Xb_shape)) + # + if len(self.__R) > 0 and not( __R_shape[1] == max(__Y_shape) ): + raise ValueError("Shape characteristic of R \"%s\" and Y \"%s\" are incompatible"%(__R_shape,__Y_shape)) + # + if len(self.__M) > 0 and not(type(self.__M) is type({})) and not( __M_shape[1] == max(__Xb_shape) ): + raise ValueError("Shape characteristic of M \"%s\" and X \"%s\" are incompatible"%(__M_shape,__Xb_shape)) + # + return 1 + + # ----------------------------------------------------------- + def analyze(self): + """ + Permet de lancer le calcul d'assimilation. + + Le nom de la méthode à activer est toujours "run". Les paramètres en + arguments de la méthode sont fixés. En sortie, on obtient les résultats + dans la variable de type dictionnaire "StoredVariables", qui contient en + particulier des objets de Persistence pour les analyses, OMA... + """ + self.shape_validate() + # + self.__algorithm.run( + Xb = self.__Xb, + Y = self.__Y, + H = self.__H, + M = self.__M, + R = self.__R, + B = self.__B, + Q = self.__Q, + Par = self.__Parameters, + ) + return 0 + + # ----------------------------------------------------------- + def get(self, key=None): + """ + Renvoie les résultats disponibles après l'exécution de la méthode + d'assimilation, ou les diagnostics disponibles. Attention, quand un + diagnostic porte le même nom qu'un variable stockée, c'est la variable + stockée qui est renvoyée, et le diagnostic est inatteignable. + """ + if key is not None: + if self.__algorithm.has_key(key): + return self.__algorithm.get( key ) + elif self.__StoredDiagnostics.has_key(key): + return self.__StoredDiagnostics[key] + else: + raise ValueError("The requested key \"%s\" does not exists as a diagnostic or as a stored variable."%key) + else: + allvariables = self.__algorithm.get() + allvariables.update( self.__StoredDiagnostics ) + return allvariables + + def get_available_algorithms(self): + """ + Renvoie la liste des algorithmes identifiés par les chaînes de + caractères + """ + files = [] + for directory in sys.path: + if os.path.isdir(os.path.join(directory,"daAlgorithms")): + for fname in os.listdir(os.path.join(directory,"daAlgorithms")): + root, ext = os.path.splitext(fname) + if ext == '.py' and root != '__init__': + files.append(root) + files.sort() + return files + + def get_available_diagnostics(self): + """ + Renvoie la liste des diagnostics identifiés par les chaînes de + caractères + """ + files = [] + for directory in sys.path: + if os.path.isdir(os.path.join(directory,"daDiagnostics")): + for fname in os.listdir(os.path.join(directory,"daDiagnostics")): + root, ext = os.path.splitext(fname) + if ext == '.py' and root != '__init__': + files.append(root) + files.sort() + return files + + # ----------------------------------------------------------- + def get_algorithms_main_path(self): + """ + Renvoie le chemin pour le répertoire principal contenant les algorithmes + dans un sous-répertoire "daAlgorithms" + """ + return self.__parent + + def add_algorithms_path(self, asPath=None): + """ + Ajoute au chemin de recherche des algorithmes un répertoire dans lequel + se trouve un sous-répertoire "daAlgorithms" + + Remarque : si le chemin a déjà été ajouté pour les diagnostics, il n'est + pas indispensable de le rajouter ici. 
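+
+        Example (sketch, with a hypothetical path):
+            study.add_algorithms_path("/home/user/MyDA")
+        which requires the file /home/user/MyDA/daAlgorithms/__init__.py to
+        exist, as checked below.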
+ """ + if not os.path.isdir(asPath): + raise ValueError("The given "+asPath+" argument must exist as a directory") + if not os.path.isdir(os.path.join(asPath,"daAlgorithms")): + raise ValueError("The given \""+asPath+"\" argument must contain a subdirectory named \"daAlgorithms\"") + if not os.path.isfile(os.path.join(asPath,"daAlgorithms","__init__.py")): + raise ValueError("The given \""+asPath+"/daAlgorithms\" path must contain a file named \"__init__.py\"") + sys.path.insert(0, os.path.abspath(asPath)) + sys.path = list(set(sys.path)) # Conserve en unique exemplaire chaque chemin + return 1 + + def get_diagnostics_main_path(self): + """ + Renvoie le chemin pour le répertoire principal contenant les diagnostics + dans un sous-répertoire "daDiagnostics" + """ + return self.__parent + + def add_diagnostics_path(self, asPath=None): + """ + Ajoute au chemin de recherche des algorithmes un répertoire dans lequel + se trouve un sous-répertoire "daDiagnostics" + + Remarque : si le chemin a déjà été ajouté pour les algorithmes, il n'est + pas indispensable de le rajouter ici. + """ + if not os.path.isdir(asPath): + raise ValueError("The given "+asPath+" argument must exist as a directory") + if not os.path.isdir(os.path.join(asPath,"daDiagnostics")): + raise ValueError("The given \""+asPath+"\" argument must contain a subdirectory named \"daDiagnostics\"") + if not os.path.isfile(os.path.join(asPath,"daDiagnostics","__init__.py")): + raise ValueError("The given \""+asPath+"/daDiagnostics\" path must contain a file named \"__init__.py\"") + sys.path.insert(0, os.path.abspath(asPath)) + sys.path = list(set(sys.path)) # Conserve en unique exemplaire chaque chemin + return 1 + +# ============================================================================== +if __name__ == "__main__": + print '\n AUTODIAGNOSTIC \n' + + ADD = AssimilationStudy("Ma premiere etude BLUE") + + ADD.setBackground (asVector = [0, 1, 2]) + ADD.setBackgroundError (asCovariance = "1 0 0;0 1 0;0 0 1") + ADD.setObservation (asVector = [0.5, 1.5, 2.5]) + ADD.setObservationError (asCovariance = "1 0 0;0 1 0;0 0 1") + ADD.setObservationOperator(asMatrix = "1 0 0;0 1 0;0 0 1") + + ADD.setAlgorithm(choice="Blue") + + ADD.analyze() + + print "Nombre d'analyses :", ADD.get("Analysis").stepnumber() + print "Analyse résultante :", ADD.get("Analysis").valueserie(0) + print "Innovation :", ADD.get("Innovation").valueserie(0) + print + + print "Algorithmes disponibles :", ADD.get_available_algorithms() + # print " Chemin des algorithmes :", ADD.get_algorithms_main_path() + print "Diagnostics disponibles :", ADD.get_available_diagnostics() + # print " Chemin des diagnostics :", ADD.get_diagnostics_main_path() + print + + ADD.setDiagnostic("RMS", "Ma RMS") + + liste = ADD.get().keys() + liste.sort() + print "Variables et diagnostics disponibles :", liste + print + diff --git a/src/daComposant/daCore/BasicObjects.py b/src/daComposant/daCore/BasicObjects.py new file mode 100644 index 0000000..bdcae37 --- /dev/null +++ b/src/daComposant/daCore/BasicObjects.py @@ -0,0 +1,213 @@ +#-*-coding:iso-8859-1-*- +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. 
+# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +__doc__ = """ + Définit les outils généraux élémentaires. + + Ce module est destiné à etre appelée par AssimilationStudy pour constituer + les objets élémentaires de l'algorithme. +""" +__author__ = "Jean-Philippe ARGAUD - Mars 2008" + +import numpy +import Persistence + +# ============================================================================== +class Operator: + """ + Classe générale d'interface de type opérateur + """ + def __init__(self, fromMethod=None, fromMatrix=None): + """ + On construit un objet de ce type en fournissant à l'aide de l'un des + deux mots-clé, soit une fonction python, soit matrice. + Arguments : + - fromMethod : argument de type fonction Python + - fromMatrix : argument adapté au constructeur numpy.matrix + """ + if fromMethod is not None: + self.__Method = fromMethod + self.__Matrix = None + elif fromMatrix is not None: + self.__Method = None + self.__Matrix = numpy.matrix( fromMatrix, numpy.float ) + else: + self.__Method = None + self.__Matrix = None + + def appliedTo(self, xValue): + """ + Permet de restituer le résultat de l'application de l'opérateur à un + argument xValue. Cette méthode se contente d'appliquer, son argument + devant a priori être du bon type. + Arguments : + - xValue : argument adapté pour appliquer l'opérateur + """ + if self.__Matrix is not None: + return self.__Matrix * xValue + else: + return self.__Method( xValue ) + + def appliedInXTo(self, (xNominal, xValue) ): + """ + Permet de restituer le résultat de l'application de l'opérateur à un + argument xValue, sachant que l'opérateur est valable en xNominal. + Cette méthode se contente d'appliquer, son argument devant a priori + être du bon type. Si l'opérateur est linéaire car c'est une matrice, + alors il est valable en tout point nominal et il n'est pas nécessaire + d'utiliser xNominal. 
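+
+        Example (sketch): for an operator built from a method, a call such as
+        appliedInXTo( (Xb, dX) ) lets the method linearize around Xb before
+        applying itself to dX; for a matrix operator the pair's second element
+        is used directly.
+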
+        Arguments : une liste contenant
+            - xNominal : argument permettant de donner le point où l'opérateur
+              est construit pour être ensuite appliqué
+            - xValue : argument adapté pour appliquer l'opérateur
+        """
+        if self.__Matrix is not None:
+            return self.__Matrix * xValue
+        else:
+            return self.__Method( (xNominal, xValue) )
+
+    def asMatrix(self):
+        """
+        Permet de renvoyer l'opérateur sous la forme d'une matrice
+        """
+        if self.__Matrix is not None:
+            return self.__Matrix
+        else:
+            raise ValueError("Matrix form of the operator is not available")
+
+    def shape(self):
+        """
+        Renvoie la taille sous forme numpy si l'opérateur est disponible sous
+        la forme d'une matrice
+        """
+        if self.__Matrix is not None:
+            return self.__Matrix.shape
+        else:
+            raise ValueError("Matrix form of the operator is not available, nor the shape")
+
+# ==============================================================================
+class Algorithm:
+    """
+    Classe générale d'interface de type algorithme
+
+    Elle donne un cadre pour l'écriture d'une classe élémentaire d'algorithme
+    d'assimilation, en fournissant un container (dictionnaire) de variables
+    persistantes initialisées, et des méthodes d'accès à ces variables stockées.
+
+    Une classe élémentaire d'algorithme doit implémenter la méthode "run".
+    """
+    def __init__(self):
+        """
+        L'initialisation présente permet de fabriquer des variables de stockage
+        disponibles de manière générique dans les algorithmes élémentaires. Ces
+        variables de stockage sont ensuite conservées dans un dictionnaire
+        interne à l'objet, mais auquel on accède par la méthode "get".
+
+        Les variables prévues sont :
+            - Analysis : l'analyse
+            - Innovation : l'innovation : d = Y - H Xb
+            - SigmaObs2 : correction optimale des erreurs d'observation
+            - SigmaBck2 : correction optimale des erreurs d'ébauche
+            - OMA : Observation moins Analysis : Y - H Xa
+            - OMB : Observation moins Background : Y - H Xb
+            - BMA : Background moins Analysis : Xb - Xa
+            - CovarianceAPosteriori : matrice A
+        On peut rajouter des variables à stocker dans l'initialisation de
+        l'algorithme élémentaire qui va hériter de cette classe
+        """
+        self.StoredVariables = {}
+        self.StoredVariables["CostFunctionJ"]  = Persistence.OneScalar(name = "CostFunctionJ")
+        self.StoredVariables["CostFunctionJb"] = Persistence.OneScalar(name = "CostFunctionJb")
+        self.StoredVariables["CostFunctionJo"] = Persistence.OneScalar(name = "CostFunctionJo")
+        self.StoredVariables["GradientOfCostFunctionJ"]  = Persistence.OneScalar(name = "GradientOfCostFunctionJ")
+        self.StoredVariables["GradientOfCostFunctionJb"] = Persistence.OneScalar(name = "GradientOfCostFunctionJb")
+        self.StoredVariables["GradientOfCostFunctionJo"] = Persistence.OneScalar(name = "GradientOfCostFunctionJo")
+        self.StoredVariables["Analysis"]   = Persistence.OneVector(name = "Analysis")
+        self.StoredVariables["Innovation"] = Persistence.OneVector(name = "Innovation")
+        self.StoredVariables["SigmaObs2"]  = Persistence.OneScalar(name = "SigmaObs2")
+        self.StoredVariables["SigmaBck2"]  = Persistence.OneScalar(name = "SigmaBck2")
+        self.StoredVariables["OMA"] = Persistence.OneVector(name = "OMA")
+        self.StoredVariables["OMB"] = Persistence.OneVector(name = "OMB")
+        self.StoredVariables["BMA"] = Persistence.OneVector(name = "BMA")
+        self.StoredVariables["CovarianceAPosteriori"] = Persistence.OneMatrix(name = "CovarianceAPosteriori")
+        self._name = None
+
+    def get(self, key=None):
+        """
+        Renvoie l'une des variables stockées identifiée par la clé, ou le
+        dictionnaire de l'ensemble des 
variables disponibles en l'absence de + clé. Ce sont directement les variables sous forme objet qui sont + renvoyées, donc les méthodes d'accès à l'objet individuel sont celles + des classes de persistance. + """ + if key is not None: + return self.StoredVariables[key] + else: + return self.StoredVariables + + def has_key(self, key=None): + """ + Vérifie si l'une des variables stockées est identifiée par la clé. + """ + return self.StoredVariables.has_key(key) + + def run(self, Xb=None, Y=None, H=None, M=None, R=None, B=None, Q=None, Par=None): + """ + Doit implémenter l'opération élémentaire de calcul d'assimilation sous + sa forme mathématique la plus naturelle possible. + """ + raise NotImplementedError("Mathematical assimilation calculation has not been implemented!") + +# ============================================================================== +class Diagnostic: + """ + Classe générale d'interface de type diagnostic + + Ce template s'utilise de la manière suivante : il sert de classe "patron" en + même temps que l'une des classes de persistance, comme "OneScalar" par + exemple. + + Une classe élémentaire de diagnostic doit implémenter ses deux méthodes, la + méthode "_formula" pour écrire explicitement et proprement la formule pour + l'écriture mathématique du calcul du diagnostic (méthode interne non + publique), et "calculate" pour activer la précédente tout en ayant vérifié + et préparé les données, et pour stocker les résultats à chaque pas (méthode + externe d'activation). + """ + def __init__(self, name = "", parameters = {}): + self.name = str(name) + self.parameters = dict( parameters ) + + def _formula(self, *args): + """ + Doit implémenter l'opération élémentaire de diagnostic sous sa forme + mathématique la plus naturelle possible. + """ + raise NotImplementedError("Diagnostic mathematical formula has not been implemented!") + + def calculate(self, *args): + """ + Active la formule de calcul avec les arguments correctement rangés + """ + raise NotImplementedError("Diagnostic activation method has not been implemented!") + +# ============================================================================== +if __name__ == "__main__": + print '\n AUTODIAGNOSTIC \n' diff --git a/src/daComposant/daCore/Logging.py b/src/daComposant/daCore/Logging.py new file mode 100644 index 0000000..b56f932 --- /dev/null +++ b/src/daComposant/daCore/Logging.py @@ -0,0 +1,162 @@ +#-*-coding:iso-8859-1-*- +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +__doc__ = """ + Ce module permet de mettre en place un logging utilisable partout dans + l'application, par défaut à la console, et si nécessaire dans un fichier. 
+
+    Il doit être appelé en premier dans AssimilationStudy (mais pas directement
+    dans les applications utilisateurs), en l'important et en instanciant un
+    objet :
+        import Logging ; Logging.Logging()
+
+    Par défaut, seuls les messages du niveau WARNING ou au-delà sont disponibles
+    (donc les simples messages d'info ne sont pas disponibles), ce que l'on peut
+    changer à l'instanciation avec le mot-clé "level" :
+        import Logging ; Logging.Logging(level=20)
+
+    On peut éventuellement demander à l'objet de sortir aussi les messages dans
+    un fichier (nom par défaut : AssimilationStudy.log, niveau NOTSET) :
+        import Logging ; Logging.Logging().setLogfile()
+
+    Si on veut changer le nom du fichier ou le niveau global de message, il faut
+    récupérer l'instance et appliquer les méthodes :
+        import Logging
+        log = Logging.Logging()
+        import logging
+        log.setLevel(logging.DEBUG)
+        log.setLogfile(filename="toto.log", filemode="a", level=logging.WARNING)
+    et on change éventuellement le niveau du fichier avec :
+        log.setLogfileLevel(logging.INFO)
+
+    Ensuite, n'importe où dans les applications, il suffit d'utiliser le module
+    "logging" (avec un petit "l") :
+        import logging
+        log = logging.getLogger(NAME) # Avec rien (recommandé) ou un nom NAME
+        log.critical("...")
+        log.error("...")
+        log.warning("...")
+        log.info("...")
+        log.debug("...")
+    ou encore plus simplement :
+        import logging
+        logging.info("...")
+
+    Dans une application, à n'importe quel endroit et autant de fois qu'on veut,
+    on peut changer le niveau global de message en utilisant par exemple :
+        import logging
+        logging.getLogger().setLevel(logging.DEBUG)
+
+    On rappelle les niveaux (attributs de "logging") et leur ordre :
+        NOTSET=0 < DEBUG=10 < INFO=20 < WARNING=30 < ERROR=40 < CRITICAL=50
+"""
+__author__ = "Jean-Philippe ARGAUD - Octobre 2008"
+
+import os
+import sys
+import logging
+from PlatformInfo import PlatformInfo
+
+LOGFILE = os.path.join(os.path.abspath(os.curdir),"AssimilationStudy.log")
+
+# ==============================================================================
+class Logging:
+    def __init__(self, level=logging.WARNING):
+        """
+        Initialise un logging à la console pour TOUS les niveaux de messages.
+        """
+        logging.basicConfig(
+            format = '%(levelname)-8s %(message)s',
+            level  = level,
+            stream = sys.stdout,
+            )
+        self.__logfile = None
+        #
+        # Initialise l'affichage de logging
+        # ---------------------------------
+        p = PlatformInfo()
+        #
+        logging.info( "--------------------------------------------------" )
+        logging.info( "Lancement de "+p.getName()+" "+p.getVersion() )
+        logging.info( "--------------------------------------------------" )
+        logging.info( "Versions logicielles :" )
+        logging.info( "- Python "+p.getPythonVersion() )
+        logging.info( "- Numpy  "+p.getNumpyVersion() )
+        logging.info( "- Scipy  "+p.getScipyVersion() )
+        logging.info( "" )
+
+    def setLevel(self, level=logging.NOTSET ):
+        """
+        Permet de changer globalement le niveau des messages disponibles.
+        """
+        logging.getLogger().setLevel(level)
+
+    def setLogfile(self, filename=LOGFILE, filemode="w", level=logging.NOTSET):
+        """
+        Permet de disposer des messages dans un fichier EN PLUS de la console.
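+
+        Example (taken from the module documentation above):
+            log = Logging.Logging()
+            log.setLogfile(filename="toto.log", filemode="a", level=logging.WARNING)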
+ """ + if self.__logfile is not None: + # Supprime le précédent mode de stockage fichier s'il exsitait + logging.getLogger().removeHandler(self.__logfile) + self.__logfile = logging.FileHandler(filename, filemode) + self.__logfile.setLevel(level) + self.__logfile.setFormatter( + logging.Formatter('%(asctime)s %(levelname)-8s %(message)s', + '%d %b %Y %H:%M:%S')) + logging.getLogger().addHandler(self.__logfile) + + def setLogfileLevel(self, level=logging.NOTSET ): + """ + Permet de changer le niveau des messages stockés en fichier. Il ne sera + pris en compte que s'il est supérieur au niveau global. + """ + self.__logfile.setLevel(level) + + def getLevel(self): + """ + Renvoie le niveau de Logging sous forme texte + """ + return logging.getLevelName( logging.getLogger().getEffectiveLevel() ) + +# ============================================================================== +if __name__ == "__main__": + print '\n AUTODIAGNOSTIC \n' + import os.path + + l = Logging(level = logging.NOTSET) + + logging.info("Message numéro 1 uniquement disponible sur console") + + l.setLogfile(level = logging.WARNING) + if not os.path.isfile(LOGFILE): + raise ValueError("Le fichier de log \"%s\" n'a pas pu être créé."%LOGFILE) + + logging.info("Message numéro 2 uniquement disponible sur console") + logging.warning("Message numéro 3 conjointement disponible sur console et fichier") + + l.setLogfileLevel(logging.INFO) + + logging.info("Message numéro 4 conjointement disponible sur console et fichier") + + print + print " Le logging a été correctement initialisé. Le fichier suivant" + print " %s"%os.path.basename(LOGFILE) + print " a été correctement créé, et peut être effacé après vérification." + print diff --git a/src/daComposant/daCore/Persistence.py b/src/daComposant/daCore/Persistence.py new file mode 100644 index 0000000..4f15a46 --- /dev/null +++ b/src/daComposant/daCore/Persistence.py @@ -0,0 +1,663 @@ +#-*-coding:iso-8859-1-*- +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +__doc__ = """ + Définit des outils de persistence et d'enregistrement de séries de valeurs + pour analyse ultérieure ou utilisation de calcul. 
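+
+    Minimal usage sketch (illustrative values):
+        stock = OneScalar("essai", unit="m")
+        stock.store(1.)
+        stock.store(3., step=10)
+        print stock.valueserie()     # -> [1.0, 3.0]
+        print stock.stepnumber()     # -> 2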
+""" +__author__ = "Jean-Philippe ARGAUD - Mars 2008" + +import numpy + +from PlatformInfo import PathManagement ; PathManagement() + +# ============================================================================== +class Persistence: + """ + Classe générale de persistence définissant les accesseurs nécessaires + (Template) + """ + def __init__(self, name="", unit="", basetype=str): + """ + name : nom courant + unit : unité + basetype : type de base de l'objet stocké à chaque pas + + La gestion interne des données est exclusivement basée sur les variables + initialisées ici (qui ne sont pas accessibles depuis l'extérieur des + objets comme des attributs) : + __step : numérotation par défaut du pas courant + __basetype : le type de base de chaque valeur, sous la forme d'un type + permettant l'instanciation ou le casting Python + __steps : les pas de stockage. Par défaut, c'est __step + __values : les valeurs de stockage. Par défaut, c'est None + """ + self.__name = str(name) + self.__unit = str(unit) + # + self.__step = -1 + self.__basetype = basetype + # + self.__steps = [] + self.__values = [] + + def basetype(self, basetype=None): + """ + Renvoie ou met en place le type de base des objets stockés + """ + if basetype is None: + return self.__basetype + else: + self.__basetype = basetype + + def store(self, value=None, step=None): + """ + Stocke une valeur à un pas. Une instanciation est faite avec le type de + base pour stocker l'objet. Si le pas n'est pas fournit, on utilise + l'étape de stockage comme valeur de pas. + """ + if value is None: raise ValueError("Value argument required") + self.__step += 1 + if step is not None: + self.__steps.append(step) + else: + self.__steps.append(self.__step) + # + self.__values.append(self.__basetype(value)) + + def shape(self): + """ + Renvoie la taille sous forme numpy du dernier objet stocké. Si c'est un + objet numpy, renvoie le shape. Si c'est un entier, un flottant, un + complexe, renvoie 1. Si c'est une liste ou un dictionnaire, renvoie la + longueur. Par défaut, renvoie 1. + """ + if len(self.__values) > 0: + if self.__basetype in [numpy.matrix, numpy.array]: + return self.__values[-1].shape + elif self.__basetype in [int, float]: + return (1,) + elif self.__basetype in [list, dict]: + return (len(self.__values[-1]),) + else: + return (1,) + else: + raise ValueError("Object has no shape before its first storage") + + def __len__(self): + """ + Renvoie le nombre d'éléments dans un séquence ou la plus grande + dimension d'une matrice + """ + return max( self.shape() ) + + # --------------------------------------------------------- + def stepserie(self, item=None, step=None): + """ + Renvoie par défaut toute la liste des pas de temps. Si l'argument "step" + existe dans la liste des pas de stockage effectués, renvoie ce pas + "step". Si l'argument "item" est correct, renvoie le pas stockée au + numéro "item". + """ + if step is not None and step in self.__steps: + return step + elif item is not None and item < len(self.__steps): + return self.__steps[item] + else: + return self.__steps + + def valueserie(self, item=None, step=None): + """ + Renvoie par défaut toute la liste des valeurs/objets. Si l'argument + "step" existe dans la liste des pas de stockage effectués, renvoie la + valeur stockée à ce pas "step". Si l'argument "item" est correct, + renvoie la valeur stockée au numéro "item". 
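+
+        Example (sketch): after store(5., step=3), valueserie(step=3) returns
+        5.0, and valueserie(0) returns the value stored first.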
+ """ + if step is not None and step in self.__steps: + index = self.__steps.index(step) + return self.__values[index] + elif item is not None and item < len(self.__values): + return self.__values[item] + else: + return self.__values + + def stepnumber(self): + """ + Renvoie le nombre de pas de stockage. + """ + return len(self.__steps) + + # --------------------------------------------------------- + def mean(self): + """ + Renvoie la valeur moyenne des données à chaque pas. Il faut que le type + de base soit compatible avec les types élémentaires numpy. + """ + try: + return [numpy.matrix(item).mean() for item in self.__values] + except: + raise TypeError("Base type is incompatible with numpy") + + def std(self, ddof=0): + """ + Renvoie l'écart-type des données à chaque pas. Il faut que le type de + base soit compatible avec les types élémentaires numpy. + + ddof : c'est le nombre de degrés de liberté pour le calcul de + l'écart-type, qui est dans le diviseur. Inutile avant Numpy 1.1 + """ + try: + if numpy.version.version >= '1.1.0': + return [numpy.matrix(item).std(ddof=ddof) for item in self.__values] + else: + return [numpy.matrix(item).std() for item in self.__values] + except: + raise TypeError("Base type is incompatible with numpy") + + def sum(self): + """ + Renvoie la somme des données à chaque pas. Il faut que le type de + base soit compatible avec les types élémentaires numpy. + """ + try: + return [numpy.matrix(item).sum() for item in self.__values] + except: + raise TypeError("Base type is incompatible with numpy") + + def min(self): + """ + Renvoie le minimum des données à chaque pas. Il faut que le type de + base soit compatible avec les types élémentaires numpy. + """ + try: + return [numpy.matrix(item).min() for item in self.__values] + except: + raise TypeError("Base type is incompatible with numpy") + + def max(self): + """ + Renvoie le maximum des données à chaque pas. Il faut que le type de + base soit compatible avec les types élémentaires numpy. + """ + try: + return [numpy.matrix(item).max() for item in self.__values] + except: + raise TypeError("Base type is incompatible with numpy") + + def plot(self, item=None, step=None, + steps = None, + title = "", + xlabel = "", + ylabel = "", + ltitle = None, + geometry = "600x400", + filename = "", + persist = False, + pause = True, + ): + """ + Renvoie un affichage de la valeur à chaque pas, si elle est compatible + avec un affichage Gnuplot (donc essentiellement un vecteur). Si + l'argument "step" existe dans la liste des pas de stockage effectués, + renvoie l'affichage de la valeur stockée à ce pas "step". Si l'argument + "item" est correct, renvoie l'affichage de la valeur stockée au numéro + "item". Par défaut ou en l'absence de "step" ou "item", renvoie un + affichage successif de tous les pas. 
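+
+        Example (sketch): obj.plot(item=0, title="Trace", filename="trace")
+        displays the first stored vector and saves it as "trace_000.ps".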
+ + Arguments : + - step : valeur du pas à afficher + - item : index de la valeur à afficher + - steps : liste unique des pas de l'axe des X, ou None si c'est + la numérotation par défaut + - title : base du titre général, qui sera automatiquement + complétée par la mention du pas + - xlabel : label de l'axe des X + - ylabel : label de l'axe des Y + - ltitle : titre associé au vecteur tracé + - geometry : taille en pixels de la fenêtre et position du coin haut + gauche, au format X11 : LxH+X+Y (défaut : 600x400) + - filename : base de nom de fichier Postscript pour une sauvegarde, + qui est automatiquement complétée par le numéro du + fichier calculé par incrément simple de compteur + - persist : booléen indiquant que la fenêtre affichée sera + conservée lors du passage au dessin suivant + Par défaut, persist = False + - pause : booléen indiquant une pause après chaque tracé, et + attendant un Return + Par défaut, pause = True + """ + import os + # + # Vérification de la disponibilité du module Gnuplot + try: + import Gnuplot + self.__gnuplot = Gnuplot + except: + raise ImportError("The Gnuplot module is required to plot the object.") + # + # Vérification et compléments sur les paramètres d'entrée + if persist: + self.__gnuplot.GnuplotOpts.gnuplot_command = 'gnuplot -persist -geometry '+geometry + else: + self.__gnuplot.GnuplotOpts.gnuplot_command = 'gnuplot -geometry '+geometry + if ltitle is None: + ltitle = "" + self.__g = self.__gnuplot.Gnuplot() # persist=1 + self.__g('set terminal '+self.__gnuplot.GnuplotOpts.default_term) + self.__g('set style data lines') + self.__g('set grid') + self.__g('set autoscale') + self.__g('set xlabel "'+str(xlabel).encode('ascii','replace')+'"') + self.__g('set ylabel "'+str(ylabel).encode('ascii','replace')+'"') + # + # Tracé du ou des vecteurs demandés + indexes = [] + if step is not None and step in self.__steps: + indexes.append(self.__steps.index(step)) + elif item is not None and item < len(self.__values): + indexes.append(item) + else: + indexes = indexes + range(len(self.__values)) + # + i = -1 + for index in indexes: + self.__g('set title "'+str(title).encode('ascii','replace')+' (pas '+str(index)+')"') + if ( type(steps) is type([]) ) or ( type(steps) is type(numpy.array([])) ): + Steps = list(steps) + else: + Steps = range(len(self.__values[index])) + # + self.__g.plot( self.__gnuplot.Data( Steps, self.__values[index], title=ltitle ) ) + # + if filename != "": + i += 1 + stepfilename = "%s_%03i.ps"%(filename,i) + if os.path.isfile(stepfilename): + raise ValueError("Error: a file with this name \"%s\" already exists."%stepfilename) + self.__g.hardcopy(filename=stepfilename, color=1) + if pause: + raw_input('Please press return to continue...\n') + + # --------------------------------------------------------- + def stepmean(self): + """ + Renvoie la moyenne sur toutes les valeurs sans tenir compte de la + longueur des pas. Il faut que le type de base soit compatible avec + les types élémentaires numpy. + """ + try: + return numpy.matrix(self.__values).mean() + except: + raise TypeError("Base type is incompatible with numpy") + + def stepstd(self, ddof=0): + """ + Renvoie l'écart-type de toutes les valeurs sans tenir compte de la + longueur des pas. Il faut que le type de base soit compatible avec + les types élémentaires numpy. + + ddof : c'est le nombre de degrés de liberté pour le calcul de + l'écart-type, qui est dans le diviseur. 
Inutile avant Numpy 1.1
+        """
+        try:
+            if numpy.version.version >= '1.1.0':
+                return numpy.matrix(self.__values).std(ddof=ddof)
+            else:
+                return numpy.matrix(self.__values).std()
+        except:
+            raise TypeError("Base type is incompatible with numpy")
+
+    def stepsum(self):
+        """
+        Renvoie la somme de toutes les valeurs sans tenir compte de la
+        longueur des pas. Il faut que le type de base soit compatible avec
+        les types élémentaires numpy.
+        """
+        try:
+            return numpy.matrix(self.__values).sum()
+        except:
+            raise TypeError("Base type is incompatible with numpy")
+
+    def stepmin(self):
+        """
+        Renvoie le minimum de toutes les valeurs sans tenir compte de la
+        longueur des pas. Il faut que le type de base soit compatible avec
+        les types élémentaires numpy.
+        """
+        try:
+            return numpy.matrix(self.__values).min()
+        except:
+            raise TypeError("Base type is incompatible with numpy")
+
+    def stepmax(self):
+        """
+        Renvoie le maximum de toutes les valeurs sans tenir compte de la
+        longueur des pas. Il faut que le type de base soit compatible avec
+        les types élémentaires numpy.
+        """
+        try:
+            return numpy.matrix(self.__values).max()
+        except:
+            raise TypeError("Base type is incompatible with numpy")
+
+    def cumsum(self):
+        """
+        Renvoie la somme cumulée de toutes les valeurs sans tenir compte de
+        la longueur des pas. Il faut que le type de base soit compatible avec
+        les types élémentaires numpy.
+        """
+        try:
+            return numpy.matrix(self.__values).cumsum(axis=0)
+        except:
+            raise TypeError("Base type is incompatible with numpy")
+
+    # On pourrait aussi utiliser les autres attributs d'une "matrix", comme
+    # "tofile", "min"...
+
+    def stepplot(self,
+            steps    = None,
+            title    = "",
+            xlabel   = "",
+            ylabel   = "",
+            ltitle   = None,
+            geometry = "600x400",
+            filename = "",
+            persist  = False,
+            pause    = True,
+            ):
+        """
+        Renvoie un affichage unique pour l'ensemble des valeurs à chaque pas,
+        si elles sont compatibles avec un affichage Gnuplot (donc
+        essentiellement un vecteur). Contrairement à la méthode "plot", il
+        n'y a pas de sélection par "step" ou "item" : tous les pas stockés
+        sont tracés ensemble sur une même figure.
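+
+        Exemple indicatif (esquisse, "s" étant un objet de stockage déjà
+        rempli par plusieurs appels à "store") :
+            s.stepplot( title = "Toutes les valeurs", pause = False )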
+ + Arguments : + - steps : liste unique des pas de l'axe des X, ou None si c'est + la numérotation par défaut + - title : base du titre général, qui sera automatiquement + complétée par la mention du pas + - xlabel : label de l'axe des X + - ylabel : label de l'axe des Y + - ltitle : titre associé au vecteur tracé + - geometry : taille en pixels de la fenêtre et position du coin haut + gauche, au format X11 : LxH+X+Y (défaut : 600x400) + - filename : nom de fichier Postscript pour une sauvegarde, + - persist : booléen indiquant que la fenêtre affichée sera + conservée lors du passage au dessin suivant + Par défaut, persist = False + - pause : booléen indiquant une pause après chaque tracé, et + attendant un Return + Par défaut, pause = True + """ + import os + # + # Vérification de la disponibilité du module Gnuplot + try: + import Gnuplot + self.__gnuplot = Gnuplot + except: + raise ImportError("The Gnuplot module is required to plot the object.") + # + # Vérification et compléments sur les paramètres d'entrée + if persist: + self.__gnuplot.GnuplotOpts.gnuplot_command = 'gnuplot -persist -geometry '+geometry + else: + self.__gnuplot.GnuplotOpts.gnuplot_command = 'gnuplot -geometry '+geometry + if ltitle is None: + ltitle = "" + if ( type(steps) is type([]) ) or ( type(steps) is type(numpy.array([])) ): + Steps = list(steps) + else: + Steps = range(len(self.__values[0])) + self.__g = self.__gnuplot.Gnuplot() # persist=1 + self.__g('set terminal '+self.__gnuplot.GnuplotOpts.default_term) + self.__g('set style data lines') + self.__g('set grid') + self.__g('set autoscale') + self.__g('set title "'+str(title).encode('ascii','replace') +'"') + self.__g('set xlabel "'+str(xlabel).encode('ascii','replace')+'"') + self.__g('set ylabel "'+str(ylabel).encode('ascii','replace')+'"') + # + # Tracé du ou des vecteurs demandés + indexes = range(len(self.__values)) + self.__g.plot( self.__gnuplot.Data( Steps, self.__values[indexes.pop(0)], title=ltitle+" (pas 0)" ) ) + for index in indexes: + self.__g.replot( self.__gnuplot.Data( Steps, self.__values[index], title=ltitle+" (pas %i)"%index ) ) + # + if filename != "": + self.__g.hardcopy(filename=filename, color=1) + if pause: + raw_input('Please press return to continue...\n') + +# ============================================================================== +class OneScalar(Persistence): + """ + Classe définissant le stockage d'une valeur unique réelle (float) par pas + + Le type de base peut être changé par la méthode "basetype", mais il faut que + le nouveau type de base soit compatible avec les types par éléments de + numpy. On peut même utiliser cette classe pour stocker des vecteurs/listes + ou des matrices comme dans les classes suivantes, mais c'est déconseillé + pour conserver une signification claire des noms. + """ + def __init__(self, name="", unit="", basetype = float): + Persistence.__init__(self, name, unit, basetype) + +class OneVector(Persistence): + """ + Classe définissant le stockage d'une liste (list) de valeurs homogènes par + hypothèse par pas. Pour éviter les confusions, ne pas utiliser la classe + "OneVector" pour des données hétérogènes, mais bien "OneList". 
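+
+    Exemple indicatif (esquisse) :
+        s = OneVector("Mesures", unit = "cm")
+        s.store( [1., 2., 3.] )
+        s.store( [4., 5., 6.] )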
+ """ + def __init__(self, name="", unit="", basetype = list): + Persistence.__init__(self, name, unit, basetype) + +class OneMatrix(Persistence): + """ + Classe définissant le stockage d'une matrice de valeurs (numpy.matrix) par + pas + """ + def __init__(self, name="", unit="", basetype = numpy.matrix): + Persistence.__init__(self, name, unit, basetype) + +class OneList(Persistence): + """ + Classe définissant le stockage d'une liste de valeurs potentiellement + hétérogènes (list) par pas. Pour éviter les confusions, ne pas utiliser la + classe "OneVector" pour des données hétérogènes, mais bien "OneList". + """ + def __init__(self, name="", unit="", basetype = list): + Persistence.__init__(self, name, unit, basetype) + +# ============================================================================== +if __name__ == "__main__": + print '\n AUTODIAGNOSTIC \n' + + print "======> Un flottant" + OBJET_DE_TEST = OneScalar("My float", unit="cm") + OBJET_DE_TEST.store( 5.) + OBJET_DE_TEST.store(-5.) + OBJET_DE_TEST.store( 1.) + print "Les pas de stockage :", OBJET_DE_TEST.stepserie() + print "Les valeurs :", OBJET_DE_TEST.valueserie() + print "La 2ème valeur :", OBJET_DE_TEST.valueserie(1) + print "La dernière valeur :", OBJET_DE_TEST.valueserie(-1) + print "Valeurs par pas :" + print " La moyenne :", OBJET_DE_TEST.mean() + print " L'écart-type :", OBJET_DE_TEST.std() + print " La somme :", OBJET_DE_TEST.sum() + print " Le minimum :", OBJET_DE_TEST.min() + print " Le maximum :", OBJET_DE_TEST.max() + print "Valeurs globales :" + print " La moyenne :", OBJET_DE_TEST.stepmean() + print " L'écart-type :", OBJET_DE_TEST.stepstd() + print " La somme :", OBJET_DE_TEST.stepsum() + print " Le minimum :", OBJET_DE_TEST.stepmin() + print " Le maximum :", OBJET_DE_TEST.stepmax() + print " La somme cumulée :", OBJET_DE_TEST.cumsum() + print "Taille \"shape\" :", OBJET_DE_TEST.shape() + print "Taille \"len\" :", len(OBJET_DE_TEST) + del OBJET_DE_TEST + print + + print "======> Un entier" + OBJET_DE_TEST = OneScalar("My int", unit="cm", basetype=int) + OBJET_DE_TEST.store( 5 ) + OBJET_DE_TEST.store(-5 ) + OBJET_DE_TEST.store( 1.) 
+ print "Les pas de stockage :", OBJET_DE_TEST.stepserie() + print "Les valeurs :", OBJET_DE_TEST.valueserie() + print "La 2ème valeur :", OBJET_DE_TEST.valueserie(1) + print "La dernière valeur :", OBJET_DE_TEST.valueserie(-1) + print "Valeurs par pas :" + print " La moyenne :", OBJET_DE_TEST.mean() + print " L'écart-type :", OBJET_DE_TEST.std() + print " La somme :", OBJET_DE_TEST.sum() + print " Le minimum :", OBJET_DE_TEST.min() + print " Le maximum :", OBJET_DE_TEST.max() + print "Valeurs globales :" + print " La moyenne :", OBJET_DE_TEST.stepmean() + print " L'écart-type :", OBJET_DE_TEST.stepstd() + print " La somme :", OBJET_DE_TEST.stepsum() + print " Le minimum :", OBJET_DE_TEST.stepmin() + print " Le maximum :", OBJET_DE_TEST.stepmax() + print " La somme cumulée :", OBJET_DE_TEST.cumsum() + print "Taille \"shape\" :", OBJET_DE_TEST.shape() + print "Taille \"len\" :", len(OBJET_DE_TEST) + del OBJET_DE_TEST + print + + print "======> Un booléen" + OBJET_DE_TEST = OneScalar("My bool", unit="", basetype=bool) + OBJET_DE_TEST.store( True ) + OBJET_DE_TEST.store( False ) + OBJET_DE_TEST.store( True ) + print "Les pas de stockage :", OBJET_DE_TEST.stepserie() + print "Les valeurs :", OBJET_DE_TEST.valueserie() + print "La 2ème valeur :", OBJET_DE_TEST.valueserie(1) + print "La dernière valeur :", OBJET_DE_TEST.valueserie(-1) + print "Taille \"shape\" :", OBJET_DE_TEST.shape() + print "Taille \"len\" :", len(OBJET_DE_TEST) + del OBJET_DE_TEST + print + + print "======> Un vecteur de flottants" + OBJET_DE_TEST = OneVector("My float vector", unit="cm") + OBJET_DE_TEST.store( (5 , -5) ) + OBJET_DE_TEST.store( (-5, 5 ) ) + OBJET_DE_TEST.store( (1., 1.) ) + print "Les pas de stockage :", OBJET_DE_TEST.stepserie() + print "Les valeurs :", OBJET_DE_TEST.valueserie() + print "La 2ème valeur :", OBJET_DE_TEST.valueserie(1) + print "La dernière valeur :", OBJET_DE_TEST.valueserie(-1) + print "Valeurs par pas :" + print " La moyenne :", OBJET_DE_TEST.mean() + print " L'écart-type :", OBJET_DE_TEST.std() + print " La somme :", OBJET_DE_TEST.sum() + print " Le minimum :", OBJET_DE_TEST.min() + print " Le maximum :", OBJET_DE_TEST.max() + print "Valeurs globales :" + print " La moyenne :", OBJET_DE_TEST.stepmean() + print " L'écart-type :", OBJET_DE_TEST.stepstd() + print " La somme :", OBJET_DE_TEST.stepsum() + print " Le minimum :", OBJET_DE_TEST.stepmin() + print " Le maximum :", OBJET_DE_TEST.stepmax() + print " La somme cumulée :", OBJET_DE_TEST.cumsum() + print "Taille \"shape\" :", OBJET_DE_TEST.shape() + print "Taille \"len\" :", len(OBJET_DE_TEST) + del OBJET_DE_TEST + print + + print "======> Une liste hétérogène" + OBJET_DE_TEST = OneList("My list", unit="bool/cm") + OBJET_DE_TEST.store( (True , -5) ) + OBJET_DE_TEST.store( (False, 5 ) ) + OBJET_DE_TEST.store( (True , 1.) 
) + print "Les pas de stockage :", OBJET_DE_TEST.stepserie() + print "Les valeurs :", OBJET_DE_TEST.valueserie() + print "La 2ème valeur :", OBJET_DE_TEST.valueserie(1) + print "La dernière valeur :", OBJET_DE_TEST.valueserie(-1) + print "Valeurs par pas : attention, on peut les calculer car True=1, False=0, mais cela n'a pas de sens" + print " La moyenne :", OBJET_DE_TEST.mean() + print " L'écart-type :", OBJET_DE_TEST.std() + print " La somme :", OBJET_DE_TEST.sum() + print " Le minimum :", OBJET_DE_TEST.min() + print " Le maximum :", OBJET_DE_TEST.max() + print "Valeurs globales : attention, on peut les calculer car True=1, False=0, mais cela n'a pas de sens" + print " La moyenne :", OBJET_DE_TEST.stepmean() + print " L'écart-type :", OBJET_DE_TEST.stepstd() + print " La somme :", OBJET_DE_TEST.stepsum() + print " Le minimum :", OBJET_DE_TEST.stepmin() + print " Le maximum :", OBJET_DE_TEST.stepmax() + print " La somme cumulée :", OBJET_DE_TEST.cumsum() + print "Taille \"shape\" :", OBJET_DE_TEST.shape() + print "Taille \"len\" :", len(OBJET_DE_TEST) + del OBJET_DE_TEST + print + + print "======> Utilisation directe de la classe Persistence" + OBJET_DE_TEST = Persistence("My object", unit="", basetype=int ) + OBJET_DE_TEST.store( 1 ) + OBJET_DE_TEST.store( 3 ) + OBJET_DE_TEST.store( 7 ) + print "Les pas de stockage :", OBJET_DE_TEST.stepserie() + print "Les valeurs :", OBJET_DE_TEST.valueserie() + print "La 2ème valeur :", OBJET_DE_TEST.valueserie(1) + print "La dernière valeur :", OBJET_DE_TEST.valueserie(-1) + print "Taille \"shape\" :", OBJET_DE_TEST.shape() + print "Taille \"len\" :", len(OBJET_DE_TEST) + del OBJET_DE_TEST + print + + print "======> Affichage d'objets stockés" + OBJET_DE_TEST = Persistence("My object", unit="", basetype=numpy.array) + D = OBJET_DE_TEST + vect1 = [1, 2, 1, 2, 1] + vect2 = [-3, -3, 0, -3, -3] + vect3 = [-1, 1, -5, 1, -1] + vect4 = 100*[0.29, 0.97, 0.73, 0.01, 0.20] + print "Stockage de 3 vecteurs de longueur identique" + D.store(vect1) + D.store(vect2) + D.store(vect3) + print "Affichage de l'ensemble du stockage sur une même image" + D.stepplot( + title = "Tous les vecteurs", + filename="vecteurs.ps", + xlabel = "Axe X", + ylabel = "Axe Y", + pause = False ) + print "Stockage d'un quatrième vecteur de longueur différente" + D.store(vect4) + print "Affichage séparé du dernier stockage" + D.plot( + item = 3, + title = "Vecteurs", + filename = "vecteur", + xlabel = "Axe X", + ylabel = "Axe Y", + pause = False ) + print "Les images ont été stockées en fichiers Postscript" + print "Taille \"shape\" du dernier objet stocké",OBJET_DE_TEST.shape() + print "Taille \"len\" du dernier objet stocké",len(OBJET_DE_TEST) + del OBJET_DE_TEST + print diff --git a/src/daComposant/daCore/PlatformInfo.py b/src/daComposant/daCore/PlatformInfo.py new file mode 100644 index 0000000..6e50e12 --- /dev/null +++ b/src/daComposant/daCore/PlatformInfo.py @@ -0,0 +1,255 @@ +#-*-coding:iso-8859-1-*- +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +__doc__ = """ + Informations sur le code et la plateforme, et mise à jour des chemins + + La classe "PlatformInfo" permet de récupérer les informations générales sur + le code et la plateforme sous forme de strings, ou d'afficher directement + les informations disponibles par les méthodes. L'impression directe d'un + objet de cette classe affiche les informations minimales. Par exemple : + print PlatformInfo() + print PlatformInfo().getVersion() + created = PlatformInfo().getDate() + + La classe "PathManagement" permet de mettre à jour les chemins système pour + ajouter les outils numériques, matrices... On l'utilise en instanciant + simplement cette classe, sans meme récupérer d'objet : + PathManagement() +""" +__author__ = "Jean-Philippe ARGAUD - Mars 2008" + +import os + +# ============================================================================== +class PlatformInfo: + """ + Rassemblement des informations sur le code et la plateforme + """ + def getName(self): + "Retourne le nom de l'application" + import version + return version.name + + def getVersion(self): + "Retourne le numéro de la version" + import version + return version.version + + def getDate(self): + "Retourne la date de création de la version" + import version + return version.date + + def getPythonVersion(self): + "Retourne la version de python utilisée" + import sys + return ".".join(map(str,sys.version_info[0:3])) + + def getNumpyVersion(self): + "Retourne la version de numpy utilisée" + import numpy.version + return numpy.version.version + + def getScipyVersion(self): + "Retourne la version de scipy utilisée" + import scipy.version + return scipy.version.version + + def getCurrentMemorySize(self): + "Retourne la taille mémoire courante utilisée" + return 1 + + def __str__(self): + import version + return "%s %s (%s)"%(version.name,version.version,version.date) + +# ============================================================================== +class PathManagement: + """ + Mise à jour du path système pour les répertoires d'outils + """ + def __init__(self): + import os, sys + parent = os.path.abspath(os.path.join(os.path.dirname(__file__),"..")) + self.__paths = {} + self.__paths["daExternals"] = os.path.join(parent,"daExternals") + self.__paths["daMatrices"] = os.path.join(parent,"daMatrices") + self.__paths["daNumerics"] = os.path.join(parent,"daNumerics") + # + for v in self.__paths.values(): + sys.path.insert(0, v ) + # + # Conserve en unique exemplaire chaque chemin + sys.path = list(set(sys.path)) + del parent + + def getpaths(self): + """ + Renvoie le dictionnaire des chemins ajoutés + """ + return self.__paths + +# ============================================================================== +class SystemUsage: + """ + Permet de récupérer les différentes tailles mémoires du process courant + """ + # + # Le module resource renvoie 0 pour les tailles mémoire. 
On utilise donc + # plutôt : http://code.activestate.com/recipes/286222/ et les infos de + # http://www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/en-US/Reference_Guide/s2-proc-meminfo.html + # + _proc_status = '/proc/%d/status' % os.getpid() + _memo_status = '/proc/meminfo' + _scale = { + 'o': 1.0, + 'ko': 1024.0, 'mo': 1024.0*1024.0, + 'Ko': 1024.0, 'Mo': 1024.0*1024.0, + 'B': 1.0, + 'kB': 1024.0, 'mB': 1024.0*1024.0, + 'KB': 1024.0, 'MB': 1024.0*1024.0, + } + _max_mem = 0 + _max_rss = 0 + _max_sta = 0 + # + def _VmA(self, VmKey, unit): + try: + t = open(self._memo_status) + v = t.read() + t.close() + except: + return 0.0 # non-Linux? + i = v.index(VmKey) # get VmKey line e.g. 'VmRSS: 9999 kB\n ...' + v = v[i:].split(None, 3) # whitespace + if len(v) < 3: + return 0.0 # invalid format? + # convert Vm value to bytes + mem = float(v[1]) * self._scale[v[2]] + return mem / self._scale[unit] + # + def getAvailablePhysicalMemory(self, unit="o"): + "Renvoie la mémoire physique utilisable en octets" + return self._VmA('MemTotal:', unit) + # + def getAvailableSwapMemory(self, unit="o"): + "Renvoie la mémoire swap utilisable en octets" + return self._VmA('SwapTotal:', unit) + # + def getAvailableMemory(self, unit="o"): + "Renvoie la mémoire totale (physique+swap) utilisable en octets" + return self._VmA('MemTotal:', unit) + self._VmA('SwapTotal:', unit) + # + def getUsableMemory(self, unit="o"): + """Renvoie la mémoire utilisable en octets + Rq : il n'est pas sûr que ce décompte soit juste... + """ + return self._VmA('MemFree:', unit) + self._VmA('SwapFree:', unit) + \ + self._VmA('Cached:', unit) + self._VmA('SwapCached:', unit) + # + def _VmB(self, VmKey, unit): + try: + t = open(self._proc_status) + v = t.read() + t.close() + except: + return 0.0 # non-Linux? + i = v.index(VmKey) # get VmKey line e.g. 'VmRSS: 9999 kB\n ...' + v = v[i:].split(None, 3) # whitespace + if len(v) < 3: + return 0.0 # invalid format? 
+ # convert Vm value to bytes + mem = float(v[1]) * self._scale[v[2]] + return mem / self._scale[unit] + # + def getUsedMemory(self, unit="o"): + "Renvoie la mémoire totale utilisée en octets" + mem = self._VmB('VmSize:', unit) + self._max_mem = max(self._max_mem, mem) + return mem + # + def getUsedResident(self, unit="o"): + "Renvoie la mémoire résidente utilisée en octets" + mem = self._VmB('VmRSS:', unit) + self._max_rss = max(self._max_rss, mem) + return mem + # + def getUsedStacksize(self, unit="o"): + "Renvoie la taille du stack utilisé en octets" + mem = self._VmB('VmStk:', unit) + self._max_sta = max(self._max_sta, mem) + return mem + # + def getMaxUsedMemory(self): + "Renvoie la mémoire totale maximale mesurée" + return self._max_mem + # + def getMaxUsedResident(self): + "Renvoie la mémoire résidente maximale mesurée" + return self._max_rss + # + def getMaxUsedStacksize(self): + "Renvoie la mémoire du stack maximale mesurée" + return self._max_sta + +# ============================================================================== +if __name__ == "__main__": + print '\n AUTODIAGNOSTIC \n' + + print PlatformInfo() + print + p = PlatformInfo() + print "Les caractéristiques détaillées des applications et outils sont :" + print " - Application.......:",p.getName() + print " - Version...........:",p.getVersion() + print " - Date Application..:",p.getDate() + print " - Python............:",p.getPythonVersion() + print " - Numpy.............:",p.getNumpyVersion() + print " - Scipy.............:",p.getScipyVersion() + print + + p = PathManagement() + print "Les chemins ajoutés au système pour des outils :" + for k,v in p.getpaths().items(): + print " %12s : %s"%(k,os.path.basename(v)) + print + + m = SystemUsage() + print "La mémoire disponible est la suivante :" + print " - mémoire totale....: %4.1f Mo"%m.getAvailableMemory("Mo") + print " - mémoire physique..: %4.1f Mo"%m.getAvailablePhysicalMemory("Mo") + print " - mémoire swap......: %4.1f Mo"%m.getAvailableSwapMemory("Mo") + print " - utilisable........: %4.1f Mo"%m.getUsableMemory("Mo") + print "L'usage mémoire de cette exécution est le suivant :" + print " - mémoire totale....: %4.1f Mo"%m.getUsedMemory("Mo") + print " - mémoire résidente.: %4.1f Mo"%m.getUsedResident("Mo") + print " - taille de stack...: %4.1f Mo"%m.getUsedStacksize("Mo") + print "Création d'un objet range(1000000) et mesure mémoire" + x = range(1000000) + print " - mémoire totale....: %4.1f Mo"%m.getUsedMemory("Mo") + print "Destruction de l'objet et mesure mémoire" + del x + print " - mémoire totale....: %4.1f Mo"%m.getUsedMemory("Mo") + print "L'usage mémoire maximal de cette exécution est le suivant :" + print " - mémoire totale....: %4.1f Mo"%m.getMaxUsedMemory() + print " - mémoire résidente.: %4.1f Mo"%m.getMaxUsedResident() + print " - taille de stack...: %4.1f Mo"%m.getMaxUsedStacksize() + print diff --git a/src/daComposant/daCore/version.py b/src/daComposant/daCore/version.py new file mode 100644 index 0000000..7128d1a --- /dev/null +++ b/src/daComposant/daCore/version.py @@ -0,0 +1,23 @@ +#-*-coding:iso-8859-1-*- +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. 
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+#
+name    = "Data Assimilation Package"
+version = "0.2.0"
+date    = "lundi 23 septembre 2009, 11:11:11 (UTC+0200)"
diff --git a/src/daComposant/daDiagnostics/CompareMeanDependantVectors.py b/src/daComposant/daDiagnostics/CompareMeanDependantVectors.py
new file mode 100644
index 0000000..d2b0dc1
--- /dev/null
+++ b/src/daComposant/daDiagnostics/CompareMeanDependantVectors.py
@@ -0,0 +1,118 @@
+#-*-coding:iso-8859-1-*-
+#
+# Copyright (C) 2008-2009 EDF R&D
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+#
+__doc__ = """
+    Diagnostic qui effectue le test d egalite des moyennes de 2 vecteurs
+    dependants au sens du test de Student.
+    Ce diagnostic utilise le calcul de la p-value pour le test de Student
+    pour 2 vecteurs dependants.
+    En input : la tolerance
+    En output : le resultat du diagnostic est une reponse booleenne au test :
+                True si les moyennes sont egales au sens du Test de Student
+                False dans le cas contraire.
+"""
+__author__ = "Sophie RICCI - Octobre 2008"
+
+import sys ; sys.path.insert(0, "../daCore")
+
+import numpy
+import Persistence
+from BasicObjects import Diagnostic
+from ComputeStudent import DependantVectors
+import logging
+
+# ==============================================================================
+class ElementaryDiagnostic(Diagnostic,Persistence.OneScalar):
+    """
+    Diagnostic qui effectue le test d egalite des moyennes de 2 vecteurs
+    dependants au sens du test de Student.
+    Ce diagnostic utilise le calcul de la p-value pour le test de Student
+    pour 2 vecteurs dependants.
+    En input : la tolerance
+    En output : le resultat du diagnostic est une reponse booleenne au test :
+                True si les moyennes sont egales au sens du Test de Student
+                False dans le cas contraire.
+    """
+    def __init__(self, name="", unit="", basetype = None, parameters = {} ):
+        Diagnostic.__init__(self, name, parameters)
+        Persistence.OneScalar.__init__( self, name, unit, basetype = bool)
+        if not self.parameters.has_key("tolerance"):
+            raise ValueError("A parameter named \"tolerance\" is required.")
+
+    def formula(self, V1, V2):
+        """
+        Effectue le calcul de la p-value de Student pour deux vecteurs.
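+
+        Règle de décision appliquée ci-dessous (esquisse) : la réponse est
+        False si l'aire renvoyée par DependantVectors est strictement
+        inférieure à 100 fois la tolérance, et True sinon.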
+ """ + [aire, Q, reponse, message] = DependantVectors( + vector1 = V1, + vector2 = V2, + tolerance = self.parameters["tolerance"] ) + logging.info( message ) + answerStudentTest = False + if (aire < (100.*self.parameters["tolerance"])) : + answerStudentTest = False + else: + answerStudentTest = True + return answerStudentTest + + def calculate(self, vector1 = None, vector2 = None, step = None): + """ + Active la formule de calcul + """ + if (vector1 is None) or (vector2 is None) : + raise ValueError("Two vectors must be given to calculate the Student value") + V1 = numpy.array(vector1) + V2 = numpy.array(vector2) + if (V1.size < 1) or (V2.size < 1): + raise ValueError("The given vectors must not be empty") + if V1.size != V2.size: + raise ValueError("The two given vectors must have the same size, or the vector types are incompatible") + value = self.formula( V1, V2 ) + self.store( value = value, step = step) + +# ============================================================================== +if __name__ == "__main__": + print '\n AUTODIAGNOSTIC \n' + + print " Test d'égalite des moyennes au sens de Student pour deux vecteurs" + print " dépendants." + print + # + # Initialisation des inputs et appel du diagnostic + # -------------------------------------------------------------------- + tolerance = 0.05 + D = ElementaryDiagnostic("ComputeMeanStudent_DependVect", parameters = { + "tolerance":tolerance, + }) + # + # Tirage de l'echantillon aleatoire + # -------------------------------------------------------------------- + x1 = numpy.array(([-0.23262176, 1.36065207, 0.32988102, 0.24400551, -0.66765848, -0.19088483, -0.31082575, 0.56849814, 1.21453443, 0.99657516])) + x2 = numpy.array(([-0.23, 1.36, 0.32, 0.24, -0.66, -0.19, -0.31, 0.56, 1.21, 0.99])) + # + # Calcul + # -------------------------------------------------------------------- + D.calculate(x1, x2) + # + if D.valueserie(0) : + print " L'hypothèse d'égalité des moyennes est valide." + print + else : + raise ValueError("The egality of the means is NOT valid") diff --git a/src/daComposant/daDiagnostics/CompareMeanIndependantVectorsDifferentVariance.py b/src/daComposant/daDiagnostics/CompareMeanIndependantVectorsDifferentVariance.py new file mode 100644 index 0000000..15e7865 --- /dev/null +++ b/src/daComposant/daDiagnostics/CompareMeanIndependantVectorsDifferentVariance.py @@ -0,0 +1,117 @@ +#-*-coding:iso-8859-1-*- +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +__doc__ = """ + Diagnostic qui effectue le test d egalite des moyennes de 2 vecteurs + independants supposes de variances differentes au sens du test de Student. 
+ En input : la tolerance + En output : le resultat du diagnostic est une reponse booleenne au test : + True si les moyennes sont egales au sens du Test de Student + False dans le cas contraire. +""" +__author__ = "Sophie RICCI - Octobre 2008" + +import sys ; sys.path.insert(0, "../daCore") + +import numpy +import Persistence +from BasicObjects import Diagnostic +from ComputeStudent import IndependantVectorsDifferentVariance +import logging + +# ============================================================================== +class ElementaryDiagnostic(Diagnostic,Persistence.OneScalar): + """ + Diagnostic qui effectue le test d egalite des moyennes de 2 vecteurs + independants supposes de variances differentes au sens du test de Student. + En input : la tolerance + En output : le resultat du diagnostic est une reponse booleenne au test : + True si les moyennes sont egales au sens du Test de Student + False dans le cas contraire. + """ + def __init__(self, name="", unit="", basetype = None, parameters = {} ): + Diagnostic.__init__(self, name, parameters) + Persistence.OneScalar.__init__( self, name, unit, basetype = bool) + if not self.parameters.has_key("tolerance"): + raise ValueError("A parameter named \"tolerance\" is required.") + + def formula(self, V1, V2): + """ + Effectue le calcul de la p-value de Student pour deux vecteurs + independants supposes de variances differentes. + """ + [aire, Q, reponse, message] = IndependantVectorsDifferentVariance( + vector1 = V1, + vector2 = V2, + tolerance = self.parameters["tolerance"], + ) + logging.info( message ) + answerStudentTest = False + if (aire < (100.*self.parameters["tolerance"])) : + answerStudentTest = False + else: + answerStudentTest = True + return answerStudentTest + + def calculate(self, vector1 = None, vector2 = None, step = None): + """ + Active la formule de calcul + """ + if (vector1 is None) or (vector2 is None) : + raise ValueError("Two vectors must be given to calculate the Student value") + V1 = numpy.array(vector1) + V2 = numpy.array(vector2) + if (V1.size < 1) or (V2.size < 1): + raise ValueError("The given vectors must not be empty") + if V1.size != V2.size: + raise ValueError("The two given vectors must have the same size, or the vector types are incompatible") + value = self.formula( V1, V2 ) + self.store( value = value, step = step) + +# ============================================================================== +if __name__ == "__main__": + print '\n AUTODIAGNOSTIC \n' + + print " Test d'égalite des moyennes au sens de Student pour deux vecteurs" + print " indépendants supposés de variances différentes." + print + # + # Initialisation des inputs et appel du diagnostic + # -------------------------------------------------------------------- + tolerance = 0.05 + D = ElementaryDiagnostic("IndependantVectorsDifferentVariance", parameters = { + "tolerance":tolerance, + }) + # + # Tirage de l'echantillon aleatoire + # -------------------------------------------------------------------- + x1 = numpy.array(([-0.23262176, 1.36065207, 0.32988102, 0.24400551, -0.66765848, -0.19088483, -0.31082575, 0.56849814, 1.21453443, 0.99657516])) + x2 = numpy.array(([-0.23, 1.36, 0.32, 0.24, -0.66, -0.19, -0.31, 0.56, 1.21, 0.99])) + # + # Calcul + # -------------------------------------------------------------------- + D.calculate(x1, x2) +# + if D.valueserie(0) : + print " L'hypothèse d'égalité des moyennes est valide." 
+ print + else : + raise ValueError("The egality of the means is NOT valid") + diff --git a/src/daComposant/daDiagnostics/CompareMeanIndependantVectorsEqualVariance.py b/src/daComposant/daDiagnostics/CompareMeanIndependantVectorsEqualVariance.py new file mode 100644 index 0000000..927cba2 --- /dev/null +++ b/src/daComposant/daDiagnostics/CompareMeanIndependantVectorsEqualVariance.py @@ -0,0 +1,116 @@ +#-*-coding:iso-8859-1-*- +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +__doc__ = """ + Diagnostic qui effectue le test d egalite des moyennes de 2 vecteurs + independants supposes de variances egales au sens du test de Student. + En input : la tolerance + En output : le resultat du diagnostic est une reponse booleenne au test : + True si les moyennes sont egales au sens du Test de Student + False dans le cas contraire. +""" +__author__ = "Sophie RICCI - Octobre 2008" + +import sys ; sys.path.insert(0, "../daCore") + +import numpy +import Persistence +from BasicObjects import Diagnostic +from ComputeStudent import IndependantVectorsEqualVariance +import logging + +# ============================================================================== +class ElementaryDiagnostic(Diagnostic,Persistence.OneScalar): + """ + Diagnostic qui effectue le test d egalite des moyennes de 2 vecteurs independants supposes de variances egales au sens du test de Student. + En input : la tolerance + En output : le resultat du diagnostic est une reponse booleenne au test : + True si les moyennes sont egales au sens du Test de Student + False dans le cas contraire. + """ + def __init__(self, name="", unit="", basetype = None, parameters = {} ): + Diagnostic.__init__(self, name, parameters) + Persistence.OneScalar.__init__( self, name, unit, basetype = bool) + if not self.parameters.has_key("tolerance"): + raise ValueError("A parameter named \"tolerance\" is required.") + + def formula(self, V1, V2): + """ + Effectue le calcul de la p-value de Student pour deux vecteurs + independants supposes de variances egales. 
+ """ + [aire, Q, reponse, message] = IndependantVectorsEqualVariance( + vector1 = V1, + vector2 = V2, + tolerance = self.parameters["tolerance"], + ) + logging.info( message ) + answerStudentTest = False + if (aire < (100.*self.parameters["tolerance"])) : + answerStudentTest = False + else: + answerStudentTest = True + return answerStudentTest + + def calculate(self, vector1 = None, vector2 = None, step = None): + """ + Active la formule de calcul + """ + if (vector1 is None) or (vector2 is None) : + raise ValueError("Two vectors must be given to calculate the Student value") + V1 = numpy.array(vector1) + V2 = numpy.array(vector2) + if (V1.size < 1) or (V2.size < 1): + raise ValueError("The given vectors must not be empty") + if V1.size != V2.size: + raise ValueError("The two given vectors must have the same size, or the vector types are incompatible") + value = self.formula( V1, V2 ) + self.store( value = value, step = step) + +# ============================================================================== +if __name__ == "__main__": + print '\n AUTODIAGNOSTIC \n' + + print " Test d'égalite des moyennes au sens de Student pour deux vecteurs" + print " indépendants supposés de variances égales" + print + # + # Initialisation des inputs et appel du diagnostic + # -------------------------------------------------------------------- + tolerance = 0.05 + D = ElementaryDiagnostic("ComputeMeanStudent_IndepVect_EgalVar", parameters = { + "tolerance":tolerance, + }) + # + # Tirage de l'echantillon aleatoire + # -------------------------------------------------------------------- + x1 = numpy.array(([-0.23262176, 1.36065207, 0.32988102, 0.24400551, -0.66765848, -0.19088483, -0.31082575, 0.56849814, 1.21453443, 0.99657516])) + x2 = numpy.array(([-0.23, 1.36, 0.32, 0.24, -0.66, -0.19, -0.31, 0.56, 1.21, 0.99])) + # + # Calcul + # -------------------------------------------------------------------- + D.calculate(x1, x2) + # + if D.valueserie(0) : + print " L'hypothèse d'égalité des moyennes est valide." + print + else : + raise ValueError("The egality of the means is NOT valid") + diff --git a/src/daComposant/daDiagnostics/CompareVarianceFisher.py b/src/daComposant/daDiagnostics/CompareVarianceFisher.py new file mode 100644 index 0000000..0fc3a96 --- /dev/null +++ b/src/daComposant/daDiagnostics/CompareVarianceFisher.py @@ -0,0 +1,119 @@ +#-*-coding:iso-8859-1-*- +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +__doc__ = """ + Diagnostic qui compare les variances de 2 vecteurs au sens de Fisher à + l'aide du calcul de la p-value pour le test de Fisher. + - entrée : la tolérance (tolerance) sous forme de paramètres dans le + dictionnaire Par, et les deux vecteurs d'échantillons. 
+ - sortie : le résultat du diagnostic est une réponse booléenne au test : + True si l'égalite des variances est valide au sens du test de Fisher, + False dans le cas contraire +""" +__author__ = "Sophie RICCI - Juillet 2008" + +import sys ; sys.path.insert(0, "../daCore") + +import numpy +import Persistence +from BasicObjects import Diagnostic +from ComputeFisher import ComputeFisher +import logging + +# ============================================================================== +class ElementaryDiagnostic(Diagnostic,Persistence.OneScalar): + """ + Diagnostic qui compare les variances de 2 vecteurs au sens de Fisher à + l'aide du calcul de la p-value pour le test de Fisher. + - entrée : la tolérance (tolerance) sous forme de paramètres dans le + dictionnaire Par, et les deux vecteurs d'échantillons. + - sortie : le résultat du diagnostic est une réponse booléenne au test : + True si l'égalite des variances est valide au sens du test de Fisher, + False dans le cas contraire + """ + def __init__(self, name="", unit="", basetype = None, parameters = {} ): + Diagnostic.__init__(self, name, parameters) + Persistence.OneScalar.__init__( self, name, unit, basetype = bool) + if not self.parameters.has_key("tolerance"): + raise ValueError("A parameter named \"tolerance\" is required.") + + def formula(self, V1, V2): + """ + Effectue le test de Fisher avec la p-value pour 2 vecteurs + """ + [aire, f, reponse, message] = ComputeFisher( + vector1 = V1, + vector2 = V2, + tolerance = self.parameters["tolerance"], + ) + answerKhisquareTest = False + if (aire < (100.*self.parameters["tolerance"])) : + answerKhisquareTest = False + else: + answerKhisquareTest = True + logging.info( message ) + # + return answerKhisquareTest + + def calculate(self, vector1 = None, vector2 = None, step = None): + """ + Active la formule de calcul + """ + if (vector1 is None) or (vector2 is None) : + raise ValueError("Two vectors must be given to calculate the Fisher p-value") + V1 = numpy.array(vector1) + V2 = numpy.array(vector2) + if (V1.size < 1) or (V2.size < 1): + raise ValueError("The given vectors must not be empty") + if V1.size != V2.size: + raise ValueError("The two given vectors must have the same size, or the vector types are incompatible") + # + value = self.formula( V1, V2 ) + # + self.store( value = value, step = step) + +# ============================================================================== +if __name__ == "__main__": + print '\n AUTODIAGNOSTIC \n' + + print " Test d'égalite des variances pour deux vecteurs de taille 10" + print + # + # Initialisation des inputs et appel du diagnostic + # -------------------------------------------------------------------- + tolerance = 0.05 + D = ElementaryDiagnostic("CompareVarianceFisher", parameters = { + "tolerance":tolerance, + }) + # + # Tirage de l'echantillon aleatoire + # -------------------------------------------------------------------- + x1 = numpy.array(([-0.23262176, 1.36065207, 0.32988102, 0.24400551, -0.66765848, -0.19088483, -0.31082575, 0.56849814, 1.21453443, 0.99657516])) + x2 = numpy.array(([-0.23, 1.36, 0.32, 0.24, -0.66, -0.19, -0.31, 0.56, 1.21, 0.99])) + # + # Calcul + # -------------------------------------------------------------------- + D.calculate(x1, x2) + # + if D.valueserie(0) : + print " L'hypothèse d'égalité des deux variances est correcte." 
+ print + else : + raise ValueError("L'hypothèse d'égalité des deux variances est fausse.") diff --git a/src/daComposant/daDiagnostics/ComputeBiais.py b/src/daComposant/daDiagnostics/ComputeBiais.py new file mode 100644 index 0000000..9c05425 --- /dev/null +++ b/src/daComposant/daDiagnostics/ComputeBiais.py @@ -0,0 +1,89 @@ +#-*-coding:iso-8859-1-*- +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +__doc__ = """ + Calcul du biais (i.e. la moyenne) à chaque pas. Ce diagnostic très simple + est présent pour rappeller à l'utilisateur de l'assimilation qu'il faut + qu'il vérifie le biais de ses erreurs en particulier. +""" +__author__ = "Sophie RICCI - Aout 2008" + +import sys ; sys.path.insert(0, "../daCore") + +import numpy +import Persistence +from BasicObjects import Diagnostic +from AssimilationStudy import AssimilationStudy + +# ============================================================================== +class ElementaryDiagnostic(Diagnostic,Persistence.OneScalar): + def __init__(self, name = "", unit = "", basetype = None, parameters = {}): + Diagnostic.__init__(self, name, parameters) + Persistence.OneScalar.__init__( self, name, unit, basetype = float ) + + def _formula(self, V): + """ + Calcul du biais, qui est simplement la moyenne du vecteur + """ + biais = V.mean() + # + return biais + + def calculate(self, vector = None, step = None): + """ + Teste les arguments, active la formule de calcul et stocke le résultat + """ + if vector is None: + raise ValueError("One vector must be given to compute biais") + V = numpy.array(vector) + if V.size < 1: + raise ValueError("The given vector must not be empty") + # + value = self._formula( V) + # + self.store( value = value, step = step ) + +#=============================================================================== +if __name__ == "__main__": + print '\n AUTODIAGNOSTIC \n' + # + # Instanciation de l'objet diagnostic + # ----------------------------------- + D = ElementaryDiagnostic("Mon ComputeBiais") + # + # Tirage d un vecteur choisi + # -------------------------- + x = numpy.matrix(([3., 4., 5.])) + print " Le vecteur de type 'matrix' choisi est..:", x + print " Le biais attendu de ce vecteur est......:", x.mean() + # + D.calculate( vector = x) + print " Le biais obtenu de ce vecteur est.......:", D.valueserie(0) + print + # + # Tirage d un vecteur choisi + # -------------------------- + x = numpy.array(range(11)) + print " Le vecteur de type 'array' choisi est...:", x + print " Le biais attendu de ce vecteur est......:", x.mean() + # + D.calculate( vector = x) + print " Le biais obtenu de ce vecteur est.......:", D.valueserie(1) + print diff --git a/src/daComposant/daDiagnostics/ComputeCostFunction.py b/src/daComposant/daDiagnostics/ComputeCostFunction.py new file mode 
100644 index 0000000..9504abf --- /dev/null +++ b/src/daComposant/daDiagnostics/ComputeCostFunction.py @@ -0,0 +1,141 @@ +#-*-coding:iso-8859-1-*- +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +__doc__ = """ + Calcul de la fonction coût +""" +__author__ = "Sophie RICCI - Octobre 2008" + +import sys ; sys.path.insert(0, "../daCore") + +import numpy +import Persistence +from BasicObjects import Diagnostic +from AssimilationStudy import AssimilationStudy +import logging + +# ============================================================================== +class ElementaryDiagnostic(Diagnostic,Persistence.OneScalar): + def __init__(self, name = "", unit = "", basetype = None, parameters = {}): + Diagnostic.__init__(self, name) + Persistence.OneScalar.__init__( self, name, unit, basetype = float) + + def _formula(self, X, HX, Xb, Y, R, B): + """ + Calcul de la fonction cout + """ + Jb = 1./2. * (X - Xb).T * B.I * (X - Xb) + logging.info( "Partial cost function : Jb = %s"%Jb ) + # + Jo = 1./2. 
* (Y - HX).T * R.I * (Y - HX)
+        logging.info( "Partial cost function : Jo = %s"%Jo )
+        #
+        J = Jb + Jo
+        logging.info( "Total cost function : J = Jo + Jb = %s"%J )
+        return J
+
+    def calculate(self, x = None, Hx = None, xb = None, yo = None, R = None, B = None , step = None):
+        """
+        Teste les arguments, active la formule de calcul et stocke le résultat
+        """
+        if (x is None) or (xb is None) or (yo is None) :
+            raise ValueError("Vectors x, xb and yo must be given to compute J")
+        if hasattr(numpy.matrix(x),'A1') :
+            X = numpy.matrix(x).A1
+        if hasattr(numpy.matrix(xb),'A1') :
+            Xb = numpy.matrix(xb).A1
+        if hasattr(numpy.matrix(yo),'A1') :
+            Y = numpy.matrix(yo).A1
+        if (Hx is None ) :
+            raise ValueError("The Hx vector must be given to compute J")
+        HX = Hx.A1
+        if (B is None ) or (R is None ):
+            raise ValueError("The matrices B and R must be given")
+        B = numpy.matrix(B)
+        R = numpy.matrix(R)
+        #
+        value = self._formula(X, HX, Xb, Y, R, B)
+        #
+        self.store( value = value, step = step )
+
+#===============================================================================
+if __name__ == "__main__":
+    print "\nAUTOTEST\n"
+    #
+    D = ElementaryDiagnostic("Ma fonction cout")
+    #
+    # Vecteur de type array
+    # ---------------------
+    x  = numpy.array([1., 2.])
+    xb = numpy.array([2., 2.])
+    yo = numpy.array([5., 6.])
+    H  = numpy.matrix(numpy.identity(2))
+    Hx = H*x
+    Hx = Hx.T
+    B  = numpy.matrix(numpy.identity(2))
+    R  = numpy.matrix(numpy.identity(2))
+    #
+    D.calculate( x = x, Hx = Hx, xb = xb, yo = yo, R = R, B = B)
+    print "Le vecteur x choisi est...:", x
+    print "L'ebauche xb choisie est...:", xb
+    print "Le vecteur d'observation est...:", yo
+    print "B = ", B
+    print "R = ", R
+    print "La fonction cout J vaut ...: %.2e"%D.valueserie(0)
+
+    if (abs(D.valueserie(0) - 16.5) > 1.e-6) :
+        raise ValueError("The computation of the cost function is NOT correct")
+    else :
+        print "The computation of the cost function is OK"
+    print
+    #
+    # float simple
+    # ------------
+    x  = 1.
+    xb = 2.
+    yo = 5.
+    H  = numpy.matrix(numpy.identity(1))
+    Hx = numpy.dot(H,x)
+    Hx = Hx.T
+    B  = 1.
+    R  = 1.
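+    # Vérification indicative : Jb = 0.5*(1.-2.)**2 = 0.5 et
+    # Jo = 0.5*(5.-1.)**2 = 8.0, soit J = Jb + Jo = 8.5, la valeur
+    # testée ci-dessous.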
+ # + D.calculate( x = x, Hx = Hx, xb = xb, yo = yo, R = R, B = B) + print "Le vecteur x choisi est...:", x + print "L ebauche xb choisie est...:", xb + print "Le vecteur d observation est...:", yo + print "B = ", B + print "R = ", R + print "La fonction cout J vaut ...: %.2e"%D.valueserie(1) + if (abs(D.valueserie(1) - 8.5) > 1.e-6) : + raise ValueError("The computation of the cost function is NOT correct") + else : + print "The computation of the cost function is OK" + print + diff --git a/src/daComposant/daDiagnostics/ComputeCostFunctionLin.py b/src/daComposant/daDiagnostics/ComputeCostFunctionLin.py new file mode 100644 index 0000000..550be82 --- /dev/null +++ b/src/daComposant/daDiagnostics/ComputeCostFunctionLin.py @@ -0,0 +1,119 @@ +#-*-coding:iso-8859-1-*- +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +__doc__ = """ + Calcul de la fonction coût avec Hlin + HX = Hxb + Hlin dx +""" +__author__ = "Sophie RICCI - Octobre 2008" + +import sys ; sys.path.insert(0, "../daCore") + +import numpy +import Persistence +from BasicObjects import Diagnostic +from AssimilationStudy import AssimilationStudy +import logging + +# ============================================================================== +class ElementaryDiagnostic(Diagnostic,Persistence.OneScalar): + def __init__(self, name = "", unit = "", basetype = None, parameters = {}): + Diagnostic.__init__(self, name) + Persistence.OneScalar.__init__( self, name, unit, basetype = float) + self.__name = str( name ) + + def _formula(self, X = None, dX = None, Hlin = None, Xb=None, HXb = None, Y=None, R=None, B=None): + + """ + Calcul de la fonction cout + """ + HX = HXb + Hlin.T * dX + if hasattr(HX, 'A1') : + HX = HX.A1 + # + Jb = 1./2. * (X - Xb).T * B.I * (X - Xb) + logging.info( "Partial cost function : Jb = %s"%Jb ) + # + Jo = 1./2. 
* (Y - HX).T * R.I * (Y - HX) + logging.info( "Partial cost function : Jo = %s"%Jo ) + # + J = Jb + Jo + logging.info( "Total cost function : J = Jo + Jb = %s"%J ) + return J + + def calculate(self, x = None, dx = None, Hlin = None, xb = None, Hxb = None, yo = None, R = None, B = None , step = None): + """ + Teste les arguments, active la formule de calcul et stocke le résultat + """ + if (x is None) or (xb is None) or (yo is None) or (dx is None): + raise ValueError("Vectors x, dx, xb and yo must be given to compute J") + dX = dx + if hasattr(numpy.matrix(x), 'A1') : + X = numpy.matrix(x).A1 + if hasattr(numpy.matrix(xb), 'A1') : + Xb = numpy.matrix(xb).A1 + if hasattr(numpy.matrix(yo), 'A1') : + Y = numpy.matrix(yo).A1 + B = numpy.matrix(B) + R = numpy.matrix(R) + if (Hlin is None ) : + raise ValueError("HlinT vector must be given") + if (Hxb is None ) : + raise ValueError("The given vector must be given") + HXb = Hxb + if (B is None ) or (R is None ): + raise ValueError("The matrices B and R must be given") + # + value = self._formula(X, dX, Hlin, Xb, HXb, Y, R, B) + # + self.store( value = value, step = step ) + +#=============================================================================== +if __name__ == "__main__": + print "\nAUTOTEST\n" + # + D = ElementaryDiagnostic("Ma fonction cout") + # + # Vecteur de type array + # --------------------- + x = numpy.array([1., 2.]) + dx = numpy.array([0.1, 0.2]) + xb = numpy.array([2., 2.]) + yo = numpy.array([5., 6.]) + Hlin = numpy.matrix(numpy.identity(2)) + Hxb = Hlin *xb + Hxb = Hxb.T + Hxb = Hxb.A1 + B = numpy.matrix(numpy.identity(2)) + R = numpy.matrix(numpy.identity(2)) + # + D.calculate( x = x, dx = dx, Hlin = Hlin, xb = xb, Hxb = Hxb, yo = yo, R = R, B = B) + print "Le vecteur x choisi est...:", x + print "L ebauche xb choisie est...:", xb + print "Le vecteur d observation est...:", yo + print "B = ", B + print "R = ", R + print "La fonction cout J vaut ...: %.2e"%D.valueserie(0) + # + if (abs(D.valueserie(0) - 11.925) > 1.e-6) : + raise ValueError("The computation of the cost function is NOT correct") + else : + print "The computation of the cost function is OK" + print diff --git a/src/daComposant/daDiagnostics/ComputeVariance.py b/src/daComposant/daDiagnostics/ComputeVariance.py new file mode 100644 index 0000000..22ae8e6 --- /dev/null +++ b/src/daComposant/daDiagnostics/ComputeVariance.py @@ -0,0 +1,90 @@ +#-*-coding:iso-8859-1-*- +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +__doc__ = """ + Calcul de la variance d'un vecteur à chaque pas. Ce diagnostic très simple + est présent pour rappeller à l'utilisateur de l'assimilation qu'il faut + qu'il vérifie les variances de ses écarts en particulier. 
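+
+    Remarque indicative : la variance calculée ici est celle de numpy
+    ("var"), avec une division par la taille N du vecteur et non par N-1.
+    Par exemple, pour le vecteur [3., 4., 5.], la valeur renvoyée est 2./3.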
+""" +__author__ = "Jean-Philippe ARGAUD - Septembre 2008" + +import sys ; sys.path.insert(0, "../daCore") + +import numpy +import Persistence +from BasicObjects import Diagnostic +from AssimilationStudy import AssimilationStudy + +# ============================================================================== +class ElementaryDiagnostic(Diagnostic,Persistence.OneScalar): + def __init__(self, name = "", unit = "", basetype = None, parameters = {}): + Diagnostic.__init__(self, name, parameters) + Persistence.OneScalar.__init__( self, name, unit, basetype = float) + + def _formula(self, V): + """ + Calcul de la variance du vecteur en argument. Elle est faite avec une + division par la taille du vecteur. + """ + variance = V.var() + # + return variance + + def calculate(self, vector = None, step = None): + """ + Teste les arguments, active la formule de calcul et stocke le résultat + """ + if vector is None: + raise ValueError("One vector must be given to compute biais") + V = numpy.array(vector) + if V.size < 1: + raise ValueError("The given vector must not be empty") + # + value = self._formula( V) + # + self.store( value = value, step = step ) + +#=============================================================================== +if __name__ == "__main__": + print '\n AUTODIAGNOSTIC \n' + # + D = ElementaryDiagnostic("Ma variance") + # + # Vecteur de type matrix + # ---------------------- + x = numpy.matrix(([3., 4., 5.])) + print " Le vecteur de type 'matrix' choisi est..:", x + print " Le moyenne de ce vecteur est............:", x.mean() + print " La variance attendue de ce vecteur est..:", x.var() + # + D.calculate( vector = x) + print " La variance obtenue de ce vecteur est...:", D.valueserie(0) + print + # + # Vecteur de type array + # --------------------- + x = numpy.array(range(11)) + print " Le vecteur de type 'array' choisi est...:", x + print " Le moyenne de ce vecteur est............:", x.mean() + print " La variance attendue de ce vecteur est..:", x.var() + # + D.calculate( vector = x) + print " La variance obtenue de ce vecteur est...:", D.valueserie(1) + print diff --git a/src/daComposant/daDiagnostics/GaussianAdequation.py b/src/daComposant/daDiagnostics/GaussianAdequation.py new file mode 100644 index 0000000..c027659 --- /dev/null +++ b/src/daComposant/daDiagnostics/GaussianAdequation.py @@ -0,0 +1,159 @@ +#-*-coding:iso-8859-1-*- +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +__doc__ = """ + Diagnostic qui effectue le test du Khi2 pour juger de l'adéquation entre + la distribution d'un échantillon et une distribution gaussienne dont la + moyenne et l'écart-type sont calculés sur l'échantillon. 
+ En input : la tolerance(tolerance) et le nombre de classes(nbclasse) + En output : Le resultat du diagnostic est une reponse booleenne au test : + True si l adequation a une distribution gaussienne est valide + au sens du test du Khi2, + False dans le cas contraire. +""" +__author__ = "Sophie RICCI - Juillet 2008" + +import sys ; sys.path.insert(0, "../daCore") + +import numpy +from numpy import random +import Persistence +from BasicObjects import Diagnostic +from ComputeKhi2 import ComputeKhi2_Gauss +import logging + +# ============================================================================== +class ElementaryDiagnostic(Diagnostic,Persistence.OneScalar): + """ + """ + def __init__(self, name="", unit="", basetype = None, parameters = {} ): + Diagnostic.__init__(self, name, parameters) + Persistence.OneScalar.__init__( self, name, unit, basetype = bool) + for key in ["tolerance", "dxclasse", "nbclasses"]: + if not self.parameters.has_key(key): + raise ValueError("A parameter named \"%s\" is required."%key) + + def formula(self, V): + """ + Effectue le calcul de la p-value pour un vecteur et une distribution + gaussienne et un nombre de classes donne en parametre du diagnostic. + """ + + [vectclasse, eftho, efobs, valeurKhi2, areaKhi2, message] = ComputeKhi2_Gauss( + vectorV = V, + dx = self.parameters["dxclasse"], + nbclasses = self.parameters["nbclasses"], + SuppressEmptyClasses = True) + + + logging.info( message ) + logging.info( "(si <%.2f %s on refuse effectivement l'adéquation)"%(100.*self.parameters["tolerance"],"%") ) + logging.info("vecteur des classes=%s"%numpy.size(vectclasse) ) + logging.info("valeurKhi2=%s"%valeurKhi2) + logging.info("areaKhi2=%s"%areaKhi2) + logging.info("tolerance=%s"%self.parameters["tolerance"]) + + if (areaKhi2 < (100.*self.parameters["tolerance"])) : + answerKhisquareTest = False + else: + answerKhisquareTest = True + logging.info( "La réponse au test est donc est %s"%answerKhisquareTest ) + return answerKhisquareTest + + def calculate(self, vector = None, step = None): + """ + Active la formule de calcul + """ + if vector is None: + raise ValueError("One vector must be given to calculate the Khi2 test") + V = numpy.array(vector) + if V.size < 1: + raise ValueError("The given vector must not be empty") + # + value = self.formula( V ) + # + self.store( value = value, step = step) + +# ============================================================================== +if __name__ == "__main__": + print "\n AUTODIAGNOSTIC \n" + + print " Test d adequation du khi-2 a une gaussienne pour un vecteur x" + print " connu de taille 1000, issu d'une distribution gaussienne normale" + print " en fixant la largeur des classes" + print + # + # Initialisation des inputs et appel du diagnostic + # ------------------------------------------------ + tolerance = 0.05 + dxclasse = 0.1 + D = ElementaryDiagnostic("AdequationGaussKhi2", parameters = { + "tolerance":tolerance, + "dxclasse":dxclasse, + "nbclasses":None, + }) + # + # Tirage de l'echantillon aleatoire + # --------------------------------- + numpy.random.seed(2490) + x = random.normal(50.,1.5,1000) + # + # Calcul + # ------ + D.calculate(x) + # + if D.valueserie(0) : + print " L'adequation a une distribution gaussienne est valide." 
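+        # (Lecture indicative : le diagnostic stocke True lorsque la p-value
+        # du Khi2, exprimée en pourcentage, est >= 100.*tolerance,
+        # c'est-à-dire lorsque l'hypothèse gaussienne n'est pas rejetée.)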
+        print
+    else :
+        raise ValueError("L'adequation a une distribution gaussienne n'est pas valide.")
+
+    print " Test d'adequation du khi-2 a une gaussienne pour un vecteur x"
+    print " connu de taille 1000, issu d'une distribution gaussienne normale,"
+    print " en fixant le nombre de classes"
+    print
+    #
+    # Initialisation des inputs et appel du diagnostic
+    # ------------------------------------------------
+    tolerance = 0.05
+    nbclasses = 70.
+    D = ElementaryDiagnostic("AdequationGaussKhi2", parameters = {
+        "tolerance":tolerance,
+        "dxclasse":None,
+        "nbclasses":nbclasses,
+        })
+    #
+    # Tirage de l'echantillon aleatoire
+    # ---------------------------------
+    numpy.random.seed(2490)
+    x = random.normal(50.,1.5,1000)
+    #
+    # Calcul
+    # ------
+    D.calculate(x)
+    #
+    if D.valueserie(0) :
+        print " L'adequation a une distribution gaussienne est valide."
+        print
+    else :
+        raise ValueError("L'adequation a une distribution gaussienne n'est pas valide.")
diff --git a/src/daComposant/daDiagnostics/HLinearity.py b/src/daComposant/daDiagnostics/HLinearity.py
new file mode 100644
index 0000000..9509de6
--- /dev/null
+++ b/src/daComposant/daDiagnostics/HLinearity.py
@@ -0,0 +1,143 @@
+#-*-coding:iso-8859-1-*-
+#
+# Copyright (C) 2008-2009 EDF R&D
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+#
+__doc__ = """
+    Diagnostic de test sur la validité de l'hypothèse de linéarité de
+    l'opérateur H entre xp et xm.
+
+    Pour calculer Hlin, on utilise un schéma de différences finies centrées à
+    2 points. On définit un dxparam tel que :
+        xp = xb + dxparam
+    et
+        xm = xb - dxparam
+    On calcule Hxp et Hxm pour obtenir Hlin. Hlin est utilisé dans le BLUE
+    pour caler un paramètre. La question importante est de choisir un dxparam
+    qui ne soit pas trop grand.
+
+    On veut vérifier ici que l'hypothèse de linéarité du modèle par rapport
+    au paramètre est valide sur l'intervalle [xm, xp] du paramètre. Pour cela,
+    on s'assure que l'on peut retrouver la valeur Hxb par les développements
+    de Taylor en xp et en xm. On calcule ainsi 2 estimations de Hxb, l'une à
+    partir de Hxm (notée Hx1) et l'autre à partir de Hxp (notée Hx2), que
+    l'on compare à la valeur calculée de Hxb. On s'intéresse ensuite à la
+    distance entre Hxb et ses estimations Hx1 et Hx2. Si cette distance est
+    inférieure à un seuil de tolérance, l'hypothèse est valide.
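+
+    Esquisse numérique minimale des deux estimations de Taylor utilisées
+    ci-dessous (opérateur H supposé linéaire, valeurs illustratives) :
+
+        import numpy
+        H  = numpy.array([2.])                 # opérateur linéaire supposé (pente)
+        xb, dxparam = numpy.array([3.]), 1.
+        Hxp, Hxm = H*(xb + dxparam), H*(xb - dxparam)
+        Hx1 = Hxm + H*dxparam                  # estimation de H(xb) depuis xm
+        Hx2 = Hxp - H*dxparam                  # estimation de H(xb) depuis xp
+        assert abs(Hx1 - H*xb) < 1.e-12 and abs(Hx2 - H*xb) < 1.e-12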
+""" +__author__ = "Sophie RICCI - Septembre 2008" + +import sys ; sys.path.insert(0, "../daCore") + +import numpy +import Persistence +from BasicObjects import Diagnostic +from RMS import ElementaryDiagnostic as RMS +from AssimilationStudy import AssimilationStudy + +# ============================================================================== +class ElementaryDiagnostic(Diagnostic,Persistence.OneScalar): + def __init__(self, name="", unit="", basetype = None, parameters = {} ): + Diagnostic.__init__(self, name, parameters) + Persistence.OneScalar.__init__( self, name, unit, basetype = bool) + if not self.parameters.has_key("tolerance"): + raise ValueError("A parameter named \"tolerance\" is required.") + + def formula(self, H, dxparam, Hxp, Hxm, Hx): + """ + Test sur la validite de l hypothese de linearite de H entre xp et xm + """ + dimension = numpy.size(Hx) + # + # Reconstruit les valeurs Hx1 et Hx2 de Hx a partir de Hxm et Hxp + # --------------------------------------------------------------- + Hx1 = Hxm + H.T * dxparam + Hx2 = Hxp - H.T * dxparam + # + # Calcul de l'ecart entre Hx1 et Hx et entre Hx2 et Hx + # ---------------------------------------------------- + ADD = AssimilationStudy() + ADD.setDiagnostic("RMS", + name = "Calcul de la RMS entre Hx1 et Hx et entre Hx2 et Hx") + RMS = ADD.get("Calcul de la RMS entre Hx1 et Hx et entre Hx2 et Hx") + RMS.calculate(Hx1,Hx) + std1 = RMS.valueserie(0) + RMS.calculate(Hx2,Hx) + std2 = RMS.valueserie(1) + # + # Normalisation des écarts par Hx pour comparer a un pourcentage + # -------------------------------------------------------------- + RMS.calculate(Hx,Hx-Hx) + std = RMS.valueserie(2) + err1=std1/std + err2=std2/std + # + # Comparaison + # ----------- + if ( (err1 < self.parameters["tolerance"]) and (err2 < self.parameters["tolerance"]) ): + reponse = True + else: + reponse = False + return reponse + + def calculate(self, Hlin = None, deltaparam = None, Hxp = None, Hxm = None, Hx = None, step = None): + """ + Arguments : + - Hlin : Operateur d obsevation lineaire + - deltaparam : pas sur le parametre param + - Hxp : calcul en xp = xb + deltaparam + - Hxm : calcul en xm = xb - deltaparam + - Hx : calcul en x (generalement xb) + """ + value = self.formula( Hlin, deltaparam, Hxp, Hxm, Hx ) + # + self.store( value = value, step = step) + +#=============================================================================== +if __name__ == "__main__": + print '\n AUTODIAGNOSTIC \n' + + print " Diagnotic de test sur la validité de l'hypothèse de linéarité de" + print " l'opérateur H entre xp et xm." + print + # + dimension = 3 + # + # Définition des données + # ---------------------- + Hx = numpy.array(([ 2., 4., 6.])) + Hxp = numpy.array(([ 3., 5., 7.])) + Hxm = numpy.array(([ 1., 3., 5.])) + H = (Hxp - Hxm)/(2.) + dxparam = 1. + # + # Instanciation de l'objet diagnostic + # ----------------------------------- + D = ElementaryDiagnostic("Mon TestHlin", parameters = {"tolerance": 0.1}) + # + # Calcul + # ------ + D.calculate( Hlin = H, deltaparam = dxparam, Hxp = Hxp, Hxm = Hxm, Hx = Hx) + + # Validation du calcul + # -------------------- + if not D.valueserie(0) : + raise ValueError("La linearisation de H autour de x entre xm et xp est fausse pour ce cas test lineaire") + else : + print " La linéarisation de H autour de x entre xm et xp est valide pour ce cas-test linéaire." 
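+        # Vérification indicative : H étant construit ci-dessus par
+        # différences finies centrées, la reconstruction de Taylor retombe
+        # exactement sur Hx pour ce cas linéaire
+        assert ( numpy.abs(Hxm + H * dxparam - Hx) < 1.e-12 ).all()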
+ print diff --git a/src/daComposant/daDiagnostics/HomogeneiteKhi2.py b/src/daComposant/daDiagnostics/HomogeneiteKhi2.py new file mode 100644 index 0000000..acb2413 --- /dev/null +++ b/src/daComposant/daDiagnostics/HomogeneiteKhi2.py @@ -0,0 +1,121 @@ +#-*-coding:iso-8859-1-*- +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +__doc__ = """ + Diagnostic qui effectue le test du Khi2 pour juger de l'homogénéite entre + les distributions de 2 vecteurs quelconques. + - entrée : la tolerance (tolerance) et le nombre de classes (nbclasse), + sous forme de paramètres dans le dictionnaire Par + - sortie : le resultat du diagnostic est une reponse booleenne au test : + True si l homogeneite est valide au sens du test du Khi2, + False dans le cas contraire. +""" +__author__ = "Sophie RICCI - Juillet 2008" + +import sys ; sys.path.insert(0, "../daCore") + +import numpy +from numpy import random + +import Persistence +from BasicObjects import Diagnostic +from ComputeKhi2 import ComputeKhi2_Homogen +import logging + +# ============================================================================== +class ElementaryDiagnostic(Diagnostic,Persistence.OneScalar): + def __init__(self, name="", unit="", basetype = None, parameters = {} ): + Diagnostic.__init__(self, name, parameters) + Persistence.OneScalar.__init__( self, name, unit, basetype = bool ) + for key in ["tolerance", "dxclasse", "nbclasses"]: + if not self.parameters.has_key(key): + raise ValueError("A parameter named \"%s\" is required."%key) + + def _formula(self, V1, V2): + """ + Effectue le calcul de la p-value pour deux vecteurs et un nombre de + classes donne en parametre du diagnostic. 
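+
+        Esquisse indicative, avec la statistique classique pour deux
+        échantillons de même taille (le calcul effectif est délégué à
+        ComputeKhi2_Homogen), pour des effectifs de classes n1_i et n2_i :
+
+            import numpy
+            n1 = numpy.array([ 8., 12., 10.])
+            n2 = numpy.array([10., 10., 10.])
+            khi2 = ((n1 - n2)**2 / (n1 + n2)).sum()    # environ 0.40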
+ """ + [classes, eftheo, efobs, valeurKhi2, areaKhi2, message] = ComputeKhi2_Homogen( + vectorV1 = V1, + vectorV2 = V2, + dx = self.parameters["dxclasse"], + nbclasses = self.parameters["nbclasses"], + SuppressEmptyClasses = True) + # + logging.info( message ) + logging.info( "(si <%.2f %s on refuse effectivement l'homogeneite)"%(100.*self.parameters["tolerance"],"%") ) + # + answerKhisquareTest = False + if (areaKhi2 < (100.*self.parameters["tolerance"])) : + answerKhisquareTest = False + else: + answerKhisquareTest = True + # + return answerKhisquareTest + + def calculate(self, vector1 = None, vector2 = None, step = None): + """ + Active la formule de calcul + """ + if (vector1 is None) or (vector2 is None) : + raise ValueError("Two vectors must be given to calculate the Khi2 value") + V1 = numpy.array(vector1) + V2 = numpy.array(vector2) + if (V1.size < 1) or (V2.size < 1): + raise ValueError("The given vectors must not be empty") + if V1.size != V2.size: + raise ValueError("The two given vectors must have the same size") + # + value = self._formula( V1, V2 ) + # + self.store( value = value, step = step ) + +# ============================================================================== +if __name__ == "__main__": + print "\n AUTODIAGNOSTIC \n" + + print " Test d'homogeneite du Khi-2 pour deux vecteurs de taille 10," + print " issus d'une distribution gaussienne normale" + print + # + # Initialisation des inputs et appel du diagnostic + # -------------------------------------------------------------------- + tolerance = 0.05 + dxclasse = 0.5 + D = ElementaryDiagnostic("HomogeneiteKhi2", parameters = { + "tolerance":tolerance, + "dxclasse":dxclasse, + "nbclasses":None, + }) + # + # Tirage de l'echantillon aleatoire + # -------------------------------------------------------------------- + numpy.random.seed(4000) + x1 = random.normal(50.,1.5,10000) + numpy.random.seed(2490) + x2 = random.normal(50.,1.5,10000) + # + # Calcul + # -------------------------------------------------------------------- + D.calculate(x1, x2) + # + print " La reponse du test est \"%s\" pour une tolerance de %.2e et une largeur de classe de %.2e "%(D.valueserie(0), tolerance, dxclasse) + print diff --git a/src/daComposant/daDiagnostics/PlotVector.py b/src/daComposant/daDiagnostics/PlotVector.py new file mode 100644 index 0000000..97b3372 --- /dev/null +++ b/src/daComposant/daDiagnostics/PlotVector.py @@ -0,0 +1,153 @@ +#-*-coding:iso-8859-1-*- +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +__doc__ = """ + Classe pour tracer simplement un vecteur à chaque pas +""" +__author__ = "Jean-Philippe ARGAUD - Juillet 2008" + +import sys ; sys.path.insert(0, "../daCore") + +import os.path +import numpy +from BasicObjects import Diagnostic + +# ============================================================================== +class ElementaryDiagnostic(Diagnostic): + def __init__(self, name = "", unit = "", basetype = None, parameters = {}): + Diagnostic.__init__(self, name, parameters) + try: + import Gnuplot + self.__gnuplot = Gnuplot + except: + raise ImportError("The Gnuplot module is required to plot the vector") + + def _formula(self, + Vector, Steps, + title, xlabel, ylabel, ltitle, + geometry, + filename, + persist, + pause ): + """ + Trace en gnuplot le vecteur Vector, avec une légende générale, en X et + en Y + """ + if persist: + self.__gnuplot.GnuplotOpts.gnuplot_command = 'gnuplot -persist -geometry '+geometry + else: + self.__gnuplot.GnuplotOpts.gnuplot_command = 'gnuplot -geometry '+geometry + # + self.__g = self.__gnuplot.Gnuplot() # persist=1 + self.__g('set terminal '+self.__gnuplot.GnuplotOpts.default_term) + self.__g('set style data lines') + self.__g('set grid') + self.__g('set autoscale') + self.__g('set title "'+title +'"') + self.__g('set xlabel "'+xlabel+'"') + self.__g('set ylabel "'+ylabel+'"') + self.__g.plot( self.__gnuplot.Data( Steps, Vector, title=ltitle ) ) + if filename != "": + self.__g.hardcopy(filename=filename, color=1) + if pause: + raw_input('Please press return to continue...\n') + # + return 1 + + def calculate(self, vector = None, steps = None, + title = "", xlabel = "", ylabel = "", ltitle = None, + geometry = "600x400", + filename = "", + persist = False, + pause = True ): + """ + Arguments : + - vector : le vecteur à tracer, en liste ou en numpy.array + - steps : liste unique des pas de l'axe des X, ou None si c'est + la numérotation par défaut + - title : titre général du dessin + - xlabel : label de l'axe des X + - ylabel : label de l'axe des Y + - ltitle : titre associé au vecteur tracé + - geometry : taille en pixels de la fenêtre et position du coin haut + gauche, au format X11 : LxH+X+Y (défaut : 600x400) + - filename : nom de fichier Postscript pour une sauvegarde à 1 pas + Attention, il faut changer le nom à l'appel pour + plusieurs pas de sauvegarde + - persist : booléen indiquant que la fenêtre affichée sera + conservée lors du passage au dessin suivant + Par défaut, persist = False + - pause : booléen indiquant une pause après chaque tracé, et + attendant un Return + Par défaut, pause = True + """ + if vector is None: + raise ValueError("One vector must be given to plot it.") + if ltitle is None: + ltitle = "" + Vector = numpy.array(vector) + if Vector.size < 1: + raise ValueError("The given vector must not be empty") + if steps is None: + Steps = range(len( vector )) + elif not ( type(steps) is type([]) or type(steps) is not type(numpy.array([])) ): + raise ValueError("The steps must be given as a list/tuple.") + else: + Steps = list(steps) + if os.path.isfile(filename): + raise ValueError("Error: a file with this name \"%s\" already exists."%filename) + # + value = self._formula( + Vector = Vector, + Steps = Steps, + title = 
str(title).encode('ascii','replace'), + xlabel = str(xlabel).encode('ascii','replace'), + ylabel = str(ylabel).encode('ascii','replace'), + ltitle = str(ltitle), + geometry = str(geometry), + filename = str(filename), + persist = bool(persist), + pause = bool(pause) ) + +# ============================================================================== +if __name__ == "__main__": + print '\n AUTODIAGNOSTIC \n' + + D = ElementaryDiagnostic("Mon Plot") + + vect = [1, 2, 1, 2, 1] + D.calculate(vect, title = "Vecteur 1", xlabel = "Axe X", ylabel = "Axe Y" ) + vect = [1, 3, 1, 3, 1] + D.calculate(vect, title = "Vecteur 2", filename = "vecteur.ps") + vect = [1, 1, 1, 1, 1] + D.calculate(vect, title = "Vecteur 3") + vect = [0.29, 0.97, 0.73, 0.01, 0.20] + D.calculate(vect, title = "Vecteur 4") + vect = [-0.23262176, 1.36065207, 0.32988102, 0.24400551, -0.66765848, -0.19088483, -0.31082575, 0.56849814, 1.21453443, 0.99657516] + D.calculate(vect, title = "Vecteur 5") + vect = [0.29, 0.97, 0.73, 0.01, 0.20] + D.calculate(vect, title = "Vecteur 6 affiche avec une autre geometrie et position", geometry="800x200+50+50") + vect = 100*[0.29, 0.97, 0.73, 0.01, 0.20] + D.calculate(vect, title = "Vecteur 7 : long construit par repetition") + vect = [0.29, 0.97, 0.73, 0.01, 0.20] + D.calculate(vect, title = "Vecteur 8", ltitle = "Vecteur 8") + temps = [0.1,0.2,0.3,0.4,0.5] + D.calculate(vect, temps, title = "Vecteur 8 avec axe du temps modifie") + print diff --git a/src/daComposant/daDiagnostics/PlotVectors.py b/src/daComposant/daDiagnostics/PlotVectors.py new file mode 100644 index 0000000..219519e --- /dev/null +++ b/src/daComposant/daDiagnostics/PlotVectors.py @@ -0,0 +1,157 @@ +#-*-coding:iso-8859-1-*- +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +__doc__ = """ + Classe pour tracer simplement une liste de vecteurs à chaque pas +""" +__author__ = "Jean-Philippe ARGAUD - Septembre 2008" + +import sys ; sys.path.insert(0, "../daCore") + +import os.path +import numpy +from BasicObjects import Diagnostic + +# ============================================================================== +class ElementaryDiagnostic(Diagnostic): + def __init__(self, name = "", unit = "", basetype = None, parameters = {}): + Diagnostic.__init__(self, name, parameters) + try: + import Gnuplot + self.__gnuplot = Gnuplot + except: + raise ImportError("The Gnuplot module is required to plot the vector") + + def _formula(self, + Vector, Steps, + title, xlabel, ylabel, ltitle, + geometry, + filename, + persist, + pause ): + """ + Trace en gnuplot chaque vecteur de la liste Vector, avec une légende + générale, en X et en Y + """ + if persist: + self.__gnuplot.GnuplotOpts.gnuplot_command = 'gnuplot -persist -geometry '+geometry + else: + self.__gnuplot.GnuplotOpts.gnuplot_command = 'gnuplot -geometry '+geometry + # + self.__g = self.__gnuplot.Gnuplot() # persist=1 + self.__g('set terminal '+self.__gnuplot.GnuplotOpts.default_term) + self.__g('set style data lines') + self.__g('set grid') + self.__g('set autoscale') + self.__g('set title "'+title +'"') + self.__g('set xlabel "'+xlabel+'"') + self.__g('set ylabel "'+ylabel+'"') + self.__g.plot( self.__gnuplot.Data( Steps, Vector.pop(0), title=ltitle.pop(0) ) ) + for vector in Vector: + self.__g.replot( self.__gnuplot.Data( Steps, vector, title=ltitle.pop(0) ) ) + if filename != "": + self.__g.hardcopy(filename=filename, color=1) + if pause: + raw_input('Please press return to continue...\n') + # + return 1 + + def calculate(self, vector = None, steps = None, + title = "", xlabel = "", ylabel = "", ltitle = None, + geometry = "600x400", + filename = "", + persist = False, + pause = True ): + """ + Arguments : + - vector : liste des vecteurs à tracer, chacun étant en liste ou + en numpy.array + - steps : liste unique des pas, ou None si c'est la numérotation + par défaut + - title : titre général du dessin + - xlabel : label de l'axe des X + - ylabel : label de l'axe des Y + - ltitle : liste des titres associés à chaque vecteur, dans le + même ordre que les vecteurs eux-mêmes + - geometry : taille en pixels de la fenêtre et position du coin haut + gauche, au format X11 : LxH+X+Y (défaut : 600x400) + - filename : nom de fichier Postscript pour une sauvegarde à 1 pas + Attention, il faut changer le nom à l'appel pour + plusieurs pas de sauvegarde + - persist : booléen indiquant que la fenêtre affichée sera + conservée lors du passage au dessin suivant + Par défaut, persist = False + - pause : booléen indiquant une pause après chaque tracé, et + attendant un Return + Par défaut, pause = True + """ + if vector is None: + raise ValueError("One vector must be given to plot it.") + if type(vector) is not type([]) and type(vector) is not type(()): + raise ValueError("The vector(s) must be given as a list/tuple.") + if ltitle is None or len(ltitle) != len(vector): + ltitle = ["" for i in range(len(vector))] + VectorList = [] + for onevector in vector: + VectorList.append( numpy.array( onevector ) ) + if VectorList[-1].size < 
1: + raise ValueError("Each given vector must not be empty.") + if steps is None: + Steps = range(len(vector[0])) + elif not ( type(steps) is type([]) or type(steps) is not type(numpy.array([])) ): + raise ValueError("The steps must be given as a list/tuple.") + else: + Steps = list(steps) + if os.path.isfile(filename): + raise ValueError("Error: a file with this name \"%s\" already exists."%filename) + # + value = self._formula( + Vector = VectorList, + Steps = Steps, + title = str(title).encode('ascii','replace'), + xlabel = str(xlabel).encode('ascii','replace'), + ylabel = str(ylabel).encode('ascii','replace'), + ltitle = [str(lt) for lt in ltitle], + geometry = str(geometry), + filename = str(filename), + persist = bool(persist), + pause = bool(pause), + ) + +# ============================================================================== +if __name__ == "__main__": + print '\n AUTODIAGNOSTIC \n' + + D = ElementaryDiagnostic("Mon Plot") + + vect1 = [1, 2, 1, 2, 1] + D.calculate([vect1,], title = "Vecteur 1", xlabel = "Axe X", ylabel = "Axe Y" ) + vect2 = [1, 3, 1, 3, 1] + D.calculate([vect1,vect2], title = "Vecteurs 1 et 2", filename = "liste_de_vecteurs.ps") + vect3 = [-1, 1, -1, 1, -1] + D.calculate((vect1,vect2,vect3), title = "Vecteurs 1 a 3") + vect4 = 100*[0.29, 0.97, 0.73, 0.01, 0.20] + D.calculate([vect4,], title = "Vecteur 4 : long construit par repetition") + D.calculate( + (vect1,vect2,vect3), + [0.1,0.2,0.3,0.4,0.5], + title = "Vecteurs 1 a 3, temps modifie", + ltitle = ["Vecteur 1","Vecteur 2","Vecteur 3"]) + print diff --git a/src/daComposant/daDiagnostics/RMS.py b/src/daComposant/daDiagnostics/RMS.py new file mode 100644 index 0000000..9ca170f --- /dev/null +++ b/src/daComposant/daDiagnostics/RMS.py @@ -0,0 +1,92 @@ +#-*-coding:iso-8859-1-*- +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +__doc__ = """ + Calcul d'une RMS +""" +__author__ = "Jean-Philippe ARGAUD - Juillet 2008" + +import sys ; sys.path.insert(0, "../daCore") + +import math +import numpy +import Persistence +from BasicObjects import Diagnostic + +# ============================================================================== +class ElementaryDiagnostic(Diagnostic,Persistence.OneScalar): + def __init__(self, name = "", unit = "", basetype = None, parameters = {}): + Diagnostic.__init__(self, name, parameters) + Persistence.OneScalar.__init__( self, name, unit, basetype = float) + + def _formula(self, V1, V2): + """ + Fait un écart RMS entre deux vecteurs V1 et V2 + """ + rms = math.sqrt( ((V2 - V1)**2).sum() / float(V1.size) ) + # + return rms + + def calculate(self, vector1 = None, vector2 = None, step = None): + """ + Teste les arguments, active la formule de calcul et stocke le résultat + """ + if vector1 is None or vector2 is None: + raise ValueError("Two vectors must be given to calculate their RMS") + V1 = numpy.array(vector1) + V2 = numpy.array(vector2) + if V1.size < 1 or V2.size < 1: + raise ValueError("The given vectors must not be empty") + if V1.size != V2.size: + raise ValueError("The two given vectors must have the same size") + # + value = self._formula( V1, V2 ) + # + self.store( value = value, step = step ) + +# ============================================================================== +if __name__ == "__main__": + print '\n AUTODIAGNOSTIC \n' + + D = ElementaryDiagnostic("Ma RMS") + + vect1 = [1, 2, 1, 2, 1] + vect2 = [2, 1, 2, 1, 2] + D.calculate(vect1,vect2) + vect1 = [1, 3, 1, 3, 1] + vect2 = [2, 2, 2, 2, 2] + D.calculate(vect1,vect2) + vect1 = [1, 1, 1, 1, 1] + vect2 = [2, 2, 2, 2, 2] + D.calculate(vect1,vect2) + vect1 = [1, 1, 1, 1, 1] + vect2 = [4, -2, 4, -2, -2] + D.calculate(vect1,vect2) + vect1 = [0.29, 0.97, 0.73, 0.01, 0.20] + vect2 = [0.92, 0.86, 0.11, 0.72, 0.54] + D.calculate(vect1,vect2) + vect1 = [-0.23262176, 1.36065207, 0.32988102, 0.24400551, -0.66765848, -0.19088483, -0.31082575, 0.56849814, 1.21453443, 0.99657516] + vect2 = [0,0,0,0,0,0,0,0,0,0] + D.calculate(vect1,vect2) + print " Les valeurs de RMS attendues sont les suivantes : [1.0, 1.0, 1.0, 3.0, 0.53162016515553656, 0.73784217096601323]" + print " Les RMS obtenues................................:", D.valueserie() + print " La moyenne......................................:", D.stepmean() + print + diff --git a/src/daComposant/daDiagnostics/ReduceBiais.py b/src/daComposant/daDiagnostics/ReduceBiais.py new file mode 100644 index 0000000..31ebc18 --- /dev/null +++ b/src/daComposant/daDiagnostics/ReduceBiais.py @@ -0,0 +1,111 @@ +#-*-coding:iso-8859-1-*- +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
+# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +__doc__ = """ + Diagnostic sur la reduction du biais lors de l'analyse +""" +__author__ = "Sophie RICCI - Aout 2008" + +import sys ; sys.path.insert(0, "../daCore") + +import numpy +import Persistence +from BasicObjects import Diagnostic +from AssimilationStudy import AssimilationStudy + +# ============================================================================== +class ElementaryDiagnostic(Diagnostic,Persistence.OneScalar): + def __init__(self, name = "", unit = "", basetype = None, parameters = {}): + Diagnostic.__init__(self, name, parameters) + Persistence.OneScalar.__init__( self, name, unit, basetype = bool) + + def _formula(self, V1, V2): + """ + Vérification de la reduction du biais entre OMB et OMA lors de l'analyse + """ + biaisOMB = V1.mean() + biaisOMA = V2.mean() + # + if biaisOMA > biaisOMB: + reducebiais = False + else : + reducebiais = True + # + return reducebiais + + def calculate(self, vectorOMB = None, vectorOMA = None, step = None): + """ + Teste les arguments, active la formule de calcul et stocke le résultat + Arguments : + - vectorOMB : vecteur d'écart entre les observations et l'ébauche + - vectorOMA : vecteur d'écart entre les observations et l'analyse + """ + if ( (vectorOMB is None) or (vectorOMA is None) ): + raise ValueError("Two vectors must be given to test the reduction of the biais after analysis") + V1 = numpy.array(vectorOMB) + V2 = numpy.array(vectorOMA) + if V1.size < 1 or V2.size < 1: + raise ValueError("The given vectors must not be empty") + if V1.size != V2.size: + raise ValueError("The two given vectors must have the same size") + # + value = self._formula( V1, V2 ) + # + self.store( value = value, step = step ) + +#=============================================================================== +if __name__ == "__main__": + print '\n AUTODIAGNOSTIC \n' + # + # Instanciation de l'objet diagnostic + # ----------------------------------- + D = ElementaryDiagnostic("Mon ReduceBiais") + # + # Tirage des 2 vecteurs choisis + # ------------------------------- + x1 = numpy.matrix(([3. , 4., 5. ])) + x2 = numpy.matrix(([1.5, 2., 2.5])) + print " L'écart entre les observations et l'ébauche est OMB :", x1 + print " La moyenne de OMB (i.e. le biais) est de............:", x1.mean() + print " L'écart entre les observations et l'analyse est OMA :", x2 + print " La moyenne de OMA (i.e. le biais) est de............:", x2.mean() + # + D.calculate( vectorOMB = x1, vectorOMA = x2) + if not D.valueserie(0) : + print " Résultat : l'analyse NE RÉDUIT PAS le biais" + else : + print " Résultat : l'analyse RÉDUIT le biais" + print + # + # Tirage des 2 vecteurs choisis + # ------------------------------- + x1 = numpy.matrix(range(-5,6)) + x2 = numpy.array(range(11)) + print " L'écart entre les observations et l'ébauche est OMB :", x1 + print " La moyenne de OMB (i.e. le biais) est de............:", x1.mean() + print " L'écart entre les observations et l'analyse est OMA :", x2 + print " La moyenne de OMA (i.e. 
le biais) est de............:", x2.mean() + # + D.calculate( vectorOMB = x1, vectorOMA = x2) + if not D.valueserie(1) : + print " Résultat : l'analyse NE RÉDUIT PAS le biais" + else : + print " Résultat : l'analyse RÉDUIT le biais" + print diff --git a/src/daComposant/daDiagnostics/ReduceVariance.py b/src/daComposant/daDiagnostics/ReduceVariance.py new file mode 100644 index 0000000..2272eac --- /dev/null +++ b/src/daComposant/daDiagnostics/ReduceVariance.py @@ -0,0 +1,116 @@ +#-*-coding:iso-8859-1-*- +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +__doc__ = """ + Diagnostic sur la reduction de la variance lors de l'analyse +""" +__author__ = "Jean-Philippe ARGAUD - Septembre 2008" + +import sys ; sys.path.insert(0, "../daCore") + +import numpy +import Persistence +from BasicObjects import Diagnostic +from AssimilationStudy import AssimilationStudy + +# ============================================================================== +class ElementaryDiagnostic(Diagnostic,Persistence.OneScalar): + def __init__(self, name = "", unit = "", basetype = None, parameters = {}): + Diagnostic.__init__(self, name, parameters) + Persistence.OneScalar.__init__( self, name, unit, basetype = bool ) + + def _formula(self, V1, V2): + """ + Vérification de la reduction de variance sur les écarts entre OMB et OMA + lors de l'analyse + """ + varianceOMB = V1.var() + varianceOMA = V2.var() + # + if varianceOMA > varianceOMB: + reducevariance = False + else : + reducevariance = True + # + return reducevariance + + def calculate(self, vectorOMB = None, vectorOMA = None, step = None): + """ + Teste les arguments, active la formule de calcul et stocke le résultat + Arguments : + - vectorOMB : vecteur d'écart entre les observations et l'ébauche + - vectorOMA : vecteur d'écart entre les observations et l'analyse + """ + if ( (vectorOMB is None) or (vectorOMA is None) ): + raise ValueError("Two vectors must be given to test the reduction of the variance after analysis") + V1 = numpy.array(vectorOMB) + V2 = numpy.array(vectorOMA) + if V1.size < 1 or V2.size < 1: + raise ValueError("The given vectors must not be empty") + if V1.size != V2.size: + raise ValueError("The two given vectors must have the same size") + # + value = self._formula( V1, V2 ) + # + self.store( value = value, step = step ) + +#=============================================================================== +if __name__ == "__main__": + print '\n AUTODIAGNOSTIC \n' + # + # Instanciation de l'objet diagnostic + # ----------------------------------- + D = ElementaryDiagnostic("Mon ReduceVariance") + # + # Vecteur de type matrix + # ---------------------- + x1 = numpy.matrix(([3. , 4., 5. 
])) + x2 = numpy.matrix(([1.5, 2., 2.5])) + print " L'écart entre les observations et l'ébauche est OMB :", x1 + print " La moyenne de OMB (i.e. le biais) est de............:", x1.mean() + print " La variance de OMB est de...........................:", x1.var() + print " L'écart entre les observations et l'analyse est OMA :", x2 + print " La moyenne de OMA (i.e. le biais) est de............:", x2.mean() + print " La variance de OMA est de...........................:", x2.var() + # + D.calculate( vectorOMB = x1, vectorOMA = x2) + if not D.valueserie(0) : + print " Résultat : l'analyse NE RÉDUIT PAS la variance" + else : + print " Résultat : l'analyse RÉDUIT la variance" + print + # + # Vecteur de type array + # --------------------- + x1 = numpy.array(range(11)) + x2 = numpy.matrix(range(-10,12,2)) + print " L'écart entre les observations et l'ébauche est OMB :", x1 + print " La moyenne de OMB (i.e. le biais) est de............:", x1.mean() + print " La variance de OMB est de...........................:", x1.var() + print " L'écart entre les observations et l'analyse est OMA :", x2 + print " La moyenne de OMA (i.e. le biais) est de............:", x2.mean() + print " La variance de OMA est de...........................:", x2.var() + # + D.calculate( vectorOMB = x1, vectorOMA = x2) + if not D.valueserie(1) : + print " Résultat : l'analyse NE RÉDUIT PAS la variance" + else : + print " Résultat : l'analyse RÉDUIT la variance" + print diff --git a/src/daComposant/daDiagnostics/StopReductionVariance.py b/src/daComposant/daDiagnostics/StopReductionVariance.py new file mode 100644 index 0000000..7ebe03a --- /dev/null +++ b/src/daComposant/daDiagnostics/StopReductionVariance.py @@ -0,0 +1,121 @@ +#-*-coding:iso-8859-1-*- +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +__doc__ = """ + Diagnostic sur l'arrêt (ou le ralentissement) de la réduction de la variance + au fil des pas (ou itérations) de l'analyse. + Ce diagnostic s'applique typiquement au vecteur de différence entre la + variance de OMB et la variance de OMA au fil du temps ou des itérations: + V[i] = vecteur des VAR(OMB)[i] - VAR(OMA)[i] au temps ou itération i. 
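+
+    Esquisse minimale de la règle d'arrêt appliquée ci-dessous (données
+    illustratives ; la pente est la décroissance discrète de V) :
+
+        import numpy
+        V = numpy.array([0.8, 0.4, 0.2, 0.19])
+        pente = numpy.diff(V)                   # [-0.4, -0.2, -0.01]
+        CutOffSlope = 0.05
+        stop = [i for i in range(1, V.size) if abs(pente[i-1]) < CutOffSlope]
+        assert stop[0] == 3                     # la réduction stagne au pas 3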
+"""
+__author__ = "Sophie RICCI - Septembre 2008"
+
+import sys ; sys.path.insert(0, "../daCore")
+
+import numpy
+import Persistence
+from BasicObjects import Diagnostic
+from AssimilationStudy import AssimilationStudy
+
+# ==============================================================================
+class ElementaryDiagnostic(Diagnostic,Persistence.OneScalar):
+    def __init__(self, name = "", unit = "", basetype = None, parameters = {}):
+        Diagnostic.__init__(self, name, parameters)
+        Persistence.OneScalar.__init__( self, name, unit, basetype = int )
+
+    def _formula(self, V, CutOffSlope, MultiSlope0):
+        """
+        Recherche du pas de temps ou de l'itération pour lequel la réduction
+        de la variance est :
+        - inférieure à la valeur seuil CutOffSlope
+          (si une valeur est donnée à CutOffSlope),
+        - inférieure à MultiSlope0 * la pente à la première itération
+          (si une valeur est donnée à MultiSlope0).
+        V[i] = vecteur des VAR(OMB)[i] - VAR(OMA)[i] au temps ou itération i.
+        """
+        N = V.size
+        pente = numpy.matrix(numpy.zeros((N,))).T
+        iterstopreduction = 0
+        for i in range(1, N):
+            pente[i] = V[i] - V[i-1]
+            if pente[i] > 0.0:
+                raise ValueError("The analysis is INCREASING the variance at iteration %s"%i)
+            if CutOffSlope is not None:
+                if numpy.abs(pente[i]) < CutOffSlope:
+                    iterstopreduction = i
+                    break
+            if MultiSlope0 is not None:
+                if numpy.abs(pente[i]) < MultiSlope0 * numpy.abs(pente[1]):
+                    iterstopreduction = i
+                    break
+        #
+        return iterstopreduction
+
+    def calculate(self, vector = None, CutOffSlope = None, MultiSlope0 = None, step = None):
+        """
+        Teste les arguments, active la formule de calcul et stocke le résultat.
+        Arguments :
+            - vector      : vecteur des VAR(OMB) - VAR(OMA) au fil des itérations
+            - CutOffSlope : valeur minimale de la pente
+            - MultiSlope0 : facteur multiplicatif de la pente initiale, pour comparaison
+        """
+        if vector is None:
+            raise ValueError("One vector must be given to test the convergence of the variance after analysis")
+        V = numpy.array(vector)
+        if V.size < 1:
+            raise ValueError("The given vector must not be empty")
+        if (MultiSlope0 is None) and (CutOffSlope is None):
+            raise ValueError("You must set the value of ONE of the CutOffSlope or MultiSlope0 keywords")
+        #
+        value = self._formula( V, CutOffSlope, MultiSlope0 )
+        #
+        self.store( value = value, step = step )
+
+#===============================================================================
+if __name__ == "__main__":
+    print "\n AUTODIAGNOSTIC \n"
+
+    # Instanciation de l'objet diagnostic
+    # ------------------------------------------------
+    D = ElementaryDiagnostic("Mon StopReductionVariance")
+
+    # Vecteur de reduction VAR(OMB)-VAR(OMA)
+    # ------------------------------------------------
+    x = numpy.array(([0.60898111, 0.30449056, 0.15224528, 0.07612264, 0.03806132, 0.01903066, 0.00951533, 0.00475766, 0.00237883, 0.00118942]))
+    print " Le vecteur choisi est :", x
+    print " Sur ce vecteur, la reduction a l'iteration N = 7 est inferieure a 0.005"
+    print " Sur ce vecteur, la reduction a l'iteration N = 8 est inferieure a 0.01 * la reduction a l'iteration 1"
+
+    # Comparaison a la valeur seuil de la reduction
+    # ------------------------------------------------
+    D.calculate( vector = x, CutOffSlope = 0.005, MultiSlope0 = None)
+    if abs(D.valueserie(0) - 7.)
< 1.e-15 : + print " Test : La comparaison a la valeur seuil de la reduction est juste" + else : + print " Test : La comparaison a la valeur seuil de la reduction est fausse" + + # Comparaison a alpha* la reduction a la premiere iteration + # ------------------------------------------------ + D.calculate( vector = x, CutOffSlope = None, MultiSlope0 = 0.01) + if (D.valueserie(1) - 8.) < 1.e-15 : + print " Test : La comparaison a la reduction a la premiere iteration est juste" + else : + print " Test : La comparaison a la reduction a la premiere iteration est fausse" + print diff --git a/src/daComposant/daDiagnostics/VarianceOrder.py b/src/daComposant/daDiagnostics/VarianceOrder.py new file mode 100644 index 0000000..202838e --- /dev/null +++ b/src/daComposant/daDiagnostics/VarianceOrder.py @@ -0,0 +1,129 @@ +#-*-coding:iso-8859-1-*- +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# +__doc__ = """ + Diagnostic sur les variances dans B et R par rapport à l'ébauche Xb et aux + observations Y. On teste si on a les conditions : + 1%*xb < sigma_b < 10%*xb + et + 1%*yo < sigma_o < 10%*yo + Le diagnostic renvoie True si les deux conditions sont simultanément + vérifiées, False dans les autres cas. 
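+
+    Concrètement (esquisse du test implémenté ci-dessous), la condition
+    1%*xb < sigma_b < 10%*xb est vérifiée sur les valeurs propres de B, qui
+    sont des variances sigma_b**2, d'où les bornes au carré :
+
+        import numpy
+        xb = numpy.array([11000.])
+        vmin = 1.e-4 * xb.mean()**2             # (1%  * xb)**2
+        vmax = 1.e-2 * xb.mean()**2             # (10% * xb)**2
+        valpB = numpy.array([1.e6])             # valeurs propres de B = 1.e6 * I
+        assert vmin <= valpB.min() <= valpB.max() <= vmax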
+"""
+__author__ = "Sophie RICCI, Jean-Philippe ARGAUD - Septembre 2008"
+
+import sys ; sys.path.insert(0, "../daCore")
+
+import numpy
+import Persistence
+from BasicObjects import Diagnostic
+from scipy.linalg import eig
+import logging
+
+# ==============================================================================
+class ElementaryDiagnostic(Diagnostic,Persistence.OneScalar):
+    def __init__(self, name = "", unit = "", basetype = None, parameters = {}):
+        Diagnostic.__init__(self, name, parameters)
+        Persistence.OneScalar.__init__( self, name, unit, basetype = bool )
+
+    def _formula(self, xb, B, yo, R):
+        """
+        Comparaison des variables et de leurs variances relatives
+        """
+        # B et R étant des matrices de covariance symétriques, leurs valeurs
+        # propres sont réelles : on ne garde que la partie réelle du résultat
+        # renvoyé par eig
+        valpB = numpy.real(eig(B, left = False, right = False))
+        valpR = numpy.real(eig(R, left = False, right = False))
+        logging.info("  Si l'on souhaite 1%s*xb < sigma_b < 10%s*xb, les valeurs propres de B doivent etre comprises dans l'intervalle [%.3e,%.3e]"%("%","%",1.e-4*xb.mean()*xb.mean(),1.e-2*xb.mean()*xb.mean()))
+        logging.info("  Si l'on souhaite 1%s*yo < sigma_o < 10%s*yo, les valeurs propres de R doivent etre comprises dans l'intervalle [%.3e,%.3e]"%("%","%",1.e-4*yo.mean()*yo.mean(),1.e-2*yo.mean()*yo.mean()))
+        #
+        limite_inf_valp = 1.e-4*xb.mean()*xb.mean()
+        limite_sup_valp = 1.e-2*xb.mean()*xb.mean()
+        variancexb = (valpB >= limite_inf_valp).all() and (valpB <= limite_sup_valp).all()
+        logging.info("  La condition empirique sur la variance de Xb est....: %s"%variancexb)
+        #
+        limite_inf_valp = 1.e-4*yo.mean()*yo.mean()
+        limite_sup_valp = 1.e-2*yo.mean()*yo.mean()
+        varianceyo = (valpR >= limite_inf_valp).all() and (valpR <= limite_sup_valp).all()
+        logging.info("  La condition empirique sur la variance de Y est.....: %s"%varianceyo)
+        #
+        variance = variancexb and varianceyo
+        logging.info("  La condition empirique sur la variance globale est..: %s"%variance)
+        #
+        return variance
+
+    def calculate(self, Xb = None, B = None, Y = None, R = None, step = None):
+        """
+        Teste les arguments, active la formule de calcul et stocke le résultat.
+        Arguments :
+            - Xb : valeur d'ébauche du paramètre
+            - B  : matrice de covariances d'erreur d'ébauche
+            - Y  : vecteur d'observation
+            - R  : matrice de covariances d'erreur d'observation
+        """
+        if (Xb is None) or (B is None) or (Y is None) or (R is None):
+            raise ValueError("You must specify Xb, B, Y, R")
+        yo = numpy.array(Y)
+        BB = numpy.matrix(B)
+        xb = numpy.array(Xb)
+        RR = numpy.matrix(R)
+        if (RR.size < 1) or (BB.size < 1):
+            raise ValueError("The background and the observation covariance matrices must not be empty")
+        if (yo.size < 1) or (xb.size < 1):
+            raise ValueError("The Xb background and the Y observation vectors must not be empty")
+        if xb.size*xb.size != BB.size:
+            raise ValueError("Xb background vector and B covariance matrix sizes are not consistent")
+        if yo.size*yo.size != RR.size:
+            raise ValueError("Y observation vector and R covariance matrix sizes are not consistent")
+        # Rejet des vecteurs entièrement nuls
+        if not xb.any() or not yo.any()
: + raise ValueError("The diagnostic can not be applied to zero vectors") + # + value = self._formula( xb, BB, yo, RR) + # + self.store( value = value, step = step ) + +#=============================================================================== +if __name__ == "__main__": + print '\n AUTODIAGNOSTIC \n' + # + # Instanciation de l'objet diagnostic + # ----------------------------------- + D = ElementaryDiagnostic("Mon OrdreVariance") + # + # Vecteur de type matrix + # ---------------------- + xb = numpy.array([11000.]) + yo = numpy.array([1.e12 , 2.e12, 3.e12 ]) + B = 1.e06 * numpy.matrix(numpy.identity(1)) + R = 1.e22 * numpy.matrix(numpy.identity(3)) + # + D.calculate( Xb = xb, B = B, Y = yo, R = R) + print " L'ébauche est.......................................:",xb + print " Les observations sont...............................:",yo + print " La valeur moyenne des observations est..............: %.2e"%yo.mean() + print " La valeur moyenne de l'ebauche est..................: %.2e"%xb.mean() + print " La variance d'ébauche specifiée est.................: %.2e"%1.e6 + print " La variance d'observation spécifiée est.............: %.2e"%1.e22 + # + if D.valueserie(0) : + print " Les variances specifiées sont de l'ordre de 1% a 10% de l'ébauche et des observations" + else : + print " Les variances specifiées ne sont pas de l'ordre de 1% a 10% de l'ébauche et des observations" + print + + diff --git a/src/daComposant/daDiagnostics/__init__.py b/src/daComposant/daDiagnostics/__init__.py new file mode 100644 index 0000000..6bcb582 --- /dev/null +++ b/src/daComposant/daDiagnostics/__init__.py @@ -0,0 +1,19 @@ +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# diff --git a/src/daComposant/daExternals/ASTER/Building_AD_from_Aster.xml b/src/daComposant/daExternals/ASTER/Building_AD_from_Aster.xml new file mode 100644 index 0000000..51be879 --- /dev/null +++ b/src/daComposant/daExternals/ASTER/Building_AD_from_Aster.xml @@ -0,0 +1,275 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Building_Bparametres + (lp1 +. + + + Building_Xbparametres + (lp1 +. + + + Building_Yocalcul + (lp1 +. + + + Building_Yoexperiences + (lp1 +. + + + Sorties du calcul ADxa + + + + + Sorties du calcul ADA + + + + + Sorties du calcul ADInnovation + + + + + Sorties du calcul ADxb + + + + + Sorties du calcul ADYo + + + + + Sorties du calcul ADB + + + + + Sorties du calcul ADR + + + + + Sorties du calcul ADH + + + + + Building_Rexperiences + (lp1 +. 
+ + + Entrees du calcul ADXb + + + + + Entrees du calcul ADYo + + + + + Entrees du calcul ADB + + + + + Entrees du calcul ADR + + + + + Entrees du calcul ADH + + + + + + + + + + + diff --git a/src/daComposant/daExternals/ASTER/Building_H_linear.xml b/src/daComposant/daExternals/ASTER/Building_H_linear.xml new file mode 100644 index 0000000..9bc3e25 --- /dev/null +++ b/src/daComposant/daExternals/ASTER/Building_H_linear.xml @@ -0,0 +1,335 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Perturbated_point_X ASTER + + Perturbated_point_X X + ASTER X + + + Perturbated_point_X iter + ASTER iter + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + Finite_differences_derivation Gradient + Input Finite_differences_derivation + Input Gradient + Temporary_Parameters Finite_differences_derivation + Temporary_Parameters Gradient + + Finite_differences_derivation SmplPrt + Finite_differences_derivation.Elementary_calculation.Perturbated_point_X iter + + + Input nbBranches + Finite_differences_derivation nbBranches + + + Input itervect + Finite_differences_derivation SmplsCollection + + + Input seq_X + Finite_differences_derivation.Elementary_calculation.Perturbated_point_X seq_X + + + Input dX + Gradient dX + + + Temporary_Parameters ASTER_ROOT + Finite_differences_derivation.Elementary_calculation.ASTER ASTER_ROOT + + + Temporary_Parameters rcdir + Finite_differences_derivation.Elementary_calculation.ASTER rcdir + + + Temporary_Parameters debug + Finite_differences_derivation.Elementary_calculation.ASTER debug + + + Temporary_Parameters DISPLAY + Finite_differences_derivation.Elementary_calculation.ASTER DISPLAY + + + Temporary_Parameters SOURCES_ROOT + Finite_differences_derivation.Elementary_calculation.ASTER SOURCES_ROOT + + + Temporary_Parameters SOURCES_ROOT + Gradient SOURCES_ROOT + + + Temporary_Parameters export + Finite_differences_derivation.Elementary_calculation.ASTER export + + + Temporary_Parameters parametres + Finite_differences_derivation.Elementary_calculation.ASTER parametres + + + Temporary_Parameters calcul + Finite_differences_derivation.Elementary_calculation.ASTER calcul + + + Temporary_Parameters experience + Finite_differences_derivation.Elementary_calculation.ASTER experience + + + Temporary_Parameters fileparameters + Finite_differences_derivation.Elementary_calculation.ASTER fileparameters + + + Finite_differences_derivation.Elementary_calculation.ASTER FX + Gradient seq_FX + + + Finite_differences_derivation.Elementary_calculation.ASTER FY + Gradient seq_FY + + + Finite_differences_derivation.Elementary_calculation.ASTER DIMS + Gradient seq_DIMS + + + Finite_differences_derivation.Elementary_calculation.ASTER DIAG + Gradient lst_DIAG + + + Finite_differences_derivation.Elementary_calculation.ASTER iter + Gradient lst_iter + + + + H_linearization.Finite_differences_derivation.Elementary_calculation.ASTERX + +80000 +1000 +30 + + + + H_linearization.Temporary_ParametersASTER_ROOT + '' + + + H_linearization.Temporary_Parametersrcdir + '' + + + H_linearization.Temporary_Parametersdebug + 0 + + + H_linearization.Temporary_ParametersDISPLAY + :0.0 + + + H_linearization.Temporary_ParametersSOURCES_ROOT + . 
+ + + H_linearization.Temporary_Parametersexport + '' + + + H_linearization.Temporary_Parametersparametres + + + + H_linearization.Temporary_Parameterscalcul + + + + H_linearization.Temporary_Parametersexperience + + + + H_linearization.Temporary_Parametersfileparameters + [] + + + H_linearization.InputX + +80000 +1000 +30 + + + + H_linearization.InputdX + +0.001 +0.001 +0.0001 + + + + + + + + + + + + diff --git a/src/daComposant/daExternals/YACS/Algorithmes_AD.xml b/src/daComposant/daExternals/YACS/Algorithmes_AD.xml new file mode 100644 index 0000000..fd2ac9e --- /dev/null +++ b/src/daComposant/daExternals/YACS/Algorithmes_AD.xml @@ -0,0 +1,252 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + BLUE par matricesYo + + + + + BLUE par matricesB + + + + + BLUE par matricesR + + + + + BLUE par matricesH + + + + + BLUE par matricesXb + + + + + 3D-VAR par matricesYo + + + + + 3D-VAR par matricesB + + + + + 3D-VAR par matricesR + + + + + 3D-VAR par matricesH + + + + + 3D-VAR par matricesXb + + + + + 3D-VAR par fonctionsYo + + + + + 3D-VAR par fonctionsB + + + + + 3D-VAR par fonctionsR + + + + + 3D-VAR par fonctionsXb + + + + + 3D-VAR par fonctionsBounds + (lp1 +. + + + + + + diff --git a/src/daComposant/daExternals/__init__.py b/src/daComposant/daExternals/__init__.py new file mode 100644 index 0000000..6bcb582 --- /dev/null +++ b/src/daComposant/daExternals/__init__.py @@ -0,0 +1,19 @@ +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. +# +# You should have received a copy of the GNU Lesser General Public +# License along with this library; if not, write to the Free Software +# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA +# +# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com +# diff --git a/src/daComposant/daMatrices/__init__.py b/src/daComposant/daMatrices/__init__.py new file mode 100644 index 0000000..6bcb582 --- /dev/null +++ b/src/daComposant/daMatrices/__init__.py @@ -0,0 +1,19 @@ +# +# Copyright (C) 2008-2009 EDF R&D +# +# This library is free software; you can redistribute it and/or +# modify it under the terms of the GNU Lesser General Public +# License as published by the Free Software Foundation; either +# version 2.1 of the License. +# +# This library is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# Lesser General Public License for more details. 
diff --git a/src/daComposant/daExternals/__init__.py b/src/daComposant/daExternals/__init__.py
new file mode 100644
index 0000000..6bcb582
--- /dev/null
+++ b/src/daComposant/daExternals/__init__.py
@@ -0,0 +1,19 @@
+#
+# Copyright (C) 2008-2009 EDF R&D
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+#
diff --git a/src/daComposant/daMatrices/__init__.py b/src/daComposant/daMatrices/__init__.py
new file mode 100644
index 0000000..6bcb582
--- /dev/null
+++ b/src/daComposant/daMatrices/__init__.py
@@ -0,0 +1,19 @@
+#
+# Copyright (C) 2008-2009 EDF R&D
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+#
diff --git a/src/daComposant/daNumerics/ComputeFisher.py b/src/daComposant/daNumerics/ComputeFisher.py
new file mode 100644
index 0000000..d6c46f7
--- /dev/null
+++ b/src/daComposant/daNumerics/ComputeFisher.py
@@ -0,0 +1,116 @@
+#-*-coding:iso-8859-1-*-
+#
+# Copyright (C) 2008-2009 EDF R&D
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+#
+__doc__ = """
+    Numerical tool computing the Fisher variable, used to compare the
+    variances of 2 samples
+
+    This computation requires:
+    - as input:
+        - the two vectors (as list, array or matrix) of samples whose
+          variances are to be compared,
+        - the tolerance
+    - as output:
+        - the p-value,
+        - the value of the random variable,
+        - the answer to the test, and
+        - the message interpreting that answer.
+"""
+__author__ = "Sophie RICCI - July 2008"
+
+import numpy
+from scipy.stats import betai
+
+# ==============================================================================
+def ComputeFisher(vector1 = None, vector2 = None, tolerance = 0.05 ):
+    """
+    Numerical tool computing the Fisher variable, used to compare the
+    variances of 2 samples
+
+    This computation requires:
+    - as input : the two vectors (as list, array or matrix) of samples
+      whose variances are to be compared, and the tolerance
+    - as output : the p-value, the value of the random variable, the
+      answer to the test, and the message interpreting that answer.
+    """
+    if (vector1 is None) or (vector2 is None) :
+        raise ValueError("Two vectors must be given to calculate the Fisher value")
+    V1 = numpy.array(vector1)
+    V2 = numpy.array(vector2)
+    if (V1.size < 1) or (V2.size < 1):
+        raise ValueError("The given vectors must not be empty")
+    #
+    # Variances of the samples
+    # ------------------------
+    # where var is computed as var = sum (xi - xmean)**2 / n, the biased
+    # numpy default of std()
+    n1 = V1.size
+    n2 = V2.size
+    var1 = V1.std() * V1.std()
+    var2 = V2.std() * V2.std()
+    if (var1 > var2):
+        f = var1/var2
+        df1 = n1-1
+        df2 = n2-1
+    else:
+        f = var2/var1
+        df1 = n2-1
+        df2 = n1-1
+    prob1 = betai(0.5*df2,0.5*df1,float(df2)/float(df2+df1*f))
+    prob2 = (1. - betai(0.5*df1, 0.5*df2, float(df1)/float(df1+df2/f)))
+    prob  = prob1 + prob2
+    #
+    # p-value
+    # -------
+    areafisher = 100 * prob
+    #
+    # Test
+    # ----
+    message = "There is a %.2f%s chance of being wrong when rejecting the hypothesis of equal variances of the 2 samples (if <%.2f%s, equality is effectively rejected)"%(areafisher,"%",100.*tolerance,"%")
+    if (areafisher < (100.*tolerance)) :
+        answerTestFisher = False
+    else:
+        answerTestFisher = True
+    # print "The answer to the test is", answerTestFisher
+
+    return areafisher, f, answerTestFisher, message
+
+# ==============================================================================
+if __name__ == "__main__":
+    print "\nAUTOTEST\n"
+    #
+    # Samples
+    # -------
+    x1 = [-1., 0., 4., 2., -1., 3.]
+    x2 = [-1., 0., 4., 2., -1., 3.]
+    #
+    # Call of the computation
+    # -----------------------
+    [aire, f, reponse, message] = ComputeFisher(
+        vector1   = x1,
+        vector2   = x2,
+        tolerance = 0.05 )
+    #
+    print " aire.....:", aire
+    print " f........:", f
+    print " reponse..:", reponse
+    print " message..:", message
+    print
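Editor's note: scipy.stats.betai used above has since been deprecated and removed from scipy. A hedged cross-check of the same two-tailed F-test, assuming a scipy recent enough to expose scipy.stats.f; this sketch is not part of the commit:

import numpy
from scipy.stats import f as fisher_snedecor

def fisher_p_value(vector1, vector2):
    """Two-tailed F-test: p = 2 * P(F(df1, df2) >= var1/var2), with the
    larger (biased, numpy-default) variance on top, as in ComputeFisher."""
    V1 = numpy.asarray(vector1, dtype=float)
    V2 = numpy.asarray(vector2, dtype=float)
    var1, var2 = V1.var(), V2.var()
    if var1 > var2:
        fval, df1, df2 = var1 / var2, V1.size - 1, V2.size - 1
    else:
        fval, df1, df2 = var2 / var1, V2.size - 1, V1.size - 1
    return 100. * 2. * fisher_snedecor.sf(fval, df1, df2), fval

if __name__ == "__main__":
    area, fval = fisher_p_value([-1., 0., 4., 2., -1., 3.], [-1., 0., 4., 2., -1., 3.])
    print("areafisher = %.2f, f = %.2f" % (area, fval))   # identical samples: 100.00, 1.00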
diff --git a/src/daComposant/daNumerics/ComputeKhi2.py b/src/daComposant/daNumerics/ComputeKhi2.py
new file mode 100644
index 0000000..31e26d0
--- /dev/null
+++ b/src/daComposant/daNumerics/ComputeKhi2.py
@@ -0,0 +1,420 @@
+#-*-coding:iso-8859-1-*-
+#
+# Copyright (C) 2008-2009 EDF R&D
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+#
+__doc__ = """
+    Numerical tool computing the Khi2 (chi-square) variable
+
+    Two kinds of Khi2 test can be performed:
+    - goodness-of-fit test: compare the distribution of a sample to a
+      theoretical distribution,
+    - homogeneity test: compare the distributions of 2 vectors.
+
+    For the goodness-of-fit test, the reference is a Gaussian whose mean
+    and standard deviation are either computed on the sample or given.
+
+    This file contains a class "StatspourTests" whose methods perform the
+    various steps needed by the Khi2 tests.
+
+    It also contains 3 functions: ComputeKhi2_testGauss, ComputeKhi2_Gauss
+    and ComputeKhi2_Homogen.
+    - ComputeKhi2_testGauss : computes the Khi2 distance between a random
+      vector drawn from a Gaussian and a theoretical Gaussian distribution
+      of specified mean and standard deviation
+    - ComputeKhi2_Gauss : computes the Khi2 distance between a given vector
+      and a theoretical Gaussian distribution whose mean and standard
+      deviation are computed on the sample
+    - ComputeKhi2_Homogen : computes the Khi2 distance between two given
+      vectors
+
+    These functions require and provide:
+    - as input:
+        - the vector(s) whose distribution is studied,
+        - the theoretical distribution, and possibly its mean and standard
+          deviation,
+        - the width of the classes,
+        - a boolean controlling the removal of empty classes
+    - as output:
+        - the vector of classes,
+        - the theoretical and observed pdf,
+        - the Khi2 value,
+        - the p-value, which represents the area of the tail of the Khi2
+          distribution, and
+        - the message interpreting the test.
+"""
+__author__ = "Sophie RICCI - March 2010"
+
+import numpy
+from numpy import random
+from scipy import arange, asarray, stats
+from scipy.stats import histogram2, chisquare, chisqprob, norm
+import logging
+
+# ==============================================================================
+class StatspourTests :
+    """
+    Class of methods preparing the Khi2 tests
+    """
+    def __init__(self, cdftheo=None, meantheo = None, stdtheo = None, pdftest=None, obs=None, use_mean_std_exp=True, dxmin=0.01, obsHomogen = None, nbclasses = None) :
+
+        if (pdftest is None and obs is None) :
+            raise ValueError("Give either a test pdf or an obs vector")
+        if not obs is None :
+            if pdftest is None :
+                self.__obs=asarray(obs)
+        if not pdftest is None :
+            if obs is None :
+                if len(pdftest) == 3:
+                    niter=eval(pdftest[2])
+                    obs=[eval(" ".join(pdftest[:2])) for z in range(niter)]
+                    self.__obs=asarray(obs)
+                else :
+                    self.__obs=asarray(eval(" ".join(pdftest[:2])))
+        if not (obsHomogen is None) :
+            self.__obsHomogen = asarray(obsHomogen)
+            self.__testHomogen = True
+        else :
+            self.__testHomogen = False
+
+        self.__mean_exp = self.__obs.mean()
+        self.__std_exp  = self.__obs.std()
+
+        if cdftheo is None :
+            raise ValueError(" ... Define the cdftheo parameter ...")
+        if use_mean_std_exp :
+            self.__cdf=cdftheo( self.__mean_exp, self.__std_exp).cdf
+        else :
+            self.__cdf=cdftheo( meantheo, stdtheo).cdf
+
+        self.__min=min(self.__obs)
+        self.__max=max(self.__obs)
+        self.__N=len(self.__obs)
+        self.__use_mean_std_exp=use_mean_std_exp
+        self.__dxmin=dxmin
+        self.__nbclasses = nbclasses
+        if not (dxmin is None) and not (nbclasses is None) :
+            raise ValueError("... Specify either the number of classes or the width of the classes, not both")
+        if (dxmin is None) and (nbclasses is None) :
+            raise ValueError("... Specify either the number of classes or the width of the classes")
+        if not (nbclasses is None) and (dxmin is None) :
+            self.__dxmin = (self.__max - self.__min ) / float(self.__nbclasses)
+        return None
+
+    def MakeClasses(self) :
+        """
+        Sorting into classes
+        """
+        self.__subdiv=arange(self.__min,self.__max+self.__dxmin,self.__dxmin)
+        self.__modalites=len(self.__subdiv)
+        return None
+
+    def ComputeObs(self):
+        """
+        Compute the observed count of each class
+        """
+        self.__kobs=histogram2(self.__obs,self.__subdiv)[1:]
+        return self.__kobs
+
+    def ComputeObsHomogen(self):
+        """
+        Compute the observed count of each class for the homogeneity test
+        """
+        self.__kobsHomogen=histogram2(self.__obsHomogen,self.__subdiv)[1:]
+        return self.__kobsHomogen
+
+    def ComputeTheo(self):
+        """
+        Compute the theoretical count of each class
+        """
+        self.__ktheo=[self.__cdf(self.__subdiv[i+1])-self.__cdf(self.__subdiv[i]) for i in range(self.__modalites-1)]
+        self.__ktheo=asarray(self.__ktheo)
+        self.__ktheo=(sum(self.__kobs)/sum(self.__ktheo))*self.__ktheo
+
+    def Computepdfs(self) :
+        self.__subdiv=self.__subdiv[1:]
+        self.__pdfobs=[self.__kobs[i+1]/(self.__subdiv[i+1]-self.__subdiv[i]) for i in range(self.__modalites-2)]
+        if self.__testHomogen :
+            self.__pdftheo=[self.__kobsHomogen[i+1]/(self.__subdiv[i+1]-self.__subdiv[i]) for i in range(self.__modalites-2)]
+        else :
+            self.__pdftheo=[self.__ktheo[i+1]/(self.__subdiv[i+1]-self.__subdiv[i]) for i in range(self.__modalites-2)]
+        return self.__subdiv, self.__pdftheo, self.__pdfobs
+
+    def Computeddl(self):
+        """
+        Compute the number of degrees of freedom
+        """
+        self.__ddl = self.__modalites - 1.
+        if self.__use_mean_std_exp :
+            self.__ddl = self.__ddl - 2.
+        if (self.__ddl < 1.):
+            raise ValueError("The ddl is 0, you must increase the number of classes nbclasses")
+        logging.debug("Number of degrees of freedom=%s"%self.__ddl)
+
+    def ComputeValue(self) :
+        """
+        Compute the variable Q which follows a Khi-2 law
+        """
+        if self.__testHomogen :
+            kobs,ktheo=self.__kobs.tolist(),self.__kobsHomogen.tolist()
+        else :
+            kobs,ktheo=self.__kobs.tolist(),self.__ktheo.tolist()
+
+        # Remove the theoretical classes having less than one element
+        # (otherwise the Khi2 distance would tend to infinity)
+        ko,kt=[],[]
+        self.__count0 = 0.
+        for k,val in enumerate(ktheo):
+            if val > 1.0:
+                kt.append(val)
+                ko.append(kobs[k])
+            else :
+                self.__count0 = self.__count0 +1.
+        logging.debug("WARNING : number of empty classes removed (theoretical count below 1.) for the computation of the Khi2 value = %s"%self.__count0)
+        ef1,ef2=asarray(ko),asarray(kt)
+        count = 0.
+        for el in ef1.tolist() :
+            if el < 5. :
+                count = count +1.
+        for el in ef2.tolist() :
+            if el < 5. :
+                count = count +1.
+        pourcent_nbclasse_effecinf = count /(2.*len(ef1.tolist())) *100.
+        if (pourcent_nbclasse_effecinf > 20.) :
+            logging.debug("WARNING : percentage of classes whose count is below 5 elements = %s"%pourcent_nbclasse_effecinf)
+        k,p = chisquare(ef1, ef2)
+        k2, p2 = [k],[p]
+        for shift in range(1,6):
+            k,p=chisquare(ef1[shift:],ef2[:-shift])
+            k2.append(k)
+            p2.append(p)
+            k,p=chisquare(ef1[:-shift],ef2[shift:])
+            k2.append(k)
+            p2.append(p)
+        logging.debug("List of the Khi2 values = %s"%k2)
+        self.__khi2=min(k2)
+        self.__Q=self.__khi2
+
+        logging.debug("Khi2 value=%s"%self.__Q)
+        return self.__Q
+
+    def ComputeArea(self):
+        """
+        Compute the p-value
+        """
+        self.__areakhi2 = 100 * chisqprob(self.__Q, self.__ddl)
+        return self.__areakhi2
+
+    def WriteMessage(self):
+        """
+        Interpretation of the test
+        """
+        message = "There is a %.2f%s chance of being wrong when rejecting the goodness of fit"%(self.__areakhi2,"%")
+        return message
+
+    def WriteMessageHomogen(self):
+        """
+        Interpretation of the test
+        """
+        message = "There is a %.2f%s chance of being wrong when rejecting homogeneity"%(self.__areakhi2,"%")
+        return message
+
+# ==============================================================================
+def ComputeKhi2_testGauss(
+        meantheo = 0.,
+        stdtheo = 1.,
+        nech = 10,
+        dx = 0.1,
+        nbclasses = None,
+        SuppressEmptyClasses = True,
+        ):
+    """
+    Khi2 goodness-of-fit test between a random draw from a Gaussian and a theoretical Gaussian
+    """
+    essai = StatspourTests( cdftheo=norm, meantheo = meantheo, stdtheo = stdtheo, pdftest = ["random.normal","(%.3f,%.2f,%d)"%(meantheo,stdtheo,nech)], obs = None, use_mean_std_exp=False, dxmin=dx, obsHomogen = None, nbclasses = nbclasses)
+    essai.MakeClasses()
+    essai.ComputeObs()
+    essai.ComputeTheo()
+    classes,eftheo, efobs = essai.Computepdfs()
+    essai.Computeddl()
+    valeurKhi2= essai.ComputeValue()
+    areaKhi2 = essai.ComputeArea()
+    message = essai.WriteMessage()
+    logging.debug("message %s"%message)
+    return classes, eftheo, efobs, valeurKhi2, areaKhi2, message
+
+def ComputeKhi2_Gauss(
+        vectorV = None,
+        dx = 0.1,
+        SuppressEmptyClasses = True,
+        nbclasses = None
+        ):
+    """
+    Khi2 goodness-of-fit test between a given vector and a theoretical Gaussian whose mean and std are those of the vector
+    """
+    essai = StatspourTests( cdftheo=norm, pdftest = None, obs = vectorV, use_mean_std_exp=True, dxmin=dx, obsHomogen = None, nbclasses = nbclasses)
+    essai.MakeClasses()
+    essai.ComputeObs()
+    essai.ComputeTheo()
+    classes,eftheo, efobs = essai.Computepdfs()
+    essai.Computeddl()
+    valeurKhi2= essai.ComputeValue()
+    areaKhi2 = essai.ComputeArea()
+    message = essai.WriteMessage()
+    logging.debug("message %s"%message)
+    return classes, eftheo, efobs, valeurKhi2, areaKhi2, message
+
+def ComputeKhi2_Homogen(
+        vectorV1 = None,
+        vectorV2 = None,
+        dx = 0.1,
+        SuppressEmptyClasses = True,
+        nbclasses = None
+        ):
+    """
+    Khi2 homogeneity test between 2 vectors
+    """
+    essai = StatspourTests( cdftheo=norm, pdftest = None, obs = vectorV1, use_mean_std_exp=True, dxmin=dx, obsHomogen = vectorV2, nbclasses = nbclasses)
+    essai.MakeClasses()
+    essai.ComputeObs()
+    essai.ComputeObsHomogen()
+    classes,eftheo, efobs = essai.Computepdfs()
+    essai.Computeddl()
+    valeurKhi2= essai.ComputeValue()
+    areaKhi2 = essai.ComputeArea()
+    message = essai.WriteMessageHomogen()
+    logging.debug("message %s"%message)
+    return classes, eftheo, efobs, valeurKhi2, areaKhi2, message
+
+# ==============================================================================
+if __name__ == "__main__":
+    print '\n AUTODIAGNOSTIC \n'
+    #
+    numpy.random.seed(100)
+
+    # Goodness-of-fit check between a Gaussian and a Gaussian draw
+    print ''
+    print 'Goodness-of-fit check between a standard Gaussian and a Gaussian draw'
+    classes, eftheo, efobs, valeurKhi2, areaKhi2, message = ComputeKhi2_testGauss(meantheo = 0., stdtheo = 1., nech = 1000., dx = 0.1, SuppressEmptyClasses = True, nbclasses = None)
+    print '  valeurKhi2=',valeurKhi2
+    print '  areaKhi2=',areaKhi2
+    print '  ',message
+
+    if (numpy.abs(areaKhi2 - 99.91)< 1.e-2) :
+        print "The computation of the khisquare value is OK"
+    else :
+        raise ValueError("The computation of the khisquare value is WRONG")
+
+    numpy.random.seed(2490)
+
+    # Goodness-of-fit check between a Gaussian and a given vector
+    print ''
+    print 'Goodness-of-fit check between a Gaussian and a given vector'
+    V = random.normal(50.,1.5,1000)
+    classes, eftheo, efobs, valeurKhi2, areaKhi2, message = ComputeKhi2_Gauss(dx = 0.1, vectorV = V, SuppressEmptyClasses = True, nbclasses = None)
+    print '  valeurKhi2=',valeurKhi2
+    print '  areaKhi2=',areaKhi2
+    print '  ',message
+
+    if (numpy.abs(areaKhi2 - 99.60)< 1.e-2) :
+        print "The computation of the khisquare value is OK"
+    else :
+        raise ValueError("The computation of the khisquare value is WRONG")
+
+    # Homogeneity test between 2 given vectors
+    print ''
+    print 'Homogeneity test between 2 given vectors'
+    V1 = random.normal(50.,1.5,10000)
+    numpy.random.seed(2490)
+    V2 = random.normal(50.,1.5,10000)
+    classes, eftheo, efobs, valeurKhi2, areaKhi2, message = ComputeKhi2_Homogen(dx = 0.5, vectorV1 = V1, vectorV2 = V2, SuppressEmptyClasses = True, nbclasses = None)
+    print '  valeurKhi2=',valeurKhi2
+    print '  areaKhi2=',areaKhi2
+    print '  ',message
+
+    if (numpy.abs(areaKhi2 - 99.98)< 1.e-2) :
+        print "The computation of the khisquare value is OK"
+    else :
+        raise ValueError("The computation of the khisquare value is WRONG")
+
+    # Goodness-of-fit check between a Gaussian and a Gaussian draw, varying the number of classes, sample of size 10000
+    print ''
+    print 'Goodness-of-fit check between a Gaussian and a random Gaussian vector of size 10000'
+#    file = 'ComputeKhi2_adequationGauss_fctnbclasses_nech10000.gnu'
+#    fid = open(file, "w")
+#    lines = '%s\n' % ('# dx , nbclasses, valeurKhi2, ProbKhi2' )
+    numpy.random.seed(4000)
+    V = random.normal(0., 1.,10000)
+    aire = []
+    for dx in arange(0.01, 1., 0.001) :
+        classes, eftheo, efobs, valeurKhi2, areaKhi2, message = ComputeKhi2_Gauss(dx = dx, vectorV = V, SuppressEmptyClasses = True, nbclasses = None)
+#        lines += '%f %f %f %f\n' % (dx, numpy.size(classes), valeurKhi2, areaKhi2)
+        aire.append(areaKhi2)
+    meanaire = numpy.asarray(aire)
+#    fid.writelines(lines)
+
+    print " On average, there is a ", meanaire.mean(),"% chance of being wrong when rejecting the fit to a Gaussian law for a sample of size 10000"
+    print
+    if (numpy.abs( meanaire.mean() - 71.79)< 1.e-2) :
+        print "The computation of the khisquare value is OK"
+    else :
+        raise ValueError("The computation of the khisquare value is WRONG")
+
+    # Goodness-of-fit check between a Gaussian and a Gaussian draw, varying the number of classes, sample of size 1000
+    print ''
+    print 'Goodness-of-fit check between a Gaussian and a random Gaussian vector of size 1000'
+#    file = 'ComputeKhi2_adequationGauss_fctnbclasses_nech1000.gnu'
+#    fid = open(file, "w")
+#    lines = '%s\n' % ('# dx , nbclasses, valeurKhi2, ProbKhi2' )
+    numpy.random.seed(4000)
+    V = random.normal(0., 1.,1000)
+    aire = []
+    for dx in arange(0.05, 1., 0.001) :
+        classes, eftheo, efobs, valeurKhi2, areaKhi2, message = ComputeKhi2_Gauss(dx = dx, vectorV = V, SuppressEmptyClasses = True, nbclasses = None)
+#        lines += '%f %f %f %f\n' % (dx, numpy.size(classes), valeurKhi2, areaKhi2)
+        aire.append(areaKhi2)
+    meanaire = numpy.asarray(aire)
+#    fid.writelines(lines)
+
+    print " On average, there is a ", meanaire.mean(),"% chance of being wrong when rejecting the fit to a Gaussian law for a sample of size 1000"
+    print
+    if (numpy.abs( meanaire.mean() - 90.60)< 1.e-2) :
+        print "The computation of the khisquare value is OK"
+    else :
+        raise ValueError("The computation of the khisquare value is WRONG")
+
+    # Goodness-of-fit check between a Gaussian and a Gaussian draw, varying the number of classes, sample of size 100
+    print ''
+    print 'Goodness-of-fit check between a Gaussian and a random Gaussian vector of size 100'
+#    file = 'ComputeKhi2_adequationGauss_fctnbclasses_nech100.gnu'
+#    fid = open(file, "w")
+#    lines = '%s\n' % ('# dx , nbclasses, valeurKhi2, ProbKhi2' )
+    numpy.random.seed(4000)
+    V = random.normal(0., 1.,100)
+    aire = []
+    for dx in arange(0.1, 1., 0.01) :
+        classes, eftheo, efobs, valeurKhi2, areaKhi2, message = ComputeKhi2_Gauss(dx = dx, vectorV = V, SuppressEmptyClasses = True, nbclasses = None)
+#        lines += '%f %f %f %f\n' % (dx, numpy.size(classes), valeurKhi2, areaKhi2)
+        aire.append(areaKhi2)
+    meanaire = numpy.asarray(aire)
+#    fid.writelines(lines)
+
+    print " On average, there is a ", meanaire.mean(),"% chance of being wrong when rejecting the fit to a Gaussian law for a sample of size 100"
+    print
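Editor's note: histogram2 and chisqprob used above have also been removed from later scipy releases. A compact sketch of the same goodness-of-fit idea with the surviving scipy.stats.chisquare API (assuming a scipy recent enough that chisquare accepts ddof; the bin layout is illustrative and omits the empty-class pruning and shift heuristics of ComputeValue):

import numpy
from scipy.stats import norm, chisquare

def khi2_gauss_sketch(sample, nbclasses=10):
    """Goodness-of-fit of `sample` to a Gaussian fitted on it: compare
    observed bin counts to the counts predicted by the fitted cdf."""
    sample = numpy.asarray(sample, dtype=float)
    edges = numpy.linspace(sample.min(), sample.max(), nbclasses + 1)
    kobs, _ = numpy.histogram(sample, bins=edges)
    cdf = norm(sample.mean(), sample.std()).cdf
    ktheo = numpy.diff(cdf(edges))
    ktheo = ktheo * kobs.sum() / ktheo.sum()   # renormalize, as in ComputeTheo
    q, p = chisquare(kobs, ktheo, ddof=2)      # ddof=2: mean and std were estimated
    return q, 100. * p

if __name__ == "__main__":
    numpy.random.seed(100)
    q, area = khi2_gauss_sketch(numpy.random.normal(0., 1., 1000))
    print("Q = %s, p-value (%%) = %s" % (q, area))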
diff --git a/src/daComposant/daNumerics/ComputeStudent.py b/src/daComposant/daNumerics/ComputeStudent.py
new file mode 100644
index 0000000..3736490
--- /dev/null
+++ b/src/daComposant/daNumerics/ComputeStudent.py
@@ -0,0 +1,260 @@
+#-*-coding:iso-8859-1-*-
+#
+# Copyright (C) 2008-2009 EDF R&D
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+#
+__doc__ = """
+    Numerical tool computing the Student variables for 2 dependent or
+    independent vectors, with true variances assumed equal or different
+"""
+__author__ = "Sophie RICCI, Jean-Philippe ARGAUD - October 2008"
+
+import sys ; sys.path.insert(0, "../daCore")
+
+import numpy
+from scipy.stats import ttest_rel, ttest_ind, betai
+import logging
+
+# ==============================================================================
+def DependantVectors(vector1 = None, vector2 = None, tolerance = 0.05 ):
+    """
+    Numerical tool computing the Student variable for 2 dependent vectors
+    This computation requires:
+    - as input:
+        - the two vectors (as list, array or matrix) of samples whose
+          means are to be compared,
+        - the tolerance
+    - as output:
+        - the p-value,
+        - the value of the random variable,
+        - the answer to the test for the given tolerance, and
+        - the message interpreting that answer.
+    """
+    if (vector1 is None) or (vector2 is None) :
+        raise ValueError("Two vectors must be given to calculate the Student value")
+    V1 = numpy.array(vector1)
+    V2 = numpy.array(vector2)
+    if (V1.size < 1) or (V2.size < 1):
+        raise ValueError("The given vectors must not be empty")
+    if V1.size != V2.size:
+        raise ValueError("The two given vectors must have the same size, or the vector types are incompatible")
+    #
+    # p-value of the Student test
+    # --------------------------------------------------------------------
+    [t, prob] = ttest_rel(V1, V2)
+    areastudent = 100. * prob
+    #
+    logging.debug("DEPENDANTVECTORS t = %.3f, areastudent = %.3f"%(t, areastudent))
+    #
+    # Tests
+    # --------------------------------------------------------------------
+    message = "DEPENDANTVECTORS There is a %.2f %s chance of being wrong when rejecting the hypothesis of equal means of the 2 dependent samples (if <%.2f %s, equality is effectively rejected)"%(areastudent, "%", 100.*tolerance,"%")
+    logging.debug(message)
+    #
+    if (areastudent < (100.*tolerance)) :
+        answerTestStudent = False
+    else:
+        answerTestStudent = True
+    #
+    return areastudent, t, answerTestStudent, message
+
+# ==============================================================================
+def IndependantVectorsDifferentVariance(vector1 = None, vector2 = None, tolerance = 0.05 ):
+    """
+    Numerical tool computing the Student variable for 2 independent vectors assumed to have different true variances
+    As input: the two vectors and the tolerance
+    As output: the p-value, the value of the random variable, the answer to the test for the given tolerance, and the message interpreting that answer.
+    """
+    if (vector1 is None) or (vector2 is None) :
+        raise ValueError("Two vectors must be given to calculate the Student value")
+    V1 = numpy.array(vector1)
+    V2 = numpy.array(vector2)
+    if (V1.size < 1) or (V2.size < 1):
+        raise ValueError("The given vectors must not be empty")
+    #
+    # p-value of the Student test
+    # --------------------------------------------------------------------
+    # t = (m1 - m2)/ sqrt[ (var1/n1 + var2/n2) ]
+    # where var is computed as var = sum (xi - xmean)**2 / (n-1)
+    n1 = V1.size
+    n2 = V2.size
+    mean1 = V1.mean()
+    mean2 = V2.mean()
+    var1 = numpy.sqrt(n1)/numpy.sqrt(n1-1) * V1.std() * numpy.sqrt(n1)/numpy.sqrt(n1-1) * V1.std()
+    var2 = numpy.sqrt(n2)/numpy.sqrt(n2-1) * V2.std() * numpy.sqrt(n2)/numpy.sqrt(n2-1) * V2.std()
+    t = (mean1 - mean2)/ numpy.sqrt( var1/n1 + var2/n2 )
+    df = ( (var1/n1 + var2/n2) * (var1/n1 + var2/n2) ) / ( (var1/n1)*(var1/n1)/(n1-1) + (var2/n2)*(var2/n2)/(n2-1) )
+    zerodivproblem = var1/n1 + var2/n2 == 0
+    t = numpy.where(zerodivproblem, 1.0, t) # replace NaN t-values with 1.0
+    prob = betai(0.5*df,0.5,float(df)/(df+t*t))
+    areastudent = 100. * prob
+    #
+    logging.debug("IndependantVectorsDifferentVariance t = %.3f, areastudent = %.3f"%(t, areastudent))
+    #
+    # Tests
+    # --------------------------------------------------------------------
+    message = "IndependantVectorsDifferentVariance There is a %.2f %s chance of being wrong when rejecting the hypothesis of equal means of the 2 independent samples assumed to have different variances (if <%.2f %s, equality is effectively rejected)"%(areastudent, "%", 100.* tolerance,"%")
+    logging.debug(message)
+    if (areastudent < (100.*tolerance)) :
+        answerTestStudent = False
+    else:
+        answerTestStudent = True
+    #
+    return areastudent, t, answerTestStudent, message
+
+# ==============================================================================
+def IndependantVectorsEqualVariance(vector1 = None, vector2 = None, tolerance = 0.05 ):
+    """
+    Numerical tool computing the Student variable for 2 independent vectors assumed to have the same true variance
+    As input: the two vectors and the tolerance
+    As output: the p-value, the value of the random variable, the answer to the test for the given tolerance, and the message interpreting that answer.
+    """
+    if (vector1 is None) or (vector2 is None) :
+        raise ValueError("Two vectors must be given to calculate the Student value")
+    V1 = numpy.array(vector1)
+    V2 = numpy.array(vector2)
+    if (V1.size < 1) or (V2.size < 1):
+        raise ValueError("The given vectors must not be empty")
+    #
+    # p-value of the Student test
+    # --------------------------------------------------------------------
+    # t = sqrt(n1+n2-2) * (m1 - m2)/ sqrt[ (1/n1 +1/n2) * ( (n1-1)var1 + (n2-1)var2 )]
+    # where var is computed as var = sum (xi - xmean)**2 / (n-1)
+    [t, prob] = ttest_ind(V1, V2)
+    areastudent = 100. * prob
+    #
+    logging.debug("IndependantVectorsEqualVariance t = %.3f, areastudent = %.3f"%(t, areastudent))
+    # Tests
+    # --------------------------------------------------------------------
+    message = "IndependantVectorsEqualVariance There is a %.2f %s chance of being wrong when rejecting the hypothesis of equal means of the 2 independent samples assumed to have the same variance (if <%.2f %s, equality is effectively rejected)"%(areastudent, "%", 100.* tolerance,"%")
+    logging.debug(message)
+    if (areastudent < (100.*tolerance)) :
+        answerTestStudent = False
+    else:
+        answerTestStudent = True
+
+    return areastudent, t, answerTestStudent, message
+
+# ==============================================================================
+if __name__ == "__main__":
+    print '\n AUTODIAGNOSTIC \n'
+    # logging.getLogger().setLevel(logging.DEBUG)
+
+    print
+    print " Student test for dependent vectors"
+    print " ----------------------------------"
+    # Sample draw
+    V1 = numpy.matrix(([-1., 0., 4.])).T
+    V2 = numpy.matrix(([-2., 0., 8.])).T
+    V1 = V1.A1
+    V2 = V2.A1
+    #
+    # Call of the DependantVectors tool and initialization of the inputs
+    [aire, Q, reponse, message] = DependantVectors(
+        vector1   = V1,
+        vector2   = V2,
+        tolerance = 0.05)
+    #
+    # Check against the computation without the scipy.stats routines
+    # (ref. Numerical Recipes)
+    n = V1.size
+    df= n -1
+    # The scipy.stats routines use a variance computed with n-1, and not n as in std
+    # t = (m1 - m2)/ sqrt[(varx1 + varx2 - 2 cov(x1, x2))/n ]
+    # where var is computed as var = sum (xi - xmean)**2 / (n-1)
+    var1 = numpy.sqrt(n)/numpy.sqrt(n-1)* V1.std() * numpy.sqrt(n)/numpy.sqrt(n-1) * V1.std()
+    var2 = numpy.sqrt(n)/numpy.sqrt(n-1)* V2.std() * numpy.sqrt(n)/numpy.sqrt(n-1) * V2.std()
+    m1 = V1.mean()
+    m2 = V2.mean()
+    cov = 0.
+    for j in range(0, n) :
+        cov = cov + (V1[j] - m1)*(V2[j] - m2)
+    cov = cov /df
+    sd = numpy.sqrt((var1 + var2 - 2. *cov) / n)
+    tverif = (m1 -m2) /sd
+    aireverif = 100. * betai(0.5*df,0.5,float(df)/(df+tverif*tverif))
+    if (aireverif - aire < 1.e-5) :
+        print " The computation agrees with the Numerical Recipes algorithm"
+    else :
+        raise ValueError("The computation does not agree with the Numerical Recipes algorithm")
+
+    if (numpy.abs(aire - 57.99159)< 1.e-5) :
+        print " The computation is CORRECT on this example."
+    else :
+        raise ValueError("The computation is WRONG on this example.")
+
+    print
+    print " Student test for independent vectors assumed to have different variances"
+    print " -------------------------------------------------------------------------"
+    # Sample draw
+    V1 = numpy.matrix(([-1., 0., 4.])).T
+    V2 = numpy.matrix(([-2., 0., 8.])).T
+    V1 = V1.A1
+    V2 = V2.A1
+    #
+    # Call of the IndependantVectorsDifferentVariance tool and initialization of the inputs
+    [aire, Q, reponse, message] = IndependantVectorsDifferentVariance(
+        vector1   = V1,
+        vector2   = V2,
+        tolerance = 0.05)
+    #
+    if (numpy.abs(aire - 78.91339)< 1.e-5) :
+        print " The computation is CORRECT on this example."
+    else :
+        raise ValueError("The computation is WRONG on this example.")
+
+    print
+    print " Student test for independent vectors assumed to have the same variance"
+    print " -----------------------------------------------------------------------"
+    # Sample draw
+    V1 = numpy.matrix(([-1., 0., 4.])).T
+    V2 = numpy.matrix(([-2., 0., 8.])).T
+    V1 = V1.A1
+    V2 = V2.A1
+    #
+    # Call of the IndependantVectorsEqualVariance tool and initialization of the inputs
+    [aire, Q, reponse, message] = IndependantVectorsEqualVariance(
+        vector1   = V1,
+        vector2   = V2,
+        tolerance = 0.05)
+    #
+    # Check against the computation without the scipy.stats routines (ref. Numerical Recipes)
+    n1 = V1.size
+    n2 = V2.size
+    df= n1 + n2 -2
+    # The scipy.stats routines use a variance computed with n-1, and not n as in std
+    var1 = numpy.sqrt(n1)/numpy.sqrt(n1-1)* V1.std() * numpy.sqrt(n1)/numpy.sqrt(n1-1) * V1.std()
+    var2 = numpy.sqrt(n2)/numpy.sqrt(n2-1)* V2.std() * numpy.sqrt(n2)/numpy.sqrt(n2-1) * V2.std()
+    m1 = V1.mean()
+    m2 = V2.mean()
+    var = ((n1 -1.) *var1 + (n2 -1.) *var2 ) /df
+    tverif = (m1 -m2) /numpy.sqrt(var*(1./n1 + 1./n2))
+    aireverif = 100. * betai(0.5*df,0.5,float(df)/(df+tverif*tverif))
+    #
+    if (aireverif - aire < 1.e-5) :
+        print " The computation agrees with the Numerical Recipes algorithm"
+    else :
+        raise ValueError("The computation does not agree with the Numerical Recipes algorithm")
+
+    if (numpy.abs(aire - 78.42572)< 1.e-5) :
+        print " The computation is CORRECT on this example."
+    else :
+        raise ValueError("The computation is WRONG on this example.")
+
+    print
diff --git a/src/daComposant/daNumerics/__init__.py b/src/daComposant/daNumerics/__init__.py
new file mode 100644
index 0000000..6bcb582
--- /dev/null
+++ b/src/daComposant/daNumerics/__init__.py
@@ -0,0 +1,19 @@
+#
+# Copyright (C) 2008-2009 EDF R&D
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+#
+# See http://www.salome-platform.org/ or email : webmaster.salome@opencascade.com
+#
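Editor's note: the three tests hand-coded in ComputeStudent.py above are exposed directly by later scipy releases. Assuming scipy >= 0.11 for the equal_var keyword, this sketch (not part of the commit) should reproduce the autotest percentages:

import numpy
from scipy import stats

V1 = numpy.array([-1., 0., 4.])
V2 = numpy.array([-2., 0., 8.])

t_rel, p_rel = stats.ttest_rel(V1, V2)                    # dependent (paired) samples
t_eq,  p_eq  = stats.ttest_ind(V1, V2)                    # independent, equal variances
t_w,   p_w   = stats.ttest_ind(V1, V2, equal_var=False)   # independent, Welch correction

# The autotest percentages above are 100 * p
print("paired: %.5f  equal: %.5f  Welch: %.5f" % (100.*p_rel, 100.*p_eq, 100.*p_w))
# expected ~ 57.99159, ~ 78.42572, ~ 78.91339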