*Dr. Vladimir A. Kamyshnikov*

Department of Economics

Tomsk State Architectural University,
Tomsk, Russia

URL: http://site.yahoo.com/baturovo,
e-mail: kva_son@yahoo.com

address: 634003, Russia, Tomsk-3,
POB 3046

tel: +7 (3822) 723633, fax: +7 (3822) 723970

**INTRODUCTION**

The optimization approach to problems of complex-system synthesis is in itself a large reserve for raising the quality of planning, management and design. The choice of the optimization objective and of the ranges over which the parameters may vary is the task of the particular economic and technical branches of science; the optimization mechanism itself is the subject of mathematical programming.

The success of linear programming in increasing the effectiveness of economic modeling and in the optimization of planning is well known. It is less pronounced in engineering, management and design, since adequate accuracy in those fields requires taking nonlinear effects into consideration. The creation of the simplex method and the appearance of powerful computers made linear programming an important tool for solving a wide range of problems, but at the same time exposed its weakness.^{1)} Most problems cannot be adequately captured by a linear programming model because they include nonlinear goal functions or constraints. All of this drew the special attention of mathematicians to progress in nonlinear programming research.

It is important to mention that many nonlinear optimization problems arising in economics and engineering are described by convex or concave functionals and convex sets of feasible parameter values (for more information see the Casten theorem). In the literature on the subject, attention is mainly devoted to convex programming. The reason is the possibility of producing a universal method for solving problems in the basic form, as well as problems with deviations from it.^{2)} The same cannot be guaranteed for other nonlinear problems with a wider range of parameters. Let me leave the former aside and talk about the latter, as it is the subject of my research.

**DESCRIPTION**

Before turning to the description of my research, let me first dwell briefly on the main definitions of nonlinear problems. These are tasks in which two assumptions are relaxed: divisibility and additivity. This means that the goal function and the constraints may be nonlinear, and the variables may take values from some set, including a discrete set. The following classes of mathematical programming problems belong here:

- Integer programming problems with linear and nonlinear goal functions and constraint functions;

- Mixed integer programming problems with linear and nonlinear goal functions and constraint functions;

- Discrete programming problems, in which the values of the variables are chosen from a given set of rational numbers, not necessarily integers, again with linear and nonlinear goal and constraint functions;

- Nonlinear programming problems with convex and concave functions;

- Nonlinear programming problems with multiple-optima functions on non-convex and/or convex and/or disconnected regions of the variables;

- The solution of equations and systems of equations involving the classes of functions identified above.
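
The problem classes above can be made concrete with a tiny example. The following is a hypothetical illustration of the first class (an integer program with a nonlinear goal function), small enough to solve by exhaustive enumeration; the objective, bounds and constraint are my own, not taken from this paper:

```python
import itertools

# Toy integer nonlinear program (hypothetical illustration):
#   minimize   f(x, y) = (x - 1.3)^2 + (y - 2.7)^2
#   subject to x + y <= 4,  with x, y integer in {0, ..., 5}
def f(x, y):
    return (x - 1.3) ** 2 + (y - 2.7) ** 2

best = None
for x, y in itertools.product(range(6), repeat=2):
    if x + y <= 4:                           # feasibility check
        if best is None or f(x, y) < f(*best):
            best = (x, y)
# the best feasible integer point is (1, 3) with f = 0.18
```

Enumeration works only for a handful of integer variables; it is shown here solely to make the problem class tangible, not as a practical method.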

To my knowledge, the prevailing opinion among specialists is that a universal method for solving nonlinear problems, like the simplex method used for linear ones, is impossible to create, and that the only way out is a specialized method for each particular type of programming problem. I hope to show here that this view is out of date. The application of specialized algorithms requires an exact correspondence between the real model and the class of problems the particular algorithm can solve. In case of any deviation, you have to change the model, usually by simplifying it to the requirements of the algorithm. It is important to add that the reality was already simplified in the initial model (in order to fit the needs of an algorithm). As a result, you answer a question that practice never asked: to meet the requirements of an imperfect mathematical method, the model of real conditions is replaced by a picture that either you or the algorithm found convenient. Such fractional (relaxed) results are inapplicable, for example, to integer programming problems, because their data are used for forming planning decisions in complicated situations. Problems with nonlinear functions are important for the non-proportional variation of expenses, the determination of productivity, quality assurance, technical and economic applications, the sphere of nonlinear physical laws, and others.

To begin the description of my research, I want to state that I have created a unique and general method for solving nonlinear problems of any level of complexity. The problems this method solves arise in the calculation of optimization problems and the solution of nonlinear equations in such areas as modeling and simulation, operations management, machine vision, robotics and automation. Using my method I successfully solved the test problems given by Wolf, H. H. Rosenbrock and M. J. D. Powell, as well as various sets of nonlinear equations. With it I am capable of solving quite complicated nonlinear optimization problems of the classic form:

min F(X)

subject to:

W_{i}(X) <= 0, i = 1, ..., m_{1},

G_{i}(X) = 0, i = 1, ..., m_{2},

where X is an n-vector and the functions F, W_{i} and G_{i} must be free of discontinuities but may be linear, nonlinear, multi-extremal, convex or non-convex. My universal method for finding the optimal solution of the above classes of problems consists of two well-known and simple algorithms: the method of the deformed (flexible) polyhedron and the method of gradient descent. It is well known that these two methods usually produce only local solutions. But this is not true of my technique: combined in my algorithm, these simple methods always yield global solutions. Obviously there is no magic in it; the results I obtained (and proved theoretically) are the effect of extending the theory of convex functions. The algorithm was checked during two years of investigation, illustrated by the solution of several hundred (indeed, thousands of) test and real nonlinear problems (not all of them are included in this work as examples), and also applied theoretically. It meets all the requirements that I imposed before creating it. I was very thorough while testing it; my testing included the following steps:

- I solved all the test problems I could find in the literature on the subject.
- All real problems from the literature were solved too.
- I successfully tested it on several hundred self-created problems with the number of variables ranging from 1 to 500.
- I solved problems with imposed linear and nonlinear constraints, including discrete constraints on the variables.
- I solved several sets of equations for clients via e-mail.
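
Since the algorithm itself is not published in this paper, I cannot reproduce it here; the following is only a generic sketch of how the classic constrained form min F(X) subject to W_i(X) <= 0 is commonly reduced to an unconstrained problem with an exterior quadratic penalty and handed to a simple local search. The one-dimensional example F(x) = (x-3)^2 with W(x) = x-2 <= 0 (whose optimum is x = 2) is my own illustration, not the author's method:

```python
# Exterior penalty sketch: infeasibility is charged quadratically, and the
# penalty weight mu is increased so the minimizer is driven to the boundary.
def penalized(x, mu):
    F = (x - 3.0) ** 2                  # goal function (convex here)
    W = x - 2.0                         # constraint W(x) <= 0
    return F + mu * max(0.0, W) ** 2    # penalty active only when infeasible

def ternary_search(g, lo, hi, iters=200):
    """Minimize a unimodal one-dimensional function g on [lo, hi]."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if g(m1) < g(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

x_star = 0.0
for mu in (1.0, 1e2, 1e4, 1e6):         # tighten the penalty gradually
    x_star = ternary_search(lambda x, mu=mu: penalized(x, mu), 0.0, 5.0)
# x_star approaches the constrained optimum x = 2
```

A penalty reduction of this kind finds only a local optimum when F is multi-extremal; that is precisely the gap the author claims his combination of the deformed-polyhedron and gradient-descent methods closes.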

**COMPARISONS**

It is worth mentioning that, ever since the existence of both local and global optima was first recognized, information on algorithms for locating them has been accumulating. Even so, it is hard for me to draw any direct comparison of my method with others. According to information from Northwestern University and Argonne National Laboratory, a method such as mine does not exist: scientists consider it unrealistic to expect one general nonlinear optimization code to work for every kind of nonlinear model, and recommend instead selecting a code that fits the particular problem being solved.^{3)}

I will try here to suggest possible points of comparison. It is feasible to relate the solution time of my algorithm to that of others. I compared my algorithm with the stochastic search code of the Georgian professor Chichenadze: test problem N_14 was solved within 14 seconds, while it took Chichenadze 10 hours.^{4)} A modified method for nonlinear and stochastic optimization systems based on the ideas of Professors L. Ingber and M. J. D. Powell makes the solution more precise and finds the global optimum with a preset probability, but it does not guarantee the global solution theoretically: for that algorithm a counterexample will always emerge that it cannot solve. This cannot be said of my algorithm, which finds the global optimum every time. From the computational point of view it is free of the common rounding loss in the accuracy of the results; its error equals the error of calculating the optimum at the optimal point, which is determined by the chosen software and the type of computer. However, I should add that any universal algorithm (and mine is no exception) will not be as good at particular problems as an algorithm created specifically for them. To my knowledge, there is no other algorithm of such broad universality as mine.

**EXAMPLES**

N_1. Equation with multi-extreme goal function.

Minimize:

EE1=EXP{SIN(10*XX)*SIN(123+XX^0.73+3*XX^2.13+XX^1.55)}

where:

EE2=ABS{[X1^0.9+X2^0.8]/[1.1+COS(X3^2.3+X4^1.3)]}-3 < 0

EE3=XX-ABS{(X1^0.9+X2^0.8)/[1.1+COS(X3^2.3+X4^1.3)]} = 0

Solution:

EE1=0.3682, X1=1.9484, X2=1.5357, X3=2.6601, X4=1.445, EE2=-0.3259, EE3=0.00

N_2. Equation with multi-extreme goal function.

Minimize:

EE_{1}=3(SIN(4(3+X_{1})))^{2}+(SIN(4+X_{1}))^{12}

IF X_{1}<7.9 OR X_{1}>8.1
then EE_{1}=2+SIN(EE_{1})

where (<=0):

EE_{2}=-X_{1}.

Solution:

EE_{1}=0.00061823, X_{1}=7.995.

Control:

EE_{1}=0.001509568, X_{1}=8.00
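
The control value above can be re-checked by direct evaluation of the stated formula (assuming, as throughout the examples, that the trigonometric arguments are in radians):

```python
import math

# Goal function of Example N_2, including its conditional penalty branch.
def ee1(x):
    v = 3 * math.sin(4 * (3 + x)) ** 2 + math.sin(4 + x) ** 12
    if x < 7.9 or x > 8.1:
        v = 2 + math.sin(v)             # branch fires outside [7.9, 8.1]
    return v

control = ee1(8.00)                     # approx. 0.0015, matching the control
outside = ee1(8.2)                      # penalty branch pushes the value above 1
```
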

N_3. Equation with multi-extreme goal function.

Minimize:

EE1=SIN((X_{1}+SIN(X_{1})+4)^{2}/(10+EXP(SIN(33X_{1}))))

IF X_{1}<16 OR X_{1}>16.2 then EE_{1}=2+EE_{1}

where (<=0):

EE_{2}=-X_{1}.

Solution:

EE_{1}=-0.99999999, X_{1}=16.097

Control:

EE_{1}=-0.975, X_{1}=16.1

N_4. Equation with multi-extreme goal function (Rosenbrock's function).

Minimize:

EE_{1}=100(X_{2}-X_{1}^{2})^{2}+(1-X_{1})^{2}

where (<=0):

EE_{2}=-X_{1}.

Solution:

EE_{1}=0.00, X_{1}=0.99999
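
The Rosenbrock function has a known analytic global minimum, F = 0 at (X1, X2) = (1, 1), so the reported solution can be checked by direct evaluation. Since X2 is not reported above, I assume X2 = X1 at the reported point, which is consistent with the valley X2 = X1^2 near 1:

```python
# Rosenbrock function of Example N_4.
def rosenbrock(x1, x2):
    return 100 * (x2 - x1 ** 2) ** 2 + (1 - x1) ** 2

at_optimum = rosenbrock(1.0, 1.0)             # exactly 0 at the global minimum
near_reported = rosenbrock(0.99999, 0.99999)  # X2 = X1 assumed (not reported)
```
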

N_5. Equation with multi-extreme goal function (Powell's function).

Minimize:

EE_{1}=(X_{1}+10X_{2})^{2}+5(X_{3}-X_{4})^{2}+(X_{2}-2X_{3})^{4}+10(X_{1}-X_{4})^{4}

Where:

X_{1}, X_{2}, X_{3}, X_{4} >= 0

Solution:

EE_{1}=0.00, X_{1}=0.00, X_{2}=0.00, X_{3}=0.00, X_{4}=0.00

N_6. Equation with multi-extreme goal function.

Minimize:

EE_{1}=Sum(I=1,…,100)[(X_{I}-I)^{2}+10SIN(3.1416*3/2*X_{I}/I)+10SIN(5*3.1416*3/2*X_{I}/I)

+10SIN(9*3.1416*3/2*X_{I}/I)+10SIN(13*3.1416*3/2*X_{I}/I)+10SIN(17*3.1416*3/2*X_{I}/I)

+10SIN(21*3.1416*3/2*X_{I}/I)+10SIN(25*3.1416*3/2*X_{I}/I)+10SIN(27*3.1416*3/2*X_{I}/I)

+10SIN(33*3.1416*3/2*X_{I}/I)+10SIN(41*3.1416*3/2*X_{I}/I)+10SIN(81*3.1416*3/2*X_{I}/I)]

where (<=0):

EE_{j}=X_{j}-200,
j=1,…,100

Solution:

EE_{1}=-8999.999, X_{1}=0.999, X_{2}=1.999, X_{3}=2.999, X_{4}=3.999,
. . . , X_{99}=98.999, X_{100}=99.999

N_7. Equation with multi-extreme goal function and a non-convex, disconnected feasible region.

Minimize:

EE_{1}=(SIN(X_{1})+SIN(7X_{1}))

where (<=0):

EE_{2}=(COS(3.22X_{1})+1)

EE_{3}=(X_{1}-10).

Solution:

EE_{1}=-0.5928, X_{1}=4.879,
EE_{2}=0.00000321, EE_{3}=-5.12096

N_8. Equation with multi-extreme goal function and a non-convex, disconnected feasible region.

Minimize:

EE_{1}=0.001((X_{1}-10)(X_{1}-1)(X_{1}-2)(X_{1}-3)(X_{1}-4)(X_{1}-5)*

(X_{1}-6)(X_{1}-7)(X_{1}-8)(X_{1}-9)+

COS(17X_{1})(X_{2}-5)(X_{2}-6))

where (<=0):

EE_{2}=(SIN(X_{1}+X_{2}))^{2}

Solution:

EE_{1}=-42.934069, X_{1}=1.29246267,
X_{2}=11.2795776, EE_{2}=0.0000321

N_9. Equation with multi-extreme goal function.

Minimize:

EE_{1}=(X_{1}-22)^{2}+(SIN(13X_{1}-3))^{4}

where (<=0):

EE_{2}=-X_{1}

Solution:

EE_{1}=0.00022543, X_{1}=21.98646728

Control:

EE_{1}=0.000354, X_{1}=21.99
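
As with Example N_2, the control point can be re-checked by evaluating the goal function directly (radians assumed; I read the second term as (SIN(13X1-3))^4, the form this term also takes in Example N_11):

```python
import math

# Goal function of Example N_9.
def ee9(x):
    return (x - 22.0) ** 2 + math.sin(13 * x - 3) ** 4

control = ee9(21.99)                    # approx. 0.000354, matching the control
```
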

N_10. Equation with multi-extreme goal function.

Minimize:

EE_{1}=22SIN(0.001X_{1})(X_{1}-5)^{2}(1/(X_{1}+4))(SIN(13*X_{1})+COS(20*X_{1}))

where (<=0):

EE_{2}=-X_{1}

Solution:

EE_{1}=-35.2194, X_{1}=8.6238524

Control:

EE_{1}=-35.143360, X_{1}=8.62

N_11. Equation with multi-extreme goal function.

Minimize:

EE_{1}=3(SIN(4(3+X_{1})))^{2}+(SIN(4+X_{1}))^{12}+(X_{2}-22)^{2}+(SIN(13X_{2}-3))^{4}

+SIN((X_{3}+SIN(X_{3})+4)^{2}/(10+EXP(SIN(33X_{3}))))
+22SIN(0.01X_{4})(X_{4}-5)^{2}(1/(X_{4}+4))(SIN(13X_{4})+COS(20X_{4}))

+100(X_{6}-X_{5}^{2})^{2}+(1-X_{5})^{2}+(X_{7}+10X_{8})^{2}+5(X_{9}-X_{10})^{2}

+(X_{8}-2X_{9})^{4}+10(X_{7}-X_{10})^{4}

where :

X_{1},X_{2},…,X_{10} >= 0

Solution:

EE_{1}=-36.07, X_{1}=1.3200E-001, X_{2}=22.20, X_{3}=16.035, X_{4}=8.6238,
X_{5}=1.030, X_{6}=1.0622, X_{7}=4.6579E-009, X_{8}=4.65E-009,
X_{9}=5.3838E-003, X_{10}=5.3836E-003

N_12. Equation with multi-extreme goal function.

EE1=3(SIN(4(3+X_{1})))^{2}+(SIN(4+X_{1}))^{12}

EE2=(X_{2}-22)^{2}+(SIN(13X_{2}-3))^{4}

EE3=SIN((X_{3}+SIN(X_{3})+4)^{2}/(10+EXP(SIN(33*X_{3}))))

EE4=SIN(0.01X_{4})*(X_{4}-5)^{2}*22(1/(X_{4}+4))(SIN(13X_{4})+COS(20X_{4}))

IF X_{4} > 10 then EE4=1

IF X_{4} < 9 then EE4=EE4*10

EE5=100(X_{6}-X_{5}^{2})^{2}+(1-X_{5})^{2}

EE6=(X_{7}+10X_{8})^{2}+5(X_{9}-X_{10})^{2}
+(X_{8}-2X_{9})^{4}+10(X_{7}-X_{10})^{4}

EE7=4/3(X_{11}^{2}-X_{11}X_{12}+X_{12}^{2})

IF EE7 < 0 then EE7=0

Minimize:

EE(1)=EE1+EE2+EE3+EE4+EE5+EE6+EE7^{0.75}-X_{13}

where (<=0):

EE_{2}=X_{13}-2

EE_{16}=-X_{11}-X_{12}-X_{13}

Solution:

EE(1)=-35.088, X_{1}=1.32129E-001, X_{2}=22.6815, X_{3}=4.359E-002, X_{4}=8.623,
X_{5}=1.041, X_{6}=1.085, X_{7}=3.4943E-002, X_{8}=0.00, X_{9}=8.978625E-002,
X_{10}=8.74408E-002, X_{11}=3.36281E-003, X_{12}=1.25656E-003, X_{13}=1.43497,
EE_{2}=-5.6502E-001, EE_{16}=-1.43959

N_13. Equation with multi-extreme goal function and a non-convex, disconnected feasible region.

W=35, R=7, T1=2. XX=X1^0.5+(X2/X1)^0.5+(64/X2)^0.5

Minimize:

EE1=ABS(XX-3-(W-2*SIN(0.5*SIN(XX^0.3)))/R*T1*0.5+(2+(SIN(22*XX^3))^2))

where (<=0):

EE2=(SIN(3.1416*X1))^2-0.1

EE3=(SIN(3.1416*X2))^2-0.1

EE4=(X1+1E-3)-X2

Solution:

EE1=0.00, X1=1.999, X2=5.0906, EE2=-0.099, EE3=-0.021, EE4=-3.0904

N_14. Equation with multi-extreme goal function.

XX=ABS((X1^0.9+X2^0.8)/(1.1+COS(X3^2.3+X4^1.3)))+Sum(j=1,…,100)(0.1*j*Xj^(1/j))

Minimize:

EE1=EXP(SIN(10*XX)*SIN(123+XX^0.73-3*XX^2.13+XX^1.55))

where (<=0):

EE2=ABS((X1^0.9+X2^0.8)/(1.1+COS(X3^2.3+X4^1.3)))-3

Solution:

EE1=0.3679, X1=...=X100=1.

**References**

1) Mokhtar S. Bazaraa and C. M. Shetty, "Nonlinear Programming: Theory and Algorithms", John Wiley and Sons, New York, 1979, p. 8.

2) Judin D. V., "Mathematical Programming", Moscow Press, Moscow, 1982, p. 39.

3) For more information see http://www.mcs.anl.gov/otc/Guide/faq/nonlinear-programming-faq.html

4) Chichenadze V. K., "Solution of Non-Convex Nonlinear Optimization Problems", "Science" (Nauka), Moscow, 1983, p. 1.