A Universal Global Method for Solving Systems of Non-Linear Equations and Non-Linear Programming Problems with Multiple-Optima Functions on Non-Convex and Disconnected Regions

Dr. Vladimir A. Kamyshnikov
Department of Economics
Tomsk State Architectural University, Tomsk, Russia
URL: http://site.yahoo.com/baturovo, e-mail: kva_son@yahoo.com
Address: 634003, Russia, Tomsk-3, POB 3046
Tel: +7 (3822) 723633, fax: +7 (3822) 723970

INTRODUCTION

The optimization approach to the synthesis of complex systems is in itself a significant reserve for raising the quality of planning, management and design. Choosing the optimization objective and the admissible ranges of the parameters is the task of the particular economic and technical branches of science; the optimization machinery itself is the subject of mathematical programming.

The success of linear programming in raising the effectiveness of economic modeling and of planning optimization is well known. It is less evident in engineering, management and design, since greater accuracy in those fields requires taking nonlinear effects into account. The creation of the simplex method and the appearance of powerful computers made linear programming an important tool for solving diverse problems, but at the same time exposed its weakness.1) Most practical problems cannot be adequately described by a linear programming model, because they involve nonlinear goal functions or constraints. All of this has drawn the special attention of mathematicians to research in nonlinear programming.

It is important to mention that many nonlinear optimization problems arising in economics and engineering involve convex or concave functionals and convex regions of feasible solutions (for more information see the Casten theorem). In the literature on the subject, attention is mainly devoted to convex programming. The reason is the possibility of producing a universal method that solves problems of the basic form as well as problems with moderate deviations from it.2) The same cannot be guaranteed for other nonlinear problems over a wider range of parameters. Let me leave aside the former and dwell on the latter, as it is the subject of my research.

DESCRIPTION

Before turning to the description of my research, let me first recall the main definitions. Nonlinear programming problems are those in which two assumptions of the linear model are relaxed: additivity and divisibility. This means that the goal function and the constraints may be nonlinear, and the variables may take values from an arbitrary set, including a discrete one. The following classes of mathematical programming problems belong here:

A large number of books are devoted to solving the above-mentioned problems. Suffice it to say that H. M. Wagner's "Principles of Operations Research", published in 1973, references more than 162 publications, and "Nonlinear Programming" by Bazaraa and Shetty, published in 1982, cites some 600 sources. The literature on the subject is, of course, not limited to these. The reason for such attention lies in the demands of practice: the absence of an effective and fast method for finding the global optimum in such problems leads in practice to artificial simplification of the mathematical model, and as a consequence the characteristics of systems in economics, finance, engineering, chemistry, physics, architecture, etc., deteriorate.

To my knowledge, specialists hold the opinion that a universal method for solving nonlinear problems, analogous to the simplex method for linear ones, is impossible to create, and that the only way out is a specialized method for each particular type of programming problem. I hope to show that this view is no longer up to date. Applying a specialized algorithm requires exact correspondence between the real model and the class of problems that this particular algorithm can solve. Given any deviation, one has to change the model, usually by simplifying it to fit the requirements of the algorithm, and this on top of the simplification of reality already made when the model was first built. As a result, one answers a question that practice never asked: to meet the requirements of an imperfect mathematical method, the model of real conditions is replaced by a picture that the analyst or the algorithm found convenient. Fractional results, for example, are inapplicable to integer programming problems whose data are used for forming planned decisions in complicated situations. Problems with nonlinear functions are important for non-proportional variation of expenses, determination of productivity, quality assurance, technical and economic applications, domains governed by nonlinear physical laws, and others.

To begin the description of my research, I want to state that I have created a general method for solving nonlinear problems of any level of complication. The problems this method solves arise in optimization and in the solution of nonlinear equations, in such areas as modeling and simulation, operations management, machine vision, robotics and automation. Using the method I have successfully solved the test problems of Wolfe, H. H. Rosenbrock and M. J. D. Powell, as well as various systems of nonlinear equations. With it I am able to solve quite complicated nonlinear optimization problems of the classic form:

min F(X)

subject to:

Wi(X) <= 0, i = 1, ... , m1,

Gi(X) = 0, i = 1, ... , m2,

where X is an n-vector and the functions F, Wi and Gi must have no gaps (discontinuities), but may otherwise be linear, nonlinear, multi-extremal, convex or non-convex. My universal method of finding the optimal solution of such problems consists of two well-known and simple algorithms: the deformed-polyhedron method (the Nelder-Mead simplex search) and the method of gradient descent. It is well known that these two methods by themselves usually produce only local solutions; within my algorithm, however, they always yield the global solution. Obviously there is no magic in this: the results, proved theoretically, follow from an extension of the theory of convex functions. The algorithm was checked during two years of investigation, illustrated by the solution of several hundred, indeed thousands, of test and real nonlinear problems (not all of them included here as examples), and also applied theoretically. It meets all the requirements that I imposed before creating it. I was very thorough in testing it; my testing included the following steps:
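How the two named building blocks are commonly combined can be illustrated by a minimal sketch: a quadratic penalty folds the constraints into the objective, and a restarted local search (here a plain gradient descent with backtracking, standing in for either building block) is launched from random points. This is my own illustration of the general scheme, not the author's algorithm; the penalty weight `mu`, the restart count and the bounds are arbitrary assumptions.

```python
import random

def penalized(F, ineqs, eqs, mu=1e3):
    """Fold constraints Wi(x)<=0 and Gi(x)=0 into one objective
    via a quadratic penalty (mu is an assumed weight)."""
    def P(x):
        v = F(x)
        for w in ineqs:
            v += mu * max(0.0, w(x)) ** 2
        for g in eqs:
            v += mu * g(x) ** 2
        return v
    return P

def num_grad(f, x, h=1e-6):
    """Central-difference gradient estimate."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def descend(f, x, iters=500):
    """Gradient descent with simple backtracking (monotone)."""
    fx = f(x)
    for _ in range(iters):
        g = num_grad(f, x)
        t = 1.0
        while t > 1e-12:
            xn = [xi - t * gi for xi, gi in zip(x, g)]
            fn = f(xn)
            if fn < fx:
                x, fx = xn, fn
                break
            t *= 0.5
        else:
            break  # no improving step found
    return x, fx

def multistart(f, bounds, restarts=40, seed=1):
    """Restart the local search from random points; keep the best."""
    rnd = random.Random(seed)
    best, best_v = None, float("inf")
    for _ in range(restarts):
        x0 = [rnd.uniform(lo, hi) for lo, hi in bounds]
        xs, v = descend(f, x0)
        if v < best_v:
            best, best_v = xs, v
    return best, best_v

# Illustration on example N_4 below: the Rosenbrock objective
# with the single constraint -X1 <= 0.
F = lambda x: 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2
P = penalized(F, ineqs=[lambda x: -x[0]], eqs=[])
x, v = multistart(P, [(-2, 2), (-2, 2)])
print(x, v)
```

Such a multistart scheme only raises the probability of hitting the global basin; the author's claim is precisely that his combination of the two methods does better and guarantees the global optimum.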

The results obtained were always satisfactory: the algorithm found the global optimum of multi-extremal functions, the only type of function I computed. The program embodying my method, which can also be used as a subprogram, is written in FORTRAN-77 for MS-DOS. Solution time depends on the type of problem, the number of variables and the complexity of the functions. On a Pentium 200 I did not encounter a problem whose solution took longer than an hour; in my practice, a problem with 100 variables was solved within 15 minutes. The algorithm stops its search when either the global optimum is found or a given number of iterations is reached.

COMPARISONS

It is worth mentioning that ever since the existence of both local and global optima was realized, algorithms for locating them have been sought. It is hard for me to draw direct comparisons between my method and others. According to information from Northwestern University and Argonne National Laboratory, no method such as mine exists: scientists consider it unrealistic to expect one general nonlinear optimization code to work for every kind of nonlinear model, and recommend instead selecting a code that fits the problem being solved.3)

I will nevertheless try to establish some points of comparison. It is feasible to compare the solution times of my algorithm and of others. I compared my algorithm with the stochastic search code of the Georgian professor Chichenadze: test problem N_14 was solved by my algorithm within 14 seconds, whereas Chichenadze's code needed 10 hours.4) Modified methods of nonlinear and stochastic optimization based on the ideas of Professors L. Ingber and M. J. D. Powell make the solution more precise and find the global optimum with a preset probability, but they do not guarantee the global solution theoretically: for any such algorithm a counterexample will emerge that it cannot solve. This cannot be said of my algorithm, which finds the global optimum every time. From the computational point of view it is free of the usual accumulation of rounding error: its error equals the error of evaluating the optimum at the optimal point, which is determined by the chosen software and the type of computer. I should add, however, that any universal algorithm (mine is no exception) will not solve certain particular problems as well as an algorithm created specifically for them. To my knowledge, there is no other algorithm of such broad universality as mine.

EXAMPLES

N_1. Problem with a multi-extremal goal function.

Minimize:

EE1=EXP(SIN(10*XX)*SIN(123+XX^0.73+3*XX^2.13+XX^1.55))

where:

EE2=ABS((X1^0.9+X2^0.8)/(1.1+COS(X3^2.3+X4^1.3)))-3 <= 0

EE3=XX-ABS((X1^0.9+X2^0.8)/(1.1+COS(X3^2.3+X4^1.3))) = 0

Solution:

EE1= 0.3682, X1= 1.9484, X2= 1.5357, X3= 2.6601, X4= 1.445, EE2=-0.3259, EE3=0.00

N_2. Problem with a multi-extremal goal function.

Minimize:

EE1=3*(SIN(4*(3+X1)))^2+(SIN(4+X1))^12

IF X1<7.9 OR X1>8.1 THEN EE1=2+SIN(EE1)

where (<=0):

EE2=-X1.

Solution:

EE1=0.00061823, X1=7.995.

Control:

EE1=0.001509568, X1=8.00
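Reading the exponents of N_2 as a square and a twelfth power, i.e. EE1=3*(SIN(4*(3+X1)))^2+(SIN(4+X1))^12, the control value can be reproduced numerically. This check is mine, not part of the original FORTRAN program:

```python
import math

def ee1(x1):
    # Objective of example N_2 with the exponents restored.
    v = 3 * math.sin(4 * (3 + x1)) ** 2 + math.sin(4 + x1) ** 12
    if x1 < 7.9 or x1 > 8.1:
        v = 2 + math.sin(v)
    return v

print(ee1(8.0))    # control point X1=8.00; the text reports 0.001509568
print(ee1(7.995))  # the reported optimum
```

The value at X1=8.00 agrees with the control figure 0.001509568 given above, which supports this reading of the formula.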

N_3. Problem with a multi-extremal goal function.

Minimize:

EE1=SIN((X1+SIN(X1)+4)^2/(10+EXP(SIN(33*X1))))

IF X1<16 OR X1>16.2 THEN EE1=2+EE1

where (<=0):

EE2=-X1.

Solution:

EE1=-0.99999999, X1=16.097

Control:

EE1=-0.975, X1=16.1

N_4. Problem with a multi-extremal goal function (Rosenbrock's function).

Minimize:

EE1=100*(X2-X1^2)^2+(1-X1)^2

where(<=0 ):

EE2=-X1.

Solution:

EE1=0.00, X1=0.99999
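Rosenbrock's function 100*(X2-X1^2)^2+(1-X1)^2 attains its known global minimum F=0 at X1=X2=1, consistent with the reported solution. A quick independent check (mine, not the original program):

```python
def rosenbrock(x1, x2):
    # Example N_4 with the exponents restored.
    return 100 * (x2 - x1 ** 2) ** 2 + (1 - x1) ** 2

print(rosenbrock(1.0, 1.0))          # exact minimum: 0.0
print(rosenbrock(0.99999, 0.99998))  # near the reported X1
```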

N_5. Problem with a multi-extremal goal function (Powell's function).

Minimize:

EE1=(X1+10*X2)^2+5*(X3-X4)^2+(X2-2*X3)^4+10*(X1-X4)^4

Where:

X1, X2, X3, X4 >= 0

Solution:

EE1=0.00, X1=0.00, X2=0.00, X3=0.00, X4=0.00
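Powell's quartic is a sum of even powers, so it is non-negative everywhere and vanishes at the origin, matching the reported solution. An independent check (mine), sampling random non-negative points as well:

```python
import random

def powell(x1, x2, x3, x4):
    # Example N_5 with the exponents restored.
    return ((x1 + 10 * x2) ** 2 + 5 * (x3 - x4) ** 2
            + (x2 - 2 * x3) ** 4 + 10 * (x1 - x4) ** 4)

print(powell(0, 0, 0, 0))  # 0 at the reported optimum
rnd = random.Random(0)
vals = [powell(*[rnd.uniform(0, 5) for _ in range(4)]) for _ in range(1000)]
print(min(vals) >= 0)      # non-negativity on random feasible points
```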

N_6. Problem with a multi-extremal goal function.

Minimize:

EE1=Sum(I=1,...,100)[(XI-I)^2+10*SIN(3.1416*3/2/I*XI)+10*SIN(5*3.1416*3/2/I*XI)

+10*SIN(9*3.1416*3/2/I*XI)+10*SIN(13*3.1416*3/2/I*XI)+10*SIN(17*3.1416*3/2/I*XI)

+10*SIN(21*3.1416*3/2/I*XI)+10*SIN(25*3.1416*3/2/I*XI)+10*SIN(27*3.1416*3/2/I*XI)

+10*SIN(33*3.1416*3/2/I*XI)+10*SIN(41*3.1416*3/2/I*XI)+10*SIN(81*3.1416*3/2/I*XI)]

where (<=0):

EEI=XI-200, I=1,...,100

Solution:

EE1=-8999.999, X1=0.999,X2=1.999,X3=2.999,X4=3.999, . . . , X99=98.999, X100=99.999

N_7. Problem with a multi-extremal goal function and a non-convex, disconnected feasible region.

Minimize:

EE1=SIN(X1)+SIN(7*X1)

where (<=0):

EE2=COS(3.22*X1)+1

EE3=X1-10.

Solution:

EE1=-0.5928, X1=4.879, EE2=0.00000321, EE3=-5.12096
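Assuming EE1=SIN(X1)+SIN(7*X1) and EE2=COS(3.22*X1)+1 as the intended reading, the constraint EE2 <= 0 can only hold where the cosine equals -1, i.e. at the isolated points X1=(2k+1)*pi/3.22, so the feasible set is disconnected. Enumerating those points up to the bound X1 <= 10 (my own check):

```python
import math

def f(x):
    return math.sin(x) + math.sin(7 * x)

# Feasible points of N_7: cos(3.22*x) = -1  =>  x = (2k+1)*pi/3.22, x <= 10.
candidates = []
k = 0
while True:
    x = (2 * k + 1) * math.pi / 3.22
    if x > 10:
        break
    candidates.append(x)
    k += 1

best = min(candidates, key=f)
print(best, f(best))
```

The best candidate lies at X1 = 5*pi/3.22 ≈ 4.878 with EE1 ≈ -0.588, next to the reported X1=4.879; the slightly lower reported value -0.5928 corresponds to the small constraint tolerance EE2 = 0.00000321 allowed in the reported solution.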

N_8. Problem with a multi-extremal goal function and a non-convex, disconnected feasible region.

Minimize:

EE1=0.001*((X1-10)*(X1-1)*(X1-2)*(X1-3)*

(X1-4)*(X1-5)*(X1-6)*(X1-7)*(X1-8)*(X1-9)+

COS(17*X1)*(X2-5)*(X2-6))

where (<=0):

EE2=(SIN(X1+X2))^2

Solution:

EE1=-42.934069, X1=1.29246267, X2=11.2795776, EE2=0.0000321

N_9. Problem with a multi-extremal goal function.

Minimize:

EE1=(X1-22)^2+(SIN(13*X1-3))^4

where (<=0):

EE2=-X1

Solution:

EE1=0.00022543, X1=21.98646728

Control:

EE1=0.000354, X1=21.99
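With N_9 read as EE1=(X1-22)^2+(SIN(13*X1-3))^4, both the reported optimum and the control value are reproduced. An independent check of this reconstruction (mine, not the original program):

```python
import math

def ee1(x1):
    # Example N_9 with the exponents restored.
    return (x1 - 22) ** 2 + math.sin(13 * x1 - 3) ** 4

print(ee1(21.98646728))  # reported optimum; the text gives 0.00022543
print(ee1(21.99))        # control point; the text gives 0.000354
```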

N_10. Problem with a multi-extremal goal function.

Minimize:

EE1=22*SIN(0.001*X1)*(X1-5)^2*(1/(X1+4))*(SIN(13*X1)+COS(20*X1))

where (<=0):

EE2=-X1

Solution:

EE1=-35.2194, X1=8.6238524

Control:

EE1=-35.143360, X1=8.62

N_11. Problem with a multi-extremal goal function.

Minimize:

EE1=3*(SIN(4*(3+X1)))^2+(SIN(4+X1))^12+(X2-22)^2+(SIN(13*X2-3))^4

+SIN((X3+SIN(X3)+4)^2/(10+EXP(SIN(33*X3))))+22*SIN(0.01*X4)*(X4-5)^2*(1/(X4+4))*

(SIN(13*X4)+COS(20*X4))+100*(X6-X5^2)^2+(1-X5)^2+(X7+10*X8)^2+5*(X9-X10)^2

+(X8-2*X9)^4+10*(X7-X10)^4

where:

X1, X2, ..., X10 >= 0

Solution:

EE1 = -36.07,X1= 1.3200E-001

X2= 22.20,X3= 16.035,X4= 8.6238, X5= 1.030, X6= 1.0622

X7= 4.6579E-009,X8= 4.65E-009, X9= 5.3838E-003, X10= 5.3836E-003

N_12. Problem with a multi-extremal goal function.

EE1=3*(SIN(4*(3+X1)))^2+(SIN(4+X1))^12

EE2=(X2-22)^2+(SIN(13*X2-3))^4

EE3=SIN((X3+SIN(X3)+4)^2/(10+EXP(SIN(33*X3))))

EE4=SIN(0.01*X4)*(X4-5)^2*22*(1/(X4+4))*(SIN(13*X4)+COS(20*X4))

IF X4 > 10 THEN EE4=1

IF X4 < 9 THEN EE4=EE4*10

EE5=100*(X6-X5^2)^2+(1-X5)^2

EE6=(X7+10*X8)^2+5*(X9-X10)^2+(X8-2*X9)^4+10*(X7-X10)^4

EE7=4/3*(X11^2-X11*X12+X12^2)

IF EE7 < 0 THEN EE7=0

Minimize:

EE(1)=EE1+EE2+EE3+EE4+EE5+EE6+EE7*0.75-X13

where (<=0):

EE2=X13-2

EE16=-X11-X12-X13

Solution:

EE(1)=-35.088, X1=1.32129E-001, X2= 22.6815,X3=4.359E-002,X4=8.623

X5=1.041,X6=1.085, X7=3.4943E-002, X8= 0.00,X9=8.978625E-002

X10=8.74408E-002,X11=3.36281E-003,X12=1.25656E-003,X13=1.43497

EE2=-5.6502E-001,EE16=-1.43959

N_13. Problem with a multi-extremal goal function and a non-convex, disconnected feasible region.

W=35, R=7, T1=2, XX=X1^0.5+(X2/X1)^0.5+(64/X2)^0.5

Minimize:

EE1=ABS(XX-3-(W-2*SIN(0.5*SIN(XX^0.3)))/(R*T1*0.5+(2+(SIN(22*XX^3))^2)))

where (<=0):

EE2=(SIN(3.1416*X1))^2-0.1

EE3=(SIN(3.1416*X2))^2-0.1

EE4=(X1+1E-3)-X2

Solution:

EE1= 0.00,X1=1.999, X2=5.0906, EE2= -0.099,EE3= -0.021,EE4= -3.0904

N_14. Problem with a multi-extremal goal function.

XX=ABS((X1^0.9+X2^0.8)/(1.1+COS(X3^2.3+X4^1.3)))+Sum(j=1,...,100)(0.1*j*Xj^(1/j))

Minimize:

EE1=EXP(SIN(10*XX)*SIN(123+XX^0.73-3*XX^2.13+XX^1.55))

where (<=0):

EE2=ABS((X1^0.9+X2^0.8)/(1.1+COS(X3^2.3+X4^1.3)))-3

Solution:

EE(1) = 0.3679,X1,...,X100 = 1.


References

1) Mokhtar S. Bazaraa, C. M. Shetty, "Nonlinear Programming: Theory and Algorithms", John Wiley and Sons, New York, 1979, p. 8.
2) Judin D. V., "Mathematical Programming", Moscow Press, Moscow, 1982, p. 39.
3) For more information see http://www.mcs.anl.gov/otc/Guide/faq/nonlinear-programming-faq.html
4) Chichenadze V. K., "Solution of Non-Convex Nonlinear Optimization Problems", "Science" Publishers, Moscow, 1983.