Ph.D. Theses
Browsing Ph.D. Theses by Issue Date
Item Multiobjective linear programming and multiobjective zero-one programming (Thesis (Ph.D.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 1981.) Kızıltan, Gülseren.; Kavrakoğlu, İbrahim.

Item Cournot equilibrium with free entry for capitalistic and workers' enterprises (Thesis (Ph.D.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 1985.) Pinhas, David.; Sertel, Murat R. The purpose of this dissertation is to determine the Cournot Equilibrium with Free Entry for capitalistic and workers' enterprises. An alternative Cournot-like free-entry notion to Novshek's CEFE, which we call the Cournot-Sertel Equilibrium, is used to accomplish this purpose. The analysis is carried out in a model in which the potential firms, all producing a homogeneous good, possess identical workers and a common technology, and preferences and production functions follow the functional forms of the log-linear model developed previously by Sertel. Making use of the Cournot-Sertel Equilibrium concept, the size of the oligopoly, the production level, the capital and labor inputs of each firm, the number of workers in each firm, the total output of the industry and the price of the good are determined at equilibrium for profit-maximizing firms, for utility-maximizing workers' enterprises and, for the first time, for an industry where both coexist. Furthermore, it is demonstrated that the Cournot-Sertel Equilibrium is also Novshek's CEFE for capitalistic and workers' enterprises in a log-linear model.

Item Optimal control of generalized storage models (Thesis (Ph.D.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 1985.) Doğu, Gülay.; Özekici, Süleyman. The purpose of this dissertation is to study the optimal control problem of generalized storage processes over an infinite planning horizon.
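For discounted infinite-horizon control problems of this kind, the sufficient optimality condition typically couples the generator of the controlled process with the return function. A generic textbook form (not the thesis's exact equation, whose generator and control terms are specific to the storage model) is:

```latex
% Generic discounted-return optimality condition:
%   V      - optimal expected discounted return
%   A^{u}  - generator of the process under control u
%   r(x,u) - return rate in state x under control u
%   \alpha - discount rate
\[
  \sup_{u \in U} \left\{ (A^{u} V)(x) + r(x, u) - \alpha V(x) \right\} = 0 .
\]
```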
The generalization of the controlled storage process allows for both positive and negative jumps of the stochastic input process as well as controlled inputs and outputs. The extension of the theory for the optimal control of generalized storage processes mainly consists of studying various aspects of the uncontrolled storage model, deriving the sufficient condition of optimality, verifying the existence of a unique solution and studying its properties. The approach is to specify the stochastic structure of the processes involved in the model and the monotonicity properties of the controls so as to guarantee the existence of a unique solution to the storage equation, to construct the Markov process model for the content level of the store, and then to apply Markov decision theory in order to characterize the expected infinite-horizon discounted return. Consequently, the sufficient condition of optimality is established as a functional differential equation in terms of the generator of the storage process and is shown to possess a unique and continuously differentiable solution. In the process of verifying the existence and uniqueness of the optimal return and optimal controls, the deterministic version is considered first so as to shed light upon the nature of the solution methodology, and then the results obtained are extended to include the stochastic processes inherent in the generalized storage model.

Item Simultaneous scheduling of machines and the material handling system in a flexible manufacturing system (Thesis (Ph.D.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 1991.) Bilge, Ümit.; Ulusoy, Gündüz. The scheduling of the material handling system is a critical issue in a Flexible Manufacturing System (FMS), although it has little importance in a job shop.
The purpose of this dissertation is to exploit the interactions between machine scheduling and the scheduling of the material handling system in an FMS, and to integrate the two by addressing them simultaneously. In the FMS under consideration, the material transfer between machines is performed by a number of identical Automated Guided Vehicles (AGVs). Upon completing a loaded trip, an AGV is designated to its next pick-up station. Therefore, the travel times of the empty trips depend on the ending and starting points of the successive loaded trips assigned to a vehicle. This notion of sequence-dependent travel times increases the difficulty of the problem. As a first step, the combined machine and material handling system scheduling problem is formulated as a nonlinear mixed integer programming model, which turns out to be of intractable size for real-world problems. The problem is then decomposed into two subproblems, one having the characteristics of a machine scheduling problem and the other being a vehicle scheduling problem, and an iterative solution procedure is developed. At each iteration, a new machine schedule, generated by a heuristic procedure, is checked for feasibility with respect to the vehicle scheduling subproblem. To do this, the operation completion times obtained from the machine schedule are used to construct "time windows" for each material handling trip, and the second subproblem is handled as a "sliding time window" problem. The procedure is numerically tested on a number of example problems.

Item Local constraint based analysis in resource constrained scheduling (Thesis (Ph.D.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 1991.) Özdamar, Linet.; Ulusoy, Gündüz. Resource constrained scheduling problems are generally solved by optimization techniques which cannot handle large problems within acceptable computation time.
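The "time window" feasibility idea for AGV trips described above can be sketched in miniature: each loaded trip must start within a window derived from the machine schedule, and between successive trips the vehicle makes an empty trip whose duration depends on where the previous loaded trip ended. The function and data below are illustrative assumptions, not the thesis's procedure.

```python
# Hypothetical sketch of time-window feasibility for one AGV: each loaded trip
# is (earliest, latest, loaded_time, start_point, end_point), sorted by window
# start; empty-trip durations are sequence-dependent (they depend on where the
# previous loaded trip ended).

def feasible_single_vehicle(trips, empty_time):
    clock, position = 0.0, None
    for earliest, latest, loaded_time, start, end in trips:
        if position is not None:
            clock += empty_time(position, start)  # deadhead to next pick-up
        clock = max(clock, earliest)              # wait until window opens
        if clock > latest:                        # window already closed
            return False
        clock += loaded_time                      # perform the loaded trip
        position = end
    return True

# Toy example: stations on a line, unit speed, so empty time = distance.
dist = lambda a, b: abs(a - b)
trips = [(0, 5, 3, 0, 4), (6, 10, 2, 1, 3)]
print(feasible_single_vehicle(trips, dist))   # True: both windows can be met
```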
Yet, fast and near-optimal schedules are required in actual dynamic environments. A method with a quick response to dynamic conditions is the use of dispatching rules, which are tested and evaluated in simulation studies or against static optimal solutions. The disadvantage of such heuristics lies in the dependence of their performance on problem characteristics. The constraint based analysis (CBA) approach employed here stands midway between these two methods. The performance criterion is imposed on the scheduling process as a constraint, together with the resource limitations. These constraints are treated as locally essential at each scheduling decision point, and activities are sequenced in accordance with them. In this research, the local essential conditions are explained in detail, and the static and quasi-dynamic algorithms for resource constrained scheduling in which they are incorporated are described. Static solutions for previously solved project scheduling test problems obtained by local CBA are compared with well-known heuristics as well as with the optimal solutions. Furthermore, CBA results for static job-shop problems representing various shop characteristics are compared with heuristics.

Item A new uniform order-based crossover operator for genetic algorithm applications to multi-component combinatorial optimization problems (Thesis (Ph.D.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 1997.) Sivrikaya-Şerifoğlu, Funda.; Ulusoy, Gündüz. In this thesis, a new uniform order-based crossover operator to be used in genetic algorithm applications to combinatorial optimization problems is presented. There are three pure types of combinatorial problems: assignment, sequencing and selection problems. Most real-world combinatorial problems involve components of all three types at the same time.
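For the sequencing component alone, a plain uniform order-based crossover can be sketched as follows: a random mask keeps some genes from one parent, and the remaining genes are filled in the relative order in which they appear in the other parent. This is the generic textbook operator, not MCUOX itself, which additionally handles selection components and precedence feasibility.

```python
import random

# Uniform order-based crossover for a single permutation (sequencing) component:
# positions marked by a random mask inherit from p1; the remaining genes are
# inserted in the relative order they occur in p2, so the child stays a
# valid permutation.

def uniform_order_based_crossover(p1, p2, rng=random):
    mask = [rng.random() < 0.5 for _ in p1]
    kept = {g for g, keep in zip(p1, mask) if keep}
    fill = iter(g for g in p2 if g not in kept)
    return [g if keep else next(fill) for g, keep in zip(p1, mask)]

rng = random.Random(42)
child = uniform_order_based_crossover([1, 2, 3, 4, 5], [5, 4, 3, 2, 1], rng)
print(child)   # always a permutation of 1..5
```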
The new operator is applicable to multi-component combinatorial optimization problems which involve one sequencing component and one or more selection components. Examples are in abundance and include communications network design problems and machine and project scheduling problems. The operator is called the multi-component uniform order-based crossover, or MCUOX. MCUOX processes ordering and value information concurrently. It works within a natural and direct representation of the problems. It is both general and powerful, being able to effectively search all the components of the problem. It enhances building block propagation, never violates precedence constraints provided the parents are precedence-feasible, and is able to deal with additional selection components without any significant overhead. The application and effectiveness of MCUOX is illustrated on three well-known difficult problems: simultaneous scheduling of machines and automated guided vehicles, the resource constrained project scheduling problem with discounted cash flows, and parallel machine scheduling with earliness and tardiness penalties. Keywords: Crossover operators, genetic algorithms, combinatorial optimization.

Item Implicit algebraic curves and surfaces for shape modeling and recognition (Thesis (Ph.D.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 1997.) Çivi, Hakan.; Erçil, Aytül. Implicit algebraic 2D curves and 3D surfaces are among the most powerful shape representations. With this approach, objects in 2D images are described by their silhouettes (surfaces in 3D) and then represented by 2D implicit polynomial curves; objects in 3D data are represented by implicit polynomial surfaces. Implicit polynomial degrees of 2 and greater are used, typically up to 18 for 2D curves and 12 for 3D surfaces.
In this thesis, four contributions are presented whose common theme is the use of implicit polynomials for 2D and 3D shape modeling and object recognition. Through the proposed concepts and algorithms, we argue that implicit polynomials provide a fast, low-computational-cost model for low-error indexing into shape databases in 2D and 3D. The first contribution finds algebraic invariants of implicit polynomials under projective, affine, and Euclidean transformations by using and extending the symbolic method of classical invariant theory, and shows how the invariants can be used for object recognition in invariant spaces. The second contribution is a novel model-based explicit algebraic expression for 2D-2D coordinate transformation estimation for Euclidean, similarity, and non-uniform scale transformations. The proposed explicit algebraic pose estimation expressions involve only the implicit polynomial coefficients for an object in two positions, and are drawn from an observation of how the implicit polynomial object model transforms under coordinate transformations. The third contribution is a 3D version of the 3L algorithm and an investigation of the specific problems in applying the technique in 3D as opposed to 2D. Finally, the fourth contribution explores the problem of obtaining affine invariant fits based on 3L fitting, as the 3L algorithm is inherently Euclidean invariant but not affine invariant.

Item New neurocomputational approaches for estimating road travel distances and for solving the Euclidean traveling salesman problem (Thesis (Ph.D.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 1999.) Aras, Necati.; Altınel, İ. Kuban. Neural networks are among the most rapidly developing new scientific tools. There are numerous publications reporting their success in estimation and optimization.
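The implicit-polynomial representation used above can be illustrated in its simplest form: a 2D implicit curve is the zero set f(x, y) = 0, and the sign of f classifies a point as inside or outside the shape. The degree-2 coefficients below describe a unit circle and are purely illustrative; real applications fit much higher degrees.

```python
# Implicit 2D curve f(x, y) = 0 of degree 2 (a conic). For the unit circle
# f(x, y) = x^2 + y^2 - 1: points with f < 0 lie inside, f > 0 outside.

def conic(coeffs, x, y):
    a, b, c, d, e, f = coeffs          # a*x^2 + b*x*y + c*y^2 + d*x + e*y + f
    return a*x*x + b*x*y + c*y*y + d*x + e*y + f

unit_circle = (1.0, 0.0, 1.0, 0.0, 0.0, -1.0)
print(conic(unit_circle, 0.0, 0.0))    # -1.0 (inside)
print(conic(unit_circle, 1.0, 0.0))    #  0.0 (on the curve)
print(conic(unit_circle, 2.0, 0.0))    #  3.0 (outside)
```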
This work concentrates on both of these aspects and applies neural networks to two problems from operations research. The first is the distance estimation problem, which deals with estimating the length of the shortest road connecting two points on the earth's surface. First, multilayer perceptrons are adopted. Then, a neural clustering strategy based on the principle of vector quantization is applied prior to the estimation. The results are superior to those reported in the literature. The second problem is the well-known Euclidean traveling salesman problem: determining the shortest tour passing through the cities of a given set by visiting each city exactly once. A new adaptive scheme is developed to solve this problem. The new approach incorporates explicit statistical information obtained from the city coordinates into the adaptation mechanism of Kohonen's self-organizing map. Results obtained for different problems are better than the previous ones. The new approach is then adapted to the solution of the Euclidean Hamiltonian path problem, whose combination with the decomposition philosophy results in a fully neural Euclidean traveling salesman problem algorithm.

Item Classifier combination methods in pattern recognition (Thesis (Ph.D.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 2002.) Baykut, Alper.; Barbarosoğlu, Gülay.; Erçil, Aytül. This thesis studies methodologies for combining multiple classifiers to improve classification accuracy. Different classifiers, training methods and combination algorithms are covered throughout this study. The classifiers are extended to produce class probability estimates besides their class assignments, so that they can be combined more efficiently. They are integrated in a framework to provide a toolbox for classifier combination.
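Combining class-probability estimates, as described above, can be sketched as a weighted average: each classifier's probability vector is weighted (e.g., by its training-phase performance) and the class with the highest combined score is chosen. This is a generic sketch; the thesis's actual weight assignment is more elaborate.

```python
# Weighted probability-based classifier combination: the combined score of
# each class is a weighted average of the per-classifier probability
# estimates; the class with the highest combined score wins.
# The weights and probabilities below are illustrative only.

def combine(prob_lists, weights):
    total = sum(weights)
    n_classes = len(prob_lists[0])
    scores = [sum(w * p[k] for w, p in zip(weights, prob_lists)) / total
              for k in range(n_classes)]
    return max(range(n_classes), key=scores.__getitem__), scores

# Three classifiers, two classes; weights reflect training-phase performance.
probs = [[0.6, 0.4], [0.3, 0.7], [0.8, 0.2]]
weights = [0.5, 0.2, 0.3]
label, scores = combine(probs, weights)
print(label, scores)   # class 0 wins with combined score 0.6
```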
The leave-one-out training method is used and the results are combined using the proposed weighted combination algorithms. The weights of the classifiers for the weighted combination are determined according to the performance of the classifiers in the training phase. The classifiers and combination algorithms are evaluated using classical and proposed performance measures. It is found that the integration of the proposed reliability measure improves classification performance. A sensitivity analysis shows that the proposed polynomial weight assignment, applied with probability-based combination, is robust with respect to the choice of classifiers in the classifier set and yields a consistent improvement of typically one to three per cent over the single best classifier of the same set.

Item On unidirectional cyclic layouts, Hamiltonian circuits, capacitated vehicle routes and minimal spanning trees (Thesis (Ph.D.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 2004.) Öncan, Temel.; Altınel, İ. Kuban. This thesis consists of four major parts. The first part is on the Unidirectional Cyclic Layout Problem (UCLP). First, new efficient heuristics for the UCLP are proposed, based on ideas originally developed for two well-known combinatorial optimization problems: the Asymmetric Traveling Salesman Problem (ATSP) and the Linear Ordering Problem (LOP). Then, we consider the balanced case of the UCLP satisfying the additional conservation-of-flow assumption, namely that the material flow is conserved at every workstation, and we develop a branch and bound algorithm for the Balanced UCLP (BUCLP). In the second part, we propose new extended ATSP formulations with O(n^3) constraints and analyze the strength of their linear programming (LP) relaxations both analytically and experimentally.
It is shown that the LP relaxation of one of the new formulations can have an optimal objective value larger than that of the LP relaxation of the ATSP's multi-commodity flow formulation. In addition, we propose new extended ATSP formulations with O(n^2) constraints and compare their strength with that of the known ATSP formulations with O(n^2) constraints. Finally, in the third and fourth parts, a new enhancement of the Clarke and Wright savings heuristic for the Capacitated Vehicle Routing Problem and new enhancements of the Esau and Williams savings heuristic for the Capacitated Minimal Spanning Tree Problem are proposed, respectively.

Item A simulation model for the long-term analysis of decentralized electricity markets (Thesis (Ph.D.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 2006.) Paşaoğlu, Güzay.; Or, İlhan. After the liberalization of the electricity generation industry, capacity expansion decisions are made by multiple self-oriented power companies. Unlike in the regulated environment, the decision-making of market participants is now guided by price signal feedbacks and by an imperfect foresight of the future market conditions they will face. In such an environment, decision makers need to understand the long-term dynamics of the supply and demand sides of the power market. To visualize the dynamics of a competitive electricity market, a simulation model based on the system dynamics (SD) philosophy is developed in this study. The developed model includes a demand module, a capacity expansion module, the power generation sector, an accounting and financial module, various competitors and a bidding mechanism. During the modeling process, various submodels were developed and tested before being integrated to form a complete system model.
At each phase of modeling, besides the international literature, expert opinions were considered in determining the model structure, the relationships between entities, and the parameters and equations used in the model, to ensure its representativeness and reliability. In this context, an Analytic Hierarchy Process based sub-model and analysis were additionally developed, in order to better understand and treat the power generation facility options available to investors. The SD model has a flexible structure for parametric studies, such that the results of different strategies and policies can be analyzed. The results obtained from the scenario analysis reveal that the model is able to capture most of the long- and short-term dynamics which occur both on the supply and on the demand sides of the power market. The developed model is an excellent tool for understanding, investigating and experimenting on a decentralized electricity market, especially in regard to investor behaviour; supply, demand and price fluctuations; short- and long-term effects of various decisions; and primary energy resource limitations.

Item Competitive learning with additional information for solving the multi-facility Weber problem (Thesis (Ph.D.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 2007.) Özkısacık, Kerem Can.; Altınel, İ. Kuban. The mathematical formulation of the clustering analysis problem is similar to that of a certain class of location problems. Known as the multi-facility Weber problem, this is a non-convex continuous optimization problem in which a predetermined number of facilities are located on the plane such that the demand-weighted sum of the distances between each customer and its closest facility is minimized. On further investigation, we point out that the clustering analysis formulation is a special case of this facility location problem.
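The relation just noted can be made concrete: the multi-facility Weber objective sums, over customers, the weighted distance to the closest facility; with unit weights and squared Euclidean distances it becomes the k-means clustering objective. The data below are illustrative.

```python
import math

# Multi-facility Weber objective: sum over customers of demand weight times
# the Euclidean distance to the closest open facility. Replacing the distance
# with its square (and using unit weights) gives the k-means clustering
# objective, illustrating the special-case relation.

def weber_objective(customers, weights, facilities, squared=False):
    total = 0.0
    for (cx, cy), w in zip(customers, weights):
        d = min(math.hypot(cx - fx, cy - fy) for fx, fy in facilities)
        total += w * (d * d if squared else d)
    return total

customers = [(0, 0), (3, 0), (10, 0)]
weights = [1.0, 1.0, 1.0]
facilities = [(1, 0), (10, 0)]
print(weber_objective(customers, weights, facilities))                 # 1 + 2 + 0 = 3.0
print(weber_objective(customers, weights, facilities, squared=True))   # 1 + 4 + 0 = 5.0
```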
Motivated by this relation, we revise two competitive learning algorithms, namely vector quantization and Kohonen-type networks, to use additional information, and apply them to solve the Weber problem. The results obtained with the new techniques are superior to those reported previously in the literature. In the second part of the study, we investigate the probabilistic version of the Weber problem, in which customer locations are not fixed but are assumed to be randomly distributed. The previously proposed methods are revised to handle the probabilistic case as well. To the best of our knowledge, these are the first experimental results reported for this type of problem, and traditional techniques are compared with the newly proposed methods. Furthermore, we propose an approximation scheme for dealing with more general probability distributions.

Item Inventory policies for an assemble-to-order system with joint discount incentives (Thesis (Ph.D.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 2008.) Tombuş, Önder.; Bilgiç, Taner. We consider an assemble-to-order system that must meet all of the stationary stochastic demand for a finished product in a periodic review setting. The finished product is assembled from two subassemblies (components). The demand must be met either by regular production or by using a faster but more expensive expedited mode. The components have independent setup, production, holding and expediting costs. However, when both components fall short of demand, they use the same expediting resource (the same plane, the same supplier channel, the same overtime shift in a factory, etc.), causing a joint discount in unit expediting costs. This joint cost factor prevents the inventory control problem of each component from being solved independently and increases the time and space complexity of computing the optimal inventory policy. We analyze models with and without setup costs.
We prove that the optimal policy of the model without setup cost is a modified base stock policy, where the target inventory for a component is a function of the other component's inventory level, both for a finite and for an infinite horizon model. Similarly, the optimal policy of the single- and two-period models with positive setup cost is a modified state-dependent (s,S) policy, where the (s,S) values of a component are a function of the other component's inventory level. Based on these results we develop an algorithm with reduced time complexity that solves finite and infinite horizon models optimally when there are no setup costs, and very close to optimally when there are setup costs.

Item Modelling and analysis of agent based distributed scheduling systems (Thesis (Ph.D.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 2008.) Kurşun, Mahmut.; Ünal, Ali Tamer. In this study we introduce a mathematical foundation and a framework for the modeling and analysis of agent based distributed scheduling systems. We define the basic structure of an object oriented agent based system and the issues in distributed systems. In order to explore agent based structures in depth, a state based modeling approach is used. We combine a set of decision or control variables under the name of a control policy. The control policy comprises the decision process (DP), the response policy (RP), and agent stability during the decision process. To test the impact of various control policies on the performance of the system, we set up an experiment with different problem settings based on due date tightness, the frequency of job arrivals, and the processing time distribution. In this study we show that the solution procedure by itself does not guarantee good overall performance in the system when the control policy is not set up correctly.
We also show that the performance of these control policies changes from one problem setting to another, so it is not possible to say that a single solution procedure will be able to solve all the problem sets. It is shown that introducing decision time into the decision process leads to significantly poorer results compared to instantaneous policies, and that even a responsive random dispatch policy may perform considerably better than a time-consuming control policy equipped with an optimal dispatch algorithm.

Item Efficient simulations in finance (Thesis (Ph.D.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 2008.) Sak, Halis.; Hörmann, Wolfgang. Measuring the risk of a credit portfolio is a challenge for financial institutions because of the regulations brought by the Basel Committee. In recent years many models and state-of-the-art methods utilizing Monte Carlo simulation have been proposed to solve this problem. In most of the models, factors are used to account for the correlations between obligors. We concentrate on the normal copula model, which assumes multivariate normality of the factors. Computation of value at risk (VaR) and expected shortfall (ES) for realistic credit portfolio models is subtle, since (i) there is dependency throughout the portfolio, and (ii) an efficient method is required to compute tail loss probabilities and conditional expectations at multiple points simultaneously. This is why Monte Carlo simulation must be improved by variance reduction techniques such as importance sampling (IS). Optimal IS probabilities are computed and compared with the "asymptotically optimal" probabilities for credit portfolios consisting of groups of independent obligors. Then, a new method is developed for simulating tail loss probabilities and conditional expectations for a standard credit risk portfolio.
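The role of importance sampling here can be illustrated with the simplest case: estimating a Gaussian tail probability P(X > t) by sampling from a distribution shifted into the rare region and reweighting each hit by the likelihood ratio. This is a textbook sketch, not the thesis's geometric-shortcut method.

```python
import math, random

# Estimate P(X > t) for X ~ N(0,1) by sampling from N(t,1) and weighting each
# hit by the likelihood ratio phi(x)/phi(x-t) = exp(-t*x + t^2/2).
# Naive sampling wastes nearly all draws when t is large; the shifted sampler
# places most draws in the rare region, sharply reducing the variance.

def is_tail_prob(t, n, rng):
    total = 0.0
    for _ in range(n):
        x = rng.gauss(t, 1.0)                      # draw from shifted density
        if x > t:
            total += math.exp(-t * x + 0.5 * t * t)  # likelihood ratio weight
    return total / n

rng = random.Random(0)
est = is_tail_prob(4.0, 50_000, rng)
exact = 0.5 * math.erfc(4.0 / math.sqrt(2))        # true N(0,1) tail, ~3.17e-5
print(est, exact)
```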
The new method integrates IS with inner replications using a geometric shortcut for dependent obligors in a normal copula framework. Numerical results show that the new method is better than naive simulation for computing tail loss probabilities and conditional expectations at a single x and VaR value, and clearly better than two-step IS in a single simulation when computing tail loss probabilities and conditional expectations at multiple x and VaR values. Then, the performance of outer IS strategies, which consider only shifting the mean of the systematic risk factors of realistic credit risk portfolios, is evaluated. Finally, it is shown that, compared to the standard t statistic, a skewness-correction method of Peter Hall is a simple and more accurate alternative for constructing confidence intervals.

Item The risk assessment and management of the Istanbul Strait (Thesis (Ph.D.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 2008.) Türker, Ali Yasin.; Or, İlhan. There are more than 260 straits in the world, but none of them even remotely resembles the Turkish Straits in terms of geography and other factors, as they are considered one of the most strategic waterways of the world. The Turkish Straits, comprising the Istanbul Strait and the Çanakkale Strait, are the second most congested international waterway (second only to the Malacca Strait), since they constitute the only waterway between the Black Sea and the Mediterranean Sea. Along with the sharp rise in the number of oil tankers and the amount of oil they carry, especially after the 1990s, the increasing maritime traffic in the Turkish Straits brings a continuously growing risk of a large-scale accident in the Straits, leading to huge environmental and material damage as well as loss of human life.
Earlier unfortunate examples show that such risk may turn into a nightmare at any time, unless the necessary measures are taken to ensure the safety of navigation in the Straits. Potential maritime accidents, which impose serious risks on the nearby population, environment and property, as well as on the cultural and historical treasures of Istanbul, are the subject of this study. The objective is to measure the risk associated with the various environmental, physical and technical conditions related to potential accidents in the Strait, based on past accident and transit traffic data, along with subjective expert evaluations of the accidents' consequences. In this context, the key factor is the likelihood of an accident, along with the consequences of this potential accident in the Strait. In this regard, the Istanbul Strait Risk Model, consisting of three sub-models, namely an econometric model, a probabilistic consequence model and an analytic hierarchy process (AHP) model, has been developed. In the development of the econometric model, logistic regression techniques are used based on the past accident statistics of 1995-2004, along with the accident-free transit data of the year 2005. This model is extensively used for testing, along with scenario analysis, the effects of factors such as visibility, precipitation, wind speed, pilot utilization, local traffic and vessel characteristics, which may affect the accident probability in the Strait. In the second part of the study, based on the experts' views, the realization of various consequences after an accident occurrence is estimated. This model also demonstrates the effects of factors such as vessel type and cargo status, vessel length and accident location on the realization of any consequence. Finally, the AHP model is deployed to present the effects of other factors that are not considered in the first and second phase models.
The results of this study identify and, to a large degree, quantify the risks and their sources regarding the maritime traffic in the Strait. As such, the obtained results can also be deployed in selecting and guiding the measures to be taken for the mitigation of these risks, and thus in enhancing maritime safety in the Strait. Nevertheless, there is still room to improve safety and to reduce the risk stemming from maritime accidents in the Istanbul Strait.

Item Maintenance of a multi-component dynamic system under partial observations (Thesis (Ph.D.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 2008.) Ünlüakın, Demet Özgür.; Bilgiç, Taner. This thesis studies the maintenance of a dynamic system consisting of several components which age in time at given failure rates. The states of the components are hidden. In each decision epoch, a decision must be made whether to replace a component or to do nothing. The major difference of this problem from other maintenance problems is its complex structure due to the many components. Two versions of the maintenance problem are studied. In the first one, it is possible to estimate the reliability of the whole system, and the aim is to find a minimal maintenance cost given that the reliability of the system must always stay above a predetermined threshold value. In the second problem, partial observations, i.e., signals related to the components, are observed in each time period, and the next observation may have an associated cost to the decision maker. This problem is a partially observed Markov decision process (POMDP). Dynamic Bayesian networks (DBNs) are proposed as a solution to the first problem, and four heuristic approaches are presented to select the component to be replaced. A hierarchical heuristic solution procedure is proposed to solve the second problem.
An aggregate model is developed by aggregating states and actions so that it can be solved with exact POMDP solvers. Disaggregation is done by simulating the process with a DBN and applying troubleshooting approaches in the decision epochs where replacement is planned in the aggregate policy.Item Quantitative models for decision making in reverse logistics network design(Thesis (Ph.D.)-Bogazici University. Institute for Graduate Studies in Science and Engineering, 2009., 2009.) Tombuş, Ayşe Cilacı.; Aras, Necati.In this thesis, we focus on a problem in reverse logistics network design where the aim is locating distribution centers, inspection centers and remanufacturing facilities, determining the acquisition price as well as the amount of returned goods to be collected depending on the unit cost savings and competitor’s acquisition price. The coordination of the forward and reverse flows in the network is also taken into account in order to minimize the transportation costs, fixed costs and used product acquisition costs. A mixed-integer nonlinear programming problem has been formulated and exact algorithms have been suggested to solve it. When the acquisition price is set to a given value, the remaining problem becomes a mixed-integer programming problem which can be solved by Lagrangean relaxation, Benders Decomposition and Cross Decomposition algorithms. The best value of the acquisition price that minimizes the total cost is determined by the Golden Section search and computational results have been reported. Moreover, the effect of fixed cost, capacity as well as unit cost savings on the solution time have been analyzed.Item Coverage, sink location and routing problems in wireless sensor networks(Thesis (Ph.D.)-Bogazici University. Institute for Graduate Studies in Science and Engineering, 2009., 2009.) Güney, Evren.; Altınel, İ. 
Kuban.; Aras, Necati.Recent advances in electronics and telecommunications have enabled the development of low-cost, low-power multifunctional sensor devices that are powered by batteries and communicate through wireless channels over small distances. When a large number of these devices work collaboratively for a common purpose, they form what is called a wireless sensor network (WSN). Owing to their benefits and inherent characteristics, WSNs give rise to a broad range of real-life applications. There are various design issues in the construction of WSNs. In this study, we focus on three important classes of problems: the Point Coverage, Sink Location and Data Routing Problems. We propose a Binary Integer Programming (BIP) formulation to model the coverage problem and develop various heuristics to solve it. Both the Routing Problem (RP), which involves finding the most energy-efficient sensor-to-sink routes, and the Sink Location Problem (SLP), which involves determining the optimal sink locations, are important design issues in extending the lifetime of a WSN. Since the joint optimization of these two problems results in a more efficient network design, we develop several Mixed Integer Linear Programming (MILP) formulations and propose optimal solution techniques and heuristics to solve them. Finally, we solve the complete integrated model, which treats the coverage, sink location and data routing problems simultaneously. We call this hard problem the Coverage, Location and Routing Problem (CLRP) and develop new mathematical programming formulations for it. We propose a nested solution procedure in which good sensor locations are sought in the first level using metaheuristics, and the remaining sink location and routing problem is solved in the second level using efficient heuristics.
Item Optimal placement, scheduling and routing to maximize lifetime in wireless sensor networks(Thesis (Ph.D.)-Bogazici University.
Institute for Graduate Studies in Science and Engineering, 2010., 2010.) Türkoğulları, Yavuz Boğaç.; Altınel, İ. Kuban.; Aras, Necati.A wireless sensor network consists of distributed autonomous tiny electronic devices called sensors. They have limited energy and limited capability for sensing, data processing and communication, but collectively they can provide an effective network that monitors a region and transmits information to gateway nodes called sinks. In most applications the network must operate for long periods of time, so the available energy resources of the sensors must be managed efficiently. In this thesis we first develop mixed-integer linear programming models to maximize network lifetime by optimally determining the locations of sensors and sinks, sensor-to-sink data routes and activity schedules of the deployed sensors over a finite planning horizon, subject to coverage, flow conservation, energy consumption and budget constraints. We then propose valid inequalities, which we use within a new branch-and-price algorithm to solve the problem exactly. Although they yield stronger linear programming relaxations and thus improve the quality of the formulation, the size of the networks for which the model can be solved exactly is very limited. Hence, we develop approximate solution heuristics using techniques such as Lagrangean relaxation, column generation and a greedy selection criterion. Based on the computational experiments, we can say that they are accurate and efficient. The minimum-cost coverage problem, which consists of deploying sensors over a sensor field modeled as a finite set of points so that the total cost is minimized subject to coverage requirements, generalizes the famous set covering problem. We provide a polyhedral analysis of this general problem, which has received little previous study.
We develop a procedure that generates valid inequalities, a sufficient condition for facet definition, and a cutting plane. Experiments show that the proposed valid inequality cuts off the fractional optimum and considerably improves the linear programming relaxation lower bound. Finally, we revise the convergence proof of the deflected subgradient algorithm that uses conditional subgradients and suggest some refinements.
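The minimum-cost coverage problem described in the last abstract can be made concrete with a minimal sketch. This is not the polyhedral method developed in the thesis, but the classical greedy heuristic for weighted set covering, which the thesis problem generalizes; the site names, costs and coverage sets below are hypothetical illustrations. Each candidate sensor site has a cost and a set of demand points it covers, and the greedy rule repeatedly picks the site with the lowest cost per newly covered point.

```python
def greedy_min_cost_cover(points, sites):
    """Classical greedy heuristic for minimum-cost (weighted) set covering.

    points: set of demand points to cover.
    sites:  dict mapping site name -> (cost, set of points it covers).
    Returns (list of chosen sites, total cost). Stops early if some
    points cannot be covered by any remaining site.
    """
    uncovered = set(points)
    chosen, total = [], 0.0
    while uncovered:
        # Candidate sites that would cover at least one new point.
        candidates = [s for s in sites
                      if s not in chosen and sites[s][1] & uncovered]
        if not candidates:          # remaining points are uncoverable
            break
        # Greedy rule: minimize cost per newly covered point.
        best = min(candidates,
                   key=lambda s: sites[s][0] / len(sites[s][1] & uncovered))
        chosen.append(best)
        total += sites[best][0]
        uncovered -= sites[best][1]
    return chosen, total


# Hypothetical instance: 4 demand points, 3 candidate sensor sites.
points = {1, 2, 3, 4}
sites = {
    "A": (3.0, {1, 2}),
    "B": (4.0, {2, 3, 4}),
    "C": (2.0, {4}),
}
print(greedy_min_cost_cover(points, sites))  # -> (['B', 'A'], 7.0)
```

Site B is chosen first (cost ratio 4/3 beats A's 3/2 and C's 2/1), after which only point 1 remains and A covers it; C is never needed. This heuristic carries the well-known logarithmic approximation guarantee for set covering, whereas the thesis pursues exact valid-inequality and cutting-plane approaches instead.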