M.S. Theses
Browsing M.S. Theses by Author "Akman, Şükrü Uğur."
Item: Application of data envelopment analysis to chemical engineering
(Thesis (M.S.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 2010.) Ergün, Gökay Serdar; Akman, Şükrü Uğur.
Data Envelopment Analysis (DEA) is a method for measuring the relative efficiencies of Decision Making Units (DMUs) and for ranking the DMUs according to those efficiencies. Over the last three decades there has been a remarkable volume of work on DEA and its applications; however, as of today, there is no application of DEA to Chemical Engineering. Hence, this thesis focuses on the applicability of DEA to Chemical Engineering problems in order to judge its effectiveness and, possibly, to open up a new research area, particularly in chemical process systems engineering. Two different chemical engineering problems are solved via DEA using the Constant Returns-to-Scale (CRS) model: the ranking of the efficiencies of alternative Heat Exchanger Network (HEN) structures, and the ranking of the efficiencies of alternative flowsheets of the Hydrodealkylation of toluene (HDA) process. The DEA formulations are developed for both problems, first by determining the DMUs, inputs, and outputs of the systems; the DEA models are then transformed into Linear Programming (LP) problems, which are solved with the Excel Solver. The effects of adding value-judgement constraints to the DEA models are also considered. It is concluded that if a chemical engineer can clearly define the measure of efficiency and analyze the relationships among the DMUs, inputs, and outputs, then DEA is an easily applicable and trustworthy method for computing and ranking the relative efficiencies of alternative process flowsheets or designs. DEA is also applicable to very large-scale systems with many alternatives (DMUs) and many inputs and outputs, since it requires only the solution of LPs.
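The abstract does not spell out the formulation, but the CRS model it refers to is commonly written as the input-oriented CCR envelopment LP, one LP per DMU. The following is a minimal sketch of that standard model in Python rather than the thesis's Excel Solver setup; the three-DMU data set is invented for illustration.

```python
# Input-oriented CCR (constant returns-to-scale) DEA: one small LP per DMU.
# Hypothetical data; the thesis solves equivalent LPs in Excel Solver.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0], [3.0, 2.0], [4.0, 5.0]])  # inputs, one row per DMU
Y = np.array([[1.0], [1.5], [2.0]])                  # outputs, one row per DMU
n, m = X.shape                                       # number of DMUs, inputs
s = Y.shape[1]                                       # number of outputs

def ccr_efficiency(o):
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta
    # s.t.  X^T lambda <= theta * x_o   and   Y^T lambda >= y_o,  lambda >= 0.
    c = np.r_[1.0, np.zeros(n)]
    A_inputs = np.c_[-X[o], X.T]               # sum_j lam_j x_ij - theta x_io <= 0
    A_outputs = np.c_[np.zeros((s, 1)), -Y.T]  # -sum_j lam_j y_rj <= -y_ro
    res = linprog(c, A_ub=np.vstack([A_inputs, A_outputs]),
                  b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]                            # theta* = 1 means CCR-efficient

for o in range(n):
    print(f"DMU {o}: relative efficiency = {ccr_efficiency(o):.3f}")
```

Ranking the DMUs then amounts to sorting them by theta*, which is how the HEN and flowsheet alternatives above would be ordered.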
Item: Causality in time series: dynamic time warping versus Granger causality
(Thesis (M.S.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 2012.) Yallak, Leyla Zeynep; Akman, Şükrü Uğur.
Causality is a concept studied in areas as diverse as economics and engineering. Identifying cause-and-effect relations among variables is important, as it enables control of the affected variable through variation of the cause, or helps predict the future behavior of the affected variable from the behavior of the cause. The Granger Causality (GC) test is a statistical test used mainly for causality detection in economics and, more recently, in bioinformatics. The GC test determines whether one series Granger-causes the other, or whether a feedback relation exists; however, GC results do not elucidate how these relations change with time. Dynamic Time Warping (DTW) is a method for similarity measurement in classification and clustering applications, in areas such as speech recognition and batch trajectory synchronization. In the DTW method, the principles of dynamic programming are utilized and the series are aligned nonlinearly along the time axis. In this thesis work, it is proposed that DTW can help determine the temporal order and the lead/lag relations of the series and, therefore, the causal relations. The DTW method is tested on selected synthetic data sets, on data from chemical and biochemical processes, and on engineering-related economic indicators. The DTW-based causality results are compared with those of the GC tests and cross-correlation analyses. The DTW-based results were as expected and in accordance with the GC test only for the simple examples; for multivariable sets and nonlinearly related variables, the method was unsuccessful.
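The DTW alignment the abstract builds on is the classic dynamic-programming recursion; the sketch below is a generic textbook version in Python, not the thesis's MATLAB implementation, and the lagged sine pair is invented. The sign of the alignment offset along the warping path is what hints at the lead/lag relation.

```python
# Classic DTW: fill a cumulative-cost matrix, then backtrack for the warping
# path. Path pairs (i, j) far from the diagonal indicate lead/lag structure.
import numpy as np

def dtw_path(x, y):
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from (n, m) to recover the optimal alignment path.
    i, j, path = n, m, []
    while (i, j) != (1, 1):
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        i, j = [(i - 1, j - 1), (i - 1, j), (i, j - 1)][step]
    path.append((0, 0))
    return D[n, m], path[::-1]

t = np.linspace(0, 6, 60)
x = np.sin(t)            # hypothetical "cause"
y = np.sin(t - 0.5)      # the same signal, lagged
dist, path = dtw_path(x, y)
offsets = [i - j for i, j in path]
print(f"DTW distance = {dist:.2f}, mean alignment offset = {np.mean(offsets):.2f}")
```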
Item: Compressive control-vector parameterization with discrete cosine transform for the solution of optimal-control problems
(Thesis (M.S.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 2017.) Hallaçeli, Ahmet; Akman, Şükrü Uğur.
In this thesis, a novel numerical method for the solution of open-loop optimal-control problems is proposed. The method combines the flexibility of the standard Control-Vector Parameterization (CVP) technique with the compressive power of the Discrete Cosine Transform (DCT), and is therefore termed Compressive Control-Vector Parameterization with Discrete Cosine Transform (CVP-DCT). In the CVP-DCT method, the control input is parameterized in terms of only a few DCT Coefficients (DCTCs). The method transcribes the optimal-control problem into a Nonlinear Programming (NLP) problem in which coefficients selected from the early elements of the DCTC vector are the optimization decision variables. Terminal and path constraints, as well as control bounds, are handled by the penalty-function method. Several problems are solved using the CVP-DCT, standard CVP, and Control-Vector Optimization (CVO) methods to demonstrate the pros and cons of the proposed method. CVP-DCT requires no a priori knowledge of the shape and complexity of the control trajectory and can be used in any optimal-control problem without prespecification. With only a few parameters, it can provide a good initial-guess trajectory to more sophisticated optimal-control software packages; especially when the control trajectory is smooth, it can provide solutions very close to the global solution using just a few decision variables. The performance, the required number of DCTCs, and the number of optimization decision variables are independent of the dimension of the states and the number of time grids. Even very few DCTCs suffice to reconstruct the control vector on hundreds or even thousands of time grids without noticeably affecting the CPU time. Therefore, the proposed method is a viable technique for the fast solution of generic open-loop optimal-control problems with efficient low-dimensional parameterization.

Item: Constrained neural networks
(Thesis (M.S.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 2012.) Özer, Ümit; Akman, Şükrü Uğur.

Item: Data compression and reconstruction in process engineering applications
(Thesis (M.S.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 2012.) Önol, Ceyda; Akman, Şükrü Uğur.
Recent improvements in sensor technology have resulted in huge amounts of measured process data, along with an increasing need for compression prior to storage. Hence, efficient process-data compression and reconstruction techniques gain importance in tasks such as process monitoring, system identification, and fault detection, both to save storage space and to facilitate data transmission between a data-collecting node and a data-processing node. The main purpose of this thesis work is to achieve the highest degree of compression and de-noising while preserving the key features of the original data upon retrieval and decompression. To this end, the most appropriate dimensionality-reduction technique is sought among Piecewise Aggregate Approximation (PAA), One-Dimensional and Two-Dimensional Discrete Cosine Transform (1D-DCT and 2D-DCT), and One-Dimensional and Two-Dimensional Discrete Wavelet Transform (1D-DWT and 2D-DWT), by adjusting the threshold parameter used in filtering. The data sets used are PortSimHigh, PortSimLow, SELDI-TOF MS, and TEP. The techniques are evaluated in terms of compression ratio, reconstruction error norm, % relative global error, and % relative maximum error for different α-% thresholding levels. It is concluded that high compression levels cannot be achieved with thresholding percentile values below 90% in either the DCT or the DWT methods, whereas the quality of reconstruction deteriorates at higher threshold levels in return for better compression. Furthermore, the efficacy of the compression methods strongly depends on the data characteristics: DCT is suitable for smooth data sets with random trends, DWT is preferred for noisy data sets with high peak content, and 2D-DCT and 2D-DWT are favored for multivariable data sets with highly correlated columns.
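Both the compression entry above and the CVP-DCT entry two items earlier rest on the same trick: transform, keep a few large DCT coefficients, invert. A hedged one-dimensional sketch, with an invented noisy signal and an invented 95% thresholding level:

```python
# 1D-DCT compression by alpha-% thresholding: zero every coefficient below the
# alpha-th magnitude percentile, then reconstruct with the inverse transform.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 512)
signal = np.sin(2 * np.pi * 3 * t) + 0.1 * rng.standard_normal(t.size)  # toy data

coeffs = dct(signal, norm="ortho")
alpha = 95.0                                    # thresholding percentile
threshold = np.percentile(np.abs(coeffs), alpha)
kept = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)

reconstruction = idct(kept, norm="ortho")
ratio = signal.size / np.count_nonzero(kept)    # compression ratio
error = 100 * np.linalg.norm(signal - reconstruction) / np.linalg.norm(signal)
print(f"compression ~{ratio:.1f}:1, relative global error ~{error:.2f}%")
```

Raising alpha keeps fewer coefficients, improving compression at the cost of reconstruction quality, which is exactly the trade-off the abstract reports around the 90% level.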
Item: Dynamics and optimal control of flexible heat-exchanger networks
(Thesis (M.S.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 1995.) Boyacı, V. Cantürk; Akman, Şükrü Uğur.
In this work, the dynamics, controllability, and resiliency of Heat-Exchanger Networks (HENs) are investigated. A rigorous distributed-parameter model of the typical countercurrent shell-and-tube (multi-tube) heat exchangers constituting the HENs under study was developed; the advanced numerical-solution algorithm eliminated the steady-state offset problem mentioned in the literature. Two centralized optimal-control algorithms were developed and tested for various HENs, for different sets of disturbances in source-stream temperatures, and for different control-range constraints on target-stream temperatures. In both algorithms, the values of the optimal bypass openings are first determined by an optimizer that satisfies the control-precision constraints imposed on the target streams by referring to the algebraic HEN model. In the centralized open-loop control algorithm, the bypasses were opened from their nominal values up to their optimal values as a function of time; the use of ramp functions gave very satisfactory dynamic responses of the HENs, and temporary violations of the control-range constraints were prevented by optimally tuning the rate at which the bypasses were opened. In the centralized closed-loop control algorithm, the bypasses were opened depending on the pseudo-controlled target-stream temperatures; the use of state feedback resulted in a smoother dynamic response for a sample HEN. Overall, both of the proposed centralized model-based optimal-control algorithms proved promising for the control of HENs.

Item: Multiple and alternate optima of LP problems via recursive MILP: a MATLAB implementation
(Thesis (M.S.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 2012.) Sayın, Ridade; Akman, Şükrü Uğur.
In this thesis, the Recursive Mixed Integer Linear Programming (RMILP) algorithm developed by Prof. Grossmann's group at Carnegie Mellon University is studied. The algorithm is guaranteed to find all the alternate optima of an LP problem. Its main advantage is that it can be built on efficient tools such as the GAMS algebraic modeling system and requires no modification of the LP solver. The original RMILP code written in GAMS was not generic, however: being problem-specific, it had to be changed for every new problem. Prof. Akman produced a generic GAMS code that even a novice LP user can apply without modifying the recursive MILP part. In this thesis, final tests of Prof. Akman's work and an implementation in MATLAB were performed. A MATLAB code consisting of two parts, a main part and a function part, was developed: in the main part, the user enters the problem-specific data and the termination criterion; the function part, in which the LP and MILP problems are solved recursively, is then called. The same function part can be used for any standard LP problem without modification, and the MATLAB code can accommodate any LP/MILP solver. The code provides the user with all the alternate optimal solutions, as well as the next K best vertex solutions, of an LP problem. With this capability, the decision maker can see, in a single run, the difference between the optimal value and the next best (or next K best) objective-function values. Alternate solutions enable the decision maker to choose by weighing other incremental factors not explicitly incorporated into the optimization model. Several LP problems were solved using GAMS and MATLAB, and the same results were obtained on both software platforms; in both, some solutions were replicated, depending on the LP/MILP solvers. A dendrogram, representing hierarchical solution clusters, was used to visualize the proximity of the alternate solutions and to eliminate replicates.
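The recursive MILP itself introduces binary variables and integer cuts to enumerate optimal vertices one by one, which needs a MILP solver to reproduce faithfully. As a lighter stand-in that conveys the underlying idea (pin the objective at its optimal value, then re-optimize over the optimal face), the sketch below uses an invented two-variable LP; it is illustrative only and is not the Grossmann RMILP:

```python
# Exposing alternate LP optima: solve once, fix c^T x = z*, then push each
# coordinate to its extremes over the optimal face. (The thesis's RMILP
# instead enumerates the optimal vertices recursively with integer cuts.)
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 1.0])            # maximize x1 + x2
A_ub = np.array([[1.0, 1.0]])
b_ub = np.array([1.0])              # whole segment (1,0)-(0,1) is optimal
bounds = [(0.0, 1.0), (0.0, 1.0)]

base = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)   # linprog minimizes
z_star = -base.fun
print(f"z* = {z_star}, one optimum: {np.round(base.x, 6)}")

A_eq, b_eq = c.reshape(1, -1), np.array([z_star])         # pin the objective
for k in range(len(c)):
    for sign in (1.0, -1.0):
        obj = np.zeros(len(c)); obj[k] = sign
        alt = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds)
        print(f"alternate optimum: {np.round(alt.x, 6)}")
```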
Item: Optimal design under uncertainty via CVaR (conditional value at risk)
(Thesis (M.S.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 2012.) Kumbasar, Sıla; Akman, Şükrü Uğur.
The aim of this thesis work is to investigate the possibility of controlling and (re)shaping the statistical probability distribution of optimal objective-function values in optimization problems related to process synthesis, design, and operation under uncertainty, by imposing CVaR (Conditional Value at Risk) constraints. Probability distributions of the process-model outputs are obtained by Monte Carlo Sampling/Simulation (MCS). Both sequential and simultaneous computations of CVaR are studied. In the sequential approach, the distribution of the optimal process output is first generated via MCS and the CVaR of this distribution is then assessed. In the simultaneous approach, the CVaR of the process output's distribution is obtained in a single stage, by augmenting the process/optimization model equations for each and every realization of the input uncertainties and solving these augmented equations together with the equations of the CVaR. The two approaches are applied to simple yet illustrative benzoic-acid-plant and alkylation-plant examples. For the profit and cost distributions of these process models, the expected value, skewness/kurtosis, CVaR+, CVaR−, the difference between CVaR+ and CVaR−, the Rachev Ratio (RR), the linearized RR, and some linear combinations of them are considered. CVaR− and CVaR+ are defined separately for the risk (left) and reward (right) sides of a probability distribution. The results show that under the simultaneous scheme, where minimization of the difference between CVaR+ and CVaR−, or minimization of the RR, is used as the objective, or when these are linearly adjoined to a main objective such as the expected profit, it is possible to (re)shape the probability distribution of optimal objective-function values. Contrary to applications in economics and finance, where CVaR and the RR are used exclusively to make the loss distribution less skewed to the left, in this work the difference between CVaR+ and CVaR−, or the RR, is successfully used to compress the distribution of the optimal profit around its mean, increasing certainty in the mean optimal profit despite uncertainties in the process inputs.
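The sequential route the abstract describes (sample first, assess CVaR afterwards) reduces to plain sample estimators on the MCS output. A minimal sketch, with an invented Gaussian profit distribution and an assumed 95% confidence level; the CVaR+/CVaR− naming follows the abstract's right-tail/left-tail convention:

```python
# Sample-based CVaR for both tails of a simulated profit distribution,
# mimicking the sequential MCS-then-CVaR assessment described above.
import numpy as np

rng = np.random.default_rng(1)
profit = 100.0 + 15.0 * rng.standard_normal(100_000)  # hypothetical MCS output

beta = 0.95                                      # confidence level (assumed)
var_minus = np.quantile(profit, 1.0 - beta)      # left-tail VaR
cvar_minus = profit[profit <= var_minus].mean()  # CVaR-: mean of the worst 5%
var_plus = np.quantile(profit, beta)             # right-tail VaR
cvar_plus = profit[profit >= var_plus].mean()    # CVaR+: mean of the best 5%

spread = cvar_plus - cvar_minus                  # proxy for distribution width
print(f"CVaR- = {cvar_minus:.1f}, CVaR+ = {cvar_plus:.1f}, spread = {spread:.1f}")
# In the simultaneous scheme, a term like this spread (or a Rachev-type ratio)
# enters the optimization objective, which is what compresses the profit
# distribution around its mean.
```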
Item: Optimization-driven data-based constraints identification via explicit mathematical and implicit machine-learning-based constitutives
(Thesis (M.S.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 2023.) Aladağ, Abdullah; Akman, Şükrü Uğur.
The major aim of "data-based constraint identification" is to identify feasible regions within which a process can be operated. Our approach is based on the quantitative-feasibility information of sample points, transformed into the single and multiple mathematical equations that constitute the data-based constraints. We first devise an "overall objective function" capable of identifying feasible regions with multiple constitutive inequality constraints, by resorting to the technique of "constraint aggregation". We then equip our algorithm with "form-specific constitutives" built via the generic mathematical description of some plausible inequality constraints, such as bound, linear, circular, and ellipsoidal constraints, as single or aggregated multiple constitutives. We also build "form-specific" and "form-free" constitutives via the "design matrix" approach, again as single or aggregated multiple constitutives, and devise "implicit neural constitutives" via Machine Learning algorithms such as Neural Networks and Extreme Learning Machines, as single implicit or aggregated multiple implicit constitutives. All of these data-based constitutive constraints are generic, in that they can identify N-dimensional feasible regions. The demonstrative examples are solved with the Differential Evolution or Covariance Matrix Adaptation Evolution Strategy global optimizers. Via many diversified examples, including several chemical-engineering-related ones, we show that our algorithm can identify joint, disjoint, convex, or nonconvex regions, or their combinations. We also apply classification techniques, such as Probabilistic Neural Network, k-Nearest Neighbour, Support Vector Machine, Gaussian Process Regression, and Regression Trees, to constraint identification. Our algorithm is also successful in identifying constraints from image boundaries, i.e., in "image-to-constraints" conversion tasks.

Item: Wavelet coherence: analysis of time series and exploration towards fault detection
(Thesis (M.S.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 2020.) Duman, Ali İsmail; Akman, Şükrü Uğur.
Wavelet Coherence Analysis (WCA) is a tool for depicting the degree of coherency and the phase differences between pairs of time series. Its advantages are the capability to cope with non-stationary time series and to monitor time- and frequency-domain information collectively. In this thesis, commonly used WCA software toolboxes were comparatively evaluated and a hybrid MATLAB code was developed. WCA was used to elucidate possible coherency and lead-lag relationships between pairs of series, with data pertaining to engineering and economics. Studies with the CAB (Chemical Activity Barometer) and the IPI (US Industrial Production Index) disclosed the power of WCA in explicating and interpreting the coherency and lead-lag relationships hidden between these series, and confirmed the claim made by the ACC (American Chemistry Council) that the CAB leads the IPI. Additionally, it was shown that at US business-cycle periods (0.5 to 2 years), the troughs (ends of economic recessions) observed with WCA of the CAB and IPI lead the troughs reported by the ACC. WCA thus supports the CAB as a leading indicator of the US economy, especially during the economic recessions between 1945 and 2007. Comparative studies demonstrated that working with detrended series increased the resolution of WCA, while working with moving-averaged series distorted it, owing to the artificial lags introduced by averaging. WCA applied to the yearly CAB and Chemical Engineering Plant Cost Index (CEPCI) pair and the yearly IPI and CEPCI pair showed that it was not possible to decide whether the CEPCI is a leading indicator of the US economy. Furthermore, for the first time in the literature, WCA was used as a tool for Fault Detection (FD). Fault-containing synthetic time series, along with a fault-free one, were used to evaluate the potential of WCA in FD. It was shown that WCA can detect faults quickly and is a viable tool for FD, change-point identification, and template-matching tasks.
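The thesis's wavelet toolboxes are MATLAB-based and are not reproduced here. As a rough stand-in, ordinary Fourier magnitude-squared coherence captures the "coherent at which period" half of the analysis while losing the time localization that wavelets add; the monthly-style series, the shared two-year cycle, and the lag are all invented:

```python
# Fourier magnitude-squared coherence as a simplified, time-blind stand-in
# for wavelet coherence: it reports at which frequencies two series co-move.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(2)
fs = 12.0                                  # "monthly" data: 12 samples per year
t = np.arange(0.0, 60.0, 1.0 / fs)         # 60 "years"
cycle = np.sin(2 * np.pi * 0.5 * t)        # shared 2-year (0.5 cycles/yr) cycle
x = cycle + 0.5 * rng.standard_normal(t.size)               # "leading" series
y = np.roll(cycle, 6) + 0.5 * rng.standard_normal(t.size)   # lagged copy

f, Cxy = coherence(x, y, fs=fs, nperseg=256)
idx = np.argmax(Cxy[1:]) + 1               # skip the zero-frequency bin
print(f"peak coherence {Cxy[idx]:.2f} at {f[idx]:.2f} cycles/year "
      f"(~{1.0 / f[idx]:.1f}-year period)")
```

A full WCA would replace the Welch averaging with a continuous wavelet transform of each series, so that the coherence (and the phase, hence the lead-lag arrows) can be read at every point in time.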