Browsing by Author "Arslan, Sanem."
Now showing 1 - 2 of 2
Item: Performance and cost efficient reliability framework for multicore architectures
(Thesis (Ph.D.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 2017.)
Arslan, Sanem.; Özturan, Can.; Topcuoğlu, Haluk Rahim.
Modern architectures become more vulnerable to soft errors with technology scaling. Enabling fault tolerance capabilities on all cache structures in a system is inefficient in terms of performance and power consumption. In this study, we propose an enhanced protection mechanism for reliability-critical code segments that utilizes asymmetrically reliable cores under performance and power constraints. Our proposed system contains at least one high-reliability core, which has an ECC-protected L1 cache, and several low-reliability cores, which have no protection mechanisms. Our framework protects only the reliability-critical code regions of each application, which are determined from critical data usage, user annotations, or static analysis. In our first attempt, the framework dynamically assigns the software threads executing critical code fragments to the protected core(s) by using the First Come First Served (FCFS) algorithm (a minimal illustrative sketch of this idea is given after the listings below). Our experimental evaluation shows that the proposed approach takes advantage of protecting only critical code regions and delivers performance and reliability comparable to fully protected systems, at lower power consumption and cost, for a set of applications. However, the FCFS-based scheduling algorithm may degrade system performance and unfairly slow down applications for some workloads. Therefore, a set of scheduling algorithms is proposed to improve both system performance and fairness. Various static priority techniques that require preliminary information about the applications and dynamic priority techniques that aim to equalize the total time applications spend on the protected core(s) are presented as part of this thesis. Extensive evaluations using multi-application workloads confirm that the proposed scheduling techniques significantly improve system performance and fairness over the FCFS algorithm.

Item: Scheduling of multiple multi-threaded applications on CMPs
(Thesis (M.S.) - Bogazici University. Institute for Graduate Studies in Science and Engineering, 2011.)
Arslan, Sanem.; Tosun, Oğuz.; Topçuoğlu, Haluk Rahmi.
Due to the limitations of conventional processor designs, chip multiprocessors (CMPs), which have multiple cores on a single chip, are a promising alternative to single-core architectures for performance improvements. The potential performance gains that can be achieved by using CMPs decline when multiple multi-threaded applications contend for the shared cache. Our main focus is to present mapping strategies for multiple multi-threaded applications on multicore architectures. We propose and develop a novel prediction-based mapping strategy. Our approach analyzes the behavior of threads from different applications on the shared cache by considering all possible thread combinations across applications. It finds the thread combinations of different applications that result in the minimum cache disturbance. Our prediction-based framework has two components: a static component and a dynamic component. The training data given to the curve-fitting model as input is collected off-line in the static component.
After the predicted values are received, the threads of the applications that share the same core are arranged accordingly. Communication with the curve-fitting model, retrieval of the predicted results, and the final mapping according to these values are performed on-line in the dynamic component. The communication between the application code and the curve-fitting model is handled by a runtime module, which collects the training data from the application code, sends it to the curve-fitting model, and returns the predicted data from the model back to the application code. Interference with the program is avoided at every step of the execution.
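The second listing above only describes the prediction-based mapping flow in prose, so here is a minimal Python sketch of that flow under simplifying assumptions, not the thesis implementation: a curve-fitting model is trained off-line on per-combination features and observed shared-cache misses (both made up here), and the cross-application thread pairing with the lowest predicted disturbance is then chosen on-line. The names train_features, train_misses, and threads, the single-feature summary of a combination, and the quadratic fit are all illustrative assumptions.

    # Static component (off-line): fit a simple model of shared-cache disturbance.
    # All data below is hypothetical and for illustration only.
    from itertools import combinations
    import numpy as np

    train_features = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # e.g. combined working-set size
    train_misses = np.array([120.0, 180.0, 310.0, 520.0, 900.0])  # observed shared-cache misses
    model = np.polyfit(train_features, train_misses, deg=2)       # quadratic curve-fitting model

    # Dynamic component (on-line): query the model and map threads accordingly.
    def predicted_disturbance(feature):
        # Predicted shared-cache misses for one thread combination.
        return np.polyval(model, feature)

    # Hypothetical threads from two applications, each with its feature value.
    threads = {"appA.t0": 1.2, "appA.t1": 2.5, "appB.t0": 1.8, "appB.t1": 3.1}

    def best_pairing(threads):
        # Pick the pair of threads from different applications whose co-scheduling
        # is predicted to disturb the shared cache the least.
        best, best_cost = None, float("inf")
        for a, b in combinations(threads, 2):
            if a.split(".")[0] == b.split(".")[0]:
                continue  # only consider combinations of threads from different applications
            cost = predicted_disturbance(threads[a] + threads[b])
            if cost < best_cost:
                best, best_cost = (a, b), cost
        return best, best_cost

    pair, cost = best_pairing(threads)
    print(f"co-schedule {pair} on the same core (predicted misses: {cost:.0f})")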
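As noted in the first listing, the following is a minimal Python sketch of the FCFS idea from the Ph.D. thesis, again under simplifying assumptions rather than the thesis framework: critical code fragments are queued in arrival order and served one at a time by a worker that stands in for the single ECC-protected high-reliability core, while non-critical work is assumed to stay on the unprotected cores. The names critical_queue, protected_core_worker, and application are illustrative.

    # FCFS assignment of reliability-critical fragments to one protected core.
    import threading
    import queue

    critical_queue = queue.Queue()  # preserves First Come First Served arrival order

    def protected_core_worker():
        # Stands in for the single ECC-protected core: serves critical fragments FCFS.
        while True:
            item = critical_queue.get()
            if item is None:          # shutdown sentinel
                break
            app_id, fragment = item
            fragment()                # critical fragment executes on the "protected" core
            critical_queue.task_done()

    def application(app_id):
        # Non-critical phases would run on the unprotected cores; only the
        # reliability-critical fragment is handed to the protected core.
        critical_queue.put((app_id, lambda: print(f"{app_id}: critical region on protected core")))

    core = threading.Thread(target=protected_core_worker)
    core.start()
    apps = [threading.Thread(target=application, args=(f"app{i}",)) for i in range(4)]
    for a in apps:
        a.start()
    for a in apps:
        a.join()
    critical_queue.join()             # wait until all critical fragments have run
    critical_queue.put(None)          # stop the protected-core worker
    core.join()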