Hokkaido University Collection of Scholarly and Academic Papers (HUSCAP)

Hybrid of genetic algorithm and local search to solve MAX-SAT problem using nVidia CUDA framework

Files in This Item:
Latex.pdf (1.59 MB, PDF)

Title: Hybrid of genetic algorithm and local search to solve MAX-SAT problem using nVidia CUDA framework
Authors: Munawar, Asim
Wahib, Mohamed
Munetomo, Masaharu
Akama, Kiyoshi
Keywords: Compute unified device architecture (CUDA)
General-purpose computing on graphics processing unit (GPGPU)
Genetic algorithm (GA)
MAXimum SATisfiability problem (MAX-SAT)
Single instruction multiple data (SIMD)
Single instruction multiple threads (SIMT)
Issue Date: 2009
Publisher: Springer
Journal Title: Genetic Programming and Evolvable Machines
Volume: 10
Issue: 4
Start Page: 391
End Page: 415
Publisher DOI: 10.1007/s10710-009-9091-4
Abstract: General-Purpose computing on Graphics Processing Units (GPGPU) is a major paradigm shift in parallel computing that promises a dramatic increase in performance, but it also brings an unprecedented level of complexity in algorithmic design and software development. In this paper we describe the challenges and design choices involved in parallelizing a hybrid of a Genetic Algorithm (GA) and Local Search (LS) to solve the MAXimum SATisfiability (MAX-SAT) problem on a state-of-the-art nVidia Tesla GPU using the nVidia Compute Unified Device Architecture (CUDA). MAX-SAT is a problem of practical importance that is often solved with metaheuristic search methods such as GAs and hybrids of GAs with LS. Almost all the parallel GAs (pGAs) designed over the last two decades target clusters or massively parallel processors (MPPs); very little research has been done on implementing such algorithms on commodity graphics hardware. GAs in their simple form are not suited to the Single Instruction Multiple Thread (SIMT) architecture of a GPU, and the same holds for conventional LS algorithms. We explore different genetic operators that can be used for an efficient implementation of GAs on nVidia GPUs, and we design and introduce new techniques and operators for implementing GAs and LS efficiently on such architectures. Using an nVidia Tesla C1060 we perform several numerical tests and performance measurements, showing a speedup of up to 25× in the best case. We also discuss the effects of different optimization techniques on the overall execution time.
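The GA-plus-LS hybrid the abstract describes can be sketched in plain Python. This is a minimal CPU illustration only: the paper's contribution is the CUDA/SIMT implementation and its specialized operators, and every function name, operator choice, and parameter below is an assumption for illustration, not the authors' code. Clauses use DIMACS-style signed integer literals.

```python
import random

def count_satisfied(clauses, assignment):
    """Count satisfied clauses. Each clause is a list of non-zero ints
    (DIMACS-style): +v means variable v is True, -v means it is False."""
    return sum(
        1 for clause in clauses
        if any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
    )

def local_search(clauses, assignment, max_flips=10):
    """Greedy bit-flip local search: keep the first flip that improves
    the number of satisfied clauses; stop at a local optimum."""
    best = count_satisfied(clauses, assignment)
    for _ in range(max_flips):
        improved = False
        for i in range(len(assignment)):
            assignment[i] = not assignment[i]
            score = count_satisfied(clauses, assignment)
            if score > best:
                best, improved = score, True
                break          # keep this flip, rescan from the start
            assignment[i] = not assignment[i]  # undo the flip
        if not improved:
            break
    return assignment, best

def ga_maxsat(clauses, n_vars, pop_size=16, generations=20, seed=0):
    """Tiny generational GA: binary tournament selection, uniform
    crossover, single-bit mutation, then local search on each offspring."""
    rng = random.Random(seed)
    fitness = lambda ind: count_satisfied(clauses, ind)
    pop = [[rng.random() < 0.5 for _ in range(n_vars)]
           for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            p1 = max(rng.sample(pop, 2), key=fitness)  # tournament
            p2 = max(rng.sample(pop, 2), key=fitness)
            child = [a if rng.random() < 0.5 else b    # uniform crossover
                     for a, b in zip(p1, p2)]
            if rng.random() < 0.2:                     # bit-flip mutation
                i = rng.randrange(n_vars)
                child[i] = not child[i]
            child, _ = local_search(clauses, child)    # memetic step
            nxt.append(child)
        pop = nxt
    best = max(pop, key=fitness)
    return best, fitness(best)
```

On a GPU this structure would map naturally to SIMT hardware: fitness evaluation and local search are independent per individual, so each can be assigned to a thread or thread block, which is the kind of decomposition the paper examines.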
Rights: The original publication is available at
Type: article (author version)
Appears in Collections:情報基盤センター (Information Initiative Center) > 雑誌発表論文等 (Peer-reviewed Journal Articles, etc)

Submitter: 棟朝 雅晴 (Masaharu Munetomo)
