We discuss applications of quantum computation to geometric data processing. These applications include problems on convex hulls, minimum enclosing balls, linear programming, and intersection problems. Technically, we apply the well-known Grover's algorithm (and its variants) in combination with geometric algorithms, and no further knowledge of quantum computing is required. We hope that revealing these applications and emphasizing the potential usefulness of quantum computation in geometric data processing will promote the research and development of quantum computers and algorithms.
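For orientation, the search primitive underlying these applications can be simulated classically on a small statevector. The sketch below is illustrative only (the function name `grover_search` and the parameters are not from the paper): it runs the oracle/diffusion iteration of Grover's algorithm and returns the success probability.

```python
import math

def grover_search(n_items, marked, iterations):
    """Classical statevector simulation of Grover's algorithm (illustrative).

    n_items: size N of the search space; marked: index of the sole marked item.
    Returns the probability of measuring the marked item.
    """
    # Start in the uniform superposition over N basis states.
    amp = [1.0 / math.sqrt(n_items)] * n_items
    for _ in range(iterations):
        # Oracle: flip the sign of the marked amplitude.
        amp[marked] = -amp[marked]
        # Diffusion: reflect every amplitude about the mean.
        mean = sum(amp) / n_items
        amp = [2.0 * mean - a for a in amp]
    return amp[marked] ** 2

# Roughly (pi/4) * sqrt(N) iterations maximize the success probability.
N = 16
r = int(round(math.pi / 4 * math.sqrt(N)))  # 3 iterations for N = 16
p = grover_search(N, marked=5, iterations=r)
```

After 3 iterations on N = 16 items the marked item is found with probability above 96%, versus 1/16 for a single classical guess; this quadratic speedup is what the geometric applications exploit.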
The Hough transform is a well-established scheme for detecting digital line components in a binary edge image. A key to its success in practice is the notion of voting on an accumulator array in the parameter plane. This paper discusses the computational limitations of such voting-based schemes under the constraint that all possible line components in a given image must be reported. Various improvements based on algorithmic techniques and data structures are presented. New schemes requiring less computation time and working space, based on totally different ideas, are also proposed, together with some experimental results.
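The voting scheme itself can be sketched in a few lines. The following is a minimal, unoptimized illustration of accumulator voting (not one of the paper's improved schemes); the discretization into 180 angle bins and integer rho values is an assumption for the sketch.

```python
import math

def hough_lines(points, rho_max, n_theta=180):
    """Vote into a (theta, rho) accumulator; a line shows up as a peak.

    points: iterable of (x, y) edge pixels.  Each point votes once per angle
    bin for the line rho = x*cos(theta) + y*sin(theta), with rho rounded
    to the nearest integer.
    """
    acc = [[0] * (2 * rho_max + 1) for _ in range(n_theta)]
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            if -rho_max <= rho <= rho_max:
                acc[t][rho + rho_max] += 1  # shift rho so the index is non-negative
    # Return the strongest cell as (votes, theta index, rho).
    return max((acc[t][r], t, r - rho_max)
               for t in range(n_theta) for r in range(2 * rho_max + 1))

# Ten collinear points on the vertical line x = 5 all vote for the same cell.
votes, t_idx, rho = hough_lines([(5, y) for y in range(10)], rho_max=20)
```

The peak cell collects all ten votes, and the recovered (theta, rho) pair describes a line passing within half a pixel of every input point. The paper's constraint, reporting all line components rather than just peaks, is exactly what makes this naive scheme expensive.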
In both differential geometry and graph theory, the Cheeger constant plays a central role in the study of the eigenvalues of Laplacians. In this paper, we present a new aspect of the Cheeger constant of graphs, namely, relations between the Cheeger constant and the connectivities of graphs and digraphs.
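For a finite graph the Cheeger (isoperimetric) constant can be computed directly from its definition, h(G) = min over nonempty vertex sets S with |S| <= |V|/2 of |boundary edges of S| / |S|. The brute-force sketch below is for illustration only (exponential in |V|, so usable only on small graphs):

```python
from itertools import combinations

def cheeger_constant(vertices, edges):
    """Brute-force Cheeger constant of an undirected graph:
    h(G) = min over nonempty S with |S| <= |V|/2 of |edges leaving S| / |S|."""
    vertices = list(vertices)
    best = float("inf")
    for size in range(1, len(vertices) // 2 + 1):
        for subset in combinations(vertices, size):
            s = set(subset)
            # Count edges with exactly one endpoint inside S.
            boundary = sum(1 for u, v in edges if (u in s) != (v in s))
            best = min(best, boundary / len(s))
    return best

# 4-cycle: any cut severs at least two edges, so h(C4) = 2/2 = 1.
h = cheeger_constant(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)])
```

A small h indicates a bottleneck cut, which is the intuition behind its relation to both connectivity and the spectral gap of the graph Laplacian.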
In order to characterize the Hardy spaces on an n-dimensional Euclidean space, several maximal functions have been introduced and studied. Among them, the vertical maximal function Mφf, the non-tangential maximal function M*φf, the tangential modification M**φ,Nf of M*φf, and the grand maximal function MFf will be treated, together with some other maximal functions, and we will compare the Orlicz norms of these maximal functions.
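For orientation, the standard Fefferman–Stein-type definitions of these maximal functions read as follows, with φ_t(x) = t^(-n) φ(x/t); the paper's precise normalizations may differ.

```latex
M_\varphi f(x) = \sup_{t>0}\bigl|(f*\varphi_t)(x)\bigr|,
\qquad
M^{*}_{\varphi} f(x) = \sup_{\substack{t>0\\ |y-x|<t}}\bigl|(f*\varphi_t)(y)\bigr|,
```

```latex
M^{**}_{\varphi,N} f(x)
  = \sup_{y\in\mathbb{R}^n,\ t>0}
    \bigl|(f*\varphi_t)(y)\bigr|\Bigl(\frac{t}{t+|x-y|}\Bigr)^{N},
\qquad
M_{\mathcal{F}} f(x) = \sup_{\varphi\in\mathcal{F}} M^{*}_{\varphi} f(x),
```

so the vertical, non-tangential, tangential, and grand maximal functions form an increasing chain of approach regions, and the point of comparison is how their Orlicz norms relate.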
The purpose of this note is to define and characterize different kinds of quasi-norms for a double sequence obtained by generalizing the lp-norm. In particular, we focus on two weak lp-norms: the successive weak lp-norm and the standard weak lp-norm. We give upper and lower bounds on the ratio of the successive weak lp-norm to the standard weak lp-norm for an arbitrary double sequence.
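As a point of reference, the standard weak lp quasi-norm of a (here flattened) sequence can be computed from its non-increasing rearrangement as sup over k of k^(1/p) times the k-th largest absolute value. The sketch below illustrates only this standard version; the successive variant, which the note treats index by index for double sequences, is not reproduced here.

```python
def weak_lp_norm(seq, p):
    """Standard weak lp quasi-norm: sup_k k^(1/p) * a*_k,
    where a* is the non-increasing rearrangement of the absolute values."""
    a = sorted((abs(x) for x in seq), reverse=True)
    return max((k + 1) ** (1.0 / p) * x for k, x in enumerate(a))

# For a_k = 1/k the weak l1 quasi-norm equals 1 (every term k * (1/k) = 1),
# even though the ordinary l1 norm of the full sequence diverges.
vals = [1.0 / k for k in range(1, 101)]
w = weak_lp_norm(vals, p=1)
```

This example shows why the weak norm is strictly coarser than the lp-norm, which is what makes the ratio bounds between the two weak variants non-trivial.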
We give simple explicit formulas for the power sum and the exponential sum of the digital sums of the Gray code representations of natural numbers, by using the distribution function of the singular measure.
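The objects involved are elementary to compute. The reflected binary Gray code of n is n XOR (n >> 1), and its digital sum is the number of 1 bits; since the Gray code permutes {0, ..., 2^k - 1}, the total digital sum over a full power-of-two range agrees with that of ordinary binary, namely k * 2^(k-1). A quick sketch (the formulas of the abstract itself are not reproduced here):

```python
def gray(n):
    """Reflected binary Gray code of n."""
    return n ^ (n >> 1)

def digit_sum(n):
    """Binary digital sum: the number of 1 bits of n."""
    return bin(n).count("1")

# n -> gray(n) is a bijection on {0, ..., 2^k - 1}, so the total digital sum
# over that range equals the binary total k * 2^(k-1).
k = 10
total = sum(digit_sum(gray(n)) for n in range(2 ** k))
```

For ranges that are not powers of two the totals differ, and that irregularity is what the distribution function of the singular measure captures.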
We construct a differentiable family of non-contactomorphic contact structures on a non-compact (2n-1)-dimensional manifold. This generalizes a result of Eliashberg which shows the existence of non-contactomorphic open solid tori.
We analyze two algorithms for the k-exclusion problem on the asynchronous multi-writer/reader shared memory model and show their correctness. The first algorithm is a natural extension of Peterson's n-process mutual exclusion algorithm to the k-exclusion problem, and the second is a combination of the first algorithm and the tournament algorithm of Peterson and Fischer for the mutual exclusion problem. Both algorithms satisfy k-exclusion and can tolerate up to k-1 process failures of the stopping type. The running times of the first and second algorithms are bounded by (n-k)c+O(n(n-k)^2)l and (n/k)kc+O((n/k)^(k+1)k)l, respectively, even if there exist at most k-1 process failures of the stopping type, where n is the number of processes, l is an upper bound on the time between two successive atomic steps of any faultless process, and c is an upper bound on the time that any user spends in the critical region.
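The flavor of the first algorithm can be conveyed by a much-simplified, non-fault-tolerant filter-style sketch (this is not the paper's algorithm): a Peterson-style filter with n-k levels lets at most k processes into the critical section, since at most n-l processes can pass level l.

```python
import threading

N, K = 4, 2          # 4 processes, at most 2 together in the critical section
LEVELS = N - K       # a filter with n-k levels yields k-exclusion

level = [0] * N                  # level[i]: highest level process i has entered
victim = [0] * (LEVELS + 1)      # victim[l]: last process to arrive at level l
in_cs = 0
max_in_cs = 0
count_lock = threading.Lock()    # protects the occupancy counters only

def worker(i):
    global in_cs, max_in_cs
    for l in range(1, LEVELS + 1):
        level[i] = l
        victim[l] = i
        # Spin while we are the most recent arrival and the level is crowded.
        while victim[l] == i and any(level[j] >= l for j in range(N) if j != i):
            pass
    with count_lock:             # --- critical section entry ---
        in_cs += 1
        max_in_cs = max(max_in_cs, in_cs)
    with count_lock:             # --- critical section exit ---
        in_cs -= 1
    level[i] = 0

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Each process climbs the levels, and the victim mechanism forces the latest arrival at each level to yield, so the critical-section occupancy never exceeds K. The paper's algorithms refine this idea to also tolerate up to k-1 stopping failures, which this busy-wait sketch does not.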
A plane drawing of a graph is called a floorplan if every face (including the outer face) is a rectangle. A based floorplan is a floorplan with a designated base line segment on the outer face. In this paper we give a simple algorithm to generate all based floorplans with at most n faces. The algorithm uses O(n) space and generates such floorplans in O(1) time per floorplan without duplications. The algorithm does not output each floorplan in its entirety, but only its difference from the preceding floorplan. By modifying the algorithm we can generate, without duplications, all based floorplans having exactly n faces in O(1) time per floorplan, and all (non-based) floorplans having exactly n faces in O(n) time per floorplan. Also, given three integers n, k1 and k2, we can generate all based floorplans with exactly n faces containing at least k1 and at most k2 inner rooms in O(1) time per floorplan, where an inner room means a face that does not contain a line segment of the contour of the outer face.
The firing squad synchronization problem has been studied extensively for more than forty years, and a rich variety of synchronization algorithms have been proposed. In the present paper, we describe a computer-assisted investigation into state transition tables for which optimum-time synchronization algorithms have been designed. We show that the first transition rule set designed by Waksman [(1966) Inf. Control, 9: 66-78] includes fundamental errors which cause unsuccessful firings and that ninety-three percent of the rules are redundant. In addition, the transition rule sets reported by Balzer [(1967) Inf. Control, 10: 22-42], Gerken [(1987), Diplomarbeit, Institut für Theoretische Informatik, Technische Universität Braunschweig, 502] and Mazoyer [(1987) Theor. Comput. Sci., 50: 183-238] are found to include several redundant rules. We also present herein a survey and a comparison of the quantitative aspects of the optimum-time synchronization algorithms developed thus far for one-dimensional cellular arrays.
Digital halftoning is the problem of computing a binary image approximating an input gray-scale (or color) image. We consider two problems on digital halftoning: the mathematical evaluation of a halftoned image and the design of optimization-based halftoning algorithms. We propose an efficient automatic evaluation system for halftoned images that uses quality evaluation functions based on discrepancy measures. Our experimental results on the evaluation system suggest that the discrepancy corresponding to a regional error is a good evaluation measure, and thus we design algorithms to reduce this discrepancy measure.
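One simple instance of a regional-error discrepancy is the maximum, over all small windows, of the difference between the binary output's intensity and the gray input's intensity in that window. The sketch below (the function name and the 2x2 window size are assumptions, not the paper's exact measure) shows why naive global thresholding scores badly on it:

```python
def regional_discrepancy(gray, binary, block=2):
    """Max over all block x block windows of |sum(binary) - sum(gray)|:
    a simple instance of a regional-error discrepancy measure."""
    h, w = len(gray), len(gray[0])
    worst = 0.0
    for i in range(0, h - block + 1):
        for j in range(0, w - block + 1):
            g = sum(gray[i + di][j + dj] for di in range(block) for dj in range(block))
            b = sum(binary[i + di][j + dj] for di in range(block) for dj in range(block))
            worst = max(worst, abs(b - g))
    return worst

# Global thresholding of a flat 30%-gray patch outputs all zeros, so every
# 2x2 window carries the full error of 4 * 0.3 = 1.2.
gray_img = [[0.3] * 8 for _ in range(8)]
thresholded = [[1 if v >= 0.5 else 0 for v in row] for row in gray_img]
d = regional_discrepancy(gray_img, thresholded)
```

A good halftoning algorithm distributes black pixels so that every window's intensity stays close to the local gray level, which is exactly the discrepancy the proposed algorithms are designed to reduce.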