
**Research paper example essay prompt: Quantum Computing** - 1993 words


.. taken to multiply a particular pair of numbers, but the fact that the time does not increase too sharply when we apply the same method to ever larger numbers. The standard text-book method of multiplication requires only a little extra work when we switch from two three-digit numbers to two thirty-digit numbers. By contrast, factoring a thirty-digit number using the simplest trial-division method (see inset 1) consumes about 10^13 times more time or memory than factoring a three-digit number. The consumption of computational resources grows enormously as we keep increasing the number of digits.
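The trial-division method referred to above (inset 1 is not reproduced here) can be sketched in a few lines of Python; the function name is ours, not the paper's. Trying every candidate divisor up to the square root of N means roughly 10^(d/2) loop iterations for a d-digit number, which is exactly the exponential blow-up the text describes.

```python
def trial_division(n):
    """Factor n by the simplest method: try dividing by 2, 3, 4, ...
    up to sqrt(n). For a d-digit number this takes on the order of
    10**(d/2) iterations in the worst case, hence the exponential
    growth in running time discussed in the text."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is prime
    return factors
```

For example, `trial_division(51688)` returns `[2, 2, 2, 7, 13, 71]` almost instantly, but the same loop on a 30-digit number with only large prime factors would run for roughly 10^15 iterations.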

The largest number that has been factorised as a mathematical challenge, i.e. a number whose factors were secretly chosen by mathematicians in order to present a challenge to other mathematicians, had 129 digits. No one can even conceive of how one might factorise, say, thousand-digit numbers; the computation would take much longer than the estimated age of the universe. Skipping the details of computational complexity theory, we mention only that computer scientists have a rigorous way of defining what makes an algorithm fast (and usable) or slow (and unusable). For an algorithm to be fast, the time it takes to execute it must increase no faster than a polynomial function of the size of the input. Informally, think of the input size as the total number of bits needed to specify the input to the problem, for example, the number of bits needed to encode the number we want to factorise. If the best algorithm we know for a particular problem has an execution time (viewed as a function of the size of the input) bounded by a polynomial, then we say that the problem belongs to the class P. Problems outside class P are known as hard problems.

Thus we say, for example, that multiplication is in P whereas factorisation is not in P, and that is why it is a hard problem. Hard does not mean "impossible to solve" or "non-computable" --- factorisation is perfectly computable on a classical computer; however, the physical resources needed to factor a large number are such that for all practical purposes it can be regarded as intractable (see inset 1). It is worth pointing out that computer scientists carefully constructed the definitions of efficient and inefficient algorithms to avoid any reference to particular physical hardware. According to the above definition, factorisation is a hard problem for any classical computer regardless of its make and clock speed. Have a look at Fig. 3 and compare a modern computer with its ancestor of the nineteenth century, the Babbage difference engine. The technological gap is obvious, and yet the Babbage engine can perform the same computations as the modern digital computer. Moreover, factoring is equally difficult for both the Babbage engine and a top-of-the-line Connection Machine; the execution time grows exponentially with the size of the number in both cases. Thus purely technological progress can only increase the computational speed by a fixed multiplicative factor, which does not change the exponential dependence between the size of the input and the execution time.

Such a change requires the invention of new, better algorithms. Although quantum computation requires new quantum technology, its real power lies in new quantum algorithms, which exploit quantum superpositions that can contain an exponential number of different terms. Quantum computers can be programmed in a qualitatively new way. For example, a quantum program can incorporate instructions such as '...and now take a superposition of all numbers from the previous operations...'; this instruction is meaningless for any classical data-processing device but makes perfect sense to a quantum computer. As a result we can construct new algorithms for solving problems, some of which can turn difficult mathematical problems, such as factorisation, into easy ones! The story of quantum computation started as early as 1982, when the physicist Richard Feynman considered the simulation of quantum-mechanical objects by other quantum systems[1].
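The "superposition of all numbers" instruction can be made concrete with a toy state-vector simulator (a sketch under our own naming, not code from the paper): applying a Hadamard gate to each of n qubits starting from |0...0> produces an equal superposition of all 2^n basis states, i.e. all n-bit numbers at once.

```python
def apply_hadamard(state, target):
    """Apply a Hadamard gate to one qubit of a state vector of
    amplitudes indexed by basis state (qubit `target` is the bit
    at position `target` of the index)."""
    h = 2 ** -0.5
    new = state[:]
    step = 1 << target
    for i in range(len(state)):
        if i & step == 0:          # visit each amplitude pair once
            a, b = state[i], state[i | step]
            new[i] = h * (a + b)
            new[i | step] = h * (a - b)
    return new

# Start in |000> and apply a Hadamard to every qubit:
n = 3
state = [0.0] * (1 << n)
state[0] = 1.0
for q in range(n):
    state = apply_hadamard(state, q)
# Every 3-bit number 0..7 now carries amplitude 1/sqrt(8): a single
# register holds all eight numbers in superposition simultaneously.
```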

However, the unusual power of quantum computation was not really anticipated until 1985, when David Deutsch of the University of Oxford published a crucial theoretical paper[2] in which he described a universal quantum computer. After the Deutsch paper, the hunt was on for something interesting for quantum computers to do. At the time all that could be found were a few rather contrived mathematical problems, and the whole issue of quantum computation seemed little more than an academic curiosity. It all changed rather suddenly in 1994, when Peter Shor of AT&T's Bell Laboratories in New Jersey devised the first quantum algorithm that, in principle, can perform efficient factorisation[3]. This became a 'killer application' --- something very useful that only a quantum computer could do. The difficulty of factorisation underpins the security of many common methods of encryption; for example, RSA, the most popular public-key cryptosystem, often used to protect electronic bank accounts, gets its security from the difficulty of factoring large numbers. The potential use of quantum computation for code-breaking has raised an obvious question: can a quantum computer actually be built? In principle we know how: we can start with simple quantum logic gates and try to integrate them into quantum circuits.

A quantum logic gate, like a classical gate, is a very simple computing device that performs one elementary quantum operation, usually on two qubits, in a given period of time[4]. Of course, quantum logic gates differ from their classical counterparts in that they can create and perform operations on quantum superpositions (cf. inset 2). However, if we keep putting quantum gates together into circuits we quickly run into serious practical problems. The more interacting qubits are involved, the harder it tends to be to engineer the interactions that display quantum interference.
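As an illustration of "one elementary quantum operation, usually on two qubits", here is the controlled-NOT gate acting on a two-qubit state vector (a minimal sketch; the variable names are ours). Fed a control qubit prepared in superposition, a single CNOT already produces an entangled state that no classical gate can make.

```python
def cnot(state):
    """Controlled-NOT on a 2-qubit state vector ordered
    [|00>, |01>, |10>, |11>] (first qubit = control): flip the
    target qubit exactly on the components where the control is 1,
    i.e. swap the amplitudes of |10> and |11>."""
    a00, a01, a10, a11 = state
    return [a00, a01, a11, a10]

h = 2 ** -0.5
# Control in the superposition (|0>+|1>)/sqrt(2), target in |0>:
state = [h, 0.0, h, 0.0]          # (|00> + |10>)/sqrt(2)
bell = cnot(state)                # (|00> + |11>)/sqrt(2)
# The result is an entangled (Bell) state: the gate has acted on
# both branches of the superposition at once.
```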

Apart from the technical difficulties of working at single-atom and single-photon scales, one of the most important problems is preventing the interactions that generate quantum superpositions from affecting the surrounding environment. The more components there are, the more likely it is that the quantum computation will spread outside the computational unit and irreversibly dissipate useful information into the environment. This process is called decoherence. Thus the race is on to engineer sub-microscopic systems in which qubits interact only with each other, not with the environment. Some physicists are pessimistic about the prospects of substantial experimental advances in the field[5]. They believe that decoherence will in practice never be reduced to the point where more than a few consecutive quantum computational steps can be performed.
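Decoherence can be caricatured in a few lines (a toy dephasing model of our own devising, not a description of any real device): the off-diagonal elements of a qubit's density matrix, which encode the superposition, decay towards zero while the diagonal populations survive.

```python
def dephase(rho, p):
    """One step of a toy dephasing channel on a 2x2 density matrix:
    each off-diagonal ('coherence') element shrinks by (1 - p),
    while the diagonal populations are left untouched."""
    return [[rho[0][0], (1 - p) * rho[0][1]],
            [(1 - p) * rho[1][0], rho[1][1]]]

# The pure superposition (|0>+|1>)/sqrt(2) has maximal coherence 0.5:
rho = [[0.5, 0.5], [0.5, 0.5]]
for _ in range(20):
    rho = dephase(rho, 0.3)
# After 20 steps the coherences are down to 0.5 * 0.7**20 (about
# 4e-4): the state is effectively a classical 50/50 mixture, and any
# interference between the two branches is lost.
```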

Other, more optimistic researchers believe that practical quantum computers will appear in a matter of years rather than decades. This may prove to be wishful thinking, but the fact is that optimism, however naive, makes things happen. After all, it used to be a widely accepted "scientific truth" that no machine heavier than air would ever fly! So many experimentalists do not give up. The current challenge is not to build a full quantum computer right away but rather to move from experiments in which we merely observe quantum phenomena to experiments in which we can control those phenomena. This is a first step towards quantum logic gates and simple quantum networks. Can we then control nature at the level of single photons and atoms? Yes, to some degree we can! For example, in the so-called cavity quantum electrodynamics experiments performed by Serge Haroche, Jean-Michel Raimond and colleagues at the Ecole Normale Superieure in Paris, atoms are controlled by single photons trapped in small superconducting cavities[6]. Another approach, advocated by Christopher Monroe, David Wineland and co-workers at NIST in Boulder, USA, uses ions held in a radio-frequency trap[7].

The ions interact with each other by exchanging vibrational excitations, and each ion can be separately controlled by a properly focused and polarised laser beam. Experimental and theoretical research in quantum computation is accelerating world-wide. New technologies for realising quantum computers are being proposed, and new types of quantum computation with various advantages over classical computation are continually being discovered and analysed; we believe some of them will bear technological fruit. From a fundamental standpoint, however, it does not matter how useful quantum computation turns out to be, nor does it matter whether we build the first quantum computer tomorrow, next year or centuries from now. The quantum theory of computation must in any case be an integral part of the world view of anyone who seeks a fundamental understanding of quantum theory and the processing of information.

How can quantum mechanics be used to improve computation? Our challenge: solving a problem that is exponentially difficult for a conventional computer --- that of factoring a large number. As a prelude, we review the standard tools of computation: universal gates and machines. These ideas are then applied first to classical, dissipationless computers and then to quantum computers. A schematic model of a quantum computer is described, as well as some of the subtleties in its programming.

The Shor algorithm [1,2] for efficiently factoring numbers on a quantum computer is presented in two parts: the quantum procedure within the algorithm, and the classical algorithm that calls the quantum procedure. The mathematical structure in factoring which makes the Shor algorithm possible is discussed. We conclude with an outlook on the feasibility and prospects of quantum computation in the coming years. Let us start by describing the problem at hand: factoring a number N into its prime factors (e.g., the number 51688 may be decomposed as 51688 = 2^3 · 7 · 13 · 71). A convenient way to quantify how quickly a particular algorithm may solve a problem is to ask how the number of steps needed to complete the algorithm scales with the size of the "input" the algorithm is fed.
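The split described above, a classical algorithm calling a quantum procedure, can be sketched as follows. The quantum part of Shor's algorithm finds the order r of a base a modulo N; in this illustration we substitute a slow classical brute-force search for that subroutine (the function names are ours), keeping only the genuine classical wrapper that turns r into a factor.

```python
from math import gcd

def order(a, n):
    """Multiplicative order r of a mod n, i.e. the smallest r with
    a**r = 1 (mod n). This is the step Shor's quantum procedure
    performs efficiently; a brute-force stand-in keeps the sketch
    self-contained."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_part(n, a):
    """Classical wrapper: given the order r of a random base a,
    extract a factor of n from gcd(a**(r/2) - 1, n)."""
    g = gcd(a, n)
    if g != 1:
        return g                  # lucky: a already shares a factor
    r = order(a, n)
    if r % 2:
        return None               # odd order: retry with another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None               # trivial square root: retry
    return gcd(y - 1, n)

# Example: n = 15, a = 7. The order of 7 mod 15 is 4, so
# gcd(7**2 - 1, 15) = gcd(48, 15) = 3, and indeed 15 = 3 * 5.
```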

For the factoring problem, this input is just the number N we wish to factor; hence the length of the input is log N. (The base of the logarithm is determined by our numbering system: a base of 2 gives the length in binary, a base of 10 in decimal.) 'Reasonable' algorithms are ones which scale as some small-degree polynomial in the input size (with a degree of perhaps 2 or 3). On conventional computers the best known factoring algorithm runs in exp[c (ln N)^(1/3) (ln ln N)^(2/3)] steps [3], where c ≈ 1.9. This algorithm, therefore, scales exponentially with the input size log N.
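The scaling claims can be checked numerically with a back-of-the-envelope script (our own sketch: the constant c ≈ 1.9 is the textbook number-field-sieve value, and both functions count abstract "steps", not seconds).

```python
from math import exp, log

def nfs_steps(digits, c=1.9):
    """Evaluate exp[c (ln N)^(1/3) (ln ln N)^(2/3)] for a number N
    with the given count of decimal digits: the sub-exponential
    running time of the best known classical factoring algorithm."""
    ln_n = digits * log(10)
    return exp(c * ln_n ** (1 / 3) * log(ln_n) ** (2 / 3))

def quadratic_steps(digits):
    """A roughly quadratic cost in the input size log N, as claimed
    for the quantum factoring algorithm."""
    bits = digits * log(10) / log(2)
    return bits ** 2

# Going from 129 to 250 digits multiplies the classical estimate by
# about a million, matching the eight-months-to-800,000-years jump
# quoted in the text, while the quadratic cost barely moves.
```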

For instance, in 1994 a 129-digit number (known as RSA129 [3']) was successfully factored using this algorithm on approximately 1600 workstations scattered around the world; the entire factorization took eight months [4]. Using this to estimate the prefactor of the above exponential scaling, we find that it would take roughly 800,000 years to factor a 250-digit number with the same computer power; similarly, a 1000-digit number would require about 10^25 years (significantly longer than the age of the universe). The difficulty of factoring large numbers is crucial for public-key cryptosystems, such as those used by banks. There, such codes rely on the difficulty of factoring numbers with around 250 digits. Recently, an algorithm was developed for factoring numbers on a quantum computer which runs in O((log N)^(2+ε)) steps, where ε is small [1].

This is roughly quadratic in the input size, so factoring a 1000-digit number with such an algorithm would require only a few million steps. The implication is that public-key cryptosystems based on factoring may be breakable. To give an idea of how this exponential improvement might be possible, we review an elementary quantum-mechanical experiment that demonstrates where such power may lie hidden [5]. The two-slit experiment is the prototypical setting for observing quantum-mechanical behavior: a source emits photons, electrons or other particles that arrive at a pair of slits. The particles undergo unitary evolution and, finally, measurement.

With both slits open we see an interference pattern, which wholly vanishes if either slit is covered. In some sense, the particles pass through both slits in parallel. If such unitary evolution were to represent a calculation (or an operation within a calculation), then the quantum system would be performing computations in parallel. Quantum parallelism comes for free. The output of this system would be given by the constructive interference among the parallel computations.
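The amplitude arithmetic behind the two-slit pattern fits in a few lines (a sketch with our own function name): each open slit contributes a complex amplitude, the amplitudes are added, and the detected intensity is the squared magnitude of the sum.

```python
import cmath

def intensity(phase1, phase2):
    """Two-slit intensity at a point where the paths through the two
    slits accumulate the given phases. Each open slit contributes an
    amplitude of magnitude 1/sqrt(2); the observed intensity is the
    squared magnitude of their *sum*, not the sum of intensities."""
    a1 = cmath.exp(1j * phase1) / 2 ** 0.5
    a2 = cmath.exp(1j * phase2) / 2 ** 0.5
    return abs(a1 + a2) ** 2

# Equal phases: constructive interference, intensity 2 (one slit
# alone gives 0.5, so this is four times a single slit).
# Phases differing by pi: the amplitudes cancel and the intensity
# drops to 0 -- the dark fringes of the pattern.
```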
