Posts tagged: Complexity theory

This was asked to patternstream, so I will only answer with regard to that.

As mentioned in the last question, I don’t really consider myself to be the one making the patterns.

The time it takes Mathematica to create each .gif varies from pattern to pattern. It depends mostly on the content of the graphics and on how many frames each .gif animation consists of, generally anywhere from 15 to 30 frames. It also depends strongly on the degree of rotational symmetry the pattern exhibits, which directly corresponds to the number of ‘ripples’ present in the space.

All the patterns posted on the stream thus far have resulted from the same code. The algorithm is straightforward enough that it was constructed in no more than 30 minutes. My computer, equipped with a 2.4 GHz Intel Core i5 processor, takes anywhere between 15 and 45 seconds for the average pattern. I expect this time could be reduced at least a little if the algorithm were optimized and if many of the trigonometric functions (sines and cosines) were calculated in advance.
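The precomputation idea can be sketched like this (a hypothetical Python illustration of my own, since the actual Mathematica code isn’t shown here): build a lookup table of the needed sine values once, rather than re-evaluating the trig functions inside the rendering loop.

```python
import math

# Hypothetical sketch of the speed-up idea: precompute the sine values
# once in a lookup table instead of calling math.sin inside the inner
# rendering loop. The "angles" grid is made up for illustration.
angles = [2 * math.pi * k / 360 for k in range(360)]
sin_table = [math.sin(a) for a in angles]

def sum_with_calls():
    # evaluates the trig function on every pass
    return sum(math.sin(a) for a in angles)

def sum_with_table():
    # reuses the precomputed values
    return sum(sin_table)

print(abs(sum_with_calls() - sum_with_table()) < 1e-9)  # same result
```

The table trades a little memory for avoiding repeated transcendental-function evaluations, which is usually the point of this kind of optimization.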

Some background information:

  • The number Pi is irrational, and when written as a decimal (in base 10) the sequence of digits continues without repetition: Pi = 3.141592…
  • Numbers can be equivalently written in different ways using different counting systems or bases. For instance, in base 2 numbers are represented using just 0s and 1s. Pi expressed in this binary form may look something like this up to several digits:
    Pi = 11.00100100001111110110
    I am assuming “calculating” Pi means being able to compute this infinite sequence to any finite length.
  • Besides numbers themselves, anything reasonable that permits a finite description of itself (like books, movies, source code, etc.) can be encoded as a string of 0s and 1s.
  • A normal number has the property that its digit expansion in some counting base contains every possible finite sequence of digits. Consider the decimal that goes on forever formed by joining all the whole numbers in order: .123456789101112131415…
    This number is normal by construction.
  • The conjecture that Pi is normal in binary is saying that any finite sequence of 0s and 1s is contained somewhere in the infinite sequence that represents Pi.
  • I understand that the statements in the warning were probably intended as a joke.
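The constructed normal number above is easy to play with. This short Python sketch (my own illustration, not from the original post) builds the expansion and confirms that short digit strings really do turn up in it:

```python
# The decimal .123456789101112... is formed by concatenating the whole
# numbers in order; by construction every finite digit string appears.
def champernowne_digits(n_ints):
    """Concatenate the decimal representations of 1..n_ints."""
    return "".join(str(k) for k in range(1, n_ints + 1))

digits = champernowne_digits(1000)
# These targets all occur as whole numbers <= 1000, so they must appear:
for target in ["123", "505", "999"]:
    print(target, "appears at digit index", digits.find(target))
```

Longer strings appear too, just much deeper into the expansion, which is a preview of the search-cost issue discussed below.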

This is an argument for why I think someone should not be considered guilty of calculating Pi in binary: merely knowing the sequence of digits in the binary expansion of Pi is not the same as knowing where any particular string of 1s and 0s is located in that infinite list, nor is it the same as knowing that any particular sequence is a proper encoding of something else.

Assuming Pi is normal, would the ability to calculate Pi in binary to arbitrary length necessarily imply the ability to efficiently find the position of any particular finite string in the sequence? The position could be computed in principle, but the efficiency of the task is the important part. Algorithmically speaking, unless provided with some deeper structure of the binary expansion of Pi, the best way to locate a given string within the sequence may come down to a brute-force search.
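To make this concrete, here is one way (my own sketch, not from the post) to compute the binary expansion of Pi with exact integer arithmetic via Machin’s formula, and then brute-force search it for a bit string:

```python
# Compute the fractional bits of pi using Machin's formula,
# pi = 16*arctan(1/5) - 4*arctan(1/239), with scaled integers.
def pi_binary_digits(nbits):
    prec = nbits + 20  # guard bits to absorb truncation error

    def arctan_inv(x):
        # arctan(1/x) * 2**prec via the alternating Taylor series
        total = term = (1 << prec) // x
        n, sign = 3, -1
        while term:
            term //= x * x
            total += sign * (term // n)
            n += 2
            sign = -sign
        return total

    pi_scaled = 16 * arctan_inv(5) - 4 * arctan_inv(239)
    frac = pi_scaled - (3 << prec)  # keep the fractional part of pi
    return format(frac, "b").zfill(prec)[:nbits]

bits = pi_binary_digits(64)
print(bits[:20])           # 00100100001111110110, matching the post
print(bits.find("1111"))   # brute-force: first offset of a given string
```

The brute-force `find` is the point: nothing about knowing how to generate the bits tells you where an arbitrary string will first occur.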

Therefore, we should be careful in claiming someone is guilty of things like those listed in the warning. Perhaps if there were evidence suggesting that the person could locate a certain string in practice, or had actually located one, then we could justify finding them guilty.

Otherwise, having access to any part of the infinite expansion of Pi does not by itself yield any information. How would someone even know where the string representing a certain book is? You could try decoding random strings until you finally find the one you think you were looking for, but keep in mind that you could very well come across strings which seem to represent the book you’re looking for, yet contain typos or other variants of the original that you would need to sort through.

Even if you were provided with the proper encoding of something (which would defeat the point anyway), it would still be difficult to locate the position of that string in the expansion of Pi. Imagine how long the bit strings for some of these things would be in any suitable encoding, and there is no way of knowing where in Pi’s expansion to even begin searching. Scanning a region of M bits for a string of N bits takes on the order of M·N steps, but if Pi’s bits behave statistically like fair coin flips, the first occurrence of a given N-bit string is only expected around position 2^N, so the region you would have to scan grows exponentially in N. For N greater than just 300, 2^N already exceeds estimates of the number of atoms in the observable universe (~10^80).
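The exponential expectation is easy to see empirically (assuming, as normality suggests, that the bits look like fair coin flips); this sketch of mine measures where an 8-bit pattern first shows up in random bit streams:

```python
import random

# If bits behave like fair coin flips, the first occurrence of a given
# N-bit pattern tends to appear around position 2**N in the stream.
def first_occurrence(pattern, rng):
    window, pos = "", 0
    while True:
        window = (window + str(rng.randint(0, 1)))[-len(pattern):]
        pos += 1
        if window == pattern:
            return pos - len(pattern)

rng = random.Random(0)
positions = [first_occurrence("10110101", rng) for _ in range(200)]
print(sum(positions) / len(positions))  # on the order of 2**8 = 256
```

Doubling N doesn’t double the expected search region; it squares it, which is why N around 300 is already hopeless.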

Calculating Pi, or any other normal number, doesn’t really help anyone trying to get a certain encoding of anything any more than having a trivial list of all possible bit strings would, which ought to be freely available to anyone interested.

Suppose there were some elaborate way to use the positions of the stars in the sky to encode information about certain strings of bits. Even though we all have access to the stars, we do not claim to have knowledge or ‘possession’ of any of the things that could be suitably encoded within that framework.

Essentially, calculating Pi should be a guilt-free experience because, when it comes down to it, encodings are arbitrary and there is no a priori meaning to a sequence of symbols.

In my defense, here is a nice site that lists the binary expansion of Pi to some number of digits, and which also happened to be ranked #4 on a list of the most useless pages on the WWW.

Planar 2-colorability

Take some region of the plane, and any number of distinct lines that pass anywhere through this region. Consider these random lines for instance:

Notice how the lines and their crossings create polygonal shapes in the region.

Using just two colors, say black and white, is it possible to color the entire region such that any two polygons that are next to each other sharing a common edge are different colors?

This is possible in the above example as this 2-coloring indicates:

There is also another possible 2-coloring that satisfies these requirements, but it’s really just the same as the 2-coloring above with the colors switched.

You can convince yourself that these are the only two permissible 2-colorings meeting the criterion with this particular configuration of lines.

One may wonder whether it’s always possible to achieve such a 2-coloring, or precisely under what circumstances it is or is not possible.

It does seem to be the case that if each line passes completely through the region, then a 2-coloring will always be possible.

The following examples show this for two particular cases, where not only are there a different number of randomly chosen lines in each case, but each line is even allowed to move. Regardless, at each instance, the 2-coloring is always preserved!

Still, these examples do not prove the claim in general since there remains an infinite number of cases left unconsidered. How would one prove this?

Well, when would such a 2-coloring not be possible?

At any intersection of lines in the region, the crossings create corners for the polygons that are formed. If there happens to be an intersection with an odd number of corners, then for any assignment of 2 colors to the parts around this intersection, there would have to exist two adjacent parts that have the same color. Otherwise, for an even number of corners around an intersection, it is always possible to assign a 2-coloring so that adjacent parts have different colors.

Therefore, as long as all the intersections formed within the region have an even number of corners, a 2-coloring will exist. This criterion is met if we assume that the lines always pass completely through the region, as in the previous considerations.
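When every line passes completely through the region, the argument above even gives an explicit coloring rule: color each polygon by the parity of how many lines it lies on the positive side of. Crossing one line flips exactly one count, so adjacent polygons always receive opposite colors. A small sketch of my own (the lines here are made up for illustration):

```python
# Each line is (a, b, c) for a*x + b*y + c = 0. A region's color is the
# parity of how many lines the region lies on the positive side of;
# stepping across one line flips exactly one term, so neighboring
# regions always get opposite colors.
lines = [(1.0, -1.0, 0.0), (1.0, 1.0, -1.0), (0.0, 1.0, -0.3)]

def color(x, y):
    return sum(a * x + b * y + c > 0 for a, b, c in lines) % 2

# Two points just on opposite sides of the first line
# (and on the same side of the other two):
print(color(0.10, 0.11), color(0.11, 0.10))  # opposite colors
```

Since the rule is defined pointwise, it works for any number of full-crossing lines without ever examining the polygons themselves.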

These conditions are special and do limit the possible configurations that are 2-colorable.

What if configurations were allowed to have intersections with an odd number of corners?

What if lines didn’t have to pass completely through the region and were allowed to end somewhere inside of it?

What if we didn’t have to use straight lines to partition the region?

If it is not possible to color the region in the above sense with 2 colors, how many would it take?

Does there exist some finite number of colors that always suffices to color any possible partition of a region?

The 4-color theorem, first conjectured in 1852, concerns exactly the problem under consideration: it states that only 4 colors are ever needed to color any configuration so that adjacent regions are not colored the same.

The theorem went without a correct proof until 1976, when it was proved by Kenneth Appel and Wolfgang Haken using a computer! This computer-assisted proof is considered controversial by some and has interesting implications.

It is worth mentioning that the related problem of deciding whether a given configuration is 2-colorable is easy to solve, since there are efficient computer algorithms that can check this. However, deciding whether a configuration is 3-colorable is hard in general, since there are currently no known computer algorithms that solve this problem efficiently.
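The efficient 2-colorability check is just a bipartiteness test on the region-adjacency graph, which breadth-first search decides in linear time. A minimal sketch (the toy adjacency maps are my own):

```python
from collections import deque

# 2-colorability of a region-adjacency graph is exactly bipartiteness,
# which breadth-first search decides in time linear in the graph size.
def two_colorable(adj):
    """adj maps each region to the set of regions sharing an edge with it."""
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None  # odd cycle found: not 2-colorable
    return color

square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}  # even cycle
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}           # odd cycle
print(two_colorable(square) is not None)    # True
print(two_colorable(triangle) is not None)  # False
```

No comparably efficient test is known once a third color enters the picture, which is exactly the gap the next paragraph is about.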

If you can find a fast algorithm, or if you can prove that no efficient algorithm can exist for deciding the 3-coloring problem, then you could win $1,000,000 for solving a big open problem in computer science.

For any orthogonal maze shape you can possibly think of, there exists a crease pattern on a flat piece of paper which can be folded into that maze shape. There is also an algorithm that generates the crease pattern from the maze shape and is efficient in terms of its algorithmic complexity. You can design your own here.

Shown above is a crease pattern that folds into the “maze” on the left, which is a representation of the letters on the right. Red lines are mountain folds and blue lines are valley folds. I personally just like looking at the crease patterns.

(Image source)

Quantum computers and 2^N

Simulating quantum systems using classical computers is hard because of the size and complexity of quantum state spaces. To describe a single particle in a classical system, you need 6 parameters: 3 for its position and another 3 for its velocity in each of those directions. So for a system consisting of N particles, one needs 6N parameters to completely specify the state of the system.

On the other hand, due to the superposition principle and the linearity of quantum mechanics, describing the simplest quantum system consisting of N quantum particles (or qubits) requires 2^N parameters to specify the state space completely. The thing is, for large enough values of N (greater than 4), 2^N > 6N. Moreover, 2^N grows exponentially faster than 6N as N increases. In essence, this is one of the main reasons why quantum computers would be able to outperform their classical counterparts.
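A quick check of the crossover point:

```python
# Classical parameter count grows like 6N; quantum like 2**N.
# The exponential overtakes the linear count at N = 5 (32 > 30).
for n in [1, 2, 3, 4, 5, 6, 10, 20]:
    print(n, 6 * n, 2 ** n)
```

At N = 20 the quantum description already needs over a million parameters against the classical 120, and the gap only widens from there.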

Here is an example illustrating how quickly, and to what enormous sizes, exponential behavior like this grows.

Suppose you have a chessboard and a lot of coins. There are 64 squares in total on the board. Start in one corner and place 2 coins on that square. Move to the next square and stack 2^2 = 4 coins on it. Then stack 2^3 = 8 coins on the third square, and continue in this manner, stacking 2^N coins on the Nth square. The last square will have 2^64 = 18,446,744,073,709,551,616 coins stacked on top of each other.

How high do you think this stack would be?

It would stretch farther than the distance from the Earth to the Moon (400,000 kilometers), farther than the distance from the Earth to the Sun (150 million kilometers), and would reach roughly as far as the next closest star, Proxima Centauri (about 4.2 light-years, or 3.97×10^13 kilometers)!
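These distances are easy to sanity-check (assuming a coin thickness of about 2 mm, which is my own assumption):

```python
# Sanity check of the stack-height claims; ~2 mm per coin is assumed.
coins = 2 ** 64
height_km = coins * 2e-3 / 1000  # 2 mm per coin, converted to kilometers

print(f"stack height: {height_km:.2e} km")
print(height_km > 400_000)       # well past the Moon
print(height_km > 150_000_000)   # well past the Sun
print(height_km / 3.97e13)       # fraction of the distance to Proxima
```

With this thickness the stack comes out around 3.7×10^13 km, i.e. on the order of the distance to Proxima Centauri; a slightly thicker coin gets you all the way there.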

There are many problems out there that require computers to use resources that grow exponentially like in this example. It has been difficult to say whether or not classical computers could even compute fast enough to make them practical problem solvers in this case, but we do have reasons to believe that quantum computers may be able to help out in this regard.

matthen:

When designing an art gallery, you might want to make it so that people can walk through it visiting each exhibit exactly once returning back to the front door.  In the plan on the left, this is possible.  In the other plan, it is impossible to visit every dot (exhibit) without revisiting an exhibit.  There is no quick method to check a general art gallery to see if it has this property (that it is Hamiltonian) - you just have to check all possible paths.  [more] [code]


I think it’s important to mention that there is no quick method known to anyone for solving this problem in general. (If someone does know one, probably no one knows that this person knows it!) In complexity-theoretic terms, the Hamiltonian path problem is NP-complete. This basically means that a quick solution to this problem could be used to get a solution to any other problem (in NP) that would otherwise require you to check all the possible solutions.
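The “check all possible paths” approach from the quote can be sketched directly (the toy gallery graphs are my own):

```python
from itertools import permutations

# Brute-force Hamiltonian cycle check: try every ordering of the
# exhibits and test whether consecutive ones are connected -- O(n!) work.
def hamiltonian_cycle_exists(adj):
    nodes = list(adj)
    first = nodes[0]
    for perm in permutations(nodes[1:]):
        cycle = [first, *perm, first]
        if all(b in adj[a] for a, b in zip(cycle, cycle[1:])):
            return True
    return False

square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}  # 0-1-2-3-0 works
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}          # must revisit 0
print(hamiltonian_cycle_exists(square))  # True
print(hamiltonian_cycle_exists(star))    # False
```

The factorial blow-up in the number of orderings is exactly why nobody knows a quick method for large galleries.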

If a quick algorithm could be found for this problem, or if it could be shown that no such algorithm can exist, then the P vs NP question would be resolved, and someone would get $1,000,000 and become famous for solving one of the biggest open problems in computer science of the past 40 years.




A representation of the quantum Fourier transform on n qubits as the tensor product of n single qubit operations.


This decomposition allows the quantum Fourier transform to be implemented efficiently on a quantum computer (fast enough for practical purposes). It is used in various algorithms, including Shor’s algorithm for factoring large (200+ digit) integers.
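The product structure can be checked numerically for a computational basis state. This pure-Python sketch of mine compares the QFT amplitudes of |j⟩ against the tensor product of n single-qubit states (|0⟩ + e^(2πij/2^l)|1⟩)/√2 for l = 1..n:

```python
import cmath
from math import pi, sqrt

n, j = 3, 5   # 3 qubits, input basis state |5> (example values of mine)
N = 2 ** n

# Direct QFT column: the amplitude of |k> is exp(2*pi*i*j*k/N)/sqrt(N).
direct = [cmath.exp(2j * pi * j * k / N) / sqrt(N) for k in range(N)]

def kron(a, b):
    # Tensor (Kronecker) product of two state vectors.
    return [x * y for x in a for y in b]

# Build the product state qubit by qubit, most significant bit first.
factored = [1.0 + 0j]
for l in range(1, n + 1):
    qubit = [1 / sqrt(2), cmath.exp(2j * pi * j / 2 ** l) / sqrt(2)]
    factored = kron(factored, qubit)

print(max(abs(a - b) for a, b in zip(direct, factored)) < 1e-9)  # True
```

Because the output factors into n independent single-qubit pieces, the transform can be built from a number of gates polynomial in n, which is what makes it efficient on quantum hardware.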

The computational task of factoring large integers on a classical computer takes so long that the government, banks, and the internet rely on this difficulty to keep their information secure from the public (RSA cryptography).

So if you had a large-scale quantum computer, you could render the RSA cryptographic protocol obsolete!

The thing is, quantum computers are really hard to make. However, in 2001 IBM researchers in the foothills behind my house managed to use a quantum computer to factor the number 15 into 3 x 5!