Here is a different approach, based on iteratively finding numbers that cannot appear among the a's or the b's. Call a set A an over-approximation of the a's if we know that {a1,…,a6}⊆A. Similarly, B is an over-approximation of the b's if we know that {b1,…,b6}⊆B. Obviously, the smaller A is, the more useful this over-approximation is, and the same goes for B. My approach is based upon iteratively refining these over-approximations, i.e., iteratively reducing the size of these sets (as we rule out more and more values as impossible).
The core of this approach is a method for refinement: given an over-approximation A for the a's and an over-approximation B for the b's, find a new over-approximation A∗ for the a's such that A∗⊆A. Normally A∗ will be strictly smaller than A, so this lets us refine the over-approximation for the a's.
By symmetry, essentially the same trick will let us refine our over-approximation for the b's: given an over-approximation A for the a's and an over-approximation B for the b's, it will produce a new over-approximation B∗ for the b's.
So, let me tell you how to do refinement; then I'll put everything together to get a full algorithm for this problem. In what follows, let D denote the multi-set of differences, i.e., D={ai−bj : 1≤i,j≤6}; we'll focus on finding a refined over-approximation A∗, given A,B.
How to compute a refinement. Consider a single difference d∈D. Consider the set d+B={d+y:y∈B}. Based on our knowledge that B is an over-approximation of the b's, we know that at least one element of d+B must be an element of {a1,…,a6}. Therefore, we can treat each of the elements in d+B as a "suggestion" for a number to possibly include in A. So, let's sweep over all differences d∈D and, for each, identify which numbers are "suggested" by d.
Now I'm going to observe that the number a1 is sure to be suggested at least 6 times during this process. Why? Because the difference a1−b1 is in D, and when we process it, a1 will be one of the numbers it suggests (since we're guaranteed that b1∈B, the set (a1−b1)+B will surely include a1). Similarly, the difference a1−b2 appears somewhere in D, and it'll cause a1 to be suggested again. In this way, we see that the correct value of a1 will be suggested at least 6 times. The same holds for a2, and a3, and so on.
So, let A∗ be the set of numbers a∗ that have been suggested at least 6 times. This is sure to be an over-approximation of the a's, by the above comments.
As an optimization, we can filter out all suggestions that are not present in A immediately: in other words, we can treat the difference d as suggesting all of the values (d+B)∩A. This ensures that we will have A∗⊆A. We are hoping that A∗ is strictly smaller than A; no guarantees, but if all goes well, maybe it will be.
Putting this together, the algorithm to refine A,B to yield A∗ is as follows:
Let S = ∪d∈D ((d+B)∩A). This is the multi-set of suggestions.
Count how many times each value appears in S. Let A∗ be the set of values that appear at least 6 times in S. (This can be implemented efficiently by building an array a of 251 counters, initially all zero; each time the number s is suggested, you increment a[s]; at the end, you sweep through a looking for entries whose value is 6 or larger.)
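Here is a minimal sketch of this refinement step in Python. I'm assuming arithmetic is modulo 251 (as the 251-entry counter array above suggests); the name refine_A is mine:

```python
from collections import Counter

M = 251  # assumed modulus, matching the 251-entry counter array above

def refine_A(D, A, B):
    """One refinement step: keep only the values of A suggested >= 6 times."""
    counts = Counter()
    A_set = set(A)
    for d in D:                    # sweep over all 36 differences
        for y in B:                # d "suggests" each value in d + B
            s = (d + y) % M
            if s in A_set:         # filter: only count suggestions already in A
                counts[s] += 1
    return {x for x, c in counts.items() if c >= 6}
```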
A similar method can be built to refine A,B to get B∗. You basically reverse things above and flip some signs: e.g., instead of d+B, you look at −d+A.
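Concretely, the symmetric step might look like this (reusing M and Counter from the sketch above):

```python
def refine_B(D, A, B):
    """Symmetric step: each difference d suggests the values -d + A."""
    counts = Counter()
    B_set = set(B)
    for d in D:
        for x in A:
            s = (x - d) % M        # -d + A instead of d + B
            if s in B_set:         # keep only suggestions already in B
                counts[s] += 1
    return {y for y, c in counts.items() if c >= 6}
```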
How to compute an initial over-approximation. To get our initial over-approximation, one idea is to assume (wlog) that b1=0. It follows that each value ai must appear somewhere in D (since ai=ai−b1 is itself a difference), so the list of differences D can serve as our initial over-approximation for the a's. Unfortunately, this doesn't give us a very useful over-approximation for the b's.
A better approach is to additionally guess the value of one of the a's. In other words, we assume (wlog) that b1=0 and use A=D as our initial over-approximation of the a's. Then, we guess which one of these 36 values is indeed one of the a's, say a1. That gives us an over-approximation B=a1−D for the b's (since each bj=a1−(a1−bj), and a1−bj∈D). We use this initial over-approximation A,B, iteratively refine it until convergence, and test whether the result is correct. We repeat up to 36 times, with 36 different guesses at a1 (on average, about 6 guesses should be enough), until we find one that works.
A full algorithm. Now we can assemble a full algorithm to compute a1,…,a6,b1,…,b6. Basically, we derive an initial over-approximation for A and B, then iteratively refine; a code sketch follows the steps below.
Make a guess: For each z∈D, guess that a1=z. Do the following:
Initial over-approximation: Define A=D and B=z−D.
Iterative refinement: Repeatedly apply the following until convergence:
- Refine A,B to get a new over-approximation B∗ of the b's.
- Refine A,B∗ to get a new over-approximation A∗ of the a's.
- Let A:=A∗ and B:=B∗.
Check for success: If the resulting sets A,B each have size 6, test whether they are a valid solution to the problem. If they are, stop. If not, continue with the loop over candidate values of z.
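Putting the pieces together, here is a hedged end-to-end sketch, building on refine_A and refine_B above. I'm assuming the success test is to compare the multiset of differences generated by the candidate A,B against the observed D:

```python
from collections import Counter
from itertools import product

def solve(D):
    """Guess a1, build initial over-approximations, refine, and verify."""
    for z in set(D):
        A = set(D)                       # wlog b1 = 0, so each ai appears in D
        B = {(z - d) % M for d in D}     # guessing a1 = z gives B = a1 - D
        while True:                      # refine until convergence
            B_new = refine_B(D, A, B)
            A_new = refine_A(D, A, B_new)
            if A_new == A and B_new == B:
                break
            A, B = A_new, B_new
        if len(A) == 6 and len(B) == 6:
            # success test: do A, B reproduce the observed differences?
            if Counter((x - y) % M for x, y in product(A, B)) == \
               Counter(d % M for d in D):
                return sorted(A), sorted(B)
    return None  # no guess at a1 worked
```

Note that the filtering step guarantees A_new⊆A and B_new⊆B, so the sets shrink monotonically and the inner loop is sure to terminate.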
Analysis.
Will this work? Will it eventually converge on A={a1,…,a6} and B={b1,…,b6}, or will it get stuck without completely solving the problem? The best way to find out is probably to test it. However, for your parameters, yes, I expect it will be effective.
If we use the refinement method above, as long as |A|,|B| are not too large, heuristically I expect the sizes of the sets to monotonically shrink. Consider deriving A∗ from A,B. Each difference d suggests |B| values; one of them is correct, and the other |B|−1 can be treated (heuristically) as random numbers. If x is a number that does not appear among the a's, what is the probability that it survives the filtering and is added to A∗? Well, we expect x to be suggested about (|B|−1)×36/251 times in total (on average, with standard deviation about the square root of that). If |B|≤36, the probability that a wrong x survives the filtering should be about p=0.4 or so (using the normal approximation to the binomial, with continuity correction). (The probability is smaller if |B| is smaller; e.g., for |B|=30, I expect p≈0.25.) I expect the size of A∗ to be about p(|A|−6)+6, which strictly improves the over-approximation since it is strictly smaller than |A|. For instance, if |A|=|B|=36, then based upon these heuristics I expect |A∗|≈18, which is a big improvement over |A|.
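If you want to double-check these heuristic numbers, here is a small sanity-check computation (my own, using the same normal approximation with continuity correction):

```python
from math import erf, sqrt

def survival_prob(size_B, modulus=251, num_diffs=36, threshold=6):
    """Normal approximation to P(a wrong value is suggested >= threshold times)."""
    mean = num_diffs * (size_B - 1) / modulus
    z = (threshold - 0.5 - mean) / sqrt(mean)   # continuity correction
    return 0.5 * (1.0 - erf(z / sqrt(2)))       # 1 - Phi(z)

print(survival_prob(36))   # about 0.41
print(survival_prob(30))   # about 0.26
```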
Therefore, I predict that the running time will be very fast. I expect about 3-5 iterations of refinement to suffice for convergence, typically, and about 6 guesses at z should be enough. Each refinement operation involves maybe a few thousand memory reads/writes, and we do that maybe 20-30 times. So, I expect this to be very fast for the parameters you specified. However, the only way to find out for sure is to try it and see if it works well or not.