How to find 5 repeated values in O(n) time?


Suppose I have an array of size n containing integers from 1 to n−5, inclusive, with exactly five repeated. I need to propose an algorithm that can find the repeated numbers in O(n) time. I cannot, for the life of me, think of anything. I think sorting would be O(n log n) at best? Then traversing the array would be O(n), resulting in O(n² log n). However, I'm not really sure whether sorting is necessary, as I've seen some clever things done with linked lists, queues, stacks, etc.

O(n log n) + O(n) is not O(n² log n). It is O(n log n). It would be O(n² log n) if you did the sort n times.
Fund Monica's Lawsuit

@leftaroundabout Those algorithms are O(kn), where n is the size of the array and k is the size of the input set. Since k = n − constant, these algorithms run in O(n²).
Roman Gräf

@RomanGräf It seems the real situation is this: the algorithms run in O(n log k), where k is the size of the domain. So, for a problem like the OP's, you get the same thing whether you use such an algorithm on a domain of size n or a traditional O(n log n) algorithm on a domain of unbounded size. Which also makes sense.
leftaroundabout

For n = 6, the only allowed number is 1, according to your description. But then 1 would have to be repeated six, not five, times.
Alex Reinking



You can create an additional array B of size n. Initially set all of its elements to 0. Then loop over the input array A and increase B[A[i]] by 1 for each i. After that, you simply check the array B: loop over A, and if B[A[i]] > 1 then A[i] is repeated. You solve it in O(n) time at the cost of O(n) memory, and this works because your integers are between 1 and n−5.
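A minimal sketch of this counting approach in Python (the function name and the explicit n parameter are my own):

```python
def find_repeated_counting(A, n):
    # B[v] counts how many times value v (in 1..n-5) occurs in A
    B = [0] * (n + 1)
    for x in A:
        B[x] += 1
    # any value counted more than once is one of the five duplicates
    return sorted(v for v in range(1, n - 4) if B[v] > 1)
```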


The solution in fade2black's answer is the standard one, but it uses O(n) space. You can improve this to O(1) space as follows:

  1. Let the array be A[1],…,A[n]. For d = 1,…,5, compute σ_d = Σ_{i=1}^n A[i]^d.
  2. Compute τ_d = σ_d − Σ_{i=1}^{n−5} i^d (you can use the well-known formulas to compute the latter sum in O(1)). Note that τ_d = m_1^d + ⋯ + m_5^d, where m_1,…,m_5 are the repeated numbers.
  3. Compute the polynomial P(t) = (t−m_1)⋯(t−m_5). The coefficients of this polynomial are symmetric functions of m_1,…,m_5 which can be computed from τ_1,…,τ_5 in O(1).
  4. Find all roots of the polynomial P(t) by trying all n−5 possibilities.

This algorithm assumes the RAM machine model, in which basic arithmetic operations on O(log n)-bit words take O(1) time.
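A sketch of steps 1–4 in Python, assuming each of the five repeated numbers appears exactly twice. The Newton's-identities step that turns the power sums τ_1,…,τ_5 into the coefficients of P(t) is spelled out explicitly; the helper names are mine, and Σ i^d is computed naively rather than by the closed-form formulas:

```python
def find_repeated_by_power_sums(A, n):
    # tau[d] = m_1^d + ... + m_5^d, the power sums of the repeated numbers
    tau = [0] * 6
    for d in range(1, 6):
        tau[d] = sum(a ** d for a in A) - sum(i ** d for i in range(1, n - 4))
    # Newton's identities: e[k] = (1/k) * sum_{i=1..k} (-1)^(i-1) e[k-i] tau[i]
    e = [1] + [0] * 5
    for k in range(1, 6):
        e[k] = sum((-1) ** (i - 1) * e[k - i] * tau[i] for i in range(1, k + 1)) // k
    # P(t) = t^5 - e1 t^4 + e2 t^3 - e3 t^2 + e4 t - e5; try all n-5 candidate roots
    def P(t):
        return t ** 5 - e[1] * t ** 4 + e[2] * t ** 3 - e[3] * t ** 2 + e[4] * t - e[5]
    return sorted(t for t in range(1, n - 4) if P(t) == 0)
```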

Another way to formulate this solution is along the following lines:

  1. Calculate x_1 = Σ_{i=1}^n A[i], and deduce y_1 = m_1 + ⋯ + m_5 using the formula y_1 = x_1 − Σ_{i=1}^{n−5} i.
  2. Calculate x_2 = Σ_{1≤i<j≤n} A[i]A[j] in O(n) using the formula x_2 = (x_1² − Σ_{i=1}^n A[i]²)/2, and deduce y_2 = Σ_{1≤i<j≤5} m_i m_j along similar lines.
  3. Calculate x_3, x_4, x_5 and deduce y_3, y_4, y_5 along similar lines.
  4. The values of y_1,…,y_5 are (up to sign) the coefficients of the polynomial P(t) from the preceding solution.

For general d repeated values, this solution takes O(d²n) time and O(d²) space. It performs O(dn) arithmetic operations on integers of bit-length O(d log n), keeping at most O(d) of these at any given time. (This requires careful analysis of the multiplications we perform, most of which involve one operand of length only O(log n).) It can be improved to O(dn) time and O(d) space using modular arithmetic.

Any interpretation of σ_d and τ_d, P(t), m_i and so on? Why d ∈ {1,2,3,4,5}?
styrofoam fly

The insight behind the solution is the summing trick, which appears in many exercises (for example, how do you find the missing element from an array of length n−1 containing all but one of the numbers 1,…,n?). The summing trick can be used to compute f(m_1) + ⋯ + f(m_5) for an arbitrary function f, and the question is which f to choose in order to be able to deduce m_1,…,m_5. My answer uses familiar tricks from the elementary theory of symmetric functions.
Yuval Filmus

@hoffmale Actually, O(d2).
Yuval Filmus

@hoffmale Each of them takes d machine words.
Yuval Filmus

@BurnsBA The problem with this approach is that (n−5)# is much larger than (n−4)(n−5)/2. Operations on large numbers are slower.
Yuval Filmus


There's also a linear time and constant space algorithm based on partitioning, which may be more flexible if you're trying to apply this to variants of the problem that the mathematical approach doesn't work well on. This requires mutating the underlying array and has worse constant factors than the mathematical approach. More specifically, I believe the costs in terms of the total number of values n and the number of duplicates d are O(nlogd) and O(d) respectively, though proving it rigorously will take more time than I have at the moment.


Start with a list of pairs, where the first pair is the range over the whole array, or [(1,n)] if 1-indexed.

Repeat the following steps until the list is empty:

  1. Take and remove any pair (i,j) from the list.
  2. Find the minimum and maximum, min and max, of the denoted subarray.
  3. If min = max, the subarray consists only of equal elements. Yield its elements except one and skip steps 4 to 6.
  4. If max − min = j − i, the subarray contains no duplicates. Skip steps 5 and 6.
  5. Partition the subarray around ⌊(min + max)/2⌋, such that elements up to some index k are smaller than the separator and elements above that index are not.
  6. Add (i,k) and (k+1,j) to the list.
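The steps above can be sketched in Python as follows (0-indexed and mutating the array; as in the OP's problem, it assumes every value between a subarray's min and max occurs at least once, which is what makes the test in step 4 valid):

```python
def find_duplicates_partition(arr):
    out = []
    stack = [(0, len(arr) - 1)]                  # pairs of inclusive indices
    while stack:
        i, j = stack.pop()                       # step 1: take and remove any pair
        lo, hi = min(arr[i:j + 1]), max(arr[i:j + 1])   # step 2
        if lo == hi:                             # step 3: all elements equal
            out.extend(arr[i + 1:j + 1])         # yield all but one copy
            continue
        if hi - lo == j - i:                     # step 4: no duplicates here
            continue
        mid = (lo + hi) // 2                     # step 5: partition around the midpoint
        k = i - 1
        for t in range(i, j + 1):
            if arr[t] <= mid:
                k += 1
                arr[k], arr[t] = arr[t], arr[k]
        stack.extend([(i, k), (k + 1, j)])       # step 6
    return sorted(out)
```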

Cursory analysis of time complexity.

Steps 1 to 6 take O(j − i) time, since finding the minimum and maximum and partitioning can be done in linear time.

Every pair (i,j) in the list is either the first pair, (1,n), or a child of some pair for which the corresponding subarray contains a duplicate element. There are at most d log₂ n + 1 such parents, since each traversal halves the range in which a duplicate can be, so there are at most 2d log₂ n + 1 total when including pairs over subarrays with no duplicates. At any one time, the size of the list is no more than 2d.

Consider the work to find any one duplicate. This consists of a sequence of pairs over an exponentially decreasing range, so the total work is the sum of the geometric sequence, or O(n). This produces an obvious corollary that the total work for d duplicates must be O(nd), which is linear in n.

To find a tighter bound, consider the worst-case scenario of maximally spread out duplicates. Intuitively, the search takes two phases: one where the full array is being traversed each time, in progressively smaller parts, and one where the parts are smaller than n/d, so only parts of the array are traversed. The first phase can only be log d deep, so it has cost O(n log d), and the second phase has cost O(n) because the total area being searched is again exponentially decreasing.

Thank you for the explanation. Now I understand. A very pretty algorithm!


Leaving this as an answer because it needs more space than a comment gives.

You make a mistake in the OP when you suggest a method. Sorting a list and then traversing it takes O(n log n) time, not O(n² log n) time. When you do two things (that take O(f) and O(g) respectively) sequentially, the resulting time complexity is O(f + g) = O(max(f, g)) (under most circumstances).

Time complexities multiply when you nest loops: if you have a loop of length f and, for each value in the loop, you run a function that takes O(g), then you get O(fg) time.

So, in your case you sort in O(n log n) and then traverse in O(n), resulting in O(n log n + n) = O(n log n). If for each comparison of the sorting algorithm you had to do a computation that takes O(n), then it would take O(n² log n), but that's not the case here.
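For concreteness, the sort-then-traverse approach the OP describes looks like this (a sketch; since each duplicate appears exactly twice, checking adjacent elements after sorting suffices):

```python
def find_duplicates_by_sorting(A):
    A = sorted(A)                 # O(n log n)
    # one O(n) pass: equal neighbours reveal the duplicates
    return [b for a, b in zip(A, A[1:]) if a == b]
```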

In case you're curious about my claim that O(f + g) = O(max(f, g)), it's important to note that that's not always true. But if f ∈ O(g) or g ∈ O(f) (which holds for a whole host of common functions), it will hold. The most common time it doesn't hold is when additional parameters get involved and you get expressions like O(2^c · n + n log n).


There's an obvious in-place variant of the boolean array technique using the order of the elements as the store (where arr[x] == x for "found" elements). Unlike the partition variant that can be justified for being more general I'm unsure when you'd actually need something like this, but it is simple.

for idx from n-4 to n
    while arr[arr[idx]] != arr[idx]
        swap(arr[arr[idx]], arr[idx])

This just repeatedly puts arr[idx] at the location arr[idx] until you find that location already taken, at which point it must be a duplicate. Note that the total number of swaps is bounded by n since each swap makes its exit condition correct.
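A runnable version of this pseudocode in Python (a sketch; it uses a dummy slot at index 0 to keep the 1-based indexing, and works on a copy rather than mutating the input):

```python
def find_duplicates_inplace(arr):
    n = len(arr)
    a = [0] + list(arr)               # 1-based view of the array
    for idx in range(n - 4, n + 1):   # the last five slots
        # send a[idx] "home" until that home already holds its own value
        while a[a[idx]] != a[idx]:
            target = a[idx]
            a[idx], a[target] = a[target], a[idx]
    # every value is <= n-5 < idx, so each of the final slots now holds a duplicate
    return sorted(a[n - 4:])
```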

You're going to have to give some sort of argument that the inner while loop runs in constant time on average. Otherwise, this isn't a linear-time algorithm.
David Richerby

@DavidRicherby It doesn't run in constant time on average, but the outer loop only runs 5 times, so that's fine. Note that the total number of swaps is bounded by n, since each swap makes its exit condition correct, so even if the number of duplicate values increases, the total time is still linear (i.e., it takes O(n) steps rather than O(nd)).

Oops, I somehow didn't notice that the outer loop runs a constant number of times! (Edited to include your note about the number of swaps and also so I could reverse my downvote.)
David Richerby


Subtract the values you have from the sum Σ_{i=1}^n i = n(n+1)/2.

So, after Θ(n) time (assuming arithmetic is O(1), which it isn't really, but let's pretend) you have a sum σ_1 of 5 integers between 1 and n:

σ_1 = x_1 + x_2 + x_3 + x_4 + x_5
Supposedly, this is no good, right? You can't possibly figure out how to break this up into 5 distinct numbers.

Ah, but this is where it gets to be fun! Now do the same thing as before, but subtract the squares of the values from Σ_{i=1}^n i². Now you have:

σ_2 = x_1² + x_2² + x_3² + x_4² + x_5²
See where I'm going with this? Do the same for powers 3, 4 and 5 and you have yourself 5 independent equations in 5 variables. I'm pretty sure you can solve for x.

Caveats: Arithmetic is not really O(1). Also, you need a bit of space to represent your sums; but not as much as you would imagine: you can do most everything modularly, as long as you have, oh, log(5n⁶) bits; that should do it.

Doesn't @YuvalFilmus propose the same solution?

@fade2black: Oh, yes, it does, sorry, I just saw the first line of his solution.


The easiest way to solve the problem is to create an array in which we count the appearances of each number from the original array, and then traverse all numbers from 1 to n−5 and check whether each appears more than once. The complexity of this solution in both memory and time is linear, i.e. O(n).

This is the same as @fade2black's answer (although a bit easier on the eyes).


Map each element A[i] to 1 << A[i] and then XOR everything together. Your duplicates will be the numbers whose corresponding bit is off.
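A sketch of the proposed trick in Python (using an arbitrary-precision integer as the n-bit vector, which is what makes each operation O(n) rather than O(1), as the comments below discuss):

```python
def find_duplicates_xor(A, n):
    acc = 0
    for a in A:
        acc ^= 1 << a                 # map each element to a one-hot bitvector
    # values appearing twice cancel out, so the duplicates' bits end up off
    return [v for v in range(1, n - 4) if not (acc >> v) & 1]
```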

There are five duplicates, each appearing exactly twice, so the XOR trick will not break here as it might in some cases.

The running time of this is O(n²). Each bitvector is n bits long, so each bitvector operation takes O(n) time, and you do one bitvector operation per element of the original array, for a total of O(n²) time.

@D.W. But given that the machines we normally use are fixed at either 32 or 64 bits, and these don't change at run-time (i.e. they're constant), why shouldn't they be treated as such, and why shouldn't we assume that the bit operations are O(1) instead of O(n)?

@ray, I think you answered your own question. Given that the machines we normally use are fixed at 64-bits, the running time to do an operation on a n-bit vector is O(n), not O(1). It takes something like n/64 instructions to do some operation on all n bits of a n-bit vector, and n/64 is O(n), not O(1).

@D.W. What I got out of the prev. comments was that a bit vector referred to a single element in an n-sized array, with the bit vector being 64 bits, which would be the constant I'm referring to. Obviously, processing an array of size n will take O(kn) time, if we assume there are k bits per element and n elements in the array. But k = 64, so an operation on an array element with a constant bit count should be O(1) instead of O(k), and the array O(n) instead of O(kn). Are you keeping the k for the sake of completeness/correctness, or am I missing something else?


from collections import defaultdict

def find_repeated(DATA):
    collated = defaultdict(list)
    for item in DATA:
        collated[item].append(item)
        if len(collated[item]) == 5:
            return item

# n time

Welcome to the site. We're a computer science site, so we're looking for algorithms and explanations, not code dumps that require understanding of a particular language and its libraries. In particular, your claim that this code runs in linear time assumes that collated[item].append(item) runs in constant time. Is that really true?
David Richerby

Also, you are looking for a value which is repeated five times. In contrast, the OP is looking for five values, which are each repeated twice.
Yuval Filmus
Licensed under cc by-sa 3.0 with attribution required.