
Quickly calculate a point of intersection of two arithmetic progressions

Progression A follows this rule:
Each value N_i is the sum of the first i odd numbers.

N_4: 1 + 3 + 5 + 7 = 16

Progression B follows this rule. Take the ceiling of the square root of N; compute that ceiling times 2 plus 1. Subtract N from the ceiling times itself (the ceiling squared). Continue to add successive odd numbers.

N = 33.
Ceiling(√33) = 6.
6*2+1 = 13.
36-33 = 3.
3+13 = 16.

Stop, as 16 is in both progression A and progression B. Is it possible to do this quickly, i.e. with a minimal one- or two-step solution? A Java or general implementation would be handy.
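
For concreteness, here is one way the two rules might be coded as plain term generators; a minimal sketch, assuming 1-based indexing, with termA/termB as purely illustrative names:

// A(i): sum of the first i odd numbers (which works out to i * i).
static long termA(long i) {
    long sum = 0;
    for (long k = 1; k <= i; k++) sum += 2 * k - 1; // k-th odd number is 2k-1
    return sum;
}

// B's j-th continuation term for a given N: start from ceil(sqrt(N))^2 - N,
// then keep adding the odd numbers 2c+1, 2c+3, ... where c = ceil(sqrt(N)).
static long termB(long n, long j) {
    long c = (long) Math.ceil(Math.sqrt(n)); // e.g. ceil(sqrt(33)) = 6
    long value = c * c - n;                  // e.g. 36 - 33 = 3
    long odd = 2 * c + 1;                    // e.g. 6*2+1 = 13
    for (long step = 0; step < j; step++) {
        value += odd;                        // 3 + 13 = 16, then 16 + 15 = 31, ...
        odd += 2;
    }
    return value;
}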

QUESTION

What is the output you desire? Simply a bool saying they do meet? Or do you want the indices at which they meet, i.e. A[4]=16 and B[17]=16? Or do you just want the number at which they meet, i.e. 16? And what if they don't meet exactly? Do you want the indices (or the number) before, or after, the intersection? Finally, when or how do you decide to halt if, say, the two sequences will never meet? (I know in this case they do, but I mean in the general case.)

The output I am expecting would be the value 16, or it could be the index at which B finds the value; both are equivalent, as the index is just the i-th term. If they don't meet, I realize it is a non-terminating program. That scenario I don't care about.

I'll summarize my comments here so it's easier to understand for new visitors.

As others have pointed out, Sequence A is simply the sequence of squares; and as the OP clarified in the comments, Sequence B will vary from one problem instance to the next.

A restatement of the OP's problem might be:

Is there a faster way to determine the first square in an increasing sequence, than computing each term of the sequence?

Indeed, there is. The obvious idea is to devise a way to "skip" the computation of some terms, based on insight about the rates of growth of squares versus the sequence. But it will be hard to programmatically derive such insight for an arbitrary sequence.

A more robust solution might be to reformulate the problem as finding the smallest zero of:

B(x) - x^2 = 0

And for that, there are root-finding algorithms that may help. If you don't need to find the smallest zero, it's even easier: implement any root-finding algorithm, watch it converge to a zero, add x^2 back to compensate for the reformulation, and there you have it.
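
To make that concrete, here is a minimal sketch of one generic root-finder, the secant method, applied to g(x) = B(x) - x^2. The assumption that B can be evaluated at (or interpolated to) real-valued x is mine, and nothing here guarantees finding the smallest zero:

import java.util.function.DoubleUnaryOperator;

// Secant iteration on g(x) = B(x) - x^2; a sketch, not a hardened implementation.
static double secantZero(DoubleUnaryOperator g, double x0, double x1) {
    for (int k = 0; k < 100 && Math.abs(x1 - x0) > 1e-9; k++) {
        double g0 = g.applyAsDouble(x0), g1 = g.applyAsDouble(x1);
        if (g1 == g0) break;                         // flat secant: give up
        double x2 = x1 - g1 * (x1 - x0) / (g1 - g0); // secant update
        x0 = x1;
        x1 = x2;
    }
    return x1; // at a zero, B(x) = x^2 is the candidate meeting value
}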


EDIT

(The comment box was too limited to reply to yours.)

When I said "bisection", I actually meant "binary search". This requires an upper bound, so doesn't really apply to your problem.

Let me offer a naive algorithm, though, as a start, although you've probably already thought of exactly this.

  1. Compute B(1). Say it's 1692 (not a square).
  2. Compute B(2). Say it's 1707 (not a square).
  3. Compute B(2)-B(1), call it the "delta", e.g. 1707-1692, or 15. Consider this a naive estimate of the rate of growth of B. It's almost certainly wrong, of course, but all we're aiming for here is some way to skip terms. That's what's to be optimized, later.
  4. What's the next square greater than 1707? A formula, (floor(sqrt(1707))+1)^2, yields 1764.
  5. How many terms should we skip to try to reach that square? Another formula, (1764-1707)/15, yields 3.8, which we might round to 4.
  6. Compute B(2+4) = B(6). (A sketch of the whole procedure follows after this list.)
    1. If it's smaller than 1764, then you need to keep going. But you've saved, in this case, having to compute 3 terms. Exactly how you choose to keep going is just another choice. You can compute B(7) and go to step 3 (computing B(7)-B(6) as the new delta). You can go directly to step 3 (computing (B(6)-B(2))/4 as the new delta). (You can't really know what's best without characterizing the possible functions for B.)
    2. If it's larger than 1764, then you need to go back. Again, there are many ways; binary search is actually a simple, reasonable one. Compute B(4), since it's directly in between B(2) and B(6). If it's less than 1764, try B(5); if greater, try B(3). If neither matches, carry on starting with B(7). With binary search, you'll do at most log(N) computations.

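Pulling those steps together, here is a hedged sketch in Java. It assumes B is strictly increasing and is supplied as a function of a 1-based index; isSquare, nextSquareAbove, and firstSquareIn are names of my own invention:

import java.util.function.LongUnaryOperator;

// Square tests adequate for moderate magnitudes; not hardened for huge longs.
static boolean isSquare(long v) {
    long r = (long) Math.sqrt((double) v);
    return r * r == v || (r + 1) * (r + 1) == v;
}

static long nextSquareAbove(long v) {
    long r = (long) Math.sqrt((double) v);
    while (r * r <= v) r++; // guard against floating-point rounding
    return r * r;
}

// Skip-ahead search for the first square value of a strictly increasing B.
// Estimates B's growth from the last two sampled terms, jumps toward the
// next square, and binary-searches back on overshoot. Like the linear
// search, it runs forever if B never hits a square.
static long firstSquareIn(LongUnaryOperator b) {
    long i = 1, prev = b.applyAsLong(1);                  // steps 1-2
    if (isSquare(prev)) return prev;
    long j = 2, cur = b.applyAsLong(2);
    while (true) {
        if (isSquare(cur)) return cur;
        long delta = Math.max(1, (cur - prev) / (j - i)); // step 3: naive delta
        long target = nextSquareAbove(cur);               // step 4
        long k = j + Math.max(1, (target - cur) / delta); // step 5: projection
        long val = b.applyAsLong(k);                      // step 6
        if (val > target) {                               // step 6.2: overshot
            long lo = j + 1, hi = k;
            while (lo < hi) {                             // binary search (j, k]
                long mid = (lo + hi) >>> 1;
                if (b.applyAsLong(mid) < target) lo = mid + 1; else hi = mid;
            }
            k = lo;
            val = b.applyAsLong(k);                       // smallest term >= target
        }
        i = j; prev = cur;                                // step 6.1: keep going,
        j = k; cur = val;                                 // deriving a fresh delta
    }
}
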
So that sounds like a good deal, right? You'll either skip a number of computations, or you'll do log(N) at most. (Or, you'll find even better optimizations to this.) But, obviously, it's not that simple, because you're doing extra computations to find these deltas, projections, binary search, etc. Since squares grow very slowly (there's only so many integers between squares), I feel such an algorithm will only beat the "linear search" (computing every term) if you're dealing with large integers, or extremely complex sequences of B (but given that B has to always increase, how complex can a sequence really be?) The key would be to find a characterization that fits all your sequences, and capitalize on that by finding an optimization specific to it.

I still don't know what your application is, but at this point you might as well just try it and benchmark it (versus linear search) over realistic datasets. This would immediately tell you whether there's any practical gain, and whether more time should be invested in optimization. And it'll be faster than trying to do all the theoretical math, characterizing sequences and whatnot.

FYI, your first sequence is simply the squares.

It should be clear that both sequences are monotonically increasing. Therefore, all you need to do is keep an index into each sequence and repeatedly increment whichever index points to a smaller number, until both indices point to the same number.

Note that if the sequences have no numbers in common, this will run forever.

Algorithm, in Java:

int i = 1, j = 1;
int x = func1(i), y = func2(j);        // current term of each sequence
while (x != y) {
    if (x < y) { i++; x = func1(i); }  // A is behind: advance A
    else       { j++; y = func2(j); }  // B is behind: advance B
}
// on exit, x == y is the first common value
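
For instance, a runnable wiring of that loop, with func1 as the squares and func2 as progression B for N = 33 (the closed form (c+j)^2 - N and the indexing with B(1) = 16 are my own simplifications):

import java.util.function.LongUnaryOperator;

public class MeetingPoint {
    public static void main(String[] args) {
        LongUnaryOperator func1 = i -> i * i;             // A(i) = i^2
        LongUnaryOperator func2 = j -> {
            long c = (long) Math.ceil(Math.sqrt(33));     // c = 6
            return (c + j) * (c + j) - 33;                // B(1) = 49 - 33 = 16
        };
        long i = 1, j = 1;
        long x = func1.applyAsLong(i), y = func2.applyAsLong(j);
        while (x != y) {
            if (x < y) x = func1.applyAsLong(++i);        // advance the smaller side
            else       y = func2.applyAsLong(++j);
        }
        System.out.println(x);                            // prints 16
    }
}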

Assuming all that we know is that func1 and func2 are increasing functions, it's difficult to optimize this algorithm further.
