Find the smallest regular number that is not less than N












Regular numbers are numbers that evenly divide powers of 60. As an example, 60^2 = 3600 = 48 × 75, so both 48 and 75 are divisors of a power of 60. Thus, they are also regular numbers.




This is an extension of rounding up to the next power of two.



I have an integer value N, which may contain large prime factors, and I want to round it up to a number composed of only small prime factors (2, 3 and 5).



Examples:




  • f(18) == 18 == 2^1 * 3^2

  • f(19) == 20 == 2^2 * 5^1

  • f(257) == 270 == 2^1 * 3^3 * 5^1


What would be an efficient way to find the smallest number satisfying this requirement?



The values involved may be large, so I would like to avoid enumerating all regular numbers starting from 1 or maintaining an array of all possible values.










  • What have you tried? Did you read the citations in the "Algorithms" section of the Wikipedia article you linked, or the related article on smooth numbers? – Jordan Running, Feb 11 '12 at 18:34

  • @Jordan yes, I am familiar with the lazy functional technique for generating all regular numbers (which could be used as a brute-force solution for my problem.) I also read the part about estimating the number of smooth numbers in a range. Do you think this might be useful here? If so feel free to put it in an answer! – finnw, Feb 11 '12 at 18:39

  • Also known as "Hamming numbers", "ugly numbers" and "5-smooth numbers". Useful for choosing sizes of data to do FFTs on. – endolith, Sep 30 '13 at 19:11


















Tags: algorithm, math, prime-factoring, hamming-numbers, smooth-numbers






8 Answers
Okay, hopefully third time's a charm here. A recursive, branching algorithm for an initial input of p, where N is the number being 'built' within each thread. NB 3a-c here are launched as separate threads or otherwise done (quasi-)asynchronously.




  1. Calculate the next-largest power of 2 after p, call this R. N = p.


  2. Is N > R? Quit this thread. Is p composed of only small prime factors? You're done. Otherwise, go to step 3.



  3. After any of 3a-c, go to step 4.



    a) Round p up to the nearest multiple of 2. This number can be expressed as m * 2.

    b) Round p up to the nearest multiple of 3. This number can be expressed as m * 3.

    c) Round p up to the nearest multiple of 5. This number can be expressed as m * 5.



  4. Go to step 2, with p = m.



I've omitted the bookkeeping needed to keep track of N, but that's fairly straightforward, I take it.
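For what it's worth, here is a minimal Python sketch of the two checks this recipe relies on: the power-of-two bound from step 1 (taken here as the smallest power of two >= p) and the small-prime-factor test from step 2. The helper names are mine, not part of the original recipe.

def next_power_of_2(p):
    # step 1: the smallest power of two that is >= p, used as the bound R
    r = 1
    while r < p:
        r *= 2
    return r

def only_small_factors(p):
    # step 2: is p composed of only the small prime factors 2, 3 and 5?
    for f in (2, 3, 5):
        while p % f == 0:
            p //= f
    return p == 1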



Edit: Forgot 6, thanks ypercube.



Edit 2: Had this up to 30, (5, 6, 10, 15, 30) realized that was unnecessary, took that out.



Edit 3: (The last one I promise!) Added the power-of-30 check, which helps prevent this algorithm from eating up all your RAM.



Edit 4: Changed power-of-30 to power-of-2, per finnw's observation.






  • You forgot m * 6 – ypercubeᵀᴹ, Feb 11 '12 at 18:43

  • In step 1, can you not use the next largest power of 2 instead of 30? – finnw, Feb 11 '12 at 19:40

  • @finnw Yes, you're right. Breaking my promise and editing accordingly. – Matt Phillips, Feb 11 '12 at 19:45

  • Have you implemented this? I have tried to follow how this algorithm will proceed when N=1025; the best solution is 1080 but I don't think it will find it. – finnw, Feb 12 '12 at 15:16

  • This answer works only if one starts with "p = 1" (with some validity checks for input integers less than one), which isn't specified in the text, and it is inefficient compared to the later answers by WillNess and @endolith, which loop over only two of the three variables, as the third one is implied by the other two. – GordonBGood, Aug 25 '16 at 1:27



















One can produce an arbitrarily thin slice of the Hamming sequence around the n-th member in time ~ n^(2/3), by direct enumeration of triples (i,j,k) such that N = 2^i * 3^j * 5^k.



The algorithm works from log2(N) = i + j*log2(3) + k*log2(5): it enumerates all possible ks and, for each, all possible js; it finds the top i and thus the triple (k,j,i), and keeps it in a "band" if it lies inside the given "width" below the given high logarithmic top value (when width < 1 there can be at most one such i); it then sorts them by their logarithms.



Wikipedia says that n ~ (log N)^3, i.e. the run time is ~ (log N)^2. Here we don't care about the exact position of the found triple in the sequence, so all the count calculations from the original code can be thrown away:



import Data.List (sortBy)       -- added: needed for sortBy
import Data.Function (on)       -- added: needed for `on`

slice hi w = sortBy (compare `on` fst) b where       -- hi>log2(N) is a top value
  lb5=logBase 2 5 ; lb3=logBase 2 3                  -- w<1 (NB!) is log2(width)
  b = concat                                         -- the slice
      [ [ (r,(i,j,k)) | frac < w ]                   -- store it, if inside width
        | k <- [ 0 .. floor ( hi   /lb5) ], let p = fromIntegral k*lb5,
          j <- [ 0 .. floor ((hi-p)/lb3) ], let q = fromIntegral j*lb3 + p,
          let (i,frac)=properFraction(hi-q) ; r = hi - frac ]        -- r = i + q
                                         -- properFraction 12.7 == (12, 0.7)

-- update: in pseudocode:
def slice(hi, w):
    lb5, lb3 = logBase(2, 5), logBase(2, 3)   -- logs base 2 of 5 and 3
    for k from 0 step 1 to floor(hi/lb5) inclusive:
        p = k*lb5
        for j from 0 step 1 to floor((hi-p)/lb3) inclusive:
            q = j*lb3 + p
            i = floor(hi-q)
            frac = hi-q-i          -- frac < 1 , always
            r = hi - frac          -- r == i + q
            if frac < w:
                place (r,(i,j,k)) into the output array
    sort the output array's entries by their "r" component
      in ascending order, and return thus sorted array
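For readers who prefer Python to Haskell, here is an unofficial line-by-line rendering of the same band enumeration; the name slice_band and the list/sort details are mine, not part of the original answer.

from math import floor, log2

def slice_band(hi, w):
    # hi > log2(N) is the top value, w < 1 is the log2 width of the band
    lb3, lb5 = log2(3), log2(5)
    band = []
    for k in range(floor(hi / lb5) + 1):
        p = k * lb5
        for j in range(floor((hi - p) / lb3) + 1):
            q = j * lb3 + p
            i = floor(hi - q)
            frac = hi - q - i          # always < 1
            r = hi - frac              # == i + q, the candidate's log2 value
            if frac < w:
                band.append((r, (i, j, k)))
    band.sort()                        # ascending by log2 value
    return band

It mirrors the pseudocode above; only the output container and the sorting call are Python-specific.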


Having enumerated the triples in the slice, it is a simple matter of sorting and searching, taking practically O(1) time (for an arbitrarily thin slice) to find the first triple above N. Well, actually, for a constant (logarithmic) width, the number of entries in the slice (members of the "upper crust" in the (i,j,k)-space below the log(N) plane) is again m ~ n^(2/3) ~ (log N)^2, and sorting takes m log m time (so that searching, even linear, takes ~ m run time then). But the width can be made smaller for bigger Ns, following some empirical observations; and the constant factors for the enumeration of triples are much higher than for the subsequent sorting anyway.



Even with constant (logarithmic) width it runs very fast, calculating the 1,000,000-th value in the Hamming sequence instantly and the billionth in 0.05s.



The original idea of "top band of triples" is due to Louis Klauder, as cited in my post on a DDJ blogs discussion back in 2008.



update: as noted by GordonBGood in the comments, there's no need for the whole band but rather just about one or two values above and below the target. The algorithm is easily amended to that effect. The input should also be tested for being a Hamming number itself before proceeding with the algorithm, to avoid round-off issues with double precision. There are no round-off issues comparing the logarithms of the Hamming numbers known in advance to be different (though going up to a trillionth entry in the sequence uses about 14 significant digits in logarithm values, leaving only 1-2 digits to spare, so the situation may in fact be turning iffy there; but for 1-billionth we only need 11 significant digits).



update2: turns out the Double precision for logarithms limits this to numbers below about 20,000 to 40,000 decimal digits (i.e. 10 trillionth to 100 trillionth Hamming number). If there's a real need for this for such big numbers, the algorithm can be switched back to working with the Integer values themselves instead of their logarithms, which will be slower.






  • I wish I understood Haskell. :/ This looks superficially like the best answer. – endolith, Oct 3 '13 at 20:26

  • @endolith this here is very basic stuff. f x y is a function application, f(x,y). The list comprehension [x | p x] is a list containing one x if p(x) is true, and empty otherwise. The list comprehension [x | k <- [0..10], j <- [k..10], let x=k+j*42] pulls each k from a range from 0 to 10, and for each k it pulls each j from a range from k to 10 (as if there were two nested loops there). properFraction is a built-in; for e.g. 3.14 it returns a pair (3, 0.14). fst (a,b) == a is another built-in. concat concatenates the lists in a given list, to form one list: [[1],[],[3]] --> [1,3]. – Will Ness, Oct 3 '13 at 22:28

  • @endolith lastly, fromIntegral i*x is fromIntegral(i) * x, which is just i*x, where i is a value of some integer-like type and x some floating type. fromIntegral is like a type cast; we aren't allowed to multiply an int and a double directly in Haskell. So the algo goes from log2(N) = i+j*log2(3)+k*log2(5); it enumerates all possible ks and for each, all possible js, finds the top i and thus the triple (k,j,i), and keeps it if inside the given "width" below the given high logarithmic top value (when width < 1 there can be only one such i), then sorts them by their logvals. – Will Ness, Oct 3 '13 at 22:48

  • @endolith running that ideone entry with 11 as its input produces the 11-th number in the Hamming sequence, 1-based. Running it with a character 'a' as input produces the first 20 elements in the sequence: [1,2,3,4,5,6,8,9,10,12,15,16,18,20,24,25,27,30,32,36]. As you can see, the 11-th number there is 15. – Will Ness, May 11 '14 at 7:10

  • What isn't specified here is that we don't need to save a band at all, as we can just check each candidate against the input. Also unspecified is the precision problem of using the log representation: with round-off errors, if the input value is already a regular number, the log-approximation comparison may find that the approximate log is either slightly too high or slightly too low compared with the log approximation of the input answer. To solve this, one needs to keep a couple of values above and a couple below the input value, then as a final step scan for the real-number minimum >= the input. – GordonBGood, Aug 25 '16 at 1:35



















Here's a solution in Python, based on Will Ness's answer but taking some shortcuts and using pure integer math to avoid running into log-space numerical accuracy errors:





import math

def next_regular(target):
    """
    Find the next regular number greater than or equal to target.
    """
    # Check if it's already a power of 2 (or a non-integer)
    try:
        if not (target & (target-1)):
            return target
    except TypeError:
        # Convert floats/decimals for further processing
        target = int(math.ceil(target))

    if target <= 6:
        return target

    match = float('inf')  # Anything found will be smaller
    p5 = 1
    while p5 < target:
        p35 = p5
        while p35 < target:
            # Ceiling integer division, avoiding conversion to float
            # (quotient = ceil(target / p35))
            # From https://stackoverflow.com/a/17511341/125507
            quotient = -(-target // p35)

            # Quickly find next power of 2 >= quotient
            # See https://stackoverflow.com/a/19164783/125507
            try:
                p2 = 2**((quotient - 1).bit_length())
            except AttributeError:
                # Fallback for Python <2.7
                p2 = 2**(len(bin(quotient - 1)) - 2)

            N = p2 * p35
            if N == target:
                return N
            elif N < match:
                match = N
            p35 *= 3
            if p35 == target:
                return p35
        if p35 < match:
            match = p35
        p5 *= 5
        if p5 == target:
            return p5
    if p5 < match:
        match = p5
    return match


In English: iterate through every combination of 5s and 3s, quickly finding the next power of 2 >= target for each pair and keeping the smallest result. (It's a waste of time to iterate through every possible multiple of 2 if only one of them can be correct). It also returns early if it ever finds that the target is already a regular number, though this is not strictly necessary.
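As a quick sanity check, the function reproduces the examples from the question (and the N=1025 case discussed in the comments on the first answer, whose answer is 1080):

for n in (18, 19, 257, 1025):
    print(n, '->', next_regular(n))
# 18 -> 18, 19 -> 20, 257 -> 270, 1025 -> 1080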



I've tested it pretty thoroughly, testing every integer from 0 to 51200000 and comparing to the list on OEIS http://oeis.org/A051037, as well as many large numbers that are ±1 from regular numbers, etc. It's now available in SciPy as fftpack.helper.next_fast_len, to find optimal sizes for FFTs (source code).
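So, assuming a SciPy version that ships it (0.18 or later), the same functionality can be used directly:

from scipy.fftpack import next_fast_len

print(next_fast_len(257))   # 270, matching f(257) == 270 from the question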



I'm not sure if the log method is faster because I couldn't get it to work reliably enough to test it. I think it has a similar number of operations, though? I'm not sure, but this is reasonably fast. It takes <3 seconds (or 0.7 seconds with gmpy) to calculate that 2^142 × 3^80 × 5^444 is the next regular number above 2^2 × 3^454 × 5^249 + 1 (the 100,000,000th regular number, which has 392 digits)






  • You are right that the log method takes about the same number of operations, just much faster as it doesn't have to deal with multi-precision math. The problem with getting it to work is that these are approximations, and especially if the input value is already a regular number, the determined log for it as compared to the generated regular number log may not quite match due to round-off errors. The solution is to add a bit of logic to keep a couple of values just <= and a couple just > for the log comparison, then as a final step convert these to bigint and find the min >= the input value. – GordonBGood, Aug 25 '16 at 1:49

  • @GordonBGood That sounds like a good idea. Want to post an answer? :) – endolith, Aug 25 '16 at 2:20

  • The precision is usually enough to compare quite large Hamming numbers that are known to be different. Here, just pre-test the input whether it's already a regular number or not. – Will Ness, Aug 25 '16 at 7:41

  • Working on an answer; need to estimate the difference in precision between an approximate log determined directly and one calculated through the regular number loops. @WillNess, yes, the precision is adequate to compare very large Hamming numbers (10 million digits or so) as those become very sparse with range, but this needs to be compared to the approximate log of the input value determined by other means (for the input number), which doesn't have the same error terms. – GordonBGood, Aug 26 '16 at 0:11

  • @endolith, please see my answer, which carries this work forward and is faster due to using logarithms to mostly eliminate the bigint operations. – GordonBGood, Nov 25 '18 at 7:20



















You want to find the smallest number m such that m >= N and m = 2^i * 3^j * 5^k, where i,j,k >= 0.



Taking logarithms, the requirement can be rewritten as:


 log m >= log N
 log m  = i*log2 + j*log3 + k*log5


You can calculate log2, log3, log5 and logN to sufficiently high accuracy (depending on the size of N). Then this problem looks like an Integer Linear Programming problem, and you could try to solve it using one of the known algorithms for this NP-hard problem.
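For illustration only, here is roughly what that ILP formulation could look like, using the third-party PuLP package; PuLP, the exponent bounds and the rounding are my additions rather than something this answer prescribes, and the floating-point logs still deserve an exact re-check when N itself is regular.

import math
import pulp   # assumption: the third-party PuLP MILP library is available

def next_regular_ilp(n):
    # Sketch only: minimize log2(m) = i + j*log2(3) + k*log2(5)
    # subject to log2(m) >= log2(n), with i, j, k integer and >= 0.
    L = math.log2(n)
    ub = math.ceil(L) + 1                      # crude upper bound on any one exponent
    lg3, lg5 = math.log2(3), math.log2(5)

    prob = pulp.LpProblem("next_regular", pulp.LpMinimize)
    i = pulp.LpVariable("i", lowBound=0, upBound=ub, cat="Integer")
    j = pulp.LpVariable("j", lowBound=0, upBound=ub, cat="Integer")
    k = pulp.LpVariable("k", lowBound=0, upBound=ub, cat="Integer")

    prob += i + j * lg3 + k * lg5              # objective: minimize log2(m)
    prob += i + j * lg3 + k * lg5 >= L         # constraint: m >= N, in log space
    prob.solve(pulp.PULP_CBC_CMD(msg=False))

    # round-off caveat: if n is itself regular, the float constraint may need
    # an exact integer re-check of the returned candidate
    return (2 ** int(round(i.value()))
            * 3 ** int(round(j.value()))
            * 5 ** int(round(k.value())))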






  • NP-hard? Oh dear – finnw, Feb 11 '12 at 19:24

  • The transformed problem is (in general) NP-hard. This particular instance of the general Integer Programming problem may have nicer solutions. Or the original (non-transformed) number-theory problem may have a nicer algorithm. – ypercubeᵀᴹ, Feb 11 '12 at 19:27



















EDITED/CORRECTED: Corrected the codes to pass the scipy tests:



Here's an answer based on endolith's answer, but almost eliminating long multi-precision integer calculations by using float64 logarithm representations to compare candidate triples, resorting to full-precision comparisons only when there is a chance that the logarithm value may not be accurate enough, which happens only when the target is very close to either the previous or the next regular number:



import math

def next_regular(target):
    """
    Find the next regular number greater than or equal to target,
    returned as an exponent triple (twos, threes, fives).
    """
    if target < 2: return ( 0, 0, 0 )

    log2hi = 0
    mant = 0
    # Check if it's already a power of 2 (or a non-integer)
    try:
        mant = target & (target - 1)
        target = int(target)  # take care of case where not int/float/decimal
    except TypeError:
        # Convert floats/decimals for further processing
        target = int(math.ceil(target))
        mant = target & (target - 1)

    # Quickly find next power of 2 >= target
    # See https://stackoverflow.com/a/19164783/125507
    try:
        log2hi = target.bit_length()
    except AttributeError:
        # Fallback for Python <2.7
        log2hi = len(bin(target)) - 2

    # exit if this is a power of two already...
    if not mant: return ( log2hi - 1, 0, 0 )

    # take care of trivial cases...
    if target < 9:
        if target < 4: return ( 0, 1, 0 )
        elif target < 6: return ( 0, 0, 1 )
        elif target < 7: return ( 1, 1, 0 )
        else: return ( 3, 0, 0 )

    # find log of target, which may exceed the float64 limit...
    if log2hi < 53: mant = target << (53 - log2hi)
    else: mant = target >> (log2hi - 53)
    log2target = log2hi + math.log2(float(mant) / (1 << 53))

    # log2 constants
    log2of2 = 1.0; log2of3 = math.log2(3); log2of5 = math.log2(5)

    # calculate range of log2 values close to target;
    # desired number has a logarithm of log2target <= x <= top...
    fctr = 6 * log2of3 * log2of5
    top = (log2target**3 + 2 * fctr)**(1/3)  # for up to 2 numbers higher
    btm = 2 * log2target - top               # or up to 2 numbers lower

    match = log2hi             # Anything found will be smaller
    result = ( log2hi, 0, 0 )  # placeholder for eventual matches
    count = 0                  # only used for debugging counting band
    fives = 0; fiveslmt = int(math.ceil(top / log2of5))
    while fives < fiveslmt:
        log2p = top - fives * log2of5
        threes = 0; threeslmt = int(math.ceil(log2p / log2of3))
        while threes < threeslmt:
            log2q = log2p - threes * log2of3
            twos = int(math.floor(log2q)); log2this = top - log2q + twos

            if log2this >= btm: count += 1  # only used for counting band
            if log2this >= btm and log2this < match:
                # logarithm precision may not be enough to differentiate between
                # the next lower regular number and the target, so do
                # a full resolution comparison to eliminate this case...
                if (2**twos * 3**threes * 5**fives) >= target:
                    match = log2this; result = ( twos, threes, fives )
            threes += 1
        fives += 1

    return result

print(next_regular(2**2 * 3**454 * 5**249 + 1))  # prints (142, 80, 444)


Since most long multi-precision calculations have been eliminated, gmpy isn't needed, and on IDEOne the above code takes 0.11 seconds instead of 0.48 seconds for endolith's solution to find the next regular number greater than the 100 millionth one as shown; it takes 0.49 seconds instead of 5.48 seconds to find the next regular number past the billionth (the next one is (761,572,489) past (1334,335,404) + 1), and the difference will get even larger as the range goes up, since the multi-precision calculations get increasingly longer for the endolith version compared to almost none here. Thus, this version could calculate the next regular number from the trillionth in the sequence in about 50 seconds on IDEOne, where it would likely take over an hour with the endolith version.



The English description of the algorithm is almost the same as for the endolith version, differing as follows:

  1. it calculates a float log estimate of the target argument (we can't use the built-in log function directly, as the range may be much too large for representation as a 64-bit float),

  2. it compares the log representation values to determine qualifying values inside an estimated range above and below the target value of only about two or three numbers (depending on round-off),

  3. it compares multi-precision values only if within the above defined narrow band,

  4. it outputs the triple indices rather than the full long multi-precision integer (which would be about 840 decimal digits for the one past the billionth, ten times that for the trillionth), which can then easily be converted to the long multi-precision value if required.



This algorithm uses almost no memory other than for the potentially very large multi-precision integer target value, the intermediate evaluation comparison values of about the same size, and the output expansion of the triples if required. This algorithm is an improvement over the endolith version in that it successfully uses the logarithm values for most comparisons in spite of their lack of precision, and that it narrows the band of compared numbers to just a few.



This algorithm will work for argument ranges somewhat above ten trillion (a few minutes' calculation time at IDEOne rates), beyond which it will no longer be correct due to the lack of precision in the log representation values, as per @WillNess's discussion. In order to fix this, we can change the log representation to a "roll-your-own" logarithm representation consisting of a fixed-length integer (124 bits for about double the exponent range, good for targets of over a hundred thousand digits if one is willing to wait); this will be a little slower, since the smallish multi-precision integer operations are slower than float64 operations, but not that much slower since the size is limited (maybe a factor of three or so).



Now, none of these Python implementations (without using C or Cython or PyPy or the like) is particularly fast, as they are about a hundred times slower than the same algorithm implemented in a compiled language. For reference's sake, here is a Haskell version:



{-# OPTIONS_GHC -O3 #-}

import Data.Word
import Data.Bits

nextRegular :: Integer -> ( Word32, Word32, Word32 )
nextRegular target
  | target < 2                   = ( 0, 0, 0 )
  | target .&. (target - 1) == 0 = ( fromIntegral lg2hi - 1, 0, 0 )
  | target < 9                   = case target of
                                     3 -> ( 0, 1, 0 )
                                     5 -> ( 0, 0, 1 )
                                     6 -> ( 1, 1, 0 )
                                     _ -> ( 3, 0, 0 )
  | otherwise                    = match
 where
  lg3 = logBase 2 3 :: Double; lg5 = logBase 2 5 :: Double
  lg2hi = let cntplcs v cnt =
                let nv = v `shiftR` 31 in
                if nv <= 0 then
                  let cntbts x c =
                        if x <= 0 then c else
                        case c + 1 of
                          nc -> nc `seq` cntbts (x `shiftR` 1) nc in
                  cntbts (fromIntegral v :: Word32) cnt
                else case cnt + 31 of ncnt -> ncnt `seq` cntplcs nv ncnt
          in cntplcs target 0
  lg2tgt = let mant = if lg2hi <= 53 then target `shiftL` (53 - lg2hi)
                      else target `shiftR` (lg2hi - 53)
           in fromIntegral lg2hi +
                logBase 2 (fromIntegral mant / 2^53 :: Double)
  lg2top = (lg2tgt^3 + 2 * 6 * lg3 * lg5)**(1/3) -- for 2 numbers or so higher
  lg2btm = 2* lg2tgt - lg2top                    -- or two numbers or so lower
  match =
    let klmt = floor (lg2top / lg5)
        loopk k mtchlgk mtchtplk =
          if k > klmt then mtchtplk else
          let p = lg2top - fromIntegral k * lg5
              jlmt = fromIntegral $ floor (p / lg3)
              loopj j mtchlgj mtchtplj =
                if j > jlmt then loopk (k + 1) mtchlgj mtchtplj else
                let q = p - fromIntegral j * lg3
                    ( i, frac ) = properFraction q; r = lg2top - frac
                    ( nmtchlg, nmtchtpl ) =
                      if r < lg2btm || r >= mtchlgj then
                        ( mtchlgj, mtchtplj ) else
                      if 2^i * 3^j * 5^k >= target then
                        ( r, ( i, j, k ) ) else ( mtchlgj, mtchtplj )
                in nmtchlg `seq` nmtchtpl `seq` loopj (j + 1) nmtchlg nmtchtpl
          in loopj 0 mtchlgk mtchtplk
    in loopk 0 (fromIntegral lg2hi) ( fromIntegral lg2hi, 0, 0 )


trival :: ( Word32, Word32, Word32 ) -> Integer
trival (i,j,k) = 2^i * 3^j * 5^k

main = putStrLn $ show $ nextRegular $ (trival (1334,335,404)) + 1 -- (1126,16930,40)


This code calculates the next regular number following the billionth in too small a time to be measured and following the trillionth in 0.69 seconds on IDEOne (and potentially could run even faster except that IDEOne doesn't support LLVM). Even Julia will run at something like this Haskell speed after the "warm-up" for JIT compilation.



EDIT_ADD: The Julia code is as per the following:



function nextregular(target :: BigInt) :: Tuple{ UInt32, UInt32, UInt32 }
    # trivial case of first value or anything less...
    target < 2 && return ( 0, 0, 0 )

    # Check if it's already a power of 2 (or a non-integer)
    mant = target & (target - 1)

    # Quickly find next power of 2 >= target
    log2hi :: UInt32 = 0
    test = target
    while true
        next = test & 0x7FFFFFFF
        test >>>= 31; log2hi += 31
        test <= 0 && (log2hi -= leading_zeros(UInt32(next)) - 1; break)
    end

    # exit if this is a power of two already...
    mant == 0 && return ( log2hi - 1, 0, 0 )

    # take care of trivial cases...
    if target < 9
        target < 4 && return ( 0, 1, 0 )
        target < 6 && return ( 0, 0, 1 )
        target < 7 && return ( 1, 1, 0 )
        return ( 3, 0, 0 )
    end

    # find log of target, which may exceed the Float64 limit...
    if log2hi < 53 mant = target << (53 - log2hi)
    else mant = target >>> (log2hi - 53) end
    log2target = log2hi + log(2, Float64(mant) / (1 << 53))

    # log2 constants
    log2of2 = 1.0; log2of3 = log(2, 3); log2of5 = log(2, 5)

    # calculate range of log2 values close to target;
    # desired number has a logarithm of log2target <= x <= top...
    fctr = 6 * log2of3 * log2of5
    top = (log2target^3 + 2 * fctr)^(1/3) # for 2 numbers or so higher
    btm = 2 * log2target - top            # or 2 numbers or so lower

    # scan for values in the given narrow range that satisfy the criteria...
    match = log2hi # Anything found will be smaller
    result :: Tuple{UInt32,UInt32,UInt32} = ( log2hi, 0, 0 ) # placeholder for eventual matches
    fives :: UInt32 = 0; fiveslmt = UInt32(ceil(top / log2of5))
    while fives < fiveslmt
        log2p = top - fives * log2of5
        threes :: UInt32 = 0; threeslmt = UInt32(ceil(log2p / log2of3))
        while threes < threeslmt
            log2q = log2p - threes * log2of3
            twos = UInt32(floor(log2q)); log2this = top - log2q + twos

            if log2this >= btm && log2this < match
                # logarithm precision may not be enough to differentiate between
                # the next lower regular number and the target, so do
                # a full resolution comparison to eliminate this case...
                if (big(2)^twos * big(3)^threes * big(5)^fives) >= target
                    match = log2this; result = ( twos, threes, fives )
                end
            end
            threes += 1
        end
        fives += 1
    end
    result
end





  • Cool. Did you run all the tests on it? github.com/scipy/scipy/blob/master/scipy/fftpack/tests/… Is it faster for small numbers (~10000) or just for huge ones? – endolith, Nov 25 '18 at 19:47

  • @endolith, it isn't faster (not much different) for small arguments, but increasingly faster for larger arguments. Thanks for the link to the tests; I used them to find a couple of problems in the code, which have now been corrected. The current corrected code passes all of the provided tests. – GordonBGood, Nov 26 '18 at 6:59



















Here's another possibility I just thought of:



If N is X bits long, then the smallest regular number R >= N will be in the range [2^(X-1), 2^X]



e.g. if N = 257 (binary 100000001) then we know R is 1xxxxxxxx unless R is exactly equal to the next power of 2 (512)



To generate all the regular numbers in this range, we can generate the odd regular numbers (i.e. multiples of powers of 3 and 5) first, then take each value and multiply by 2 (by bit-shifting) as many times as necessary to bring it into this range.



In Python:



from itertools import chain, ifilter, takewhile
from Queue import PriorityQueue

def nextPowerOf2(n):
    p = max(1, n)
    while p != (p & -p):
        p += p & -p
    return p

# Generate multiples of powers of 3, 5
def oddRegulars():
    q = PriorityQueue()
    q.put(1)
    prev = None
    while not q.empty():
        n = q.get()
        if n != prev:
            prev = n
            yield n
            if n % 3 == 0:
                q.put(n // 3 * 5)
            q.put(n * 3)

# Generate regular numbers with the same number of bits as n
def regularsCloseTo(n):
    p = nextPowerOf2(n)
    numBits = len(bin(n))
    for i in takewhile(lambda x: x <= p, oddRegulars()):
        yield i << max(0, numBits - len(bin(i)))

def nextRegular(n):
    bigEnough = ifilter(lambda x: x >= n, regularsCloseTo(n))
    # include the next power of two itself as a candidate, otherwise the
    # filtered sequence can be empty (e.g. n = 7, 31, 61, ...)
    return min(chain([nextPowerOf2(n)], bigEnough))





  • hi, I've added a new answer here that shows how to directly enumerate (i,j,k) triples in the narrow vicinity of log(N), which is very fast. – Will Ness, Aug 20 '12 at 17:19

  • actually, this is pretty close in its intention to what I posted, just differs in implementation. :) – Will Ness, Aug 22 '12 at 0:21

  • This fails for nextRegular(7), nextRegular(31), nextRegular(61), etc. with ValueError: min() arg is an empty sequence – endolith, Oct 6 '13 at 2:11



















You know what? I'll put money on the proposition that actually, the 'dumb' algorithm is fastest. This is based on the observation that the next regular number does not, in general, seem to be much larger than the given input. So simply start counting up, and after each increment, factor the number again and see if you've found a regular number. But create one processing thread for each available core you have, and for N cores have each thread examine every Nth number. When each thread has found a number or crossed the power-of-2 threshold, compare the results (keep a running best number) and there you are.
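For concreteness, here is a single-threaded Python sketch of this count-up-and-factor idea (the per-core striding is left out; the function names are mine):

def is_regular(m):
    # strip all factors of 2, 3 and 5; m is regular iff nothing else remains
    for p in (2, 3, 5):
        while m % p == 0:
            m //= p
    return m == 1

def next_regular_naive(n):
    # count upward from n, factoring each candidate as we go; the next power
    # of two is always regular, so this is guaranteed to terminate
    m = n
    while not is_regular(m):
        m += 1
    return m

# e.g. next_regular_naive(257) -> 270, while next_regular_naive(48000001) -> 48600000
# only after checking roughly 600,000 candidates (see the comments below)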






  • Yes it would be interesting to benchmark it. I expect you are right if we are talking about small numbers (< 10000 or so.) But as the numbers get larger, so do the distances between regular numbers. An extreme example is 48000001 (the next regular number is 48600000, and it will take 2.8M divisions to find it.) – finnw, Feb 12 '12 at 16:32

  • hi, I've added a new answer here that shows how to directly enumerate (i,j,k) triples in the narrow vicinity of log(N), which is very fast. – Will Ness, Aug 20 '12 at 17:19

  • "This is based on the observation that the next regular number does not, in general, seem to be much larger than the given input." I don't think that's a good assumption. They get farther and farther apart as you go up. The next regular number above 50000001 is 50331648, and that's only the 995th number. I suspect generating the list of regular numbers until you get one above your target is faster. – endolith, Sep 30 '13 at 19:28

  • I tested an "iterate and factor" algorithm vs a "generate list until you go over" algorithm. The factoring algorithm is faster for small numbers, but gets far slower for large numbers. 854296876 → 859963392 takes 26 ms vs 18 seconds with the factoring method. – endolith, Oct 1 '13 at 1:24

  • in fact, the nth Hamming number's magnitude is M(n) ~ exp(n**(1/3)), so they grow exponentially more and more apart as n grows. – Will Ness, Dec 6 '18 at 9:09



















I wrote a small C# program to solve this problem. It's not very optimised, but it's a start.
This solution is pretty fast for numbers as big as 11 digits.



private long GetRegularNumber(long n)
{
    long result = n - 1;
    long quotient = result;

    while (quotient > 1)
    {
        result++;
        quotient = result;

        quotient = RemoveFactor(quotient, 2);
        quotient = RemoveFactor(quotient, 3);
        quotient = RemoveFactor(quotient, 5);
    }

    return result;
}

private static long RemoveFactor(long dividend, long divisor)
{
    long remainder = 0;
    long quotient = dividend;
    while (remainder == 0)
    {
        dividend = quotient;
        quotient = Math.DivRem(dividend, divisor, out remainder);
    }
    return dividend;
}





  • Is C# the right language for this? Won't it be slower, particularly in iterations, than C or C++? – Matt Phillips, Feb 11 '12 at 19:48

  • I'm pretty sure any programmer can rewrite this in C/C++ fairly easily. C# was the quickest way for me to test my idea. – david.s, Feb 11 '12 at 19:52

  • N_i ~ exp( i^(1/3) ), i.e. gaps between Hamming numbers grow exponentially. So your approach will experience a very pronounced slowdown as the numbers grow in magnitude, so it seems. 11 digits is not very big. – Will Ness, Aug 21 '12 at 6:44











Your Answer






StackExchange.ifUsing("editor", function () {
StackExchange.using("externalEditor", function () {
StackExchange.using("snippets", function () {
StackExchange.snippets.init();
});
});
}, "code-snippets");

StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "1"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});

function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});


}
});














draft saved

draft discarded


















StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstackoverflow.com%2fquestions%2f9242733%2ffind-the-smallest-regular-number-that-is-not-less-than-n%23new-answer', 'question_page');
}
);

Post as a guest















Required, but never shown

























8 Answers
8






active

oldest

votes








8 Answers
8






active

oldest

votes









active

oldest

votes






active

oldest

votes









5














Okay, hopefully third time's a charm here. A recursive, branching algorithm for an initial input of p, where N is the number being 'built' within each thread. NB 3a-c here are launched as separate threads or otherwise done (quasi-)asynchronously.




  1. Calculate the next-largest power of 2 after p, call this R. N = p.


  2. Is N > R? Quit this thread. Is p composed of only small prime factors? You're done. Otherwise, go to step 3.



  3. After any of 3a-c, go to step 4.



    a) Round p up to the nearest multiple of 2. This number can be expressed as m * 2.

    b) Round p up to the nearest multiple of 3. This number can be expressed as m * 3.

    c) Round p up to the nearest multiple of 5. This number can be expressed as m * 5.



  4. Go to step 2, with p = m.



I've omitted the bookkeeping to do regarding keeping track of N but that's fairly straightforward I take it.



Edit: Forgot 6, thanks ypercube.



Edit 2: Had this up to 30, (5, 6, 10, 15, 30) realized that was unnecessary, took that out.



Edit 3: (The last one I promise!) Added the power-of-30 check, which helps prevent this algorithm from eating up all your RAM.



Edit 4: Changed power-of-30 to power-of-2, per finnw's observation.






share|improve this answer





















  • 1





    You forgot m * 6

    – ypercubeᵀᴹ
    Feb 11 '12 at 18:43






  • 2





    In step 1, can you not use the next largest power of 2 instead of 30?

    – finnw
    Feb 11 '12 at 19:40











  • @finnw Yes, you're right. Breaking my promise and editing accordingly.

    – Matt Phillips
    Feb 11 '12 at 19:45











  • Have you implemented this? I have tried to follow how this algorithm will proceed when N=1025; The best solution is 1080 but I don't think it will find it.

    – finnw
    Feb 12 '12 at 15:16






  • 1





    This answer works only if one starts with "p = 1" (with some validity checks for input integers less than one) which isn't specified in the text, and is inefficient as compared to later answers by WillNess and @endolith, which only loop by two of the three variables, as the third one is implied by the other two.

    – GordonBGood
    Aug 25 '16 at 1:27
















5














Okay, hopefully third time's a charm here. A recursive, branching algorithm for an initial input of p, where N is the number being 'built' within each thread. NB 3a-c here are launched as separate threads or otherwise done (quasi-)asynchronously.




  1. Calculate the next-largest power of 2 after p, call this R. N = p.


  2. Is N > R? Quit this thread. Is p composed of only small prime factors? You're done. Otherwise, go to step 3.



  3. After any of 3a-c, go to step 4.



    a) Round p up to the nearest multiple of 2. This number can be expressed as m * 2.

    b) Round p up to the nearest multiple of 3. This number can be expressed as m * 3.

    c) Round p up to the nearest multiple of 5. This number can be expressed as m * 5.



  4. Go to step 2, with p = m.



I've omitted the bookkeeping to do regarding keeping track of N but that's fairly straightforward I take it.



Edit: Forgot 6, thanks ypercube.



Edit 2: Had this up to 30, (5, 6, 10, 15, 30) realized that was unnecessary, took that out.



Edit 3: (The last one I promise!) Added the power-of-30 check, which helps prevent this algorithm from eating up all your RAM.



Edit 4: Changed power-of-30 to power-of-2, per finnw's observation.






share|improve this answer





















  • 1





    You forgot m * 6

    – ypercubeᵀᴹ
    Feb 11 '12 at 18:43






  • 2





    In step 1, can you not use the next largest power of 2 instead of 30?

    – finnw
    Feb 11 '12 at 19:40











  • @finnw Yes, you're right. Breaking my promise and editing accordingly.

    – Matt Phillips
    Feb 11 '12 at 19:45











  • Have you implemented this? I have tried to follow how this algorithm will proceed when N=1025; The best solution is 1080 but I don't think it will find it.

    – finnw
    Feb 12 '12 at 15:16






  • 1





    This answer works only if one starts with "p = 1" (with some validity checks for input integers less than one) which isn't specified in the text, and is inefficient as compared to later answers by WillNess and @endolith, which only loop by two of the three variables, as the third one is implied by the other two.

    – GordonBGood
    Aug 25 '16 at 1:27














5












5








5







Okay, hopefully third time's a charm here. A recursive, branching algorithm for an initial input of p, where N is the number being 'built' within each thread. NB 3a-c here are launched as separate threads or otherwise done (quasi-)asynchronously.




  1. Calculate the next-largest power of 2 after p, call this R. N = p.


  2. Is N > R? Quit this thread. Is p composed of only small prime factors? You're done. Otherwise, go to step 3.



  3. After any of 3a-c, go to step 4.



    a) Round p up to the nearest multiple of 2. This number can be expressed as m * 2.

    b) Round p up to the nearest multiple of 3. This number can be expressed as m * 3.

    c) Round p up to the nearest multiple of 5. This number can be expressed as m * 5.



  4. Go to step 2, with p = m.



I've omitted the bookkeeping to do regarding keeping track of N but that's fairly straightforward I take it.



Edit: Forgot 6, thanks ypercube.



Edit 2: Had this up to 30, (5, 6, 10, 15, 30) realized that was unnecessary, took that out.



Edit 3: (The last one I promise!) Added the power-of-30 check, which helps prevent this algorithm from eating up all your RAM.



Edit 4: Changed power-of-30 to power-of-2, per finnw's observation.






share|improve this answer















Okay, hopefully third time's a charm here. A recursive, branching algorithm for an initial input of p, where N is the number being 'built' within each thread. NB 3a-c here are launched as separate threads or otherwise done (quasi-)asynchronously.




  1. Calculate the next-largest power of 2 after p, call this R. N = p.


  2. Is N > R? Quit this thread. Is p composed of only small prime factors? You're done. Otherwise, go to step 3.



  3. After any of 3a-c, go to step 4.



    a) Round p up to the nearest multiple of 2. This number can be expressed as m * 2.

    b) Round p up to the nearest multiple of 3. This number can be expressed as m * 3.

    c) Round p up to the nearest multiple of 5. This number can be expressed as m * 5.



  4. Go to step 2, with p = m.



I've omitted the bookkeeping to do regarding keeping track of N but that's fairly straightforward I take it.



Edit: Forgot 6, thanks ypercube.



Edit 2: Had this up to 30, (5, 6, 10, 15, 30) realized that was unnecessary, took that out.



Edit 3: (The last one I promise!) Added the power-of-30 check, which helps prevent this algorithm from eating up all your RAM.



Edit 4: Changed power-of-30 to power-of-2, per finnw's observation.







share|improve this answer














share|improve this answer



share|improve this answer








edited Feb 11 '12 at 19:46

























answered Feb 11 '12 at 18:40









Matt PhillipsMatt Phillips

6,79353562




6,79353562








  • 1





    You forgot m * 6

    – ypercubeᵀᴹ
    Feb 11 '12 at 18:43






  • 2





    In step 1, can you not use the next largest power of 2 instead of 30?

    – finnw
    Feb 11 '12 at 19:40











  • @finnw Yes, you're right. Breaking my promise and editing accordingly.

    – Matt Phillips
    Feb 11 '12 at 19:45











  • Have you implemented this? I have tried to follow how this algorithm will proceed when N=1025; The best solution is 1080 but I don't think it will find it.

    – finnw
    Feb 12 '12 at 15:16






  • 1





    This answer works only if one starts with "p = 1" (with some validity checks for input integers less than one) which isn't specified in the text, and is inefficient as compared to later answers by WillNess and @endolith, which only loop by two of the three variables, as the third one is implied by the other two.

    – GordonBGood
    Aug 25 '16 at 1:27














  • 1





    You forgot m * 6

    – ypercubeᵀᴹ
    Feb 11 '12 at 18:43






  • 2





    In step 1, can you not use the next largest power of 2 instead of 30?

    – finnw
    Feb 11 '12 at 19:40











  • @finnw Yes, you're right. Breaking my promise and editing accordingly.

    – Matt Phillips
    Feb 11 '12 at 19:45











  • Have you implemented this? I have tried to follow how this algorithm will proceed when N=1025; The best solution is 1080 but I don't think it will find it.

    – finnw
    Feb 12 '12 at 15:16






  • 1





    This answer works only if one starts with "p = 1" (with some validity checks for input integers less than one) which isn't specified in the text, and is inefficient as compared to later answers by WillNess and @endolith, which only loop by two of the three variables, as the third one is implied by the other two.

    – GordonBGood
    Aug 25 '16 at 1:27








1




1





You forgot m * 6

– ypercubeᵀᴹ
Feb 11 '12 at 18:43





You forgot m * 6

– ypercubeᵀᴹ
Feb 11 '12 at 18:43




2




2





In step 1, can you not use the next largest power of 2 instead of 30?

– finnw
Feb 11 '12 at 19:40





In step 1, can you not use the next largest power of 2 instead of 30?

– finnw
Feb 11 '12 at 19:40













@finnw Yes, you're right. Breaking my promise and editing accordingly.

– Matt Phillips
Feb 11 '12 at 19:45





@finnw Yes, you're right. Breaking my promise and editing accordingly.

– Matt Phillips
Feb 11 '12 at 19:45













Have you implemented this? I have tried to follow how this algorithm will proceed when N=1025; The best solution is 1080 but I don't think it will find it.

– finnw
Feb 12 '12 at 15:16





Have you implemented this? I have tried to follow how this algorithm will proceed when N=1025; The best solution is 1080 but I don't think it will find it.

– finnw
Feb 12 '12 at 15:16




1




1





This answer works only if one starts with "p = 1" (with some validity checks for input integers less than one) which isn't specified in the text, and is inefficient as compared to later answers by WillNess and @endolith, which only loop by two of the three variables, as the third one is implied by the other two.

– GordonBGood
Aug 25 '16 at 1:27





This answer works only if one starts with "p = 1" (with some validity checks for input integers less than one) which isn't specified in the text, and is inefficient as compared to later answers by WillNess and @endolith, which only loop by two of the three variables, as the third one is implied by the other two.

– GordonBGood
Aug 25 '16 at 1:27













5














One can produce arbitrarily thin a slice of the Hamming sequence around the n-th member in time ~ n^(2/3) by direct enumeration of triples (i,j,k) such that N = 2^i * 3^j * 5^k.



The algorithm works from log2(N) = i+j*log2(3)+k*log2(5); enumerates all possible ks and for each, all possible js, finds the top i and thus the triple (k,j,i) and keeps it in a "band" if inside the given "width" below the given high logarithmic top value (when width < 1 there can be at most one such i) then sorts them by their logarithms.



WP says that n ~ (log N)^3, i.e. run time ~ (log N)^2. Here we don't care for the exact position of the found triple in the sequence, so all the count calculations from the original code can be thrown away:



slice hi w = sortBy (compare `on` fst) b where       -- hi>log2(N) is a top value
lb5=logBase 2 5 ; lb3=logBase 2 3 -- w<1 (NB!) is log2(width)
b = concat -- the slice
[ [ (r,(i,j,k)) | frac < w ] -- store it, if inside width
| k <- [ 0 .. floor ( hi /lb5) ], let p = fromIntegral k*lb5,
j <- [ 0 .. floor ((hi-p)/lb3) ], let q = fromIntegral j*lb3 + p,
let (i,frac)=properFraction(hi-q) ; r = hi - frac ] -- r = i + q
-- properFraction 12.7 == (12, 0.7)

-- update: in pseudocode:
def slice(hi, w):
lb5, lb3 = logBase(2, 5), logBase(2, 3) -- logs base 2 of 5 and 3
for k from 0 step 1 to floor(hi/lb5) inclusive:
p = k*lb5
for j from 0 step 1 to floor((hi-p)/lb3) inclusive:
q = j*lb3 + p
i = floor(hi-q)
frac = hi-q-i -- frac < 1 , always
r = hi - frac -- r == i + q
if frac < w:
place (r,(i,j,k)) into the output array
sort the output array's entries by their "r" component
in ascending order, and return thus sorted array


Having enumerated the triples in the slice, it is a simple matter of sorting and searching, taking practically O(1) time (for arbitrarily thin a slice) to find the first triple above N. Well, actually, for constant width (logarithmic), the amount of numbers in the slice (members of the "upper crust" in the (i,j,k)-space below the log(N) plane) is again m ~ n^2/3 ~ (log N)^2 and sorting takes m log m time (so that searching, even linear, takes ~ m run time then). But the width can be made smaller for bigger Ns, following some empirical observations; and constant factors for the enumeration of triples are much higher than for the subsequent sorting anyway.



Even with constant width (logarthmic) it runs very fast, calculating the 1,000,000-th value in the Hamming sequence instantly and the billionth in 0.05s.



The original idea of "top band of triples" is due to Louis Klauder, as cited in my post on a DDJ blogs discussion back in 2008.



update: as noted by GordonBGood in the comments, there's no need for the whole band but rather just about one or two values above and below the target. The algorithm is easily amended to that effect. The input should also be tested for being a Hamming number itself before proceeding with the algorithm, to avoid round-off issues with double precision. There are no round-off issues comparing the logarithms of the Hamming numbers known in advance to be different (though going up to a trillionth entry in the sequence uses about 14 significant digits in logarithm values, leaving only 1-2 digits to spare, so the situation may in fact be turning iffy there; but for 1-billionth we only need 11 significant digits).



update2: turns out the Double precision for logarithms limits this to numbers below about 20,000 to 40,000 decimal digits (i.e. 10 trillionth to 100 trillionth Hamming number). If there's a real need for this for such big numbers, the algorithm can be switched back to working with the Integer values themselves instead of their logarithms, which will be slower.






answered Aug 20 '12 at 16:51, edited Dec 5 '18 at 9:11 – Will Ness













  • I wish I understood Haskell. :/ This looks superficially like the best answer.

    – endolith
    Oct 3 '13 at 20:26

  • @endolith this here is very basic stuff. f x y is a function application, f(x,y). The list comprehension [x | p x] is a list containing one x if p(x) is true, and empty otherwise. The list comprehension [x | k <- [0..10], j <- [k..10], let x=k+j*42] pulls each k from the range 0 to 10, and for each k it pulls each j from the range k to 10 (as if there were two nested loops there). properFraction is a built-in; e.g. for 3.14 it returns the pair (3,0.14). fst (a,b) == a is another built-in. concat concatenates the lists in a given list to form one list: [[1],[],[3]] --> [1,3].

    – Will Ness
    Oct 3 '13 at 22:28

  • @endolith lastly, fromIntegral i*x is fromIntegral(i) * x, which is just i*x, where i is a value of some integer-like type and x some floating type. fromIntegral is like a type cast; we aren't allowed to multiply an int and a double directly in Haskell. So the algo works from log2(N) = i+j*log2(3)+k*log2(5); it enumerates all possible ks and for each, all possible js, finds the top i and thus the triple (k,j,i), keeps it if it is inside the given "width" below the given high logarithmic top value (when width < 1 there can be only one such i), then sorts them by their log values.

    – Will Ness
    Oct 3 '13 at 22:48

  • @endolith running that ideone entry with 11 as its input produces the 11th number in the Hamming sequence, 1-based. Running it with a character 'a' as input produces the first 20 elements of the sequence: [1,2,3,4,5,6,8,9,10,12,15,16,18,20,24,25,27,30,32,36]. As you can see, the 11th number there is 15.

    – Will Ness
    May 11 '14 at 7:10

  • What isn't specified here is that we don't need to save a band at all, as we can just check each one directly against the input. Also unspecified is the precision problem of using the log representation: with round-off errors, if the input value is already a regular number, the log-approximation comparison may find the approximate log to be either slightly higher or slightly lower than the log approximation of the input value. To solve this, one needs to keep a couple of values above and a couple below the input value, then as a final step scan for the real-number minimum >= the input.

    – GordonBGood
    Aug 25 '16 at 1:35
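
For reference, the first 20 elements quoted in the comment above can be reproduced with a few lines of Python (this snippet is mine, unrelated to the ideone entry being discussed):

import heapq

def hamming(count):
    # generate the first `count` regular numbers with a small heap
    seen, heap, out = {1}, [1], []
    while len(out) < count:
        n = heapq.heappop(heap)
        out.append(n)
        for p in (2, 3, 5):
            if n * p not in seen:
                seen.add(n * p)
                heapq.heappush(heap, n * p)
    return out

# hamming(20) == [1,2,3,4,5,6,8,9,10,12,15,16,18,20,24,25,27,30,32,36]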



















Here's a solution in Python, based on Will Ness's answer but taking some shortcuts and using pure integer math to avoid running into log-space numerical accuracy errors:





import math

def next_regular(target):
    """
    Find the next regular number greater than or equal to target.
    """
    # Check if it's already a power of 2 (or a non-integer)
    try:
        if not (target & (target-1)):
            return target
    except TypeError:
        # Convert floats/decimals for further processing
        target = int(math.ceil(target))

    if target <= 6:
        return target

    match = float('inf')  # Anything found will be smaller
    p5 = 1
    while p5 < target:
        p35 = p5
        while p35 < target:
            # Ceiling integer division, avoiding conversion to float
            # (quotient = ceil(target / p35))
            # From https://stackoverflow.com/a/17511341/125507
            quotient = -(-target // p35)

            # Quickly find next power of 2 >= quotient
            # See https://stackoverflow.com/a/19164783/125507
            try:
                p2 = 2**((quotient - 1).bit_length())
            except AttributeError:
                # Fallback for Python <2.7
                p2 = 2**(len(bin(quotient - 1)) - 2)

            N = p2 * p35
            if N == target:
                return N
            elif N < match:
                match = N
            p35 *= 3
            if p35 == target:
                return p35
        if p35 < match:
            match = p35
        p5 *= 5
        if p5 == target:
            return p5
    if p5 < match:
        match = p5
    return match


In English: iterate through every combination of 5s and 3s, quickly finding, for each pair, the single power of 2 that brings the product up to >= target, and keeping the smallest result. (It's a waste of time to iterate through every possible power of 2 when only one of them can be correct.) It also returns early if it ever finds that the target is already a regular number, though this is not strictly necessary.



I've tested it pretty thoroughly, testing every integer from 0 to 51200000 and comparing to the list on OEIS http://oeis.org/A051037, as well as many large numbers that are ±1 from regular numbers, etc. It's now available in SciPy as fftpack.helper.next_fast_len, to find optimal sizes for FFTs (source code).
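
For a quick sanity check, a few small cases (values I verified against that OEIS list; this snippet is mine, not part of the original test run):

>>> next_regular(7)
8
>>> next_regular(36)    # 36 = 2**2 * 3**2 is already regular
36
>>> next_regular(97)
100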



I'm not sure if the log method is faster because I couldn't get it to work reliably enough to test it. I think it has a similar number of operations, though. I'm not sure, but this is reasonably fast: it takes <3 seconds (or 0.7 seconds with gmpy) to calculate that 2^142 × 3^80 × 5^444 is the next regular number above 2^2 × 3^454 × 5^249 + 1 (the 100,000,000th regular number, which has 392 digits)






answered Oct 29 '13 at 4:35, edited Nov 25 '18 at 19:54 – endolith













  • You are right that the log method takes about the same number of operations, just much faster since it doesn't have to deal with multi-precision math. The problem with getting it to work is that these are approximations, and especially if the input value is already a regular number, the log determined for it may not quite match the generated regular number's log due to round-off errors. The solution is to add a bit of logic to keep a couple of values just <= and a couple just > for the log comparison, then as a final step convert these to bigint and find the min >= the input value.

    – GordonBGood
    Aug 25 '16 at 1:49

  • @GordonBGood That sounds like a good idea. Want to post an answer? :)

    – endolith
    Aug 25 '16 at 2:20

  • the precision is usually enough to compare quite large Hamming numbers that are known to be different. Here, just pre-test the input for whether it's already a regular number or not.

    – Will Ness
    Aug 25 '16 at 7:41

  • Working on an answer; need to estimate the difference in precision between an approximate log determined directly and one calculated through the regular number loops. @WillNess, yes, the precision is adequate to compare very large Hamming numbers (10 million digits or so) as those become very sparse with range, but this needs to be compared to the approximate log of the input value determined by other means (for the input number), which doesn't have the same error terms.

    – GordonBGood
    Aug 26 '16 at 0:11

  • @endolith, please see my answer which carries this work forward and is faster due to using logarithms to mostly eliminate the bigint operations.

    – GordonBGood
    Nov 25 '18 at 7:20



















You want to find the smallest number m such that m >= N and m = 2^i * 3^j * 5^k where i, j, k >= 0.



Taking logarithms, the conditions can be rewritten as:



 log m >= log N
log m = i*log2 + j*log3 + k*log5


You can calculate log2, log3, log5 and log N to sufficiently high accuracy (depending on the size of N). Then this problem looks like an Integer Linear Programming problem and you could try to solve it using one of the known algorithms for this NP-hard problem.
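
Purely to illustrate that formulation, here is a sketch using the PuLP modelling library (my choice of library, not something suggested in this answer; and because the logarithms are floats, the solver's result would still need an exact re-check for very large N):

import math
from pulp import LpProblem, LpMinimize, LpVariable, LpInteger, value

def next_regular_ilp(n):
    # minimise i*log2 + j*log3 + k*log5 subject to that sum being >= log N,
    # with i, j, k non-negative integers
    lg2, lg3, lg5 = math.log(2), math.log(3), math.log(5)
    prob = LpProblem("smallest_regular_ge_N", LpMinimize)
    i = LpVariable("i", lowBound=0, cat=LpInteger)
    j = LpVariable("j", lowBound=0, cat=LpInteger)
    k = LpVariable("k", lowBound=0, cat=LpInteger)
    log_m = i * lg2 + j * lg3 + k * lg5
    prob += log_m                     # objective
    prob += log_m >= math.log(n)      # constraint
    prob.solve()
    return 2**int(round(value(i))) * 3**int(round(value(j))) * 5**int(round(value(k)))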






answered Feb 11 '12 at 18:58 – ypercubeᵀᴹ













  • NP-hard? Oh dear

    – finnw
    Feb 11 '12 at 19:24

  • The transformed problem is (in general) NP-hard. This particular instance of the general Integer Programming problem may have nicer solutions. Or the original (non-transformed) number theory problem may have a nicer algorithm.

    – ypercubeᵀᴹ
    Feb 11 '12 at 19:27



















EDITED/CORRECTED: Corrected the codes to pass the scipy tests:



Here's an answer based on endolith's answer, but one that almost eliminates long multi-precision integer calculations by using float64 logarithm representations to do a base comparison finding triple values that pass the criteria. It resorts to full-precision comparisons only when there is a chance that the logarithm value may not be accurate enough, which only occurs when the target is very close to either the previous or the next regular number:



import math

def next_regular(target):
    """
    Find the next regular number greater than or equal to target.
    """
    if target < 2: return ( 0, 0, 0 )
    log2hi = 0
    mant = 0
    # Check if it's already a power of 2 (or a non-integer)
    try:
        mant = target & (target - 1)
        target = int(target)  # take care of case where not int/float/decimal
    except TypeError:
        # Convert floats/decimals for further processing
        target = int(math.ceil(target))
        mant = target & (target - 1)

    # Quickly find next power of 2 >= target
    # See https://stackoverflow.com/a/19164783/125507
    try:
        log2hi = target.bit_length()
    except AttributeError:
        # Fallback for Python <2.7
        log2hi = len(bin(target)) - 2

    # exit if this is a power of two already...
    if not mant: return ( log2hi - 1, 0, 0 )

    # take care of trivial cases...
    if target < 9:
        if target < 4: return ( 0, 1, 0 )
        elif target < 6: return ( 0, 0, 1 )
        elif target < 7: return ( 1, 1, 0 )
        else: return ( 3, 0, 0 )

    # find log of target, which may exceed the float64 limit...
    if log2hi < 53: mant = target << (53 - log2hi)
    else: mant = target >> (log2hi - 53)
    log2target = log2hi + math.log2(float(mant) / (1 << 53))

    # log2 constants
    log2of2 = 1.0; log2of3 = math.log2(3); log2of5 = math.log2(5)

    # calculate range of log2 values close to target;
    # desired number has a logarithm of log2target <= x <= top...
    fctr = 6 * log2of3 * log2of5
    top = (log2target**3 + 2 * fctr)**(1/3)  # for up to 2 numbers higher
    btm = 2 * log2target - top  # or up to 2 numbers lower

    match = log2hi  # Anything found will be smaller
    result = ( log2hi, 0, 0 )  # placeholder for eventual matches
    count = 0  # only used for debugging counting band
    fives = 0; fiveslmt = int(math.ceil(top / log2of5))
    while fives < fiveslmt:
        log2p = top - fives * log2of5
        threes = 0; threeslmt = int(math.ceil(log2p / log2of3))
        while threes < threeslmt:
            log2q = log2p - threes * log2of3
            twos = int(math.floor(log2q)); log2this = top - log2q + twos

            if log2this >= btm: count += 1  # only used for counting band
            if log2this >= btm and log2this < match:
                # logarithm precision may not be enough to differentiate between
                # the next lower regular number and the target, so do
                # a full resolution comparison to eliminate this case...
                if (2**twos * 3**threes * 5**fives) >= target:
                    match = log2this; result = ( twos, threes, fives )
            threes += 1
        fives += 1

    return result

print(next_regular(2**2 * 3**454 * 5**249 + 1))  # prints (142, 80, 444)


Since most of the long multi-precision calculations have been eliminated, gmpy isn't needed: on IDEOne the above code takes 0.11 seconds instead of 0.48 seconds for endolith's solution to find the next regular number greater than the 100-millionth one as shown, and 0.49 seconds instead of 5.48 seconds to find the next regular number past the billionth (the next one is (761,572,489) past (1334,335,404) + 1). The difference gets even larger as the range goes up, since the multi-precision calculations get increasingly longer for the endolith version compared to almost none here. Thus, this version could calculate the next regular number past the trillionth in the sequence in about 50 seconds on IDEOne, where it would likely take over an hour with the endolith version.



The English description of the algorithm is almost the same as for the endolith version, differing as follows:
1) it calculates the float log estimation of the argument target value (we can't use the built-in log function directly as the range may be much too large for representation as a 64-bit float),
2) it compares the log representation values when determining qualifying values inside an estimated range, above and below the target value, of only about two or three numbers (depending on round-off),
3) it compares multi-precision values only if within the above defined narrow band,
4) it outputs the triple indices rather than the full long multi-precision integer (which would be about 840 decimal digits for the one past the billionth, ten times that for the trillionth); the triple can then easily be converted to the long multi-precision value if required, as sketched below.
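
A tiny helper for that last step (the name trival mirrors the function of the same name in the Haskell code further down; this Python version is just my illustration):

def trival(triple):
    # expand an exponent triple (twos, threes, fives) into the full regular number
    twos, threes, fives = triple
    return 2**twos * 3**threes * 5**fives

# e.g. trival((142, 80, 444)) expands the result found above for the
# target 2**2 * 3**454 * 5**249 + 1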



This algorithm uses almost no memory other than for the potentially very large multi-precision integer target value, the intermediate evaluation comparison values of about the same size, and the output expansion of the triples if required. It is an improvement over the endolith version in that it successfully uses the logarithm values for most comparisons in spite of their lack of precision, and in that it narrows the band of compared numbers to just a few.



This algorithm works for argument ranges up to somewhat above the ten-trillionth entry (a few minutes' calculation time at IDEOne rates), beyond which it will no longer be correct due to lack of precision in the log representation values, as per @WillNess's discussion. To fix this, we can change the log representation to a "roll-your-own" logarithm representation consisting of a fixed-length integer (124 bits for about double the exponent range, good for targets of over a hundred thousand digits if one is willing to wait); this will be a little slower, since the smallish multi-precision integer operations are slower than float64 operations, but not by much since the size is limited (maybe a factor of three or so).



Now none of these Python implementations (without using C or Cython or PyPy or something) are particularly fast, as they are about a hundred times slower than the same algorithm implemented in a compiled language. For reference's sake, here is a Haskell version:



{-# OPTIONS_GHC -O3 #-}

import Data.Word
import Data.Bits

nextRegular :: Integer -> ( Word32, Word32, Word32 )
nextRegular target
  | target < 2                   = ( 0, 0, 0 )
  | target .&. (target - 1) == 0 = ( fromIntegral lg2hi - 1, 0, 0 )
  | target < 9                   = case target of
                                     3 -> ( 0, 1, 0 )
                                     5 -> ( 0, 0, 1 )
                                     6 -> ( 1, 1, 0 )
                                     _ -> ( 3, 0, 0 )
  | otherwise                    = match
 where
  lg3 = logBase 2 3 :: Double; lg5 = logBase 2 5 :: Double
  lg2hi = let cntplcs v cnt =
                let nv = v `shiftR` 31 in
                if nv <= 0 then
                  let cntbts x c =
                        if x <= 0 then c else
                        case c + 1 of
                          nc -> nc `seq` cntbts (x `shiftR` 1) nc in
                  cntbts (fromIntegral v :: Word32) cnt
                else case cnt + 31 of ncnt -> ncnt `seq` cntplcs nv ncnt
          in cntplcs target 0
  lg2tgt = let mant = if lg2hi <= 53 then target `shiftL` (53 - lg2hi)
                      else target `shiftR` (lg2hi - 53)
           in fromIntegral lg2hi +
              logBase 2 (fromIntegral mant / 2^53 :: Double)
  lg2top = (lg2tgt^3 + 2 * 6 * lg3 * lg5)**(1/3) -- for 2 numbers or so higher
  lg2btm = 2 * lg2tgt - lg2top -- or two numbers or so lower
  match =
    let klmt = floor (lg2top / lg5)
        loopk k mtchlgk mtchtplk =
          if k > klmt then mtchtplk else
          let p = lg2top - fromIntegral k * lg5
              jlmt = fromIntegral $ floor (p / lg3)
              loopj j mtchlgj mtchtplj =
                if j > jlmt then loopk (k + 1) mtchlgj mtchtplj else
                let q = p - fromIntegral j * lg3
                    ( i, frac ) = properFraction q; r = lg2top - frac
                    ( nmtchlg, nmtchtpl ) =
                      if r < lg2btm || r >= mtchlgj then
                        ( mtchlgj, mtchtplj ) else
                      if 2^i * 3^j * 5^k >= target then
                        ( r, ( i, j, k ) ) else ( mtchlgj, mtchtplj )
                in nmtchlg `seq` nmtchtpl `seq` loopj (j + 1) nmtchlg nmtchtpl
          in loopj 0 mtchlgk mtchtplk
    in loopk 0 (fromIntegral lg2hi) ( fromIntegral lg2hi, 0, 0 )

trival :: ( Word32, Word32, Word32 ) -> Integer
trival (i,j,k) = 2^i * 3^j * 5^k

main = putStrLn $ show $ nextRegular $ (trival (1334,335,404)) + 1 -- (1126,16930,40)


This code calculates the next regular number following the billionth in too small a time to be measured and following the trillionth in 0.69 seconds on IDEOne (and potentially could run even faster except that IDEOne doesn't support LLVM). Even Julia will run at something like this Haskell speed after the "warm-up" for JIT compilation.



EDIT_ADD: The Julia code is as per the following:



function nextregular(target :: BigInt) :: Tuple{ UInt32, UInt32, UInt32 }
    # trivial case of first value or anything less...
    target < 2 && return ( 0, 0, 0 )

    # Check if it's already a power of 2 (or a non-integer)
    mant = target & (target - 1)

    # Quickly find next power of 2 >= target
    log2hi :: UInt32 = 0
    test = target
    while true
        next = test & 0x7FFFFFFF
        test >>>= 31; log2hi += 31
        test <= 0 && (log2hi -= leading_zeros(UInt32(next)) - 1; break)
    end

    # exit if this is a power of two already...
    mant == 0 && return ( log2hi - 1, 0, 0 )

    # take care of trivial cases...
    if target < 9
        target < 4 && return ( 0, 1, 0 )
        target < 6 && return ( 0, 0, 1 )
        target < 7 && return ( 1, 1, 0 )
        return ( 3, 0, 0 )
    end

    # find log of target, which may exceed the Float64 limit...
    if log2hi < 53 mant = target << (53 - log2hi)
    else mant = target >>> (log2hi - 53) end
    log2target = log2hi + log(2, Float64(mant) / (1 << 53))

    # log2 constants
    log2of2 = 1.0; log2of3 = log(2, 3); log2of5 = log(2, 5)

    # calculate range of log2 values close to target;
    # desired number has a logarithm of log2target <= x <= top...
    fctr = 6 * log2of3 * log2of5
    top = (log2target^3 + 2 * fctr)^(1/3) # for 2 numbers or so higher
    btm = 2 * log2target - top # or 2 numbers or so lower

    # scan for values in the given narrow range that satisfy the criteria...
    match = log2hi # Anything found will be smaller
    result :: Tuple{UInt32,UInt32,UInt32} = ( log2hi, 0, 0 ) # placeholder for eventual matches
    fives :: UInt32 = 0; fiveslmt = UInt32(ceil(top / log2of5))
    while fives < fiveslmt
        log2p = top - fives * log2of5
        threes :: UInt32 = 0; threeslmt = UInt32(ceil(log2p / log2of3))
        while threes < threeslmt
            log2q = log2p - threes * log2of3
            twos = UInt32(floor(log2q)); log2this = top - log2q + twos

            if log2this >= btm && log2this < match
                # logarithm precision may not be enough to differentiate between
                # the next lower regular number and the target, so do
                # a full resolution comparison to eliminate this case...
                if (big(2)^twos * big(3)^threes * big(5)^fives) >= target
                    match = log2this; result = ( twos, threes, fives )
                end
            end
            threes += 1
        end
        fives += 1
    end
    result
end





  • Cool. Did you run all the tests on it? github.com/scipy/scipy/blob/master/scipy/fftpack/tests/… Is it faster for small numbers (~10000) or just for huge ones?

    – endolith
    Nov 25 '18 at 19:47








  • @endolith, it isn't faster (not much different) for small arguments, but increasingly faster for larger arguments. Thanks for the link to the tests; I used them to find a couple of problems in the code, which have now been corrected. The current corrected code passes all of the provided tests.

    – GordonBGood
    Nov 26 '18 at 6:59


Here's another possibility I just thought of:



If N is X bits long, then the smallest regular number R ≥ N will be in the range [2^(X-1), 2^X]



e.g. if N = 257 (binary 100000001) then we know R is 1xxxxxxxx unless R is exactly equal to the next power of 2 (512)



To generate all the regular numbers in this range, we can generate the odd regular numbers (i.e. multiples of powers of 3 and 5) first, then take each value and multiply by 2 (by bit-shifting) as many times as necessary to bring it into this range.



In Python:



from itertools import ifilter, takewhile
from Queue import PriorityQueue

def nextPowerOf2(n):
    p = max(1, n)
    while p != (p & -p):
        p += p & -p
    return p

# Generate multiples of powers of 3, 5
def oddRegulars():
    q = PriorityQueue()
    q.put(1)
    prev = None
    while not q.empty():
        n = q.get()
        if n != prev:
            prev = n
            yield n
            if n % 3 == 0:
                q.put(n // 3 * 5)
            q.put(n * 3)

# Generate regular numbers with the same number of bits as n
def regularsCloseTo(n):
    p = nextPowerOf2(n)
    numBits = len(bin(n))
    for i in takewhile(lambda x: x <= p, oddRegulars()):
        yield i << max(0, numBits - len(bin(i)))

def nextRegular(n):
    bigEnough = ifilter(lambda x: x >= n, regularsCloseTo(n))
    return min(bigEnough)
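
As a quick sanity check (my own addition; note this is Python 2 code, since it relies on ifilter and the Queue module):

print nextRegular(257)   # 270 == 2 * 3**3 * 5, matching the f(257) example in the question
# As noted in a comment below, nextRegular(7) (and 31, 61, ...) raises
# ValueError: min() arg is an empty sequence, because no regular number with
# the same bit length as the input reaches the target in those cases and the
# next power of two itself (the correct answer there) is never yielded.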





  • hi, I've added a new answer here that shows how to directly enumerate (i,j,k) triples in the narrow vicinity of log(N), which is very fast.

    – Will Ness
    Aug 20 '12 at 17:19











  • actually, this is pretty close in its intention to what I posted, just differs in implementation. :)

    – Will Ness
    Aug 22 '12 at 0:21











  • This fails for nextRegular(7), nextRegular(31), nextRegular(61), etc. with ValueError: min() arg is an empty sequence

    – endolith
    Oct 6 '13 at 2:11
















You know what? I'll put money on the proposition that actually, the 'dumb' algorithm is fastest. This is based on the observation that the next regular number does not, in general, seem to be much larger than the given input. So simply start counting up, and after each increment, refactor and see if you've found a regular number. But create one processing thread for each available core you have, and for N cores have each thread examine every Nth number. When each thread has found a number or crossed the power-of-2 threshold, compare the results (keep a running best number) and there you are.
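
For what it's worth, a minimal single-threaded Python sketch of that counting-up idea (my own illustration of the approach described above, not the answerer's code; the per-core splitting is left out):

def is_regular(n):
    # strip out every factor of 2, 3 and 5; a regular number leaves only 1
    for p in (2, 3, 5):
        while n % p == 0:
            n //= p
    return n == 1

def next_regular_bruteforce(n):
    while not is_regular(n):
        n += 1
    return n

# next_regular_bruteforce(257) == 270

The comments below spell out why this degrades badly for large targets: the gaps between consecutive regular numbers keep widening.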






  • Yes it would be interesting to benchmark it. I expect you are right if we are talking about small numbers (< 10000 or so.) But as the numbers get larger so do the distances between regular numbers. An extreme example is 48000001 (the next regular number is 48600000, and it will take 2.8M divisions to find it.)

    – finnw
    Feb 12 '12 at 16:32











  • hi, I've added a new answer here that shows how to directly enumerate (i,j,k) triples in the narrow vicinity of log(N), which is very fast.

    – Will Ness
    Aug 20 '12 at 17:19






  • "This is based on the observation that the next regular number does not, in general, seem to be much larger than the given input." I don't think that's a good assumption. They get farther and farther apart as you go up. The next regular number above 50000001 is 50331648, and that's only the 995th number. I suspect generating the list of regular numbers until you get one above your target is faster.

    – endolith
    Sep 30 '13 at 19:28











  • I tested an "iterate and factor" algorithm vs a "generate list until you go over" algorithm. The factoring algorithm is faster for small numbers, but gets far slower for large numbers. 854296876 → 859963392 takes 26 ms vs 18 seconds with the factoring method.

    – endolith
    Oct 1 '13 at 1:24











  • in fact, the nth Hamming number's magnitude is M(n) ~ exp(n**(1/3)), so they grow exponentially more and more apart as n grows.

    – Will Ness
    Dec 6 '18 at 9:09
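
(For context, my own addition and not part of the comment: that growth rate comes from a lattice-point count. The number of triples (i, j, k) with 2^i * 3^j * 5^k <= N is roughly the volume of a tetrahedron, (log2 N)^3 / (6 * log2(3) * log2(5)), so inverting gives the nth regular number at about exp((6 * ln 2 * ln 3 * ln 5 * n)^(1/3)). The same 6 * log2(3) * log2(5) factor is what the log-band answer above uses to set the width of its search window.)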
















I wrote a small C# program to solve this problem. It's not very optimised but it's a start.
This solution is pretty fast for numbers up to about 11 digits.



private long GetRegularNumber(long n)
{
    // Try n, n+1, n+2, ... until a candidate reduces to 1 once all factors
    // of 2, 3 and 5 have been divided out, i.e. until it is 5-smooth.
    // The do-while guarantees at least one candidate (n itself) is tested,
    // so small inputs such as n = 1 or n = 2 are handled correctly.
    long result = n - 1;
    long quotient;

    do
    {
        result++;
        quotient = RemoveFactor(result, 2);
        quotient = RemoveFactor(quotient, 3);
        quotient = RemoveFactor(quotient, 5);
    } while (quotient > 1);

    return result;
}

private static long RemoveFactor(long dividend, long divisor)
{
    // Divide out divisor for as long as the division is exact and return
    // the cofactor that is no longer divisible by divisor (assumes dividend >= 1).
    long remainder = 0;
    long quotient = dividend;
    while (remainder == 0)
    {
        dividend = quotient;
        quotient = Math.DivRem(dividend, divisor, out remainder);
    }
    return dividend;
}
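
For reference, a quick console check of the question's examples (assuming the two methods above are made static and dropped into a small test program) gives the expected results:

// Hypothetical test harness; expected output in the trailing comments.
Console.WriteLine(GetRegularNumber(18));   // 18  (2 * 3^2)
Console.WriteLine(GetRegularNumber(19));   // 20  (2^2 * 5)
Console.WriteLine(GetRegularNumber(257));  // 270 (2 * 3^3 * 5)
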





share|improve this answer

edited Dec 9 '15 at 8:41
Werner Henze

answered Feb 11 '12 at 19:17
david.s


























  • Is C# the right language for this? Won't it be slower, particularly in iterations, than C or C++?

    – Matt Phillips
    Feb 11 '12 at 19:48






  • 4





    I'm pretty sure any programmer can rewrite this in C/C++ fairly easily. C# was the quickest way for me to test my idea.

    – david.s
    Feb 11 '12 at 19:52








  • 1





    N_i ~ exp( i^(1/3) ) i.e. gaps between Hamming numbers grow exponentially. So your approach will experience a very pronounced slowdown as the numbers grow in magnitude, so it seems. 11 digits is not very big.

    – Will Ness
    Aug 21 '12 at 6:44
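
To quantify the growth claim in the last comment: counting the exponent triples with $2^i 3^j 5^k \le M$ amounts, to leading order, to the volume of a tetrahedron, a standard estimate stated here without its error term:

$$n(M) \approx \frac{(\ln M)^3}{6\,\ln 2\,\ln 3\,\ln 5} \qquad\Longrightarrow\qquad M(n) \approx \exp\!\big((6\,\ln 2\,\ln 3\,\ln 5)^{1/3}\,n^{1/3}\big) \approx e^{1.94\,n^{1/3}}.$$

Consequently the typical gap near $M$ is on the order of $M/(\ln M)^2$: for $M$ around $8.5 \times 10^8$ that is a few million, which matches the 854296876 → 859963392 jump mentioned in the comments and explains why checking candidates one at a time slows down so sharply.
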















