Prob 0001 - Multiples of 3 and 5 - maccergit/Euler GitHub Wiki

This is the first step in a very long journey and is pretty easy - it provides a small taste of what is to come.

## 01

Simply loop through the numbers 1..limit, capture those that are divisible by 3 or 5, and sum them up.

Python's modulo operator and generator expression syntax make this super easy, and we don't care if the numbers get large - Python has built-in support for arbitrarily large integers. This is almost too easy in Python...
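A minimal sketch of this approach, assuming the limit is exclusive (as in the problem's "below 1000"):

```python
def sum_multiples(limit):
    """Sum the numbers below `limit` that are divisible by 3 or 5."""
    # A generator expression with the modulo operator does all the work;
    # Python's integers grow as needed, so overflow is not a concern.
    return sum(n for n in range(1, limit) if n % 3 == 0 or n % 5 == 0)

print(sum_multiples(10))    # sample limit  → 23
print(sum_multiples(1000))  # problem limit → 233168
```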

The sample limit of 10 and the problem limit of 1000 are no stretch for modern machines - the timing table shows 1.4 µsec and 88.4 µsec on my current machine. For the plot, I try to show any broad trends I can while still running in only a few seconds (good for demos), so the plot data covers a much larger range of limits than the problem asks for. With the resulting times in milliseconds, the count is set to 100, so each plot point is the average of 100 runs at that limit. This reduces noise in the plot but is still small enough that the whole plot runs in a few seconds (milliseconds × 100 runs × 10 tick marks = seconds). The timing appears to scale linearly, which matches my expectation :
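A sketch of the averaging setup described above, using the standard library's `timeit` (the function and the two limits shown are illustrative, not the wiki's actual benchmark harness):

```python
import timeit

def sum_multiples(limit):
    # Brute-force version: sum the numbers below `limit` divisible by 3 or 5.
    return sum(n for n in range(1, limit) if n % 3 == 0 or n % 5 == 0)

# Average each measurement over 100 runs to reduce timing noise.
for limit in (10, 1000):
    avg = timeit.timeit(lambda: sum_multiples(limit), number=100) / 100
    print(f"limit={limit}: {avg * 1e6:.2f} µsec per run")
```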

## 02

Another brute-force method, but smarter - we count by 3's and 5's to skip the trial division. However, we need to be careful not to double-count values that are multiples of both 3 and 5 (such as 15). These are exactly the multiples of 15, so we can use the same approach to compute the sum of the multiples of 15 and subtract that extra amount at the end :

\displaystyle \sum_{i=1}^{\lfloor limit/3 \rfloor}{3i} + \sum_{j=1}^{\lfloor limit/5 \rfloor}{5j} - \sum_{k=1}^{\lfloor limit/15 \rfloor}{15k}
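A sketch of this counting-by-strides approach, again assuming an exclusive limit - `range` with a step argument walks the multiples directly:

```python
def sum_multiples_stepped(limit):
    # Step through the multiples instead of testing every number,
    # then subtract the multiples of 15 that were counted twice.
    return (sum(range(3, limit, 3))
            + sum(range(5, limit, 5))
            - sum(range(15, limit, 15)))

print(sum_multiples_stepped(1000))  # → 233168
```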

This also appears to scale linearly, but is about 3x faster :

## 03

A much better approach is to use what is commonly called Gauss's Formula :

1 + 2 + 3 + \cdots + n = \sum_{i=1}^{n}{i} = \frac{n(n + 1)}{2}

But how do we make this work for every nth integer? A little observation shows that the same formula multiplied by "n" produces :

1 \times n + 2 \times n + 3 \times n + \cdots + n \times n

and the first few elements are what we want - but the series goes beyond the provided limit. However, if we truncate the series to only "limit // n" elements (where "//" is floor division), we get exactly the values we want. We do this for n = 3 and n = 5, then subtract the n = 15 term :

\frac{n(limit // n)(limit // n + 1)}{2} \implies \frac{3(limit // 3)(limit // 3 + 1)}{2} + \frac{5(limit // 5)(limit // 5 + 1)}{2} - \frac{15(limit // 15)(limit // 15 + 1)}{2}
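The closed form can be sketched as below. Note one subtlety with "limit // n": when the limit itself is a multiple of n, it gets counted, so for the problem's "below 1000" the caller would pass 999:

```python
def sum_multiples_gauss(limit):
    # Sum of n + 2n + ... + mn is n * m * (m + 1) / 2, where m = limit // n.
    def gauss_term(n):
        m = limit // n
        return n * m * (m + 1) // 2

    # Inclusion-exclusion: add the 3's and 5's, remove the double-counted 15's.
    return gauss_term(3) + gauss_term(5) - gauss_term(15)

print(sum_multiples_gauss(999))  # → 233168
```

Because this is pure integer arithmetic with no loop, the running time is essentially constant regardless of the limit.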

This results in sub-microsecond execution times (!), so we can benchmark these by running them 1 million times each. The times also grow slightly as the limit increases: extending the graph out to a limit of 10 billion pushes the time to just over 1 microsecond. I believe Python's support for unlimited-size numbers is what causes this growth, but it's a very small increase. Deeper investigation reveals that for most purposes it's effectively a fixed time of around 1 µsec - but at sub-µsec resolution, the signal is noisy.