Project Euler #15: counting paths through a 20 × 20 grid
I've recently solved Project Euler's 15th problem, which states:

Starting in the top left corner of a 2×2 grid, and only being able to move to the right and down, there are exactly 6 routes to the bottom right corner.

How many such routes are there through a 20×20 grid?

This is my dynamic programming approach in C++:
#include <cstdio>

const unsigned short n = 20;

int main()
{
    unsigned long long p[n][n];
    for (unsigned short i = 2; i <= n; i++)
        p[n - i][n - 1] = i + 1;
    for (unsigned short i = n - 2; i >= 1; i--)
    {
        p[i][i] = 2 * p[i][i + 1];
        for (unsigned short k = i; k >= 1; k--)
            p[k - 1][i] = p[k][i] + p[k - 1][i + 1];
    }
    printf("%llu", 2 * p[0][1]);
}
I am aware that this problem could be solved way more efficiently and faster by simply calculating $(2n)!/(n!)^2$; however, I decided to look at this problem from a programmer's point of view rather than a mathematician's.
c++ programming-challenge combinatorics dynamic-programming

asked Jul 14 at 12:30 by Gentlebud, edited Jul 14 at 13:41 by 200_success
It's worth mentioning that the "programmer's point of view" should include mathematical simplification of algorithms. – MooseBoys, Jul 14 at 15:53

Another solution to this problem is to use the combination formula n!/(k!(n-k)!). Also, I think the programmer's point of view is to find the simplest solution to the problem. – austingae, Jul 15 at 6:42
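For reference, the closed form mentioned in the question and in these comments, $(2n)!/(n!)^2 = \binom{2n}{n}$, fits comfortably in 64-bit arithmetic if it is built up incrementally; a minimal sketch (every intermediate value stays exact and well below overflow for n = 20):

#include <iostream>

int main()
{
    const unsigned n = 20;
    unsigned long long routes = 1;
    // After step k, routes == C(n + k, k); the division is always exact
    // because C(n + k - 1, k - 1) * (n + k) is a multiple of k.
    for (unsigned k = 1; k <= n; ++k)
        routes = routes * (n + k) / k;
    std::cout << routes << '\n';   // 137846528820
}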
4 Answers
Accepted answer (score 8) by Josiah, Jul 14 at 13:15
Why are you using unsigned short?

I presume the answer is somewhere along the lines of worrying about memory usage. There isn't a problem with that, but given that p will consume almost all of the memory used by this routine, it would be more productive to check whether you need the whole grid or can get away with just a few rows at once.
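As an illustration of the "few rows" idea, here is a sketch (not code from this answer) that works forward from the top-left corner and keeps only a single row of running totals:

#include <iostream>
#include <vector>

int main()
{
    const unsigned n = 20;
    // row[c] = number of routes from the top-left corner to grid point (r, c)
    // for the row r currently being processed; only one row is ever stored.
    std::vector<unsigned long long> row(n + 1, 1);   // top row: one route each
    for (unsigned r = 1; r <= n; ++r)
        for (unsigned c = 1; c <= n; ++c)
            row[c] += row[c - 1];                    // from above + from the left
    std::cout << row[n] << '\n';                     // 137846528820
}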
Consider using constexpr rather than const when defining n. That makes it slightly more explicit that the value is known at compile time and is therefore legal for use in setting array sizes.

It is generally considered a better habit to use ++i rather than i++ if you're going to discard the result. This makes no practical difference with the primitive integral types you're using, but for more complex objects it can be more efficient.

It is generally considered safer to always include the braces after flow control statements like for and if.

Finally, a quick note about your closing sentence. I appreciate the desire to explore other ways of doing things, but doing calculations that bypass a whole load of loops is often the mark of a very good programmer! (For that matter, it doesn't necessarily make for easy programming either: 40! wouldn't fit in your variable types, so implementing something that handled that case would be a worthwhile challenge on its own.)
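A sketch of the kind of "handle 40! yourself" exercise hinted at here: keep the factorial as a vector of decimal digits (least significant first) and do schoolbook multiplication, so no big-integer library is needed. The function name and layout are mine, not part of the answer.

#include <iostream>
#include <vector>

std::vector<int> factorial_digits(unsigned n)
{
    std::vector<int> digits{1};
    for (unsigned f = 2; f <= n; ++f)
    {
        int carry = 0;
        for (auto &d : digits)
        {
            int cur = d * static_cast<int>(f) + carry;
            d = cur % 10;        // keep one decimal digit per element
            carry = cur / 10;
        }
        while (carry > 0)
        {
            digits.push_back(carry % 10);
            carry /= 10;
        }
    }
    return digits;
}

int main()
{
    auto digits = factorial_digits(40);            // 40! has 48 decimal digits
    for (auto it = digits.rbegin(); it != digits.rend(); ++it)
        std::cout << *it;
    std::cout << '\n';
}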
I have deeply considered using BigInt libraries, since Project Euler problems require rather huge numbers (e.g. Problem #20: finding the sum of the digits of the number 100 factorial). It's just something I haven't tried yet, but definitely will. – Gentlebud, Jul 14 at 13:44
Answer (score 9) by indi, Jul 14 at 13:14
#include <cstdio>

If your goal is to write C++, you're really off to a bad start using the C standard I/O library.

const unsigned short n = 20;

The modern way to do compile time constants is with constexpr. But I don't understand why you chose to use unsigned short for all your indexing calculations. Any operations you try to do on it will automatically promote it to unsigned int. You're not really gaining anything.

for (unsigned short i = 2; i <= n; i++)

Once again, once you've bought into unsigned short, you keep using it everywhere. But aside from the assignment, every single operation in this statement is promoting all your unsigned shorts to unsigned ints. Even the comparison. You might as well have just used unsigned from the start to save yourself a lot of typing:

constexpr auto n = 20u; // unsigned int
for (auto i = 2u; i <= n; ++i)

And so on.

If you want to be technically correct, indexes should be std::size_t. But unsigned works well enough if you know you're not going any higher than 20. unsigned short is useless except for storage, because any operations you do on it get promoted.

printf("%llu", 2 * p[0][1]);

First, you're missing the std:: here. But more importantly, you should be using C++ I/O:

std::cout << (2 * p[0][1]);

Other than that, the most important element you're missing here would be comments, I'd say.
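Putting these points together, a sketch of the original program with constexpr, plain unsigned indices, and std::cout applied (the algorithm itself is unchanged; this is an illustration, not code from the answer):

#include <iostream>

constexpr auto n = 20u;

int main()
{
    unsigned long long p[n][n];
    for (auto i = 2u; i <= n; ++i)
        p[n - i][n - 1] = i + 1;
    for (auto i = n - 2; i >= 1; --i)
    {
        p[i][i] = 2 * p[i][i + 1];
        for (auto k = i; k >= 1; --k)
            p[k - 1][i] = p[k][i] + p[k - 1][i + 1];
    }
    std::cout << (2 * p[0][1]) << '\n';
}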
Using <cstdio> rather than <iostream> was suggested to me by a colleague of mine, because it allegedly runs faster. Also, I wasn't aware that my variables would get promoted to ints; I was just trying to get away with allocating less memory. – Gentlebud, Jul 14 at 13:39

You aren't doing formatted output inside a loop, just printing the result at the end. – Davislor, Jul 14 at 14:36

You also aren't saving any significant memory: your loop indices and the like are probably kept in registers and promoted. Your array elements, which account for almost all your memory usage, are unsigned long long. – Davislor, Jul 14 at 14:39

That <cstdio> is faster than <iostream> is an old myth; it all comes down to how you use it, and the quality of your implementation, and as several people have discovered, sometimes cout is faster. As for saving memory with unsigned short... do realize that even if it were true that this saves any memory (it isn't) and short is 2 bytes while int is 4 (as is common), you would be saving a grand total of 4 bytes of memory in your program. Worth it? (Of course, you won't really be saving anything, in reality.) – indi, Jul 14 at 14:53
Answer (score 5) by Snowhawk, Jul 15 at 5:43
Package your code into meaningful operations as carefully named functions. There will be a day that you or someone else is going to maintain some code, they'll open the file, and they'll see something like your code. You can't unit-test main(). Whatever you have main() doing should be kept simple. Delegate to other functions and objects you can unit-test.

Names are important and you should call things what they are. Select names that provide context, denote relationships, and/or promote understanding. What is p?

Always initialize variables. When you declare p, the values within p's extents are all indeterminate until they are set.

Use int when you need an integer type. Use std::size_t/std::ptrdiff_t for sizes and indices. If you need a fixed-size integer type, use the appropriate type from <cstdint>. In every expression evaluated in the context of operator[], the result of the expression is converted to std::size_t for C-arrays and the C++ containers.
template <std::size_t length, std::size_t width>
std::uint64_t matrix_count_routes()
{
    std::uint64_t path_counts[length][width] = {};
    // your code here
}

void euler_15()
{
    constexpr int matrix_width = 20;
    std::cout << matrix_count_routes<matrix_width, matrix_width>() << '\n';
}

int main()
{
    euler_15();
}
I am aware that this problem could be solved way more efficiently and faster by simply calculating $\frac{(2n)!}{(n!)^2}$, however I decided to look at this problem from a programmer's point of view, rather than a mathematician's.
The programmer would implement both and measure.
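For illustration, one possible body for the matrix_count_routes skeleton above (a sketch, not Snowhawk's code; the array here is sized (length + 1) x (width + 1) because routes are counted between grid points rather than cells):

#include <cstddef>
#include <cstdint>
#include <iostream>

template <std::size_t length, std::size_t width>
std::uint64_t matrix_count_routes()
{
    // path_counts[r][c] = routes from grid point (r, c) to the bottom-right
    // corner of a length x width grid of cells.
    std::uint64_t path_counts[length + 1][width + 1] = {};
    for (std::size_t r = length + 1; r-- > 0; )
        for (std::size_t c = width + 1; c-- > 0; )
            path_counts[r][c] =
                (r == length && c == width) ? 1
                : (r < length ? path_counts[r + 1][c] : 0)
                + (c < width ? path_counts[r][c + 1] : 0);
    return path_counts[0][0];
}

int main()
{
    std::cout << matrix_count_routes<20, 20>() << '\n';  // 137846528820
}

Measuring this against the closed form from the question would then be straightforward.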
Answer (score 4) by Edward, Jul 14 at 17:07
Here are some things that may help you improve your code.

Prefer iostreams to printf

The printf function needs to evaluate the format string at runtime and then apply the result, while something like std::cout << n; already knows at compile time what kind of thing n is. The result is that the C++ version can be faster. It's a common misconception that C++ iostream operations are slower than printf (measuring is the way to go!), but in this case it's only used once, so it probably doesn't matter.

Prefer constexpr to const

In modern C++, constexpr implies that the value can be derived at compile time, so anything that can be constexpr reduces to essentially zero runtime overhead. That's a huge "win" for performance, so it should be something you strive for.

Understand integer promotion rules

The effect of declaring short everywhere it's used in this program is ... absolutely nothing! The reason is that there are integral numeric promotions being applied to promote from a short to an int.

Use objects

Since you're writing in C++, let's use objects! In this case, we might wrap everything up into a Grid object and have a const function that returns the answer. My rewrite on a 64-bit Linux machine has this for main:

int main()
{
    Grid<22> g;
    std::cout << g.answer(20) << '\n';
}
Reduce memory use by choosing a better data structure

If we think about the algorithm carefully, it's possible to do everything in a single array of length n rather than using an n x n grid. Here's one way to do it in the constructor of the Grid class:

Grid()
{
    for (unsigned i = 0; i < n; ++i)
        p[i] = i + 2;
    for (unsigned i = 1; i < n; ++i)
    {
        p[i] *= 2;
        for (unsigned j = i + 1; j < n; ++j)
            p[j] += p[j - 1];
    }
}

This is essentially the same algorithm, but reduced to a single in-place array. More on that in the next suggestion.
Prefer std::array to plain C-style arrays

The std::array is just as efficient as a plain C-style array, but has the considerable advantage that it can be used with all of the standard library algorithms. In this case, it seems prudent to make a templated class, so the first few lines of the Grid class are these:

template <unsigned n>
class Grid
{
    std::array<unsigned long long, n> p;
    // rest of the class
};

Carefully consider usage

Note that as the array is constructed, each successive entry in the final version of the array is the answer for that size grid. So we can use that by writing the answer function like so:

unsigned long long answer(int i) const { return p.at(i - 1); }

The at function will throw an exception if the index is out of bounds, so that's one less consideration that we need to write code to handle (except perhaps at the caller's end).
Further optimizations

With this same basic algorithm, we can move almost all of the calculation to compile time. The way we can do that is to use constexpr and recursive templates to calculate values for all sizes at compile time, and then the entire program reduces, effectively, to a runtime lookup in a static table. I'll leave this final optimization step to you, but this compile-time sieve of Eratosthenes might inspire you and give you ideas on how to do that.
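As a sketch of that direction (assuming C++17, and using the closed-form recurrence for $\binom{2n}{n}$ rather than the answer's grid construction), the whole table can be built inside a constexpr function and merely looked up at run time:

#include <array>
#include <cstddef>
#include <cstdint>
#include <iostream>

template <std::size_t MaxN>
constexpr std::array<std::uint64_t, MaxN + 1> path_table()
{
    std::array<std::uint64_t, MaxN + 1> t{};
    t[0] = 1;
    // paths(n) = C(2n, n) = paths(n - 1) * (2n)(2n - 1) / n^2,
    // and the division is exact at every step.
    for (std::size_t n = 1; n <= MaxN; ++n)
        t[n] = t[n - 1] * (2 * n) * (2 * n - 1) / (n * n);
    return t;
}

int main()
{
    constexpr auto table = path_table<20>();   // evaluated at compile time
    std::cout << table[20] << '\n';            // 137846528820
}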