Ackermann function in Rust
Using the algorithm at the end of this research paper, I've written an extremely fast¹ Ackermann function in Rust. Can the implementation be made faster?

Thoughts:

- Using a struct makes a lot of sense, but it's weird to have to write Some(Box::new(v2)). Can this be avoided in some way?
- Are for loops the best way of doing the v transformations? Would (1..(i + 1)).fold(Record::new(1, 0, None), |v, k| a_use_p(k - 1, 1, v)) be more "Rustic"? It seems slightly faster, but that's hard to tell (a fold sketch follows the code below).
- Is there a way to parallelize this? Or any part of it?

Thanks.
extern crate time;

use time::PreciseTime;

fn main() {
    let m = 3;
    for n in 1..20 {
        let s = PreciseTime::now();
        let res = fast_ackermann(m, n);
        let e = PreciseTime::now();
        println!("a_opt(3, {}) -> {} took {}", n, res, s.to(e))
    }
}

#[derive(Clone)]
struct Record {
    result: u64,
    previous_result: u64,
    // State of the A(m - 1, _) computation that produced `result`.
    cache: Option<Box<Record>>,
}

impl Record {
    fn new(result: u64, previous_result: u64, cache: Option<Box<Record>>) -> Record {
        Record { result, previous_result, cache }
    }
}

fn fast_ackermann(m: u64, n: u64) -> u64 {
    let mut cache = Record::new(1, 0, None);
    for m_builder in 0..m {
        cache = ack_with_incrementalization(m_builder, 1, cache)
    }
    for n_builder in 1..(n + 1) {
        cache = ack_with_incrementalization(m, n_builder, cache)
    }
    cache.result
}

fn ack_with_incrementalization(m: u64, n: u64, current_cache: Record) -> Record {
    if m == 0 {
        // A(0, n) = n + 1
        Record::new(n + 1, 0, None)
    } else if n == 0 {
        // A(m, 0) = A(m - 1, 1): rebuild the cache from the bottom up
        let mut new_cache = Record::new(1, 0, None);
        for m_builder in 0..m {
            new_cache = ack_with_incrementalization(m_builder, 1, new_cache);
        }
        new_cache
    } else if n == 1 {
        // A(m, 1) = A(m - 1, A(m, 0)): step A(m - 1, _) up from the cached A(m, 0)
        let cache_result = current_cache.result;
        let mut new_cache = current_cache;
        for n_builder in 2..(cache_result + 1) {
            new_cache = ack_with_incrementalization(m - 1, n_builder, new_cache);
        }
        Record::new(new_cache.result, cache_result, Some(Box::new(new_cache)))
    } else {
        // A(m, n) = A(m - 1, A(m, n - 1)): resume the saved A(m - 1, _) chain instead of restarting it
        let cache_result = current_cache.result;
        let mut new_cache = *current_cache.cache.unwrap();
        for n_builder in (current_cache.previous_result + 1)..(current_cache.result + 1) {
            new_cache = ack_with_incrementalization(m - 1, n_builder, new_cache);
        }
        Record::new(new_cache.result, cache_result, Some(Box::new(new_cache)))
    }
}
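For reference, a minimal sketch of the fold variant asked about above, written against the Record type and ack_with_incrementalization from this code. build_m_cache is a hypothetical helper name, and the closure is only a guess at how a_use_p maps onto the names used here:

// Hypothetical helper, equivalent to the first for loop in fast_ackermann.
fn build_m_cache(m: u64) -> Record {
    (0..m).fold(Record::new(1, 0, None), |cache, m_builder| {
        ack_with_incrementalization(m_builder, 1, cache)
    })
}

fast_ackermann(m, n) could then start from build_m_cache(m) instead of its own loop; whether that is actually faster would need measuring.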
¹ With m = 3, n = 20, the code takes 0.95 seconds. With n = 30 it takes 16 minutes. In the paper, n = 20 takes 5.05 seconds and n = 30 takes 87 minutes. A version with an intermediate cache (in Rust) took 2.1 seconds for n = 20 and 39 minutes for n = 25 (not 30 like the others).
recursion rust
asked Mar 1 at 20:01 by Noah Bogart, edited Mar 2 at 18:46 by Shepmaster
Can you quantify "extremely fast"? Very impressive beard btw. – yuri, Mar 1 at 20:24

Thanks! With i=3, n=20 takes 0.95 seconds and n=30 takes 16 minutes, whereas in the paper, n=20 takes 5.05 seconds and n=30 takes 87 minutes. Compare to a version with an intermediate cache (in Rust): n=20 took 2.1 seconds and n=25 (not 30 like the others) took 39 minutes. – Noah Bogart, Mar 1 at 21:42

You missed a few a_opt – make sure your code compiles ;-) – Shepmaster, Mar 2 at 17:23

My apologies, Zeta. A (now deleted) answer said it was okay as they were the only answer so far. I'll remember to not do so in the future. – Noah Bogart, Mar 2 at 18:00

@NoahBogart I should have diverged from the comment template a little bit more, sorry. Since Shepmaster was fine with the edit and deleted the review, everything's alright, it was just a remark for future questions, no worries. – Zeta, Mar 2 at 19:03
1 Answer
I realized that the nested Records never go more than 3 deep (outermost, middle, innermost), which means it could be rewritten to use primitive arrays.
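To make the flattening concrete, here is a hypothetical helper (not part of the original answer) that maps the question's Record onto the six-slot layout used below: slots 0 and 1 hold the outermost record's result and previous_result, slots 2 and 3 the middle record's, and slots 4 and 5 the innermost record's.

// Hypothetical conversion from the nested Record of the question
// to the flat [u64; 6] cache used by arr_ack below.
fn record_to_array(r: &Record) -> [u64; 6] {
    // The nesting never exceeds three records, so six u64 slots are enough.
    let mut out = [0u64; 6];
    let mut cur = Some(r);
    let mut i = 0;
    while let Some(rec) = cur {
        if i + 1 >= 6 {
            break;
        }
        out[i] = rec.result;
        out[i + 1] = rec.previous_result;
        i += 2;
        cur = rec.cache.as_deref();
    }
    out
}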
Comparison 1:
n: 20
fas_ack -> 8388605 took PT0.399960323S
arr_ack -> 8388605 took PT0.115302129S
n: 25
fas_ack -> 268435453 took PT11.283535003S
arr_ack -> 268435453 took PT3.553065491S
n: 30
fas_ack -> 8589934589 took PT757.605898139S
arr_ack -> 8589934589 took PT233.040801010S
Comparison 2:
n: 20
fas_ack -> 8388605 took PT0.807311828S
arr_ack -> 8388605 took PT0.442201425S
n: 25
fas_ack -> 268435453 took PT35.765848937S
arr_ack -> 268435453 took PT14.004015773S
n: 30
fas_ack -> 8589934589 took PT959.685955407S
arr_ack -> 8589934589 took PT141.390498532S
Updated code:
extern crate time;

use time::PreciseTime;

fn main() {
    let m = 3;
    for n in [20, 25, 30].iter() {
        println!("n: {}", n);
        let s = PreciseTime::now();
        let res = arr_ack(m, *n);
        let e = PreciseTime::now();
        println!("arr_ack -> {} took {}", res, s.to(e))
    }
}

fn arr_ack(m: u64, n: u64) -> u64 {
    let mut cache: [u64; 6] = [0; 6];
    cache[0] = 1;
    for m_builder in 0..m {
        cache = _arr_ack(m_builder, 1, cache)
    }
    for n_builder in 1..(n + 1) {
        cache = _arr_ack(m, n_builder, cache)
    }
    cache[0]
}

fn _arr_ack(m: u64, n: u64, mut cache: [u64; 6]) -> [u64; 6] {
    if m == 0 {
        // A(0, n) = n + 1
        [n + 1, 0, 0, 0, 0, 0]
    } else if m == 1 {
        // A(1, n) = n + 2
        [n + 2, 0, 0, 0, 0, 0]
    } else if n == 0 {
        // A(m, 0) = A(m - 1, 1): rebuild the cache from the bottom up
        let mut new_cache = [1, 0, 0, 0, 0, 0];
        for m_builder in 0..m {
            new_cache = _arr_ack(m_builder, 1, new_cache);
        }
        new_cache
    } else if n == 1 {
        let previous_result = cache[0];
        for n_builder in 2..(previous_result + 1) {
            cache = _arr_ack(m - 1, n_builder, cache);
        }
        // Slots 2..6 carry the inner A(m - 1, _) state for the next step
        [cache[0], previous_result, cache[0], cache[1], cache[2], cache[3]]
    } else {
        let previous_result = cache[0];
        // Resume the inner A(m - 1, _) computation stored in slots 2..6
        let mut new_cache = [cache[2], cache[3], cache[4], cache[5], 0, 0];
        for n_builder in (cache[1] + 1)..(previous_result + 1) {
            new_cache = _arr_ack(m - 1, n_builder, new_cache);
        }
        [new_cache[0], previous_result, new_cache[0], new_cache[1], new_cache[2], new_cache[3]]
    }
}
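As a sanity check (not part of the original timing runs), the m = 3 row of the Ackermann function has the closed form A(3, n) = 2^(n + 3) - 3, which agrees with the 8388605, 268435453, and 8589934589 results above. A small test against arr_ack could look like:

#[test]
fn arr_ack_matches_closed_form_for_m3() {
    // A(3, n) = 2^(n + 3) - 3; small n keeps the test fast.
    for n in 0u64..12 {
        let expected = 2u64.pow(n as u32 + 3) - 3;
        assert_eq!(arr_ack(3, n), expected);
    }
}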
answered Mar 2 at 20:17 by Noah Bogart (accepted)