Queue-based fixed-duration memory cache
I used to use the MemoryCache in .NET, but recently read about a database using a linked list, which only has to check the head (prefix) of the list, in $O(1)$ time, to expire values, as opposed to the $O(\log n)$ min-heap approach that is the default for variable timeouts.
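(For readers unfamiliar with the trick: because every entry shares the same TTL, the expiry queue is automatically sorted by deadline, so only the head ever needs checking. A minimal single-threaded sketch of that invariant in Python; the names are mine for illustration, not from any library:)

```python
from collections import deque

class FixedTtlSet:
    """Set whose members expire after a fixed, uniform TTL.

    Because the TTL is constant, entries are enqueued in expiry order,
    so expiring is an O(1) amortised check of the queue head rather
    than an O(log n) heap operation.
    """
    def __init__(self, ttl):
        self.ttl = ttl
        self.deadlines = deque()   # (key, expiry time), oldest first
        self.members = {}          # key -> time added

    def _expire(self, now):
        # Pop entries from the head while they are due; the first
        # not-yet-due entry stops the loop, since everything behind
        # it is due even later.
        while self.deadlines and self.deadlines[0][1] <= now:
            key, _ = self.deadlines.popleft()
            self.members.pop(key, None)

    def try_add(self, key, now):
        self._expire(now)
        if key in self.members:
            return False           # already seen within the TTL window
        self.members[key] = now
        self.deadlines.append((key, now + self.ttl))
        return True
```

With a TTL of 10, re-adding the same key at time 5 is rejected, but at time 11 the head has expired and the key is accepted again.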
I'd appreciate a review of this code: how optimal it is, and whether there are obvious mistakes in it that I'm missing. The general idea is that count(R) >>> count(W), i.e. reads vastly outnumber writes.
I considered using ReaderWriterLockSlim in upgradeable mode, but in this piece of code I may very well have two threads with read locks trying to acquire a write upgrade simultaneously, so I decided against it. The check of the queue head should be quick enough to avoid contention on this resource.
This code is intended to be used in a low-latency F# API as part of Logibit Hawk.
I'd also like this code to be non-allocating, apart from the two int64s that back the Instants and the string reference allocated outside. Returning Choice values (which allocate) is intentional, however, and will be changed to Result in the future.
module Settings =
  open NodaTime
  open System.Collections.Generic
  open System.Collections.Concurrent

  let private tsem = obj ()

  let private runTimeouts (timeouts: Queue<_>) (cache: ConcurrentDictionary<string, Instant>) (now: Instant) =
    /// Iterate until the head is due later than now. Tail recursion needed.
    let rec iter () =
      if timeouts.Count = 0 then () else
      let struct (nonce, timeout) = timeouts.Peek()
      if timeout > now then () else
      ignore (timeouts.Dequeue())
      ignore (cache.TryRemove nonce)
      iter ()
    lock tsem iter

  let nonceValidatorMem (clock: IClock) (keepFor: Duration) =
    // We use a queue because it's O(1) on dequeue/peek, and since the duration is constant
    // for the lifetime of this validator, we know the head is always the entry due soonest.
    let timeouts = Queue<_>()
    // The cache stores the nonces and when they were added.
    let cache = ConcurrentDictionary<_, _>()
    fun (nonce: string, ts: Instant) ->
      let now = clock.GetCurrentInstant()
      do runTimeouts timeouts cache now
      // https://docs.microsoft.com/en-us/dotnet/api/system.collections.concurrent.concurrentdictionary-2.tryadd?view=netframework-4.7.1
      // TryAdd returns true if the nonce was successfully added.
      if cache.TryAdd (nonce, now) then
        do lock tsem (fun () -> timeouts.Enqueue (struct (nonce, now + keepFor)))
        Choice1Of2 ()
      else
        Choice2Of2 AlreadySeen
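(To make the locking shape concrete, here is a hedged Python transliteration of the validator above; the names are mine, not from Logibit Hawk. A single lock plays the role of tsem; for simplicity it also guards the membership dict, whereas the F# version relies on ConcurrentDictionary for lock-free reads:)

```python
import threading
from collections import deque

def make_nonce_validator(keep_for, clock):
    """Sketch of nonceValidatorMem's shape: a closure over a queue of
    (nonce, deadline) pairs and a membership dict, with one lock
    guarding both (the F# original locks only the queue)."""
    lock = threading.Lock()
    timeouts = deque()
    cache = {}

    def validate(nonce):
        now = clock()
        with lock:
            # Drop every entry whose deadline has passed, head first;
            # the queue is in deadline order because keep_for is constant.
            while timeouts and timeouts[0][1] <= now:
                expired, _ = timeouts.popleft()
                cache.pop(expired, None)
            if nonce in cache:
                return "AlreadySeen"
            cache[nonce] = now
            timeouts.append((nonce, now + keep_for))
            return "Ok"
    return validate
```

Under concurrent callers presenting the same nonce, exactly one should get "Ok" and the rest "AlreadySeen", which is the property the Choice return encodes in the F# version.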
Tags: multithreading, f#
I'd performance test it against a version based on a doubly linked list instead of the Queue. – Ant, May 10 at 22:03
asked May 1 at 17:35 by Henrik
edited Jun 19 at 10:05 by Billal BEGUERADJ