diff --git a/backoff/Backoff/index.html b/backoff/Backoff/index.html deleted file mode 100644 index 0e9f94f4..00000000 --- a/backoff/Backoff/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Backoff (backoff.Backoff)

Module Backoff

Randomized exponential backoff mechanism.

type t

Type of backoff values.

val max_wait_log : int

Logarithm of the maximum allowed value for wait.

val create : ?lower_wait_log:int -> ?upper_wait_log:int -> unit -> t

create creates a backoff value. upper_wait_log and lower_wait_log override the logarithmic upper and lower bounds on the number of spins executed by once.

val default : t

default is equivalent to create ().

val once : t -> t

once b executes one random wait and returns a new backoff with the logarithm of the current maximum value incremented, unless it is already at the upper_wait_log of b.

Note that this uses the default Stdlib Random per-domain generator.

val reset : t -> t

reset b returns a backoff equivalent to b except with current value set to the lower_wait_log of b.

val get_wait_log : t -> int

get_wait_log b returns the logarithm of the maximum wait value for the next once.
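
For illustration, here is a minimal sketch of how a compare-and-set retry loop might thread a backoff value through its retries; atomic and update are hypothetical names, not part of this interface:

  (* Hypothetical sketch: retry a CAS, backing off after each failed attempt. *)
  let rec update_atomic ?(backoff = Backoff.default) atomic update =
    let before = Atomic.get atomic in
    let after = update before in
    if not (Atomic.compare_and_set atomic before after) then
      update_atomic ~backoff:(Backoff.once backoff) atomic update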

diff --git a/backoff/_doc-dir/CHANGES.md b/backoff/_doc-dir/CHANGES.md deleted file mode 100644 index 7e01cc5d..00000000 --- a/backoff/_doc-dir/CHANGES.md +++ /dev/null @@ -1,7 +0,0 @@ -## 0.1.1 - -- Ported to 4.12 and optimized for size (@polytypic) - -## 0.1.0 - -- Initial version based on backoff from kcas (@lyrm, @polytypic) diff --git a/backoff/_doc-dir/LICENSE.md b/backoff/_doc-dir/LICENSE.md deleted file mode 100644 index e107a366..00000000 --- a/backoff/_doc-dir/LICENSE.md +++ /dev/null @@ -1,16 +0,0 @@ -Copyright (c) 2015, Théo Laurent -Copyright (c) 2016, KC Sivaramakrishnan -Copyright (c) 2021, Sudha Parimala -Copyright (c) 2023, Vesa Karvonen - -Permission to use, copy, modify, and/or distribute this software for any -purpose with or without fee is hereby granted, provided that the above -copyright notice and this permission notice appear in all copies. - -THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES -WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF -MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR -ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES -WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN -ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF -OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. diff --git a/backoff/_doc-dir/README.md b/backoff/_doc-dir/README.md deleted file mode 100644 index 36d05f61..00000000 --- a/backoff/_doc-dir/README.md +++ /dev/null @@ -1,94 +0,0 @@ -[API reference](https://ocaml-multicore.github.io/backoff/doc/backoff/Backoff/index.html) - -# backoff - exponential backoff mechanism - -**backoff** provides an -[exponential backoff mechanism](https://en.wikipedia.org/wiki/Exponential_backoff) -[1]. It reduces contention by making a domain back off after failing an -operation contested by another domain, like acquiring a lock or performing a -`CAS` operation. - -## About contention - -Contention is what happens when multiple CPU cores try to access the same -location(s) in parallel. Let's take the example of multiple CPU cores trying to -perform a `CAS` on the same location at the same time. Only one is going to -succeed at each round of retries. Writing to a shared location -invalidates all other CPUs' caches. So at each round each CPU will have to read -the memory location again, leading to quadratic O(n²) bus traffic. - -## Exponential backoff - -Failing to access a shared resource means there is contention: some other CPU -cores are trying to access it at the same time. To avoid quadratic bus traffic, -the idea exploited by exponential backoff is to make each CPU core wait (spin) a -random bit before retrying. This way, they will try to access the resource at a -different time: that not only strongly decreases bus traffic but also gives -them a better chance to get the resource, as they will probably compete for it -against fewer other CPU cores. Failing again probably means contention is high, -and they need to wait longer. In fact, each consecutive failure of a single CPU -core will make it wait twice as long (_exponential_ backoff!). - -Obviously, they cannot wait forever: there is an upper limit on the number of -times the initial waiting time can be doubled (see [Tuning](#tuning)), but -intuitively, a good waiting time should be at least around the time the -contested operation takes (in our example, the operation is a CAS) and at most a -few times that amount.
- -## Tuning - -For better performance, backoff can be tuned. The `Backoff.create` function has two -optional arguments for that: `upper_wait_log` and `lower_wait_log`, which define -the logarithmic upper and lower bounds on the number of spins executed by -{!once}. - -## Drawbacks - -This mechanism has some drawbacks. First, it adds some delay: for example, when -a domain releases a contended lock, another domain that has backed off after -failing to acquire it will still have to finish its back-off loop before -retrying. Second, it increases unfairness: another thread that arrives -at that time, or that has failed to acquire the lock fewer times, is more -likely to acquire it as it will probably have a shorter waiting time. - -## Example - -To illustrate how to use backoff, here is a small implementation of a -`test and test-and-set` spin lock [2]. - -```ocaml - type t = bool Atomic.t - - let create () = Atomic.make false - - let rec acquire ?(backoff = Backoff.default) t = - if Atomic.get t then begin - Domain.cpu_relax (); - acquire ~backoff t - end - else if not (Atomic.compare_and_set t false true) then - acquire ~backoff:(Backoff.once backoff) t - - let release t = Atomic.set t false -``` - -This implementation can also be found [here](bench/taslock.ml), as well as a -small [benchmark](bench/test_tas.ml) to compare it to the same TAS lock but -without backoff. It can be launched with: - -```sh -dune exec ./bench/test_tas.exe > bench.data -``` - -and displayed (on Linux) with: - -```sh -gnuplot -p -e 'plot for [col=2:4] "bench.data" using 1:col with lines title columnheader' -``` - -## References - -[1] Adaptive backoff synchronization techniques, A. Agarwal, M. Cherian (1989) - -[2] Dynamic Decentralized Cache Schemes for MIMD Parallel Processors, L. Rudolph, -Z. Segall (1984) diff --git a/backoff/index.html b/backoff/index.html deleted file mode 100644 index 39ffc5c9..00000000 --- a/backoff/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -index (backoff.index)

Package backoff

  • Backoff Randomized exponential backoff mechanism.

Package info

changes-files
license-files
readme-files
diff --git a/index.html b/index.html index f7fd7eb6..1556be34 100644 --- a/index.html +++ b/index.html @@ -1,2 +1,2 @@ -_opam

OCaml package documentation

Browse by name, by tag, the standard library and the OCaml manual (online, latest version).

Generated for /home/runner/work/moonpool/moonpool/_opam/lib

\ No newline at end of file +_opam

OCaml package documentation

Browse by name, by tag, the standard library and the OCaml manual (online, latest version).

Generated for /home/runner/work/moonpool/moonpool/_opam/lib

\ No newline at end of file diff --git a/moonpool/Moonpool_sync/Event/Infix/index.html b/moonpool/Moonpool_sync/Event/Infix/index.html deleted file mode 100644 index 56f29d92..00000000 --- a/moonpool/Moonpool_sync/Event/Infix/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Infix (moonpool.Moonpool_sync.Event.Infix)

Module Event.Infix

val (>|=) : 'a t -> ('a -> 'b) -> 'b t
val (let+) : 'a t -> ('a -> 'b) -> 'b t
diff --git a/moonpool/Moonpool_sync/Event/index.html b/moonpool/Moonpool_sync/Event/index.html deleted file mode 100644 index c9857fa8..00000000 --- a/moonpool/Moonpool_sync/Event/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Event (moonpool.Moonpool_sync.Event)

Module Moonpool_sync.Event

include module type of struct include Picos_std_event.Event end
type !'a t = 'a Picos_std_event.Event.t

An event returning a value of type 'a.

type 'a event = 'a t

An alias for the Event.t type to match the Event module signature.

val always : 'a -> 'a t

always value returns an event that can always be committed to resulting in the given value.

Composing events

val choose : 'a t list -> 'a t

choose events returns an event that offers all of the given events and then commits to at most one of them.

val wrap : 'b t -> ('b -> 'a) -> 'a t

wrap event fn returns an event that acts as the given event and then applies the given function to the value in case the event is committed to.

val map : ('b -> 'a) -> 'b t -> 'a t

map fn event is equivalent to wrap event fn.

val guard : (unit -> 'a t) -> 'a t

guard thunk returns an event that, when synchronized, calls the thunk, and then behaves like the resulting event.

⚠️ Raising an exception from a guard thunk will result in raising that exception out of the sync. This may result in dropping the result of an event that committed just after the exception was raised. This means that you should treat an unexpected exception raised from sync as a fatal error.

Consuming events

val sync : 'a t -> 'a

sync event synchronizes on the given event.

Synchronizing on an event executes in three phases:

  1. In the first phase offers or requests are made to communicate.
  2. One of the offers or requests is committed to and all the other offers and requests are canceled.
  3. A final result is computed from the value produced by the event.

⚠️ sync event does not wait for the canceled concurrent requests to terminate. This means that you should arrange for guaranteed cleanup through other means such as the use of structured concurrency.

val select : 'a t list -> 'a

select events is equivalent to sync (choose events).
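
For illustration, here is a minimal sketch that races two events and tags which one committed; request_a and request_b are hypothetical events, only the combinators documented above are used, and this module is assumed to be in scope as Event:

  (* Sketch: offer both events and commit to whichever becomes available first. *)
  let first_of request_a request_b =
    Event.select
      [ Event.wrap request_a (fun a -> `A a); Event.wrap request_b (fun b -> `B b) ]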

Primitive events

ℹ️ The Computation concept of Picos can be seen as a basic single-shot atomic event. This module builds on that concept to provide a composable API to concurrent services exposed through computations.

type 'a request = 'a Picos_std_event.Event.request = {
  request : 'r. (unit -> 'r) Picos.Computation.t -> ('a -> 'r) -> unit;
}

Represents a function that requests a concurrent service to update a computation.

ℹ️ The computation passed to a request may be completed by some other event at any point. All primitive requests should be implemented carefully to take that into account. If the computation is completed by some other event, then the request should be considered as canceled, take no effect, and not leak any resources.

⚠️ Raising an exception from a request function will result in raising that exception out of the sync. This may result in dropping the result of an event that committed just after the exception was raised. This means that you should treat an unexpected exception raised from sync as a fatal error. In addition, you should arrange for concurrent services to report unexpected errors independently of the computation being passed to the service.

val from_request : 'a request -> 'a t

from_request { request } creates an event from the request function.

val from_computation : 'a Picos.Computation.t -> 'a t

from_computation source creates an event that can be committed to once the given source computation has completed.

ℹ️ Committing to some other event does not cancel the source computation.

val of_fut : 'a Moonpool.Fut.t -> 'a t
module Infix : sig ... end
include module type of Infix
val (>|=) : 'a t -> ('a -> 'b) -> 'b t
val (let+) : 'a t -> ('a -> 'b) -> 'b t
diff --git a/moonpool/Moonpool_sync/Lock/index.html b/moonpool/Moonpool_sync/Lock/index.html deleted file mode 100644 index db9ae4d8..00000000 --- a/moonpool/Moonpool_sync/Lock/index.html +++ /dev/null @@ -1,13 +0,0 @@ - -Lock (moonpool.Moonpool_sync.Lock)

Module Moonpool_sync.Lock

Mutex-protected resource.

This lock is a synchronous concurrency primitive, as a thin wrapper around Mutex that encourages proper management of the critical section in RAII style:

  let (let@) = (@@)
-
-
-  …
-  let compute_foo =
-    (* enter critical section *)
-    let@ x = Lock.with_ protected_resource in
-    use_x;
-    return_foo ()
-    (* exit critical section *)
-  in
-  …

This lock is based on Picos_std_sync.Mutex so it is await-safe.

  • since 0.7
type 'a t

A value protected by a cooperative mutex

val create : 'a -> 'a t

Create a new protected value.

val with_ : 'a t -> ('a -> 'b) -> 'b

with_ l f runs f x in a critical section, where x is the value protected by the lock l. If f x fails, with_ l f fails too, but the lock is released.

val update : 'a t -> ('a -> 'a) -> unit

update l f replaces the content x of l with f x, while protected by the mutex.

val update_map : 'a t -> ('a -> 'a * 'b) -> 'b

update_map l f computes x', y = f (get l), then puts x' in l and returns y, while protected by the mutex.
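
For illustration, here is a minimal sketch of a lock-protected counter built from the operations above; the names counter, next_id and add are hypothetical:

  (* Sketch: next_id stores n + 1 and returns the previous n; add just updates. *)
  let counter : int Lock.t = Lock.create 0
  let next_id () = Lock.update_map counter (fun n -> (n + 1, n))
  let add k = Lock.update counter (fun n -> n + k)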

val mutex : _ t -> Picos_std_sync.Mutex.t

Underlying mutex.

val get : 'a t -> 'a

Atomically get the value in the lock. The value that is returned isn't protected!

val set : 'a t -> 'a -> unit

Atomically set the value.

NOTE: using get and set as if this were a ref is an anti-pattern and will not protect the data against some race conditions.

diff --git a/moonpool/Moonpool_sync/index.html b/moonpool/Moonpool_sync/index.html deleted file mode 100644 index 1c4e45c8..00000000 --- a/moonpool/Moonpool_sync/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Moonpool_sync (moonpool.Moonpool_sync)

Module Moonpool_sync

  • deprecated: use Picos_std_sync directly or single-threaded solutions
module Mutex = Picos_std_sync.Mutex
module Condition = Picos_std_sync.Condition
module Lock : sig ... end

Mutex-protected resource.

module Event : sig ... end
module Semaphore = Picos_std_sync.Semaphore
module Lazy = Picos_std_sync.Lazy
module Latch = Picos_std_sync.Latch
module Ivar = Picos_std_sync.Ivar
module Stream = Picos_std_sync.Stream
diff --git a/moonpool/Moonpool_sync__/index.html b/moonpool/Moonpool_sync__/index.html deleted file mode 100644 index 731f930e..00000000 --- a/moonpool/Moonpool_sync__/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Moonpool_sync__ (moonpool.Moonpool_sync__)

Module Moonpool_sync__

This module is hidden.

diff --git a/moonpool/Moonpool_sync__Event/index.html b/moonpool/Moonpool_sync__Event/index.html deleted file mode 100644 index 69a71612..00000000 --- a/moonpool/Moonpool_sync__Event/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Moonpool_sync__Event (moonpool.Moonpool_sync__Event)

Module Moonpool_sync__Event

This module is hidden.

diff --git a/moonpool/Moonpool_sync__Lock/index.html b/moonpool/Moonpool_sync__Lock/index.html deleted file mode 100644 index 28277df2..00000000 --- a/moonpool/Moonpool_sync__Lock/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Moonpool_sync__Lock (moonpool.Moonpool_sync__Lock)

Module Moonpool_sync__Lock

This module is hidden.

diff --git a/moonpool/index.html b/moonpool/index.html index c991ade0..06a93f0b 100644 --- a/moonpool/index.html +++ b/moonpool/index.html @@ -1,2 +1,2 @@ -index (moonpool.index)

Package moonpool

Package info

changes-files
readme-files
+index (moonpool.index)

Package moonpool

Package info

changes-files
readme-files
diff --git a/multicore-magic/Multicore_magic/Atomic_array/index.html b/multicore-magic/Multicore_magic/Atomic_array/index.html deleted file mode 100644 index c7ed5c79..00000000 --- a/multicore-magic/Multicore_magic/Atomic_array/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Atomic_array (multicore-magic.Multicore_magic.Atomic_array)

Module Multicore_magic.Atomic_array

Array of (potentially unboxed) atomic locations.

Where available, this uses an undocumented operation exported by the OCaml 5 runtime, caml_atomic_cas_field, which makes it possible to perform sequentially consistent atomic updates of record fields and array elements.

Hopefully a future version of OCaml provides more comprehensive and even more efficient support for both sequentially consistent and relaxed atomic operations on records and arrays.

type !'a t

Represents an array of atomic locations.

val make : int -> 'a -> 'a t

make n value creates a new array of n atomic locations, each having the given value.

val of_array : 'a array -> 'a t

of_array non_atomic_array creates a new array of atomic locations as a copy of the given non_atomic_array.

val init : int -> (int -> 'a) -> 'a t

init n fn is equivalent to of_array (Array.init n fn).

val length : 'a t -> int

length atomic_array returns the length of the atomic_array.

val unsafe_fenceless_get : 'a t -> int -> 'a

unsafe_fenceless_get atomic_array index reads and returns the value at the specified index of the atomic_array.

⚠️ The read is relaxed and may be reordered with respect to other reads and writes in program order.

⚠️ No bounds checking is performed.

val unsafe_fenceless_set : 'a t -> int -> 'a -> unit

unsafe_fenceless_set atomic_array index value writes the given value to the specified index of the atomic_array.

⚠️ The write is relaxed and may be reordered with respect to other reads and (non-initializing) writes in program order.

⚠️ No bounds checking is performed.

val unsafe_compare_and_set : 'a t -> int -> 'a -> 'a -> bool

unsafe_compare_and_set atomic_array index before after atomically updates the specified index of the atomic_array to the after value in case it had the before value and returns a boolean indicating whether that was the case. This operation is sequentially consistent and may not be reordered with respect to other reads and writes in program order.

⚠️ No bounds checking is performed.
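
As a small sketch of how these operations compose, here is a hypothetical helper that assumes the index is in bounds:

  (* Sketch: increment one slot with a CAS retry loop.  The relaxed read is
     acceptable here because unsafe_compare_and_set validates the value read. *)
  let incr_slot a i =
    let open Multicore_magic.Atomic_array in
    let rec loop () =
      let before = unsafe_fenceless_get a i in
      if not (unsafe_compare_and_set a i before (before + 1)) then loop ()
    in
    loop ()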

diff --git a/multicore-magic/Multicore_magic/Transparent_atomic/index.html b/multicore-magic/Multicore_magic/Transparent_atomic/index.html deleted file mode 100644 index 5d484966..00000000 --- a/multicore-magic/Multicore_magic/Transparent_atomic/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Transparent_atomic (multicore-magic.Multicore_magic.Transparent_atomic)

Module Multicore_magic.Transparent_atomic

A replacement for Stdlib.Atomic with fixes and performance improvements

Stdlib.Atomic.get is incorrectly subject to CSE optimization in OCaml 5.0.0 and 5.1.0. This can result in code being generated that can produce results that cannot be explained with the OCaml memory model. It can also sometimes result in code being generated where a manual optimization to avoid writing to memory is defeated by the compiler as the compiler eliminates a (repeated) read access. This module implements get such that the argument to Stdlib.Atomic.get is passed through Sys.opaque_identity, which prevents the compiler from applying the CSE optimization.

OCaml 5 generates inefficient accesses of 'a Stdlib.Atomic.t arrays assuming that the array might be an array of floating point numbers. That is because the Stdlib.Atomic.t type constructor is opaque, which means that the compiler cannot assume that _ Stdlib.Atomic.t is not the same as float. This module defines the type as private 'a ref, which allows the compiler to know that it cannot be the same as float, which allows the compiler to generate more efficient array accesses. This can both improve performance and reduce size of generated code when using arrays of atomics.

type !'a t = private 'a ref
val make : 'a -> 'a t
val make_contended : 'a -> 'a t
val get : 'a t -> 'a
val fenceless_get : 'a t -> 'a
val set : 'a t -> 'a -> unit
val fenceless_set : 'a t -> 'a -> unit
val exchange : 'a t -> 'a -> 'a
val compare_and_set : 'a t -> 'a -> 'a -> bool
val fetch_and_add : int t -> int -> int
val incr : int t -> unit
val decr : int t -> unit
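
A usage sketch, not part of the interface above: shadowing Stdlib.Atomic locally lets existing code pick up the fixed get and the more efficient array accesses described above.

  (* Sketch: an array of counters indexed without float-array checks. *)
  module Atomic = Multicore_magic.Transparent_atomic

  let counters = Array.init 16 (fun _ -> Atomic.make 0)
  let bump i = Atomic.incr counters.(i)
  let read i = Atomic.get counters.(i)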
diff --git a/multicore-magic/Multicore_magic/index.html b/multicore-magic/Multicore_magic/index.html deleted file mode 100644 index 041446be..00000000 --- a/multicore-magic/Multicore_magic/index.html +++ /dev/null @@ -1,33 +0,0 @@ - -Multicore_magic (multicore-magic.Multicore_magic)

Module Multicore_magic

This is a library of magic multicore utilities intended for experts for extracting the best possible performance from multicore OCaml.

Hopefully future releases of multicore OCaml will make this library obsolete!

Helpers for using padding to avoid false sharing

val copy_as_padded : 'a -> 'a

Depending on the object, either creates a shallow clone of it or returns it as is. When cloned, the clone will have extra padding words added after the last used word.

This is designed to help avoid false sharing. False sharing has a negative impact on multicore performance. Accesses of both atomic and non-atomic locations, whether read-only or read-write, may suffer from false sharing.

The intended use case for this is to pad all long lived objects that are being accessed highly frequently (read or written).

Many kinds of objects can be padded, for example:

  let padded_atomic = Multicore_magic.copy_as_padded (Atomic.make 101)
-  let padded_ref = Multicore_magic.copy_as_padded (ref 42)
-
-  let padded_record =
-    Multicore_magic.copy_as_padded { number = 76; pointer = [ 1; 2; 3 ] }
-
-  let padded_variant = Multicore_magic.copy_as_padded (Some 1)

Padding changes the length of an array. If you need to pad an array, use make_padded_array.

val copy_as : ?padded:bool -> 'a -> 'a

copy_as x by default simply returns x. When ~padded:true is explicitly specified, returns copy_as_padded x.

val make_padded_array : int -> 'a -> 'a array

Creates a padded array. The length of the returned array includes padding. Use length_of_padded_array to get the unpadded length.

val length_of_padded_array : 'a array -> int

Returns the length of an array created by make_padded_array without the padding.

WARNING: This is not guaranteed to work with copy_as_padded.

val length_of_padded_array_minus_1 : 'a array -> int

Returns the length of an array created by make_padded_array without the padding minus 1.

WARNING: This is not guaranteed to work with copy_as_padded.

Missing Atomic operations

val fenceless_get : 'a Stdlib.Atomic.t -> 'a

Get a value from the atomic without performing an acquire fence.

Consider the following prototypical example of a lock-free algorithm:

  let rec prototypical_lock_free_algorithm () =
-    let expected = Atomic.get atomic in
-    let desired = (* computed from expected *) in
-    if not (Atomic.compare_and_set atomic expected desired) then
-      (* failure, maybe retry *)
-    else
-      (* success *)

A potential performance problem with the above example is that it performs two acquire fences. Both the Atomic.get and the Atomic.compare_and_set perform an acquire fence. This may have a negative impact on performance.

Assuming the first fence is not necessary, we can rewrite the example using fenceless_get as follows:

  let rec prototypical_lock_free_algorithm () =
-    let expected = Multicore_magic.fenceless_get atomic in
-    let desired = (* computed from expected *) in
-    if not (Atomic.compare_and_set atomic expected desired) then
-      (* failure, maybe retry *)
-    else
-      (* success *)

Now only a single acquire fence is performed by Atomic.compare_and_set and performance may be improved.

val fenceless_set : 'a Stdlib.Atomic.t -> 'a -> unit

Set the value of an atomic without performing a full fence.

Consider the following example:

  let new_atomic = Atomic.make dummy_value in
-  (* prepare data_structure referring to new_atomic *)
-  Atomic.set new_atomic data_structure;
-  (* publish the data_structure: *)
-  Atomic.exchange old_atomic data_structure

A potential performance problem with the above example is that it performs two full fences. Both the Atomic.set used to initialize the data structure and the Atomic.exchange used to publish the data structure perform a full fence. The same would also apply in cases where Atomic.compare_and_set or Atomic.set would be used to publish the data structure. This may have a negative impact on performance.

Using fenceless_set we can rewrite the example as follows:

  let new_atomic = Atomic.make dummy_value in
-  (* prepare data_structure referring to new_atomic *)
-  Multicore_magic.fenceless_set new_atomic data_structure;
-  (* publish the data_structure: *)
-  Atomic.exchange old_atomic data_structure

Now only a single full fence is performed by Atomic.exchange and performance may be improved.

val fence : int Stdlib.Atomic.t -> unit

Perform a full acquire-release fence on the given atomic.

fence atomic is equivalent to ignore (Atomic.fetch_and_add atomic 0).

Fixes and workarounds

module Transparent_atomic : sig ... end

A replacement for Stdlib.Atomic with fixes and performance improvements

Missing functionality

module Atomic_array : sig ... end

Array of (potentially unboxed) atomic locations.

Avoiding contention

val instantaneous_domain_index : unit -> int

instantaneous_domain_index () potentially (re)allocates and returns a non-negative integer "index" for the current domain. The indices are guaranteed to be unique among the domains that exist at a point in time. Each call of instantaneous_domain_index () may return a different index.

The intention is that the returned value can be used as an index into a contention avoiding parallelism safe data structure. For example, a naïve scalable increment of one counter from an array of counters could be done as follows:

  let incr counters =
-    (* Assuming length of [counters] is a power of two and larger than
-       the number of domains. *)
-    let mask = Array.length counters - 1 in
-    let index = instantaneous_domain_index () in
-    Atomic.incr counters.(index land mask)

The implementation ensures that the indices are allocated as densely as possible at any given moment. This should allow allocating as many counters as needed and essentially eliminate contention.

On OCaml 4 instantaneous_domain_index () will always return 0.

diff --git a/multicore-magic/Multicore_magic__/index.html b/multicore-magic/Multicore_magic__/index.html deleted file mode 100644 index 2d83cc0c..00000000 --- a/multicore-magic/Multicore_magic__/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Multicore_magic__ (multicore-magic.Multicore_magic__)

Module Multicore_magic__

This module is hidden.

diff --git a/multicore-magic/Multicore_magic__Cache/index.html b/multicore-magic/Multicore_magic__Cache/index.html deleted file mode 100644 index 2b10bd13..00000000 --- a/multicore-magic/Multicore_magic__Cache/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Multicore_magic__Cache (multicore-magic.Multicore_magic__Cache)

Module Multicore_magic__Cache

This module is hidden.

diff --git a/multicore-magic/Multicore_magic__Index/index.html b/multicore-magic/Multicore_magic__Index/index.html deleted file mode 100644 index c5418c5c..00000000 --- a/multicore-magic/Multicore_magic__Index/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Multicore_magic__Index (multicore-magic.Multicore_magic__Index)

Module Multicore_magic__Index

This module is hidden.

diff --git a/multicore-magic/Multicore_magic__Padding/index.html b/multicore-magic/Multicore_magic__Padding/index.html deleted file mode 100644 index 1e2aab94..00000000 --- a/multicore-magic/Multicore_magic__Padding/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Multicore_magic__Padding (multicore-magic.Multicore_magic__Padding)

Module Multicore_magic__Padding

This module is hidden.

diff --git a/multicore-magic/Multicore_magic__Transparent_atomic/index.html b/multicore-magic/Multicore_magic__Transparent_atomic/index.html deleted file mode 100644 index 077e6ff1..00000000 --- a/multicore-magic/Multicore_magic__Transparent_atomic/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Multicore_magic__Transparent_atomic (multicore-magic.Multicore_magic__Transparent_atomic)

Module Multicore_magic__Transparent_atomic

This module is hidden.

diff --git a/multicore-magic/_doc-dir/CHANGES.md b/multicore-magic/_doc-dir/CHANGES.md deleted file mode 100644 index 76db60f5..00000000 --- a/multicore-magic/_doc-dir/CHANGES.md +++ /dev/null @@ -1,38 +0,0 @@ -## 2.3.1 - -- Allow unboxed `Atomic_array` on 5.3 (@polytypic) -- Support js_of_ocaml (@polytypic) - -## 2.3.0 - -- Add `copy_as ~padded` for convenient optional padding (@polytypic) -- Add `multicore-magic-dscheck` package and library to help testing with DScheck - (@lyrm, review @polytypic) - -## 2.2.0 - -- Add (unboxed) `Atomic_array` (@polytypic) - -## 2.1.0 - -- Added `instantaneous_domain_index` for the implementation of contention - avoiding data structures. (@polytypic) -- Added `Transparent_atomic` module as a workaround to CSE issues in OCaml 5.0 - and OCaml 5.1 and also to allow more efficient arrays of atomics. (@polytypic) -- Fixed `fenceless_get` to not be subject to CSE. (@polytypic) - -## 2.0.0 - -- Changed the semantics of `copy_as_padded` to not always copy and to not - guarantee that `length_of_padded_array*` works with it. These semantic changes - allow better use of the OCaml allocator to guarantee cache friendly alignment. - (@polytypic) - -## 1.0.1 - -- Ported the library to OCaml 4 (@polytypic) -- License changed to ISC from 0BSD (@tarides) - -## 1.0.0 - -- Initial release (@polytypic) diff --git a/multicore-magic/_doc-dir/LICENSE.md b/multicore-magic/_doc-dir/LICENSE.md deleted file mode 100644 index 5da69623..00000000 --- a/multicore-magic/_doc-dir/LICENSE.md +++ /dev/null @@ -1,13 +0,0 @@ -Copyright © 2023 Vesa Karvonen - -Permission to use, copy, modify, and/or distribute this software for any purpose -with or without fee is hereby granted, provided that the above copyright notice -and this permission notice appear in all copies. - -THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH -REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND -FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, -INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS -OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER -TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF -THIS SOFTWARE. diff --git a/multicore-magic/_doc-dir/README.md b/multicore-magic/_doc-dir/README.md deleted file mode 100644 index f0af7454..00000000 --- a/multicore-magic/_doc-dir/README.md +++ /dev/null @@ -1,8 +0,0 @@ -[API reference](https://ocaml-multicore.github.io/multicore-magic/doc/multicore-magic/Multicore_magic/index.html) - -# **multicore-magic** — Low-level multicore utilities for OCaml - -This is a library of magic multicore utilities intended for experts for -extracting the best possible performance from multicore OCaml. - -Hopefully future releases of multicore OCaml will make this library obsolete! diff --git a/multicore-magic/index.html b/multicore-magic/index.html deleted file mode 100644 index 9d1debcc..00000000 --- a/multicore-magic/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -index (multicore-magic.index)

Package multicore-magic

  • Multicore_magic This is a library of magic multicore utilities intended for experts for extracting the best possible performance from multicore OCaml.

Package info

changes-files
license-files
readme-files
diff --git a/picos_std/Picos_std_awaitable/Awaitable/Awaiter/index.html b/picos_std/Picos_std_awaitable/Awaitable/Awaiter/index.html deleted file mode 100644 index 51047b93..00000000 --- a/picos_std/Picos_std_awaitable/Awaitable/Awaiter/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Awaiter (picos_std.Picos_std_awaitable.Awaitable.Awaiter)

Module Awaitable.Awaiter

Low level interface for more flexible waiting.

type 'a awaitable := 'a t

An erased type alias for Awaitable.t.

type t

Represents a single use awaiter of a signal to an awaitable.

val add : 'a awaitable -> Picos.Trigger.t -> t

add awaitable trigger creates a single use awaiter, adds it to the FIFO associated with the awaitable, and returns the awaiter.

val remove : t -> unit

remove awaiter marks the awaiter as having been signaled and removes it from the FIFO associated with the awaitable.

ℹ️ If the associated trigger is used with only one awaiter and the Trigger.await on the trigger returns None, there is no need to explicitly remove the awaiter, because it has already been removed.

diff --git a/picos_std/Picos_std_awaitable/Awaitable/index.html b/picos_std/Picos_std_awaitable/Awaitable/index.html deleted file mode 100644 index 96244481..00000000 --- a/picos_std/Picos_std_awaitable/Awaitable/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Awaitable (picos_std.Picos_std_awaitable.Awaitable)

Module Picos_std_awaitable.Awaitable

An awaitable atomic location.

This module provides a superset of the Stdlib Atomic API with more or less identical performance. The main difference is that a non-padded awaitable location takes an extra word of memory. Additionally a futex-like API provides the ability to await until an awaitable location is explicitly signaled to potentially have a different value.

Awaitable locations can be used to implement many kinds of synchronization and communication primitives.

Atomic API

type !'a t

Represents an awaitable atomic location.

val make : ?padded:bool -> 'a -> 'a t

make initial creates a new awaitable atomic location with the given initial value.

val make_contended : 'a -> 'a t

make_contended initial is equivalent to make ~padded:true initial.

val get : 'a t -> 'a

get awaitable is essentially equivalent to Atomic.get awaitable.

val compare_and_set : 'a t -> 'a -> 'a -> bool

compare_and_set awaitable before after is essentially equivalent to Atomic.compare_and_set awaitable before after.

val exchange : 'a t -> 'a -> 'a

exchange awaitable after is essentially equivalent to Atomic.exchange awaitable after.

val set : 'a t -> 'a -> unit

set awaitable value is equivalent to exchange awaitable value |> ignore.

val fetch_and_add : int t -> int -> int

fetch_and_add awaitable delta is essentially equivalent to Atomic.fetch_and_add awaitable delta.

val incr : int t -> unit

incr awaitable is equivalent to fetch_and_add awaitable (+1) |> ignore.

val decr : int t -> unit

decr awaitable is equivalent to fetch_and_add awaitable (-1) |> ignore.

Futex API

val signal : 'a t -> unit

signal awaitable tries to wake up one fiber awaiting on the awaitable location.

🐌 Generally speaking one should avoid calling signal too frequently, because the queue of awaiters is stored separately from the awaitable location and it takes a bit of effort to locate it. For example, calling signal every time a value is added to an empty data structure might not be optimal. In many cases it is faster to explicitly mark the potential presence of awaiters in the data structure and avoid calling signal when it is definitely known that there are no awaiters.

val broadcast : 'a t -> unit

broadcast awaitable tries to wake up all fibers awaiting on the awaitable location.

🐌 The same advice as with signal applies to broadcast. In addition, it is typically a good idea to avoid potentially waking up large numbers of fibers as it can easily lead to the thundering herd phenomenon.

val await : 'a t -> 'a -> unit

await awaitable before suspends the current fiber until the awaitable is explicitly signaled and has a value other than before.

⚠️ This operation is subject to the ABA problem. An await for a value other than A may not return after the awaitable is signaled while it has the value B, because at a later point the awaitable may again have the value A. Furthermore, by the time an await for a value other than A returns, the awaitable might already have the value A again.

⚠️ Atomic operations that change the value of an awaitable do not implicitly wake up awaiters.
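
To make the explicit signaling requirement concrete, here is a minimal sketch of a one-shot boolean flag; the names are hypothetical:

  (* Sketch: set alone does not wake awaiters, so the setter signals explicitly. *)
  let make_flag () = Awaitable.make false

  let set_flag flag =
    Awaitable.set flag true;
    Awaitable.broadcast flag

  let await_flag flag =
    while not (Awaitable.get flag) do
      Awaitable.await flag false
    done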

module Awaiter : sig ... end

Low level interface for more flexible waiting.

diff --git a/picos_std/Picos_std_awaitable/index.html b/picos_std/Picos_std_awaitable/index.html deleted file mode 100644 index c4b6399c..00000000 --- a/picos_std/Picos_std_awaitable/index.html +++ /dev/null @@ -1,43 +0,0 @@ - -Picos_std_awaitable (picos_std.Picos_std_awaitable)

Module Picos_std_awaitable

Basic futex-like awaitable atomic location for Picos.

Modules

module Awaitable : sig ... end

An awaitable atomic location.

Examples

We first open the library to bring the Awaitable module into scope:

  # open Picos_std_awaitable

Mutex

Here is a basic mutex implementation using awaitables:

  module Mutex = struct
-    type t = int Awaitable.t
-
-    let create ?padded () = Awaitable.make ?padded 0
-
-    let lock t =
-      if not (Awaitable.compare_and_set t 0 1) then
-        while Awaitable.exchange t 2 <> 0 do
-          Awaitable.await t 2
-        done
-
-    let unlock t =
-      let before = Awaitable.fetch_and_add t (-1) in
-      if before = 2 then begin
-        Awaitable.set t 0;
-        Awaitable.signal t
-      end
-  end

The above mutex outperforms most other mutexes under both no/low and high contention scenarios. In no/low contention scenarios the use of fetch_and_add provides low overhead. In high contention scenarios the above mutex allows unfairness, which avoids performance degradation due to the lock convoy phenomenon.

Condition

Let's also implement a condition variable. For that we'll also make use of low level abstractions and operations from the Picos core library:

  # open Picos

To implement a condition variable, we'll use the Awaiter API:

  module Condition = struct
-    type t = unit Awaitable.t
-
-    let create () = Awaitable.make ()
-
-    let wait t mutex =
-      let trigger = Trigger.create () in
-      let awaiter = Awaitable.Awaiter.add t trigger in
-      Mutex.unlock mutex;
-      let lock_forbidden mutex =
-        let fiber = Fiber.current () in
-        let forbid = Fiber.exchange fiber ~forbid:true in
-        Mutex.lock mutex;
-        Fiber.set fiber ~forbid
-      in
-      match Trigger.await trigger with
-      | None -> lock_forbidden mutex
-      | Some exn_bt ->
-          Awaitable.Awaiter.remove awaiter;
-          lock_forbidden mutex;
-          Printexc.raise_with_backtrace (fst exn_bt) (snd exn_bt)
-
-    let signal = Awaitable.signal
-    let broadcast = Awaitable.broadcast
-  end

Notice that the awaitable location used in the above condition variable implementation is never mutated. We just reuse the signaling mechanism of awaitables.

diff --git a/picos_std/Picos_std_event/Event/index.html b/picos_std/Picos_std_event/Event/index.html deleted file mode 100644 index 2fb52dc2..00000000 --- a/picos_std/Picos_std_event/Event/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Event (picos_std.Picos_std_event.Event)

Module Picos_std_event.Event

First-class synchronous communication abstraction.

Events describe a thing that might happen in the future, or a concurrent offer or request that might be accepted or succeed, but is cancelable if some other event happens first.

See the Picos_io_select library for an example.

ℹ️ This module intentionally mimics the Event module provided by the OCaml POSIX threads library.

type !'a t

An event returning a value of type 'a.

type 'a event = 'a t

An alias for the Event.t type to match the Event module signature.

val always : 'a -> 'a t

always value returns an event that can always be committed to resulting in the given value.

Composing events

val choose : 'a t list -> 'a t

choose events returns an event that offers all of the given events and then commits to at most one of them.

val wrap : 'b t -> ('b -> 'a) -> 'a t

wrap event fn returns an event that acts as the given event and then applies the given function to the value in case the event is committed to.

val map : ('b -> 'a) -> 'b t -> 'a t

map fn event is equivalent to wrap event fn.

val guard : (unit -> 'a t) -> 'a t

guard thunk returns an event that, when synchronized, calls the thunk, and then behaves like the resulting event.

⚠️ Raising an exception from a guard thunk will result in raising that exception out of the sync. This may result in dropping the result of an event that committed just after the exception was raised. This means that you should treat an unexpected exception raised from sync as a fatal error.

Consuming events

val sync : 'a t -> 'a

sync event synchronizes on the given event.

Synchronizing on an event executes in three phases:

  1. In the first phase offers or requests are made to communicate.
  2. One of the offers or requests is committed to and all the other offers and requests are canceled.
  3. A final result is computed from the value produced by the event.

⚠️ sync event does not wait for the canceled concurrent requests to terminate. This means that you should arrange for guaranteed cleanup through other means such as the use of structured concurrency.

val select : 'a t list -> 'a

select events is equivalent to sync (choose events).

Primitive events

ℹ️ The Computation concept of Picos can be seen as a basic single-shot atomic event. This module builds on that concept to provide a composable API to concurrent services exposed through computations.

type 'a request = {
  request : 'r. (unit -> 'r) Picos.Computation.t -> ('a -> 'r) -> unit;
}

Represents a function that requests a concurrent service to update a computation.

ℹ️ The computation passed to a request may be completed by some other event at any point. All primitive requests should be implemented carefully to take that into account. If the computation is completed by some other event, then the request should be considered as canceled, take no effect, and not leak any resources.

⚠️ Raising an exception from a request function will result in raising that exception out of the sync. This may result in dropping the result of an event that committed just after the exception was raised. This means that you should treat an unexpected exception raised from sync as a fatal error. In addition, you should arrange for concurrent services to report unexpected errors independently of the computation being passed to the service.

val from_request : 'a request -> 'a t

from_request { request } creates an event from the request function.

val from_computation : 'a Picos.Computation.t -> 'a t

from_computation source creates an event that can be committed to once the given source computation has completed.

ℹ️ Committing to some other event does not cancel the source computation.
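
As an illustrative sketch, a single-use "fire once" event can be built on top of a computation. This assumes the Computation.create and Computation.return operations of the Picos core library, and that Picos_std_event is opened so this module is in scope as Event:

  (* Sketch: fire completes the computation, after which the event can be
     committed to by sync or select. *)
  let one_shot () =
    let computation = Picos.Computation.create () in
    let fire value = Picos.Computation.return computation value in
    (fire, Event.from_computation computation)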

diff --git a/picos_std/Picos_std_event/index.html b/picos_std/Picos_std_event/index.html deleted file mode 100644 index ec26e00b..00000000 --- a/picos_std/Picos_std_event/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Picos_std_event (picos_std.Picos_std_event)

Module Picos_std_event

Basic event abstraction for Picos.

module Event : sig ... end

First-class synchronous communication abstraction.

diff --git a/picos_std/Picos_std_event__/index.html b/picos_std/Picos_std_event__/index.html deleted file mode 100644 index b9c7aec0..00000000 --- a/picos_std/Picos_std_event__/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Picos_std_event__ (picos_std.Picos_std_event__)

Module Picos_std_event__

This module is hidden.

diff --git a/picos_std/Picos_std_event__Event/index.html b/picos_std/Picos_std_event__Event/index.html deleted file mode 100644 index 4791df05..00000000 --- a/picos_std/Picos_std_event__Event/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Picos_std_event__Event (picos_std.Picos_std_event__Event)

Module Picos_std_event__Event

This module is hidden.

diff --git a/picos_std/Picos_std_finally/index.html b/picos_std/Picos_std_finally/index.html deleted file mode 100644 index c3f1ecbd..00000000 --- a/picos_std/Picos_std_finally/index.html +++ /dev/null @@ -1,71 +0,0 @@ - -Picos_std_finally (picos_std.Picos_std_finally)

Module Picos_std_finally

Syntax for avoiding resource leaks for Picos.

A resource is something that is acquired and must be released after it is no longer needed.

⚠️ Beware that the Stdlib Fun.protect ~finally helper does not protect against cancelation propagation when it calls finally (). This means that cancelable operations performed by finally may be terminated and resources might be leaked. So, if you want to avoid resource leaks, you should either use lastly or explicitly protect against cancelation propagation.

We open both this library and a few other libraries

  open Picos_io
-  open Picos_std_finally
-  open Picos_std_structured
-  open Picos_std_sync

for the examples.

API

Basics

val (let@) : ('a -> 'b) -> 'a -> 'b

let@ resource = template in scope is equivalent to template (fun resource -> scope).

ℹ️ You can use this binding operator with any template function that has a type of the form ('r -> 'a) -> 'a.

val finally : ('r -> unit) -> (unit -> 'r) -> ('r -> 'a) -> 'a

finally release acquire scope calls acquire () to obtain a resource, calls scope resource, and then calls release resource after the scope exits.

ℹ️ Cancelation propagation will be forbidden during the call of release.

val lastly : (unit -> unit) -> (unit -> 'a) -> 'a

lastly action scope is equivalent to finally action Fun.id scope.

ℹ️ Cancelation propagation will be forbidden during the call of action.
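
For example, here is a minimal sketch that reads from a file and closes the channel on all exit paths; the path argument is hypothetical:

  (* Sketch: acquire with open_in, release with close_in, even if input_line raises. *)
  let read_first_line path =
    let@ ic = finally close_in (fun () -> open_in path) in
    input_line ic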

Instances

type 'r instance

Either contains a resource or is empty as the resource has been transferred, dropped, or has been borrowed temporarily.

val instantiate : ('r -> unit) -> (unit -> 'r) -> ('r instance -> 'a) -> 'a

instantiate release acquire scope calls acquire () to obtain a resource and stores it as an instance, calls scope instance. Then, if scope returns normally, awaits until the instance becomes empty. In case scope raises an exception or the fiber is canceled, the instance will be dropped.

ℹ️ Cancelation propagation will be forbidden during the call of release.

val drop : 'r instance -> unit

drop instance releases the resource, if any, contained by the instance.

  • raises Invalid_argument

    if the resource has been borrowed and hasn't yet been returned.

val borrow : 'r instance -> ('r -> 'a) -> 'a

borrow instance scope borrows the resource stored in the instance, calls scope resource, and then returns the resource to the instance after scope exits.

  • raises Invalid_argument

    if the resource has already been borrowed and hasn't yet been returned, has already been dropped, or has already been transferred.

val transfer : 'r instance -> ('r instance -> 'a) -> 'a

transfer source transfers the resource stored in the source instance into a new target instance, calls scope target. Then, if scope returns normally, awaits until the target instance becomes empty. In case scope raises an exception or the fiber is canceled, the target instance will be dropped.

  • raises Invalid_argument

    if the resource has been borrowed and hasn't yet been returned, has already been transferred, or has been dropped unless the current fiber has been canceled, in which case the exception that the fiber was canceled with will be raised.

val move : 'r instance -> ('r -> 'a) -> 'a

move instance scope is equivalent to transfer instance (fun instance -> borrow instance scope).

Examples

Recursive server

Here is a sketch of a server that recursively forks a fiber to accept and handle a client:

  let recursive_server server_fd =
-    Flock.join_after @@ fun () ->
-
-    (* recursive server *)
-    let rec accept () =
-      let@ client_fd =
-        finally Unix.close @@ fun () ->
-        Unix.accept ~cloexec:true server_fd
-        |> fst
-      in
-
-      (* fork to accept other clients *)
-      Flock.fork accept;
-
-      (* handle this client... omitted *)
-      ()
-    in
-    Flock.fork accept

Looping server

There is also a way to move instantiated resources to allow forking fibers to handle clients without leaks.

Here is a sketch of a server that accepts in a loop and forks fibers to handle clients:

  let looping_server server_fd =
-    Flock.join_after @@ fun () ->
-
-    (* loop to accept clients *)
-    while true do
-      let@ client_fd =
-        instantiate Unix.close @@ fun () ->
-        Unix.accept ~cloexec:true server_fd
-        |> fst
-      in
-
-      (* fork to handle this client *)
-      Flock.fork @@ fun () ->
-        let@ client_fd = move client_fd in
-
-        (* handle client... omitted *)
-        ()
-    done

Move resource from child to parent

You can move an instantiated resource between any two fibers and borrow it before moving it. For example, you can create a resource in a child fiber, use it there, and then move it to the parent fiber:

  let move_from_child_to_parent () =
-    Flock.join_after @@ fun () ->
-
-    (* for communicating a resource *)
-    let shared_ivar = Ivar.create () in
-
-    (* fork a child that creates a resource *)
-    Flock.fork begin fun () ->
-      let pretend_release () = ()
-      and pretend_acquire () = () in
-
-      (* allocate a resource *)
-      let@ instance =
-        instantiate pretend_release pretend_acquire
-      in
-
-      begin
-        (* borrow the resource *)
-        let@ resource = borrow instance in
-
-        (* use the resource... omitted *)
-        ()
-      end;
-
-      (* send the resource to the parent *)
-      Ivar.fill shared_ivar instance
-    end;
-
-    (* await for a resource from the child and own it *)
-    let@ resource = Ivar.read shared_ivar |> move in
-
-    (* use the resource... omitted *)
-    ()

The above uses an Ivar to communicate the movable resource from the child fiber to the parent fiber. Any concurrency safe mechanism could be used.

diff --git a/picos_std/Picos_std_structured/Bundle/index.html b/picos_std/Picos_std_structured/Bundle/index.html deleted file mode 100644 index 7439097f..00000000 --- a/picos_std/Picos_std_structured/Bundle/index.html +++ /dev/null @@ -1,6 +0,0 @@ - -Bundle (picos_std.Picos_std_structured.Bundle)

Module Picos_std_structured.Bundle

An explicit dynamic bundle of fibers guaranteed to be joined at the end.

Bundles allow you to conveniently structure or delimit concurrency into nested scopes. After a bundle returns or raises an exception, no fibers forked to the bundle remain.

An unhandled exception, or error, within any fiber of the bundle causes all of the fibers forked to the bundle to be canceled and the bundle to raise the error exception or error exceptions raised by all of the fibers forked into the bundle.

type t

Represents a bundle of fibers.

val join_after : ?callstack:int -> ?on_return:[ `Terminate | `Wait ] -> (t -> 'a) -> 'a

join_after scope calls scope with a bundle. A call of join_after returns or raises only after scope has returned or raised and all forked fibers have terminated. If scope raises an exception, error will be called.

The optional on_return argument specifies what to do when the scope returns normally. It defaults to `Wait, which means to just wait for all the fibers to terminate on their own. When explicitly specified as ~on_return:`Terminate, then terminate ?callstack will be called on return. This can be convenient, for example, when dealing with daemon fibers.

val terminate : ?callstack:int -> t -> unit

terminate bundle cancels all of the forked fibers using the Terminate exception. After terminate has been called, no new fibers can be forked to the bundle.

The optional callstack argument specifies the number of callstack entries to capture with the Terminate exception. The default is 0.

ℹ️ Calling terminate at the end of a bundle can be a convenient way to cancel any background fibers started by the bundle.

ℹ️ Calling terminate does not raise the Terminate exception, but blocking operations after terminate will raise the exception to propagate cancelation unless propagation of cancelation is forbidden.

val terminate_after : ?callstack:int -> t -> seconds:float -> unit

terminate_after ~seconds bundle arranges to terminate the bundle after the specified timeout in seconds.

val error : ?callstack:int -> t -> exn -> Stdlib.Printexc.raw_backtrace -> unit

error bundle exn bt first calls terminate and then adds the exception with backtrace to the list of exceptions to be raised, unless the exception is the Terminate exception, which is not considered to signal an error by itself.

The optional callstack argument is passed to terminate.

val fork_as_promise : t -> (unit -> 'a) -> 'a Promise.t

fork_as_promise bundle thunk spawns a new fiber to the bundle that will run the given thunk. The result of the thunk will be written to the promise. If the thunk raises an exception, error will be called with that exception.

val fork : t -> (unit -> unit) -> unit

fork bundle action is equivalent to fork_as_promise bundle action |> ignore.
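
As a small sketch of how these operations fit together, here is a hypothetical helper that runs two thunks concurrently and waits for both results; it assumes Promise.await from the sibling Promise module:

  (* Sketch: fork both thunks into the bundle, then await both promises. *)
  let both f g =
    Bundle.join_after @@ fun bundle ->
    let a = Bundle.fork_as_promise bundle f in
    let b = Bundle.fork_as_promise bundle g in
    (Promise.await a, Promise.await b)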

diff --git a/picos_std/Picos_std_structured/Control/index.html b/picos_std/Picos_std_structured/Control/index.html deleted file mode 100644 index 53cd044a..00000000 --- a/picos_std/Picos_std_structured/Control/index.html +++ /dev/null @@ -1,17 +0,0 @@ - -Control (picos_std.Picos_std_structured.Control)

Module Picos_std_structured.Control

Basic control operations and exceptions for structured concurrency.

exception Terminate

An exception that is used to signal fibers, typically by canceling them, that they should terminate by letting the exception propagate.

ℹ️ Within this library, the Terminate exception does not, by itself, indicate an error. Raising it inside a fiber forked within the structured concurrency constructs of this library simply causes the relevant part of the tree of fibers to be terminated.

⚠️ If Terminate is raised in the main fiber of a Bundle, and no other exceptions are raised within any fiber inside the bundle, the bundle will then, of course, raise the Terminate exception after all of the fibers have been terminated.

exception Errors of (exn * Stdlib.Printexc.raw_backtrace) list

An exception that can be used to collect exceptions, typically indicating errors, from multiple fibers.

ℹ️ The Terminate exception is not considered an error within this library and the structuring constructs do not include it in the list of Errors.

val raise_if_canceled : unit -> unit

raise_if_canceled () checks whether the current fiber has been canceled and if so raises the exception that the fiber was canceled with.

ℹ️ Within this library fibers are canceled using the Terminate exception.

val yield : unit -> unit

yield () asks the current fiber to be rescheduled.

val sleep : seconds:float -> unit

sleep ~seconds suspends the current fiber for the specified number of seconds.

val protect : (unit -> 'a) -> 'a

protect thunk forbids propagation of cancelation for the duration of thunk ().

ℹ️ Many operations are cancelable. In particular, anything that might suspend the current fiber to await for something should typically be cancelable. Operations that release resources may sometimes also be cancelable and calls of such operations should typically be protected to ensure that resources will be properly released. Forbidding propagation of cancelation may also be required when a sequence of cancelable operations must be performed.

ℹ️ With the constructs provided by this library it is not possible to prevent a fiber from being canceled, but it is possible for a fiber to forbid the scheduler from propagating cancelation to the fiber.

val block : unit -> 'a

block () suspends the current fiber until it is canceled at which point the cancelation exception will be raised.

  • raises Invalid_argument

    in case propagation of cancelation has been forbidden.

  • raises Sys_error

    in case the underlying computation of the fiber is forced to return during block. This is only possible when the fiber has been spawned through another library.

val terminate_after : ?callstack:int -> seconds:float -> (unit -> 'a) -> 'a

terminate_after ~seconds thunk arranges to terminate the execution of thunk on the current fiber after the specified timeout in seconds.

Using terminate_after one can attempt any blocking operation that supports cancelation with a timeout. For example, one could try to read an Ivar with a timeout

  let peek_in ~seconds ivar =
-    match
-      Control.terminate_after ~seconds @@ fun () ->
-        Ivar.read ivar
-    with
-    | value -> Some value
-    | exception Control.Terminate -> None

or one could try to connect a socket with a timeout

  let try_connect_in ~seconds socket sockaddr =
-    match
-      Control.terminate_after ~seconds @@ fun () ->
-        Unix.connect socket sockaddr
-    with
-    | () -> true
-    | exception Control.Terminate -> false

using the Picos_io.Unix module.

The optional callstack argument specifies the number of callstack entries to capture with the Terminate exception. The default is 0.

As an example, terminate_after could be implemented using Bundle as follows:

  let terminate_after ?callstack ~seconds thunk =
-    Bundle.join_after @@ fun bundle ->
-    Bundle.terminate_after ?callstack ~seconds bundle;
-    thunk ()
diff --git a/picos_std/Picos_std_structured/Flock/index.html b/picos_std/Picos_std_structured/Flock/index.html deleted file mode 100644 index d483c86f..00000000 --- a/picos_std/Picos_std_structured/Flock/index.html +++ /dev/null @@ -1,6 +0,0 @@ - -Flock (picos_std.Picos_std_structured.Flock)

Module Picos_std_structured.Flock

An implicit dynamic flock of fibers guaranteed to be joined at the end.

Flocks allow you to conveniently structure or delimit concurrency into nested scopes. After a flock returns or raises an exception, no fibers forked to the flock remain.

An unhandled exception, or error, within any fiber of the flock causes all of the fibers forked to the flock to be canceled and the flock to raise the error exception, or collection of error exceptions, raised by the fibers forked into the flock.

ℹ️ This is essentially a very thin convenience wrapper for an implicitly propagated Bundle.

⚠️ All of the operations in this module, except join_after, raise the Invalid_argument exception in case they are called from outside of the dynamic multifiber scope of a flock established by calling join_after.

val join_after : ?callstack:int -> ?on_return:[ `Terminate | `Wait ] -> (unit -> 'a) -> 'a

join_after scope creates a new flock for fibers, calls scope after setting current flock to the new flock, and restores the previous flock, if any, after scope exits. The flock will be implicitly propagated to all fibers forked into the flock. A call of join_after returns or raises only after scope has returned or raised and all forked fibers have terminated. If scope raises an exception, error will be called.

The optional on_return argument specifies what to do when the scope returns normally. It defaults to `Wait, which means to just wait for all the fibers to terminate on their own. When explicitly specified as ~on_return:`Terminate, then terminate ?callstack will be called on return. This can be convenient, for example, when dealing with daemon fibers.
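
For example, a background ticker can be left running as a daemon and simply terminated when the scope returns. A small sketch, assuming Picos_std_structured is opened:

  let with_ticker main =
    Flock.join_after ~on_return:`Terminate @@ fun () ->
    Flock.fork begin fun () ->
      while true do
        Control.sleep ~seconds:1.0;
        Printf.printf "tick\n%!"
      done
    end;
    main ()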

val terminate : ?callstack:int -> unit -> unit

terminate () cancels all of the forked fibers using the Terminate exception. After terminate has been called, no new fibers can be forked to the current flock.

The optional callstack argument specifies the number of callstack entries to capture with the Terminate exception. The default is 0.

ℹ️ Calling terminate at the end of a flock can be a convenient way to cancel any background fibers started by the flock.

ℹ️ Calling terminate does not raise the Terminate exception, but blocking operations after terminate will raise the exception to propagate cancelation unless propagation of cancelation is forbidden.

val terminate_after : ?callstack:int -> seconds:float -> unit -> unit

terminate_after ~seconds () arranges to terminate the current flock after the specified timeout in seconds.

val error : ?callstack:int -> exn -> Stdlib.Printexc.raw_backtrace -> unit

error exn bt first calls terminate and then adds the exception with backtrace to the list of exceptions to be raised, unless the exception is the Terminate exception, which is not considered to signal an error by itself.

The optional callstack argument is passed to terminate.

val fork_as_promise : (unit -> 'a) -> 'a Promise.t

fork_as_promise thunk spawns a new fiber to the current flock that will run the given thunk. The result of the thunk will be written to the promise. If the thunk raises an exception, error will be called with that exception.
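
For example, one can fork a computation into the current flock and await its result elsewhere in the same scope. A minimal sketch:

  let compute_in_background () =
    Flock.join_after @@ fun () ->
    let promise =
      Flock.fork_as_promise @@ fun () ->
      List.fold_left (+) 0 [ 1; 2; 3 ]
    in
    Promise.await promise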

val fork : (unit -> unit) -> unit

fork action is equivalent to fork_as_promise action |> ignore.

diff --git a/picos_std/Picos_std_structured/Promise/index.html b/picos_std/Picos_std_structured/Promise/index.html deleted file mode 100644 index ff53885a..00000000 --- a/picos_std/Picos_std_structured/Promise/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Promise (picos_std.Picos_std_structured.Promise)

Module Picos_std_structured.Promise

A cancelable promise.

ℹ️ In addition to using a promise to concurrently compute and return a value, a cancelable promise can also represent a concurrent fiber that will continue until it is explicitly canceled.

⚠️ Canceling a promise does not immediately terminate the fiber or wait for the fiber that is working to complete the promise to terminate. Constructs like Bundle.join_after and Flock.join_after only guarantee that all fibers forked within their scope have terminated before they return or raise. The reason for this design choice in this library is that synchronization is expensive and delaying synchronization to the join operation is typically sufficient and amortizes the cost.

type !'a t

Represents a promise to produce a value of type 'a.

val of_value : 'a -> 'a t

of_value value returns a constant completed promise that returns the given value.

ℹ️ Promises can also be created in the scope of a Bundle or a Flock.

val await : 'a t -> 'a

await promise awaits until the promise has completed and either returns the value that the evaluation of the promise returned, raises the exception that the evaluation of the promise raised, or raises the Terminate exception in case the promise has been canceled.

⚠️ The fiber corresponding to a canceled promise is not guaranteed to have terminated at the point await raises.

val completed : 'a t -> 'a Picos_std_event.Event.t

completed promise returns an event that can be committed to once the promise has completed.

val is_running : 'a t -> bool

is_running promise determines whether the completion of the promise is still pending.

val try_terminate : ?callstack:int -> 'a t -> bool

try_terminate promise tries to terminate the promise by canceling it with the Terminate exception and returns true in case of success and false in case the promise had already completed, i.e. either returned, raised, or canceled.

The optional callstack argument specifies the number of callstack entries to capture with the Terminate exception. The default is 0.

val terminate : ?callstack:int -> 'a t -> unit

terminate promise is equivalent to try_terminate promise |> ignore.

val terminate_after : ?callstack:int -> 'a t -> seconds:float -> unit

terminate_after ~seconds promise arranges to terminate the promise by canceling it with the Terminate exception after the specified timeout in seconds.

The optional callstack argument specifies the number of callstack entries to capture with the Terminate exception. The default is 0.

diff --git a/picos_std/Picos_std_structured/Run/index.html b/picos_std/Picos_std_structured/Run/index.html deleted file mode 100644 index 49aa0da9..00000000 --- a/picos_std/Picos_std_structured/Run/index.html +++ /dev/null @@ -1,12 +0,0 @@ - -Run (picos_std.Picos_std_structured.Run)

Module Picos_std_structured.Run

Operations for running fibers in specific patterns.

val all : (unit -> unit) list -> unit

all actions starts the actions as separate fibers and waits until they all complete or one of them raises an unhandled exception other than Terminate, which is not counted as an error, after which the remaining fibers will be canceled.

⚠️ One of actions may be run on the current fiber.

⚠️ It is not guaranteed that any of the actions in the list are called. In particular, after any action raises an unhandled exception or after the main fiber is canceled, the actions that have not yet started may be skipped entirely.

all is roughly equivalent to

  let all actions =
    Bundle.join_after @@ fun bundle ->
    List.iter (Bundle.fork bundle) actions

but treats the list of actions as a single computation.

val any : (unit -> unit) list -> unit

any actions starts the actions as separate fibers and waits until one of them completes or raises an unhandled exception other than Terminate, which is not counted as an error, after which the rest of the started fibers will be canceled.

⚠️ One of actions may be run on the current fiber.

⚠️ It is not guaranteed that any of the actions in the list are called. In particular, after the first action returns successfully or after any action raises an unhandled exception or after the main fiber is canceled, the actions that have not yet started may be skipped entirely.

any is roughly equivalent to

  let any actions =
    Bundle.join_after @@ fun bundle ->
    try
      actions
      |> List.iter @@ fun action ->
         Bundle.fork bundle @@ fun () ->
         action ();
         Bundle.terminate bundle
    with Control.Terminate -> ()

but treats the list of actions as a single computation.

diff --git a/picos_std/Picos_std_structured/index.html b/picos_std/Picos_std_structured/index.html deleted file mode 100644 index 9701a567..00000000 --- a/picos_std/Picos_std_structured/index.html +++ /dev/null @@ -1,189 +0,0 @@ - -Picos_std_structured (picos_std.Picos_std_structured)

Module Picos_std_structured

Basic structured concurrency primitives for Picos.

This library essentially provides one application programming interface for structuring fibers with any Picos compatible scheduler.

For the examples we open some modules:

  open Picos_io
  open Picos_std_event
  open Picos_std_finally
  open Picos_std_structured
  open Picos_std_sync

Modules

module Control : sig ... end

Basic control operations and exceptions for structured concurrency.

module Promise : sig ... end

A cancelable promise.

module Bundle : sig ... end

An explicit dynamic bundle of fibers guaranteed to be joined at the end.

module Flock : sig ... end

An implicit dynamic flock of fibers guaranteed to be joined at the end.

module Run : sig ... end

Operations for running fibers in specific patterns.

Examples

Understanding cancelation

Consider the following program:

  let main () =
    Flock.join_after begin fun () ->
      let promise =
        Flock.fork_as_promise @@ fun () ->
        Control.block ()
      in

      Flock.fork begin fun () ->
        Promise.await promise
      end;

      Flock.fork begin fun () ->
        let condition = Condition.create ()
        and mutex = Mutex.create () in
        Mutex.protect mutex begin fun () ->
          while true do
            Condition.wait condition mutex
          done
        end
      end;

      Flock.fork begin fun () ->
        let sem =
          Semaphore.Binary.make false
        in
        Semaphore.Binary.acquire sem
      end;

      Flock.fork begin fun () ->
        let sem =
          Semaphore.Counting.make 0
        in
        Semaphore.Counting.acquire sem
      end;

      Flock.fork begin fun () ->
        Event.sync (Event.choose [])
      end;

      Flock.fork begin fun () ->
        let latch = Latch.create 1 in
        Latch.await latch
      end;

      Flock.fork begin fun () ->
        let ivar = Ivar.create () in
        Ivar.read ivar
      end;

      Flock.fork begin fun () ->
        let stream = Stream.create () in
        Stream.read (Stream.tap stream)
        |> ignore
      end;

      Flock.fork begin fun () ->
        let@ inn, out = finally
          Unix.close_pair @@ fun () ->
          Unix.socketpair ~cloexec:true
            PF_UNIX SOCK_STREAM 0
        in
        Unix.set_nonblock inn;
        let n =
          Unix.read inn (Bytes.create 1)
            0 1
        in
        assert (n = 1)
      end;

      Flock.fork begin fun () ->
        let a_month =
          60.0 *. 60.0 *. 24.0 *. 30.0
        in
        Control.sleep ~seconds:a_month
      end;

      (* Let the children get stuck *)
      Control.sleep ~seconds:0.1;

      Flock.terminate ()
    end

First of all, note that above the Mutex, Condition, and Semaphore modules come from the Picos_std_sync library and the Unix module comes from the Picos_io library. They do not come from the standard OCaml libraries.

The above program creates a flock of fibers and forks several fibers to the flock that all block in various ways: blocking outright, awaiting a promise, waiting on a condition variable, acquiring binary and counting semaphores, synchronizing on an empty choice of events, awaiting a latch, reading an empty Ivar, reading a stream, reading from a socket, and sleeping for a month.

Fibers forked to a flock can be canceled in various ways. In the above program we call Flock.terminate to cancel all of the fibers and effectively close the flock. This allows the program to return normally immediately and without leaking or leaving anything in an invalid state:

  # Picos_mux_random.run_on ~n_domains:2 main
  - : unit = ()

Now, the point of the above example isn't that you should just call terminate when your program gets stuck. 😅

What the above example hopefully demonstrates is that concurrent abstractions like mutexes and condition variables, asynchronous IO libraries, and others can be designed to support cancelation.

Cancelation is a signaling mechanism that allows structured concurrent abstractions, like the Flock abstraction, to (hopefully) gracefully tear down concurrent fibers in case of errors. Indeed, one of the basic ideas behind the Flock abstraction is that in case any fiber forked to the flock raises an unhandled exception, the whole flock will be terminated and the error will be raised from the flock, which allows you to understand what went wrong, instead of having to debug a program that mysteriously gets stuck, for example.

Cancelation can also, with some care, be used as a mechanism to terminate fibers once they are no longer needed. However, just like sleep, for example, cancelation is inherently prone to races, i.e. it is difficult to understand the exact point and state at which a fiber gets canceled and it is usually non-deterministic, and therefore cancelation is not recommended for use as a general synchronization or communication mechanism.

Errors and cancelation

Consider the following program:

  let many_errors () =
    Flock.join_after @@ fun () ->

    let latch = Latch.create 1 in

    let fork_raising exn =
      Flock.fork begin fun () ->
        Control.protect begin fun () ->
          Latch.await latch
        end;
        raise exn
      end
    in

    fork_raising Exit;
    fork_raising Not_found;
    fork_raising Control.Terminate;

    Latch.decr latch

The above program starts three fibers and uses a latch to ensure that all of them have been started, before two of them raise errors and the third raises Terminate, which is not considered an error in this library. Running the program

  # Picos_mux_fifo.run many_errors
  Exception: Errors[Stdlib.Exit; Not_found]

raises a collection of all of the errors.

A simple echo server and clients

Let's build a simple TCP echo server and run it with some clients.

We first define a function for the server:

  let run_server server_fd =
    Flock.join_after begin fun () ->
      while true do
        let@ client_fd =
          instantiate Unix.close @@ fun () ->
          Unix.accept
            ~cloexec:true server_fd |> fst
        in

        (* Fork a fiber for client *)
        Flock.fork begin fun () ->
          let@ client_fd =
            move client_fd
          in
          Unix.set_nonblock client_fd;

          let bs = Bytes.create 100 in
          let n =
            Unix.read client_fd bs 0
              (Bytes.length bs)
          in
          Unix.write client_fd bs 0 n
          |> ignore
        end
      done
    end

The server function expects a listening socket. For each accepted client the server forks a new fiber to handle it. The client socket is moved from the server fiber to the client fiber to avoid leaks and to ensure that the socket will be closed.

Let's then define a function for the clients:

  let run_client server_addr =
    let@ socket =
      finally Unix.close @@ fun () ->
      Unix.socket ~cloexec:true
        PF_INET SOCK_STREAM 0
    in
    Unix.set_nonblock socket;
    Unix.connect socket server_addr;

    let msg = "Hello!" in
    Unix.write_substring
      socket msg 0 (String.length msg)
    |> ignore;

    let bytes =
      Bytes.create (String.length msg)
    in
    let n =
      Unix.read socket bytes 0
        (Bytes.length bytes)
    in

    Printf.printf "Received: %s\n%!"
      (Bytes.sub_string bytes 0 n)

The client function takes the address of the server and connects a socket to the server address. It then writes a message to the server and reads a reply from the server and prints it.

Here is the main program:

  let main () =
    let@ server_fd =
      finally Unix.close @@ fun () ->
      Unix.socket ~cloexec:true
        PF_INET SOCK_STREAM 0
    in
    Unix.set_nonblock server_fd;
    (* Let system determine the port *)
    Unix.bind server_fd Unix.(
      ADDR_INET(inet_addr_loopback, 0));
    Unix.listen server_fd 8;

    let server_addr =
      Unix.getsockname server_fd
    in

    Flock.join_after ~on_return:`Terminate begin fun () ->
      (* Start server *)
      Flock.fork begin fun () ->
        run_server server_fd
      end;

      (* Run clients concurrently *)
      Flock.join_after begin fun () ->
        for _ = 1 to 5 do
          Flock.fork @@ fun () ->
            run_client server_addr
        done
      end
    end

The main program creates a socket for the server and configures it. The server is then started as a fiber in a flock terminated on return. Then the clients are started to run concurrently in an inner flock.

Finally we run the main program with a scheduler:

  # Picos_mux_random.run_on ~n_domains:1 main
  Received: Hello!
  Received: Hello!
  Received: Hello!
  Received: Hello!
  Received: Hello!
  - : unit = ()

As an exercise, you might want to refactor the server to avoid moving the file descriptors and use a recursive accept loop instead. You could also terminate the whole flock at the end instead of just terminating the server.

diff --git a/picos_std/Picos_std_structured__/index.html b/picos_std/Picos_std_structured__/index.html deleted file mode 100644 index 9abcc236..00000000 --- a/picos_std/Picos_std_structured__/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Picos_std_structured__ (picos_std.Picos_std_structured__)

Module Picos_std_structured__

This module is hidden.

diff --git a/picos_std/Picos_std_structured__Bundle/index.html b/picos_std/Picos_std_structured__Bundle/index.html deleted file mode 100644 index 2e0c530e..00000000 --- a/picos_std/Picos_std_structured__Bundle/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Picos_std_structured__Bundle (picos_std.Picos_std_structured__Bundle)

Module Picos_std_structured__Bundle

This module is hidden.

diff --git a/picos_std/Picos_std_structured__Control/index.html b/picos_std/Picos_std_structured__Control/index.html deleted file mode 100644 index 82d377a0..00000000 --- a/picos_std/Picos_std_structured__Control/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Picos_std_structured__Control (picos_std.Picos_std_structured__Control)

Module Picos_std_structured__Control

This module is hidden.

diff --git a/picos_std/Picos_std_structured__Flock/index.html b/picos_std/Picos_std_structured__Flock/index.html deleted file mode 100644 index 33a67257..00000000 --- a/picos_std/Picos_std_structured__Flock/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Picos_std_structured__Flock (picos_std.Picos_std_structured__Flock)

Module Picos_std_structured__Flock

This module is hidden.

diff --git a/picos_std/Picos_std_structured__Promise/index.html b/picos_std/Picos_std_structured__Promise/index.html deleted file mode 100644 index 19e61456..00000000 --- a/picos_std/Picos_std_structured__Promise/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Picos_std_structured__Promise (picos_std.Picos_std_structured__Promise)

Module Picos_std_structured__Promise

This module is hidden.

diff --git a/picos_std/Picos_std_structured__Run/index.html b/picos_std/Picos_std_structured__Run/index.html deleted file mode 100644 index 4fb425d7..00000000 --- a/picos_std/Picos_std_structured__Run/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Picos_std_structured__Run (picos_std.Picos_std_structured__Run)

Module Picos_std_structured__Run

This module is hidden.

diff --git a/picos_std/Picos_std_sync/Condition/index.html b/picos_std/Picos_std_sync/Condition/index.html deleted file mode 100644 index 6660ab62..00000000 --- a/picos_std/Picos_std_sync/Condition/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Condition (picos_std.Picos_std_sync.Condition)

Module Picos_std_sync.Condition

A condition variable.

ℹ️ This intentionally mimics the interface of Stdlib.Condition. Unlike with the standard library condition variable, blocking on this condition variable allows an effects based scheduler to run other fibers on the thread.

type t

Represents a condition variable.

val create : ?padded:bool -> unit -> t

create () returns a new condition variable.

val wait : t -> Mutex.t -> unit

wait condition mutex unlocks the mutex, waits for the condition, and locks the mutex before returning or raising due to the operation being canceled.

ℹ️ If the fiber has been canceled and propagation of cancelation is allowed, this may raise the cancelation exception.
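
The usual pattern is to wait in a loop that re-checks the condition while holding the mutex, because a wakeup does not by itself guarantee that the condition holds. A minimal sketch using an int ref as the shared state:

  let await_nonzero mutex condition counter =
    Mutex.protect mutex @@ fun () ->
    while !counter = 0 do
      Condition.wait condition mutex
    done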

val signal : t -> unit

signal condition wakes up one fiber waiting on the condition variable unless there are no such fibers.

val broadcast : t -> unit

broadcast condition wakes up all the fibers waiting on the condition variable.

diff --git a/picos_std/Picos_std_sync/Ivar/index.html b/picos_std/Picos_std_sync/Ivar/index.html deleted file mode 100644 index 84461629..00000000 --- a/picos_std/Picos_std_sync/Ivar/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Ivar (picos_std.Picos_std_sync.Ivar)

Module Picos_std_sync.Ivar

An incremental or single-assignment poisonable variable.

type !'a t

Represents an incremental variable.

val create : unit -> 'a t

create () returns a new empty incremental variable.

val of_value : 'a -> 'a t

of_value value returns an incremental variable prefilled with the given value.

val try_fill : 'a t -> 'a -> bool

try_fill ivar value attempts to assign the given value to the incremental variable. Returns true on success and false in case the variable had already been poisoned or assigned a value.

val fill : 'a t -> 'a -> unit

fill ivar value is equivalent to try_fill ivar value |> ignore.
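
As a sketch, an Ivar can be used to hand a single value from one fiber to another within a flock:

  let pass_one_value () =
    Flock.join_after @@ fun () ->
    let ivar = Ivar.create () in
    Flock.fork (fun () -> Ivar.fill ivar 42);
    Ivar.read ivar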

val try_poison_at : 'a t -> exn -> Stdlib.Printexc.raw_backtrace -> bool

try_poison_at ivar exn bt attempts to poison the incremental variable with the specified exception and backtrace. Returns true on success and false in case the variable had already been poisoned or assigned a value.

ℹ️ This operation is not cancelable.

val try_poison : ?callstack:int -> 'a t -> exn -> bool

try_poison ivar exn is equivalent to try_poison_at ivar exn (Printexc.get_callstack n) where n defaults to 0.

val poison_at : 'a t -> exn -> Stdlib.Printexc.raw_backtrace -> unit

poison_at ivar exn bt is equivalent to try_poison_at ivar exn bt |> ignore.

val poison : ?callstack:int -> 'a t -> exn -> unit

poison ivar exn is equivalent to poison_at ivar exn (Printexc.get_callstack n) where n defaults to 0.

val peek_opt : 'a t -> 'a option

peek_opt ivar either returns Some value in case the variable has been assigned the value, raises an exception in case the variable has been poisoned, or otherwise returns None, which means that the variable has not yet been poisoned or assigned a value.

val read : 'a t -> 'a

read ivar waits until the variable is either assigned a value or the variable is poisoned and then returns the value or raises the exception.

val read_evt : 'a t -> 'a Picos_std_event.Event.t

read_evt ivar returns an event that can be committed to once the variable has either been assigned a value or has been poisoned.

diff --git a/picos_std/Picos_std_sync/Latch/index.html b/picos_std/Picos_std_sync/Latch/index.html deleted file mode 100644 index 7f92bfcf..00000000 --- a/picos_std/Picos_std_sync/Latch/index.html +++ /dev/null @@ -1,4 +0,0 @@ - -Latch (picos_std.Picos_std_sync.Latch)

Module Picos_std_sync.Latch

A dynamic single-use countdown latch.

Latches are typically used for determining when a finite set of parallel computations is done. If the size of the set is known a priori, then the latch can be initialized with the size as initial count and then each computation just decrements the latch.

If the size is unknown, i.e. it is determined dynamically, then a latch is initialized with a count of one, the a priori known computations are started and then the latch is decremented. When a computation is started, the latch is incremented, and then decremented once the computation has finished.
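
Below is a sketch of the dynamic pattern described above, ignoring error handling for brevity. Note that within a flock the join already waits for the forked fibers; the latch is shown purely to illustrate the counting protocol:

  let run_all works =
    let latch = Latch.create 1 in
    Flock.join_after @@ fun () ->
    works |> List.iter begin fun work ->
      (* Account for the computation before starting it. *)
      Latch.incr latch;
      Flock.fork begin fun () ->
        work ();
        Latch.decr latch
      end
    end;
    (* Drop the initial count of one. *)
    Latch.decr latch;
    (* Return after every started computation has finished. *)
    Latch.await latch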

type t

Represents a dynamic countdown latch.

val create : ?padded:bool -> int -> t

create initial creates a new countdown latch with the specified initial count.

  • raises Invalid_argument

    in case the specified initial count is negative.

val try_decr : t -> bool

try_decr latch attempts to decrement the count of the latch and returns true in case the count of the latch was greater than zero and false in case the count already was zero.

val decr : t -> unit

decr latch is equivalent to:

  if not (try_decr latch) then
    invalid_arg "zero count"

ℹ️ This operation is not cancelable.

  • raises Invalid_argument

    in case the count of the latch is zero.

val try_incr : t -> bool

try_incr latch attempts to increment the count of the latch and returns true on success and false on failure, which means that the latch has already reached zero.

val incr : t -> unit

incr latch is equivalent to:

  if not (try_incr latch) then
    invalid_arg "zero count"

  • raises Invalid_argument

    in case the count of the latch is zero.

val await : t -> unit

await latch returns after the count of the latch has reached zero.

val await_evt : t -> unit Picos_std_event.Event.t

await_evt latch returns an event that can be committed to once the count of the latch has reached zero.

diff --git a/picos_std/Picos_std_sync/Lazy/index.html b/picos_std/Picos_std_sync/Lazy/index.html deleted file mode 100644 index 54e3df8e..00000000 --- a/picos_std/Picos_std_sync/Lazy/index.html +++ /dev/null @@ -1,5 +0,0 @@ - -Lazy (picos_std.Picos_std_sync.Lazy)

Module Picos_std_sync.Lazy

A lazy suspension.

ℹ️ This intentionally mimics the interface of Stdlib.Lazy. Unlike with the standard library suspensions, an attempt to force a suspension from multiple fibers, possibly running on different domains, does not raise the Undefined exception.

exception Undefined
type !'a t

Represents a deferred computation or suspension.

val from_fun : (unit -> 'a) -> 'a t

from_fun thunk returns a suspension.

val from_val : 'a -> 'a t

from_val value returns an already forced suspension whose result is the given value.

val is_val : 'a t -> bool

is_val susp determines whether the suspension has already been forced and didn't raise an exception.

val force : 'a t -> 'a

force susp forces the suspension, i.e. computes thunk () using the thunk passed to from_fun, stores the result of the computation to the suspension and reproduces its result. In case the suspension has already been forced, the computation is skipped and the stored result is reproduced.

ℹ️ This will check whether the current fiber has been canceled before starting the computation of thunk (). This allows the suspension to be forced by another fiber. However, if the fiber is canceled and the cancelation exception is raised after the computation has been started, the suspension will then store the cancelation exception.

  • raises Undefined

    in case the suspension is currently being forced by the current fiber.
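
A small sketch of creating and forcing a suspension; the thunk runs at most once and the stored result is then reproduced on subsequent forces:

  let answer =
    Lazy.from_fun @@ fun () ->
    Printf.printf "Computing...\n%!";
    6 * 7

  let () =
    assert (Lazy.force answer = 42);
    (* The second force reproduces the stored result. *)
    assert (Lazy.force answer = 42)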

val force_val : 'a t -> 'a

force_val is a synonym for force.

val map : ('a -> 'b) -> 'a t -> 'b t

map fn susp is equivalent to from_fun (fun () -> fn (force susp)).

val map_val : ('a -> 'b) -> 'a t -> 'b t

map_val fn susp is equivalent to:

  if is_val susp then
    from_val (fn (force susp))
  else
    map fn susp
diff --git a/picos_std/Picos_std_sync/Mutex/index.html b/picos_std/Picos_std_sync/Mutex/index.html deleted file mode 100644 index 36d283e5..00000000 --- a/picos_std/Picos_std_sync/Mutex/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Mutex (picos_std.Picos_std_sync.Mutex)

Module Picos_std_sync.Mutex

A mutual-exclusion lock or mutex.

ℹ️ This intentionally mimics the interface of Stdlib.Mutex. Unlike with the standard library mutex, blocking on this mutex potentially allows an effects based scheduler to run other fibers on the thread.

🏎️ The optional checked argument taken by most of the operations defaults to true. When explicitly specified as ~checked:false the mutex implementation may avoid having to obtain the current fiber, which can be expensive relative to locking or unlocking an uncontested mutex. Note that specifying ~checked:false on an operation may prevent error checking also on a subsequent operation.

type t

Represents a mutual-exclusion lock or mutex.

val create : ?padded:bool -> unit -> t

create () returns a new mutex that is initially unlocked.

val lock : ?checked:bool -> t -> unit

lock mutex locks the mutex.

ℹ️ If the fiber has been canceled and propagation of cancelation is allowed, this may raise the cancelation exception before locking the mutex. If ~checked:false was specified, the cancelation exception may or may not be raised.

  • raises Sys_error

    if the mutex is already locked by the fiber. If ~checked:false was specified for some previous operation on the mutex the exception may or may not be raised.

val try_lock : ?checked:bool -> t -> bool

try_lock mutex locks the mutex in case the mutex is unlocked. Returns true on success and false in case the mutex was locked.

ℹ️ If the fiber has been canceled and propagation of cancelation is allowed, this may raise the cancelation exception before locking the mutex. If ~checked:false was specified, the cancelation exception may or may not be raised.

val unlock : ?checked:bool -> t -> unit

unlock mutex unlocks the mutex.

ℹ️ This operation is not cancelable.

  • raises Sys_error

    if the mutex was locked by another fiber. If ~checked:false was specified for some previous operation on the mutex the exception may or may not be raised.

val protect : ?checked:bool -> t -> (unit -> 'a) -> 'a

protect mutex thunk locks the mutex, runs thunk (), and unlocks the mutex after thunk () returns or raises.

ℹ️ If the fiber has been canceled and propagation of cancelation is allowed, this may raise the cancelation exception before locking the mutex. If ~checked:false was specified, the cancelation exception may or may not be raised.

  • raises Sys_error

    for the same reasons as lock and unlock.
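
A minimal sketch of guarding shared state with protect:

  let counter = ref 0
  let counter_mutex = Mutex.create ()

  let increment () =
    Mutex.protect counter_mutex @@ fun () ->
    incr counter;
    !counter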

diff --git a/picos_std/Picos_std_sync/Semaphore/Binary/index.html b/picos_std/Picos_std_sync/Semaphore/Binary/index.html deleted file mode 100644 index ec906069..00000000 --- a/picos_std/Picos_std_sync/Semaphore/Binary/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Binary (picos_std.Picos_std_sync.Semaphore.Binary)

Module Semaphore.Binary

A binary semaphore.

type t

Represents a binary semaphore.

val make : ?padded:bool -> bool -> t

make initial creates a new binary semaphore with count of 1 in case initial is true and count of 0 otherwise.

val release : t -> unit

release semaphore sets the count of the semaphore to 1.

ℹ️ This operation is not cancelable.

val acquire : t -> unit

acquire semaphore waits until the count of the semaphore is 1 and then atomically changes the count to 0.

val try_acquire : t -> bool

try_acquire semaphore attempts to atomically change the count of the semaphore from 1 to 0.

diff --git a/picos_std/Picos_std_sync/Semaphore/Counting/index.html b/picos_std/Picos_std_sync/Semaphore/Counting/index.html deleted file mode 100644 index 71bb176d..00000000 --- a/picos_std/Picos_std_sync/Semaphore/Counting/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Counting (picos_std.Picos_std_sync.Semaphore.Counting)

Module Semaphore.Counting

A counting semaphore.

type t

Represents a counting semaphore.

val make : ?padded:bool -> int -> t

make initial creates a new counting semaphore with the given initial count.

  • raises Invalid_argument

    in case the given initial count is negative.

val release : t -> unit

release semaphore increments the count of the semaphore.

ℹ️ This operation is not cancelable.

  • raises Sys_error

    in case the count would overflow.

val acquire : t -> unit

acquire semaphore waits until the count of the semaphore is greater than 0 and then atomically decrements the count.

val try_acquire : t -> bool

try_acquire semaphore attempts to atomically decrement the count of the semaphore unless the count is already 0.

val get_value : t -> int

get_value semaphore returns the current count of the semaphore. This should only be used for debugging or informational messages.
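
For example, a counting semaphore can be used to bound the number of fibers that run a section at the same time. A sketch, with the limit of four being an arbitrary choice; since release is not cancelable it is safe to call in the finally:

  let slots = Semaphore.Counting.make 4

  let with_slot thunk =
    Semaphore.Counting.acquire slots;
    Fun.protect
      ~finally:(fun () -> Semaphore.Counting.release slots)
      thunk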

diff --git a/picos_std/Picos_std_sync/Semaphore/index.html b/picos_std/Picos_std_sync/Semaphore/index.html deleted file mode 100644 index 6c2de9cf..00000000 --- a/picos_std/Picos_std_sync/Semaphore/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Semaphore (picos_std.Picos_std_sync.Semaphore)

Module Picos_std_sync.Semaphore

Counting and Binary semaphores.

ℹ️ This intentionally mimics the interface of Stdlib.Semaphore. Unlike with the standard library semaphores, blocking on these semaphores allows an effects based scheduler to run other fibers on the thread.

module Counting : sig ... end

A counting semaphore.

module Binary : sig ... end

A binary semaphore.

diff --git a/picos_std/Picos_std_sync/Stream/index.html b/picos_std/Picos_std_sync/Stream/index.html deleted file mode 100644 index 8852e5c5..00000000 --- a/picos_std/Picos_std_sync/Stream/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Stream (picos_std.Picos_std_sync.Stream)

Module Picos_std_sync.Stream

A lock-free, poisonable, many-to-many, stream.

Readers can tap into a stream to get a cursor for reading all the values pushed to the stream starting from the cursor position. Conversely, values pushed to a stream are lost unless a reader has a cursor to the position in the stream.

type !'a t

Represents a stream of values of type 'a.

val create : ?padded:bool -> unit -> 'a t

create () returns a new stream.

val push : 'a t -> 'a -> unit

push stream value adds the value to the current position of the stream and advances the stream to the next position unless the stream has been poisoned in which case only the exception given to poison will be raised.

val poison_at : 'a t -> exn -> Stdlib.Printexc.raw_backtrace -> unit

poison_at stream exn bt marks the stream as poisoned at the current position, which means that subsequent attempts to push to the stream will raise the given exception with backtrace.

ℹ️ This operation is not cancelable.

val poison : ?callstack:int -> 'a t -> exn -> unit

poison stream exn is equivalent to poison_at stream exn (Printexc.get_callstack n) where n defaults to 0.

type !'a cursor

Represents a (past or current) position in a stream.

val tap : 'a t -> 'a cursor

tap stream returns a cursor to the current position of the stream.

val peek_opt : 'a cursor -> ('a * 'a cursor) option

peek_opt cursor immediately returns Some (value, next) with the value pushed to the position and a cursor to the next position, when the cursor points to a past position in the stream. Otherwise returns None or raises the exception that the stream was poisoned with.

val read : 'a cursor -> 'a * 'a cursor

read cursor immediately returns (value, next) with the value pushed to the position and a cursor to the next position, when the cursor points to a past position in the stream. If the cursor points to the current position of the stream, read cursor waits until a value is pushed to the stream or the stream is poisoned, in which case the exception that the stream was poisoned with will be raised.

val read_evt : 'a cursor -> ('a * 'a cursor) Picos_std_event.Event.t

read_evt cursor returns an event that reads from the cursor position.
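
A small sketch where one fiber pushes values and another reads them through a cursor; the cursor is obtained with tap before forking the producer so that no values are missed:

  let stream_demo () =
    Flock.join_after @@ fun () ->
    let stream = Stream.create () in
    let cursor = Stream.tap stream in
    Flock.fork begin fun () ->
      for i = 1 to 3 do Stream.push stream i done
    end;
    let rec collect cursor n acc =
      if n = 0 then List.rev acc
      else
        let value, cursor = Stream.read cursor in
        collect cursor (n - 1) (value :: acc)
    in
    assert (collect cursor 3 [] = [ 1; 2; 3 ])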

diff --git a/picos_std/Picos_std_sync/index.html b/picos_std/Picos_std_sync/index.html deleted file mode 100644 index 90478af5..00000000 --- a/picos_std/Picos_std_sync/index.html +++ /dev/null @@ -1,98 +0,0 @@ - -Picos_std_sync (picos_std.Picos_std_sync)

Module Picos_std_sync

Basic communication and synchronization primitives for Picos.

This library essentially provides a conventional set of communication and synchronization primitives for concurrent programming with any Picos compatible scheduler.

For the examples we open some modules:

  open Picos_std_structured
  open Picos_std_sync

Modules

module Mutex : sig ... end

A mutual-exclusion lock or mutex.

module Condition : sig ... end

A condition variable.

module Semaphore : sig ... end

Counting and Binary semaphores.

module Lazy : sig ... end

A lazy suspension.

module Latch : sig ... end

A dynamic single-use countdown latch.

module Ivar : sig ... end

An incremental or single-assignment poisonable variable.

module Stream : sig ... end

A lock-free, poisonable, many-to-many, stream.

Examples

A simple bounded queue

Here is an example of a simple bounded (blocking) queue using a mutex and condition variables:

  module Bounded_q : sig
    type 'a t
    val create : capacity:int -> 'a t
    val push : 'a t -> 'a -> unit
    val pop : 'a t -> 'a
  end = struct
    type 'a t = {
      mutex : Mutex.t;
      queue : 'a Queue.t;
      capacity : int;
      not_empty : Condition.t;
      not_full : Condition.t;
    }

    let create ~capacity =
      if capacity < 0 then
        invalid_arg "negative capacity"
      else {
        mutex = Mutex.create ();
        queue = Queue.create ();
        capacity;
        not_empty = Condition.create ();
        not_full = Condition.create ();
      }

    let is_full_unsafe t =
      t.capacity <= Queue.length t.queue

    let push t x =
      let was_empty =
        Mutex.protect t.mutex @@ fun () ->
        while is_full_unsafe t do
          Condition.wait t.not_full t.mutex
        done;
        Queue.push x t.queue;
        Queue.length t.queue = 1
      in
      if was_empty then
        Condition.broadcast t.not_empty

    let pop t =
      let elem, was_full =
        Mutex.protect t.mutex @@ fun () ->
        while Queue.length t.queue = 0 do
          Condition.wait
            t.not_empty t.mutex
        done;
        let was_full = is_full_unsafe t in
        Queue.pop t.queue, was_full
      in
      if was_full then
        Condition.broadcast t.not_full;
      elem
  end

The above is definitely not the fastest nor the most scalable bounded queue, but we can now demonstrate it with the cooperative Picos_mux_fifo scheduler:

  # Picos_mux_fifo.run @@ fun () ->

    let bq =
      Bounded_q.create ~capacity:3
    in

    Flock.join_after ~on_return:`Terminate begin fun () ->
      Flock.fork begin fun () ->
        while true do
          Printf.printf "Popped %d\n%!"
            (Bounded_q.pop bq)
        done
      end;

      for i=1 to 5 do
        Printf.printf "Pushing %d\n%!" i;
        Bounded_q.push bq i
      done;

      Printf.printf "All done?\n%!";

      Control.yield ();
    end;

    Printf.printf "Pushing %d\n%!" 101;
    Bounded_q.push bq 101;

    Printf.printf "Popped %d\n%!"
      (Bounded_q.pop bq)
  Pushing 1
  Pushing 2
  Pushing 3
  Pushing 4
  Popped 1
  Popped 2
  Popped 3
  Pushing 5
  All done?
  Popped 4
  Popped 5
  Pushing 101
  Popped 101
  - : unit = ()

Notice how the producer was able to push three elements to the queue after which the fourth push blocked and the consumer was started. Also, after canceling the consumer, the queue could still be used just fine.

Conventions

The optional padded argument taken by several constructor functions, e.g. Latch.create, Mutex.create, Condition.create, Semaphore.Counting.make, and Semaphore.Binary.make, defaults to false. When explicitly specified as ~padded:true the object is allocated in a way to avoid false sharing. For relatively long lived objects this can improve performance and make performance more stable at the cost of using more memory. It is not recommended to use ~padded:true for short lived objects.

The primitives provided by this library are generally optimized for low contention scenarios and size. Generally speaking, for best performance and scalability, you should try to avoid high contention scenarios by architecting your program to distribute processing such that sequential bottlenecks are avoided. If high contention is unavoidable then other communication and synchronization primitive implementations may provide better performance.

diff --git a/picos_std/Picos_std_sync__/index.html b/picos_std/Picos_std_sync__/index.html deleted file mode 100644 index 5f616c4c..00000000 --- a/picos_std/Picos_std_sync__/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Picos_std_sync__ (picos_std.Picos_std_sync__)

Module Picos_std_sync__

This module is hidden.

diff --git a/picos_std/Picos_std_sync__Condition/index.html b/picos_std/Picos_std_sync__Condition/index.html deleted file mode 100644 index 5bd19b0b..00000000 --- a/picos_std/Picos_std_sync__Condition/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Picos_std_sync__Condition (picos_std.Picos_std_sync__Condition)

Module Picos_std_sync__Condition

This module is hidden.

diff --git a/picos_std/Picos_std_sync__Ivar/index.html b/picos_std/Picos_std_sync__Ivar/index.html deleted file mode 100644 index a4857e82..00000000 --- a/picos_std/Picos_std_sync__Ivar/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Picos_std_sync__Ivar (picos_std.Picos_std_sync__Ivar)

Module Picos_std_sync__Ivar

This module is hidden.

diff --git a/picos_std/Picos_std_sync__Latch/index.html b/picos_std/Picos_std_sync__Latch/index.html deleted file mode 100644 index 560bf543..00000000 --- a/picos_std/Picos_std_sync__Latch/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Picos_std_sync__Latch (picos_std.Picos_std_sync__Latch)

Module Picos_std_sync__Latch

This module is hidden.

diff --git a/picos_std/Picos_std_sync__Lazy/index.html b/picos_std/Picos_std_sync__Lazy/index.html deleted file mode 100644 index d00b8f64..00000000 --- a/picos_std/Picos_std_sync__Lazy/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Picos_std_sync__Lazy (picos_std.Picos_std_sync__Lazy)

Module Picos_std_sync__Lazy

This module is hidden.

diff --git a/picos_std/Picos_std_sync__List_ext/index.html b/picos_std/Picos_std_sync__List_ext/index.html deleted file mode 100644 index 6ab5c5d3..00000000 --- a/picos_std/Picos_std_sync__List_ext/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Picos_std_sync__List_ext (picos_std.Picos_std_sync__List_ext)

Module Picos_std_sync__List_ext

This module is hidden.

diff --git a/picos_std/Picos_std_sync__Mutex/index.html b/picos_std/Picos_std_sync__Mutex/index.html deleted file mode 100644 index 876569f1..00000000 --- a/picos_std/Picos_std_sync__Mutex/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Picos_std_sync__Mutex (picos_std.Picos_std_sync__Mutex)

Module Picos_std_sync__Mutex

This module is hidden.

diff --git a/picos_std/Picos_std_sync__Q/index.html b/picos_std/Picos_std_sync__Q/index.html deleted file mode 100644 index 325106c0..00000000 --- a/picos_std/Picos_std_sync__Q/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Picos_std_sync__Q (picos_std.Picos_std_sync__Q)

Module Picos_std_sync__Q

This module is hidden.

diff --git a/picos_std/Picos_std_sync__Semaphore/index.html b/picos_std/Picos_std_sync__Semaphore/index.html deleted file mode 100644 index b9bc7715..00000000 --- a/picos_std/Picos_std_sync__Semaphore/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Picos_std_sync__Semaphore (picos_std.Picos_std_sync__Semaphore)

Module Picos_std_sync__Semaphore

This module is hidden.

diff --git a/picos_std/Picos_std_sync__Stream/index.html b/picos_std/Picos_std_sync__Stream/index.html deleted file mode 100644 index ffba07e7..00000000 --- a/picos_std/Picos_std_sync__Stream/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -Picos_std_sync__Stream (picos_std.Picos_std_sync__Stream)

Module Picos_std_sync__Stream

This module is hidden.

diff --git a/picos_std/_doc-dir/CHANGES.md b/picos_std/_doc-dir/CHANGES.md deleted file mode 100644 index eb5ce11f..00000000 --- a/picos_std/_doc-dir/CHANGES.md +++ /dev/null @@ -1,191 +0,0 @@ -## 0.6.0 - -- Added a futex-like `Awaitable` abstraction as the `picos_std.awaitable` - library (@polytypic) -- Changed the core Picos library to be internally built from a single `.ml` file - (@polytypic) -- Optimized heap and stack usage of fibers and resource cleanup mechanisms and - added workarounds for compiler generated space leaks due to closures - (@polytypic) -- Added `lastly` as a safe alternative to `Fun.protect` (@polytypic) -- Workarounds for the `Uri` library not being threadsafe (@polytypic) -- Fixed to raise proper error when `Picos_io_select` has not been configured - properly (@polytypic) -- Forbid cancelation propagation during `release` calls in the - `picos_std.finally` library (@polytypic) - - This is a change in behaviour and could be seen as a breaking change, but it - should really be considered a bug fix. -- Renamed `(Ivar|Stream).poison` to `(Ivar|Stream).poison_at` and added - `(Ivar|Stream).poison` with optional `?callstack:int` (@polytypic) - -## 0.5.0 - -- Major additions, changes, bug fixes, improvements, and restructuring - (@polytypic, @c-cube) - - - Additions: - - - Minimalistic Cohttp implementation - - Implicitly propagated `Flock` of fibers for structured concurrency - - Option to terminate `Bundle` and `Flock` on return - - `Event` abstraction - - Synchronization and communication primitives: - - Incremental variable or `Ivar` - - Countdown `Latch` - - `Semaphore` - - `Stream` of events - - Multi-producer, multi-consumer lock-free queue optimized for schedulers - - Multithreaded (work-stealing) FIFO scheduler - - Support `quota` for FIFO based schedulers - - Transactional interface for atomically completing multiple `Computation`s - - - Changes: - - - Redesigned resource management based on `('r -> 'a) -> 'a` functions - - Redesigned `spawn` interface allowing `FLS` entries to be populated before - spawn - - Introduced concept of fatal errors, which must terminate the scheduler or - the whole program - - Simplified `FLS` interface - - Removed `Exn_bt` - - - Improvements: - - - Signficantly reduced per fiber memory usage of various sample schedulers - - - Picos has now been split into multiple packages and libraries: - - - pkg: `picos` - - lib: `picos` - - lib: `picos.domain` - - lib: `picos.thread` - - pkg: `picos_aux` - - lib: `picos_aux.htbl` - - lib: `picos_aux.mpmcq` - - lib: `picos_aux.mpscq` - - lib: `picos_aux.rc` - - pkg: `picos_lwt` - - lib: `picos_lwt` - - lib: `picos_lwt.unix` - - pkg: `picos_meta` (integration tests) - - pkg: `picos_mux` - - lib: `picos_mux.fifo` - - lib: `picos_mux.multififo` - - lib: `picos_mux.random` - - lib: `picos_mux.thread` - - pkg: `picos_std` - - lib: `picos_std.event` - - lib: `picos_std.finally` - - lib: `picos_std.structured` - - lib: `picos_std.sync` - - pkg: `picos_io` - - lib: `picos_io` - - lib: `picos_io.fd` - - lib: `picos_io.select` - - pkg: `picos_io_cohttp` - - lib: `picos_io_cohttp` - -## 0.4.0 - -- Renamed `Picos_mpsc_queue` to `Picos_mpscq`. (@polytypic) - -- Core API changes: - - - Added `Computation.returned`. (@polytypic) - -- `Lwt` interop improvements: - - - Fixed `Picos_lwt` handling of `Cancel_after` to not raise in case of - cancelation. 
(@polytypic) - - - Redesigned `Picos_lwt` to take a `System` module, which must implement a - semi thread-safe trigger mechanism to allow unblocking `Lwt` promises on the - main thread. (@polytypic) - - - Added `Picos_lwt_unix` interface to `Lwt`, which includes an internal - `System` module implemented using `Lwt_unix`. (@polytypic) - - - Dropped thunking from `Picos_lwt.await`. (@polytypic) - -- Added a randomized multicore scheduler `Picos_randos` for testing. - (@polytypic) - -- Changed `Picos_select.check_configured` to always (re)configure signal - handling on the current thread. (@polytypic) - -- `Picos_structured`: - - - Added a minimalistic `Promise` abstraction. (@polytypic) - - Changed to more consistently not treat `Terminate` as an error. (@polytypic) - -- Changed schedulers to take `~forbid` as an optional argument. (@polytypic) - -- Various minor additions, fixes, and documentation improvements. (@polytypic) - -## 0.3.0 - -- Core API changes: - - - Added `Fiber.set_computation`, which represents a semantic change - - Renamed `Fiber.computation` to `Fiber.get_computation` - - Added `Computation.attach_canceler` - - Added `Fiber.sleep` - - Added `Fiber.create_packed` - - Removed `Fiber.try_attach` - - Removed `Fiber.detach` - - Most of the above changes were motivated by work on and requirements of the - added structured concurrency library (@polytypic) - -- Added a basic user level structured concurrent programming library - `Picos_structured` (@polytypic) - -- Added a functorized `Picos_lwt` providing direct style effects based interface - to programming with Lwt (@polytypic) - -- Added missing `Picos_stdio.Unix.select` (@polytypic) - -## 0.2.0 - -- Documentation fixes and restructuring (@polytypic) -- Scheduler friendly `waitpid`, `wait`, and `system` in `Picos_stdio.Unix` for - platforms other than Windows (@polytypic) -- Added `Picos_select.configure` to allow, and sometimes require, configuring - `Picos_select` for co-operation with libraries that also deal with signals - (@polytypic) -- Moved `Picos_tls` into `Picos_thread.TLS` (@polytypic) -- Enhanced `sleep` and `sleepf` in `Picos_stdio.Unix` to block in a scheduler - friendly manner (@polytypic) - -## 0.1.0 - -- First experimental release of Picos. - - Core: - - - `picos` — A framework for interoperable effects based concurrency. - - Sample schedulers: - - - `picos.fifos` — Basic single-threaded effects based Picos compatible - scheduler for OCaml 5. - - `picos.threaded` — Basic `Thread` based Picos compatible scheduler for - OCaml 4. - - Scheduler agnostic libraries: - - - `picos.sync` — Basic communication and synchronization primitives for Picos. - - `picos.stdio` — Basic IO facilities based on OCaml standard libraries for - Picos. - - `picos.select` — Basic `Unix.select` based IO event loop for Picos. - - Auxiliary libraries: - - - `picos.domain` — Minimalistic domain API available both on OCaml 5 and on - OCaml 4. - - `picos.exn_bt` — Wrapper for exceptions with backtraces. - - `picos.fd` — Externally reference counted file descriptors. - - `picos.htbl` — Lock-free hash table. - - `picos.mpsc_queue` — Multi-producer, single-consumer queue. - - `picos.rc` — External reference counting tables for disposable resources. - - `picos.tls` — Thread-local storage. 
diff --git a/picos_std/_doc-dir/LICENSE.md b/picos_std/_doc-dir/LICENSE.md deleted file mode 100644 index 5da69623..00000000 --- a/picos_std/_doc-dir/LICENSE.md +++ /dev/null @@ -1,13 +0,0 @@ -Copyright © 2023 Vesa Karvonen - -Permission to use, copy, modify, and/or distribute this software for any purpose -with or without fee is hereby granted, provided that the above copyright notice -and this permission notice appear in all copies. - -THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH -REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND -FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, -INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS -OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER -TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF -THIS SOFTWARE. diff --git a/picos_std/_doc-dir/README.md b/picos_std/_doc-dir/README.md deleted file mode 100644 index 26e31b8e..00000000 --- a/picos_std/_doc-dir/README.md +++ /dev/null @@ -1,791 +0,0 @@ -[API reference](https://ocaml-multicore.github.io/picos/doc/index.html) · -[Benchmarks](https://bench.ci.dev/ocaml-multicore/picos/branch/main?worker=pascal&image=bench.Dockerfile) -· -[Stdlib Benchmarks](https://bench.ci.dev/ocaml-multicore/multicore-bench/branch/main?worker=pascal&image=bench.Dockerfile) - -# **Picos** — Interoperable effects based concurrency - -Picos is a -[systems programming](https://en.wikipedia.org/wiki/Systems_programming) -interface between effects based schedulers and concurrent abstractions. - -

- -Picos is designed to enable an _open ecosystem_ of -[interoperable](https://en.wikipedia.org/wiki/Interoperability) and -interchangeable elements of effects based cooperative concurrent programming -models such as - -- [schedulers]() that - multiplex large numbers of - [user level fibers](https://en.wikipedia.org/wiki/Green_thread) to run on a - small number of system level threads, -- mechanisms for managing fibers and for - [structuring concurrency](https://en.wikipedia.org/wiki/Structured_concurrency), -- communication and synchronization primitives, such as - [mutexes and condition variables](), - message queues, - [STM](https://en.wikipedia.org/wiki/Software_transactional_memory)s, and more, - and -- integrations with low level - [asynchronous IO](https://en.wikipedia.org/wiki/Asynchronous_I/O) systems - -by decoupling such elements from each other. - -Picos comes with a -[reference manual](https://ocaml-multicore.github.io/picos/doc/index.html) and -many sample libraries. - -⚠️ Please note that Picos is still considered experimental and unstable. - -## Introduction - -Picos addresses the incompatibility of effects based schedulers at a fundamental -level by introducing -[an _interface_ to decouple schedulers and other concurrent abstractions](https://ocaml-multicore.github.io/picos/doc/picos/Picos/index.html) -that need services from a scheduler. - -The -[core abstractions of Picos](https://ocaml-multicore.github.io/picos/doc/picos/Picos/index.html#the-architecture-of-picos) -are - -- [`Trigger`](https://ocaml-multicore.github.io/picos/doc/picos/Picos/Trigger/index.html) - — the ability to await for a signal, -- [`Computation`](https://ocaml-multicore.github.io/picos/doc/picos/Picos/Computation/index.html) - — a cancelable computation, and -- [`Fiber`](https://ocaml-multicore.github.io/picos/doc/picos/Picos/Fiber/index.html) - — an independent thread of execution, - -that are implemented partially by the Picos interface in terms of the effects - -- [`Trigger.Await`](https://ocaml-multicore.github.io/picos/doc/picos/Picos/Trigger/index.html#extension-Await) - — to suspend and resume a fiber, -- [`Computation.Cancel_after`](https://ocaml-multicore.github.io/picos/doc/picos/Picos/Computation/index.html#extension-Cancel_after) - — to cancel a computation after given period of time, -- [`Fiber.Current`](https://ocaml-multicore.github.io/picos/doc/picos/Picos/Fiber/index.html#extension-Current) - — to obtain the current fiber, -- [`Fiber.Yield`](https://ocaml-multicore.github.io/picos/doc/picos/Picos/Fiber/index.html#extension-Yield) - — to request rescheduling, and -- [`Fiber.Spawn`](https://ocaml-multicore.github.io/picos/doc/picos/Picos/Fiber/index.html#extension-Spawn) - — to start a new fiber. - -The partial implementation of the abstractions and the effects define a contract -between schedulers and other concurrent abstractions. By handling the Picos -effects according to the contract a scheduler becomes _Picos compatible_, which -allows any abstractions written against the Picos interface, i.e. _Implemented -in Picos_, to be used with the scheduler. - -### Understanding cancelation - -A central idea or goal of Picos is to provide a collection of building blocks -for parallelism safe cancelation that allows the implementation of both blocking -abstractions as well as the implementation of abstractions for structuring -fibers for cancelation or managing the propagation and scope of cancelation. 
- -While cancelation, which is essentially a kind of asynchronous exception or -signal, is not necessarily recommended as a general control mechanism, the -ability to cancel fibers in case of errors is crucial for the implementation of -practical concurrent programming models. - -Consider the following characteristic -[example](https://ocaml-multicore.github.io/picos/doc/picos_std/Picos_std_structured/index.html#understanding-cancelation): - -```ocaml skip -Mutex.protect mutex begin fun () -> - while true do - Condition.wait condition mutex - done -end -``` - -Assume that a fiber executing the above code might be canceled, at any point, by -another fiber running in parallel. This could be necessary, for example, due to -an error that requires the application to be shut down. How could that be done -while ensuring both -[safety and liveness](https://en.wikipedia.org/wiki/Safety_and_liveness_properties)? - -- For safety, cancelation should not leave the program in an invalid state or - cause the program to leak memory. In this case, `Condition.wait` must exit - with the mutex locked, even in case of cancelation, and, as `Mutex.protect` - exits, the ownership of the mutex must be transferred to the next fiber, if - any, waiting in queue for the mutex. No references to unused objects may be - left in the mutex or the condition variable. - -- For liveness, cancelation should ensure that the fiber will eventually - continue after cancelation. In this case, cancelation could be triggered - during the `Mutex.lock` operation inside `Mutex.protect` or the - `Condition.wait` operation, when the fiber might be in a suspended state, and - cancelation should then allow the fiber to continue. - -The set of abstractions, `Trigger`, `Computation`, and `Fiber`, work together -[to support cancelation](https://ocaml-multicore.github.io/picos/doc/picos/Picos/index.html#cancelation-in-picos). -Briefly, a fiber corresponds to an independent thread of execution and every -fiber is associated with a computation at all times. When a fiber creates a -trigger in order to await for a signal, it ask the scheduler to suspend the -fiber on the trigger. Assuming the fiber has not forbidden the propagation of -cancelation, which is required, for example, in the implementation of -`Condition.wait` to lock the mutex upon exit, the scheduler must also attach the -trigger to the computation associated with the fiber. If the computation is then -canceled before the trigger is otherwise signaled, the trigger will be signaled -by the cancelation of the computation, and the fiber will be resumed by the -scheduler as canceled. - -This cancelable suspension protocol and its partial implementation designed -around the first-order -[`Trigger.Await`](https://ocaml-multicore.github.io/picos/doc/picos/Picos/Trigger/index.html#extension-Await) -effect creates a clear separation between schedulers and user code running in -fibers and is designed to handle the possibility of a trigger being signaled or -a computation being canceled at any point during the suspension of a fiber. -Schedulers are given maximal freedom to decide which fiber to resume next. As an -example, a scheduler could give priority to canceled fibers — going as far -as moving a fiber already in the ready queue of the scheduler to the front of -the queue at the point of cancelation — based on the assumption that user -code promptly cancels external requests and frees critical resources. 
- -### `Trigger` - -A trigger provides the ability to await for a signal and is perhaps the best -established and least controversial element of the Picos interface. - -Here is an extract from the signature of the -[`Trigger` module](https://ocaml-multicore.github.io/picos/doc/picos/Picos/Trigger/index.html): - - - -```ocaml skip -type t -val create : unit -> t -val await : t -> (exn * Printexc.raw_backtrace) option -val signal : t -> unit -val on_signal : (* for schedulers *) -``` - -The idea is that a fiber may create a trigger, insert it into some shared data -structure, and then call `await` to ask the scheduler to suspend the fiber until -something signals the trigger. When `await` returns an exception with a -backtrace it means that the fiber has been canceled. - -As an example, let's consider the implementation of an `Ivar` or incremental or -single-assignment variable: - -```ocaml skip -type 'a t -val create : unit -> 'a t -val try_fill : 'a t -> 'a -> bool -val read : 'a t -> 'a -``` - -An `Ivar` is created as empty and can be filled with a value once. An attempt to -read an `Ivar` blocks until the `Ivar` is filled. - -Using `Trigger` and `Atomic`, we can represent an `Ivar` as follows: - -```ocaml -type 'a state = - | Filled of 'a - | Empty of Trigger.t list - -type 'a t = 'a state Atomic.t -``` - -The `try_fill` operation is then fairly straightforward to implement: - -```ocaml -let rec try_fill t value = - match Atomic.get t with - | Filled _ -> false - | Empty triggers as before -> - let after = Filled value in - if Atomic.compare_and_set t before after then - begin - List.iter Trigger.signal triggers; (* ! *) - true - end - else - try_fill t value -``` - -The interesting detail above is that after successfully filling an `Ivar`, the -triggers are signaled. This allows the `await` inside the `read` operation to -return: - - - -```ocaml -let rec read t = - match Atomic.get t with - | Filled value -> value - | Empty triggers as before -> - let trigger = Trigger.create () in - let after = Empty (trigger :: triggers) in - if Atomic.compare_and_set t before after then - match Trigger.await trigger with - | None -> read t - | Some (exn, bt) -> - cleanup t trigger; (* ! *) - Printexc.raise_with_backtrace exn bt - else - read t -``` - -An important detail above is that when `await` returns an exception with a -backtrace, meaning that the fiber has been canceled, the `cleanup` operation -(which is omitted) is called to remove the `trigger` from the `Ivar` to avoid -potentially accumulating unbounded numbers of triggers in an empty `Ivar`. - -As simple as it is, the design of `Trigger` is far from arbitrary: - -- First of all, `Trigger` has single-assignment semantics. After being signaled, - a trigger takes a constant amount of space and does not point to any other - heap object. This makes it easier to reason about the behavior and can also - help to avoid leaks or optimize data structures containing triggers, because - it is safe to hold bounded amounts of signaled triggers. - -- The `Trigger` abstraction is essentially first-order, which provides a clear - separation between a scheduler and programs, or fibers, running on a - scheduler. The `await` operation performs the `Await` effect, which passes the - trigger to the scheduler. The scheduler then attaches its own callback to the - trigger using `on_signal`. This way a scheduler does not call arbitrary user - specified code in the `Await` effect handler. 
- -- Separating the creation of a trigger from the `await` operation allows one to - easily insert a trigger into any number of places and allows the trigger to be - potentially concurrently signaled before the `Await` effect is performed in - which case the effect can be skipped entirely. - -- No value is propagated with a trigger. This makes triggers simpler and makes - it less likely for one to e.g. accidentally drop such a value. In many cases, - like with the `Ivar`, there is already a data structure through which values - can be propagated. - -- The `signal` operation gives no indication of whether a fiber will then be - resumed as canceled or not. This gives maximal flexibility for the scheduler - and also makes it clear that cancelation must be handled based on the return - value of `await`. - -### `Computation` - -A `Computation` basically holds the status, i.e. _running_, _returned_, or -_canceled_, of some sort of computation and allows anyone with access to the -computation to attach triggers to it to be signaled in case the computation -stops running. - -Here is an extract from the signature of the -[`Computation` module](https://ocaml-multicore.github.io/picos/doc/picos/Picos/Computation/index.html): - -```ocaml skip -type 'a t - -val create : unit -> 'a t - -val try_attach : 'a t -> Trigger.t -> bool -val detach : 'a t -> Trigger.t -> unit - -val try_return : 'a t -> 'a -> bool -val try_cancel : 'a t -> exn -> Printexc.raw_backtrace -> bool - -val check : 'a t -> unit -val await : 'a t -> 'a -``` - -A `Computation` directly provides a superset of the functionality of the `Ivar` -we sketched in the previous section: - -```ocaml -type 'a t = 'a Computation.t -let create : unit -> 'a t = Computation.create -let try_fill : 'a t -> 'a -> bool = - Computation.try_return -let read : 'a t -> 'a = Computation.await -``` - -However, what really makes the `Computation` useful is the ability to -momentarily attach triggers to it. A `Computation` essentially implements a -specialized lock-free bag of triggers, which allows one to implement dynamic -completion propagation networks. - -The `Computation` abstraction is also designed with both simplicity and -flexibility in mind: - -- Similarly to `Trigger`, `Computation` has single-assignment semantics, which - makes it easier to reason about. - -- Unlike a typical cancelation context of a structured concurrency model, - `Computation` is unopinionated in that it does not impose a specific - hierarchical structure. - -- Anyone may ask to be notified when a `Computation` is completed by attaching - triggers to it and anyone may complete a `Computation`. This makes - `Computation` an omnidirectional communication primitive. - -Interestingly, and unintentionally, it turns out that, given -[the ability to complete two (or more) computations atomically](https://ocaml-multicore.github.io/picos/doc/picos/Picos/Computation/Tx/index.html), -`Computation` is essentially expressive enough to implement the -[event](https://ocaml.org/manual/latest/api/Event.html) abstraction of -[Concurrent ML](https://en.wikipedia.org/wiki/Concurrent_ML). The same features -that make `Computation` suitable for implementing more or less arbitrary dynamic -completion propagation networks make it suitable for implementing Concurrent ML -style abstractions. - -### `Fiber` - -A fiber corresponds to an independent thread of execution. Technically an -effects based scheduler creates a fiber, effectively giving it an identity, as -it runs some function under its handler. 
The `Fiber` abstraction provides a way -to share a proxy identity, and a bit of state, between a scheduler and other -concurrent abstractions. - -Here is an extract from the signature of the -[`Fiber` module](https://ocaml-multicore.github.io/picos/doc/picos/Picos/Fiber/index.html): - -```ocaml skip -type t - -val current : unit -> t - -val create : forbid:bool -> 'a Computation.t -> t -val spawn : t -> (t -> unit) -> unit - -val get_computation : t -> Computation.packed -val set_computation : t -> Computation.packed -> unit - -val has_forbidden : t -> bool -val exchange : t -> forbid:bool -> bool - -module FLS : sig (* ... *) end -``` - -Fibers are where all of the low level bits and pieces of Picos come together, -which makes it difficult to give both meaningful and concise examples, but let's -implement a slightly simplistic structured concurrency mechanism: - -```ocaml skip -type t (* represents a scope *) -val run : (t -> unit) -> unit -val fork : t -> (unit -> unit) -> unit -``` - -The idea here is that `run` creates a "scope" and waits until all of the fibers -forked into the scope have finished. In case any fiber raises an unhandled -exception, or the main fiber that created the scope is canceled, all of the -fibers are canceled and an exception is raised. To keep things slightly simpler, -only the first exception is kept. - -A scope can be represented by a simple record type: - -```ocaml -type t = { - count : int Atomic.t; - inner : unit Computation.t; - ended : Trigger.t; -} -``` - -The idea is that after a fiber is finished, we decrement the count and if it -becomes zero, we finish the computation and signal the main fiber that the scope -has ended: - -```ocaml -let decr t = - let n = Atomic.fetch_and_add t.count (-1) in - if n = 1 then begin - Computation.finish t.inner; - Trigger.signal t.ended - end -``` - -When forking a fiber, we increment the count unless it already was zero, in -which case we raise an error: - -```ocaml -let rec incr t = - let n = Atomic.get t.count in - if n = 0 then invalid_arg "ended"; - if not (Atomic.compare_and_set t.count n (n + 1)) - then incr t -``` - -The fork operation is now relatively straightforward to implement: - -```ocaml -let fork t action = - incr t; - try - let main _ = - match action () with - | () -> decr t - | exception exn -> - let bt = Printexc.get_raw_backtrace () in - Computation.cancel t.inner exn bt; - decr t - in - let fiber = - Fiber.create ~forbid:false t.inner - in - Fiber.spawn fiber main - with canceled_exn -> - decr t; - raise canceled_exn -``` - -The above `fork` first increments the count and then tries to spawn a fiber. The -Picos interface specifies that when `Fiber.spawn` returns normally, the action, -`main`, must be called by the scheduler. This allows us to ensure that the -increment is always matched with a decrement. 
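-
-Before looking at how a scope is set up, here is a hypothetical usage sketch
-of the scope abstraction defined above. The implementation of `run` follows
-below; the sketch assumes the code is run under a Picos compatible scheduler:
-
-```ocaml skip
-let () =
-  run @@ fun scope ->
-  (* Both fibers are forked into the same scope; [run] returns only after
-     both of them have finished. *)
-  fork scope (fun () -> print_endline "first");
-  fork scope (fun () -> print_endline "second")
-```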
- -Setting up a scope is the most complex operation: - - - -```ocaml -let run body = - let count = Atomic.make 1 in - let inner = Computation.create () in - let ended = Trigger.create () in - let t = { count; inner; ended } in - let fiber = Fiber.current () in - let (Packed outer) = - Fiber.get_computation fiber - in - let canceler = - Computation.attach_canceler - ~from:outer - ~into:t.inner - in - match - Fiber.set_computation fiber (Packed t.inner); - body t - with - | () -> join t outer canceler fiber - | exception exn -> - let bt = Printexc.get_raw_backtrace () in - Computation.cancel t.inner exn bt; - join t outer canceler fiber; - Printexc.raise_with_backtrace exn bt -``` - -The `Computation.attach_canceler` operation attaches a special trigger to -propagate cancelation from one computation into another. After the body exits, -`join` - -```ocaml -let join t outer canceler fiber = - decr t; - Fiber.set_computation fiber (Packed outer); - let forbid = Fiber.exchange fiber ~forbid:true in - Trigger.await t.ended |> ignore; - Fiber.set fiber ~forbid; - Computation.detach outer canceler; - Computation.check t.inner; - Fiber.check fiber -``` - -is called to wait for the scoped fibers and restore the state of the main fiber. -An important detail is that propagation of cancelation is forbidden by setting -the `forbid` flag to `true` before the call of `Trigger.await`. This is -necessary to ensure that `join` does not exit, due to the fiber being canceled, -before all of the child fibers have actually finished. Finally, `join` checks -the inner computation and the fiber, which means that an exception will be -raised in case either was canceled. - -The design of `Fiber` includes several key features: - -- The low level design allows one to both avoid unnecessary overheads, such as - allocating a `Computation.t` for every fiber, when implementing simple - abstractions and also to implement more complex behaviors that might prove - difficult given e.g. a higher level design with a built-in notion of - hierarchy. - -- As `Fiber.t` stores the `forbid` flag and the `Computation.t` associated with - the fiber one need not pass those as arguments through the program. This - allows various concurrent abstractions to be given traditional interfaces, - which would otherwise need to be complicated. - -- Effects are relatively expensive. The cost of performing effects can be - amortized by obtaining the `Fiber.t` once and then manipulating it multiple - times. - -- A `Fiber.t` also provides an identity for the fiber. It has so far proven to - be sufficient for most purposes. Fiber local storage, which we do not cover - here, can be used to implement, for example, a unique integer id for fibers. - -### Assumptions - -Now, consider the `Ivar` abstraction presented earlier as an example of the use -of the `Trigger` abstraction. That `Ivar` implementation, as well as the -`Computation` based implementation, works exactly as desired inside the scope -abstraction presented in the previous section. In particular, a blocked -`Ivar.read` can be canceled, either when another fiber in a scope raises an -unhandled exception or when the main fiber of the scope is canceled, which -allows the fiber to continue by raising an exception after cleaning up. In fact, -Picos comes with a number of libraries that all would work quite nicely with the -examples presented here. - -For example, a library provides an operation to run a block with a timeout on -the current fiber. 
One could use it with `Ivar.read` to implement a read
-operation
-[with a timeout](https://ocaml-multicore.github.io/picos/doc/picos_std/Picos_std_structured/Control/index.html#val-terminate_after):
-
-```ocaml
-let read_in ~seconds ivar =
-  Control.terminate_after ~seconds @@ fun () ->
-  Ivar.read ivar
-```
-
-This interoperability is not accidental. For example, the scope abstraction
-basically assumes that one does not use `Fiber.set_computation` in an
-arbitrary, unscoped manner inside the scoped fibers. In fact, an idea behind
-the Picos interface is that it is not supposed to be used by applications at
-all; most higher level libraries should be built on top of libraries that do
-not directly expose elements of the Picos interface.
-
-Perhaps more interestingly, there are obviously limits to what can be achieved
-in an "interoperable" manner. Imagine an operation like
-
-```ocaml skip
-val at_exit : (unit -> unit) -> unit
-```
-
-that would allow one to run an action just before a fiber exits. One could, of
-course, use a custom spawn function that would support such cleanup, but then
-`at_exit` could only be used on fibers spawned through that particular spawn
-function.
-
-### The effects
-
-As mentioned previously, the Picos interface is implemented partially in terms
-of five effects:
-
-```ocaml version>=5.0.0
-type _ Effect.t +=
-  | Await : Trigger.t -> (exn * Printexc.raw_backtrace) option Effect.t
-  | Cancel_after : {
-      seconds : float;
-      exn : exn;
-      bt : Printexc.raw_backtrace;
-      computation : 'a Computation.t;
-    }
-      -> unit Effect.t
-  | Current : t Effect.t
-  | Yield : unit Effect.t
-  | Spawn : {
-      fiber : Fiber.t;
-      main : (Fiber.t -> unit);
-    }
-      -> unit Effect.t
-```
-
-A scheduler must handle those effects as specified in the Picos documentation.
-
-The Picos interface does not, in particular, dictate which ready fibers a
-scheduler must run next and on which domains. Picos also does not require that
-a fiber should stay on the domain on which it was spawned. Abstractions
-implemented against the Picos interface should not assume any particular
-scheduling.
-
-Picos actually comes with
-[a randomized multithreaded scheduler](https://ocaml-multicore.github.io/picos/doc/picos_std/Picos_std_randos/index.html)
-that, after handling any of the effects, picks the next ready fiber randomly.
-It has proven to be useful for testing that abstractions implemented in Picos
-do not make invalid scheduling assumptions.
-
-When a concurrent abstraction requires particular scheduling, it should
-primarily be achieved through the use of synchronization abstractions, as when
-programming with traditional threads. Application programs may, of course,
-pick specific schedulers.
-
-## Status and results
-
-We have an experimental design and implementation of the core Picos interface
-as illustrated in the previous section. We have also created several _Picos
-compatible_
-[sample schedulers](https://ocaml-multicore.github.io/picos/doc/picos_mux/index.html).
-A scheduler, in this context, just multiplexes fibers to run on one or more
-system level threads. We have also created some sample higher-level
-[scheduler agnostic libraries](https://ocaml-multicore.github.io/picos/doc/picos_std/index.html)
-_Implemented in Picos_.
These libraries include -[a library for resource management](https://ocaml-multicore.github.io/picos/doc/picos_std/Picos_std_finally/index.html), -[a library for structured concurrency](https://ocaml-multicore.github.io/picos/doc/picos_std/Picos_std_structured/index.html), -[a library of synchronization primitives](https://ocaml-multicore.github.io/picos/doc/picos_std/Picos_std_sync/index.html), -and -[an asynchronous I/O library](https://ocaml-multicore.github.io/picos/doc/picos_io/Picos_io/index.html). -The synchronization library and the I/O library intentionally mimic libraries -that come with the OCaml distribution. All of the libraries work with all of the -schedulers and all of these _elements_ are interoperable and entirely opt-in. - -What is worth explicitly noting is that all of these schedulers and libraries -are small, independent, and highly modular pieces of code. They all crucially -depend on and are decoupled from each other via the core Picos interface -library. A basic single threaded scheduler implementation requires only about -100 lines of code (LOC). A more complex parallel scheduler might require a -couple of hundred LOC. The scheduler agnostic libraries are similarly small. - -Here is an -[example](https://ocaml-multicore.github.io/picos/doc/picos_std/Picos_std_structured/index.html#a-simple-echo-server-and-clients) -of a concurrent echo server using the scheduler agnostic libraries provided as -samples: - -```ocaml -let run_server server_fd = - Unix.listen server_fd 8; - Flock.join_after begin fun () -> - while true do - let@ client_fd = instantiate Unix.close @@ fun () -> - Unix.accept ~cloexec:true server_fd |> fst - in - Flock.fork begin fun () -> - let@ client_fd = move client_fd in - Unix.set_nonblock client_fd; - let bs = Bytes.create 100 in - let n = - Unix.read client_fd bs 0 (Bytes.length bs) - in - Unix.write client_fd bs 0 n |> ignore - end - done - end -``` - -The -[`Unix`](https://ocaml-multicore.github.io/picos/doc/picos_io/Picos_io/Unix/index.html) -module is provided by the I/O library. The operations on file descriptors on -that module, such as `accept`, `read`, and `write`, use the Picos interface to -suspend fibers allowing other fibers to run while waiting for I/O. The -[`Flock`](https://ocaml-multicore.github.io/picos/doc/picos_std/Picos_std_structured/Flock/index.html) -module comes from the structured concurrency library. A call of -[`join_after`](https://ocaml-multicore.github.io/picos/doc/picos_std/Picos_std_structured/Flock/index.html#val-join_after) -returns only after all the fibers -[`fork`](https://ocaml-multicore.github.io/picos/doc/picos_std/Picos_std_structured/Flock/index.html#val-fork)ed -into the flock have terminated. If the main fiber of the flock is canceled, or -any fiber within the flock raises an unhandled exception, all the fibers within -the flock will be canceled and an exception will be raised on the main fiber of -the flock. The -[`let@`](https://ocaml-multicore.github.io/picos/doc/picos_std/Picos_std_finally/index.html#val-let@), -[`finally`](https://ocaml-multicore.github.io/picos/doc/picos_std/Picos_std_finally/index.html#val-instantiate), -and -[`move`](https://ocaml-multicore.github.io/picos/doc/picos_std/Picos_std_finally/index.html#val-move) -operations come from the resource management library and allow dealing with -resources in a leak-free manner. 
The responsibility to close the `client_fd`
-socket is
-[`move`](https://ocaml-multicore.github.io/picos/doc/picos_std/Picos_std_finally/index.html#val-move)d
-from the main server fiber to a fiber forked to handle that client.
-
-We should emphasize that the above is just an example. The Picos interface
-should be both expressive and efficient enough to support practical
-implementations of many different kinds of concurrent programming models.
-Also, as described previously, the Picos interface does not, for example,
-internally implement structured concurrency. However, the abstractions
-provided by Picos are designed to allow structured and unstructured
-concurrency to be _Implemented in Picos_ as libraries that will then work with
-any _Picos compatible_ scheduler and with other concurrent abstractions.
-
-Finally, an interesting demonstration that Picos fundamentally is an interface
-is
-[a prototype _Picos compatible_ direct style interface to Lwt](https://ocaml-multicore.github.io/picos/doc/picos_lwt/Picos_lwt/index.html).
-The implementation uses shallow effect handlers and defers all scheduling
-decisions to Lwt. Running a program with the scheduler returns a Lwt promise.
-
-## Future work
-
-As mentioned previously, Picos is still an ongoing project and the design is
-considered experimental. We hope that Picos soon matures to serve the needs of
-both the commercial users of OCaml and the community at large.
-
-Previous sections already touched on a couple of updates currently in
-development, such as the support for finalizing resources stored in
-[`FLS`](https://ocaml-multicore.github.io/picos/doc/picos/Picos/Fiber/FLS/index.html)
-and the development of Concurrent ML style abstractions. We also have ongoing
-work to formalize aspects of the Picos interface.
-
-One potential change we will be investigating is whether the
-[`Computation`](https://ocaml-multicore.github.io/picos/doc/picos/Picos/Computation/index.html)
-abstraction should be simplified to only support cancelation.
-
-Some operations, such as
-[`Fiber.current`](https://ocaml-multicore.github.io/picos/doc/picos/Picos/Fiber/index.html#val-current)
-to retrieve the current fiber proxy identity, do not strictly need to be
-implemented as effects. Performing an effect is relatively expensive, and we
-will likely design a mechanism to store a reference to the current fiber in
-some sort of local storage, which could significantly improve the performance
-of certain abstractions, such as checked mutexes, that need to access the
-current fiber.
-
-We also plan to develop a minimalist library for spawning threads over
-domains, much like Moonpool, in a cooperative manner for use by schedulers and
-other libraries.
-
-We also plan to make Domainslib Picos compatible, which will require
-developing a more efficient non-effects based interface for spawning fibers,
-and to investigate making Eio Picos compatible.
-
-We also plan to design and implement asynchronous IO libraries for Picos using
-various system call interfaces for asynchronous IO, such as io_uring.
-
-Finally, Picos is supposed to be an _open ecosystem_. If you have feedback or
-would like to work on something mentioned above, let us know.
-
-## Motivation
-
-There are already several concrete effects-based concurrent programming
-libraries and models being developed. Here is a list of some such publicly
-available projects:[\*](https://xkcd.com/927/)
-
-1.
[Affect](https://github.com/dbuenzli/affect) — "Composable concurrency - primitives with OCaml effects handlers (unreleased)", -2. [Domainslib](https://github.com/ocaml-multicore/domainslib) — - "Nested-parallel programming", -3. [Eio](https://github.com/ocaml-multicore/eio) — "Effects-Based Parallel IO - for OCaml", -4. [Fuseau](https://github.com/c-cube/fuseau) — "Lightweight fiber library for - OCaml 5", -5. [Miou](https://github.com/robur-coop/miou) — "A simple scheduler for OCaml - 5", -6. [Moonpool](https://github.com/c-cube/moonpool) — "Commodity thread pools for - OCaml 5", and -7. [Riot](https://github.com/leostera/riot) — "An actor-model multi-core - scheduler for OCaml 5". - -All of the above libraries are mutually incompatible with each other with the -exception that Domainslib, Eio, and Moonpool implement an earlier -interoperability proposal called -[domain-local-await](https://github.com/ocaml-multicore/domain-local-await/) or -DLA, which allows a concurrent programming library like -[Kcas](https://github.com/ocaml-multicore/kcas/)[\*](https://github.com/ocaml-multicore/kcas/pull/136) -to work on all of those. Unfortunately, DLA, by itself, is known to be -insufficient and the design has not been universally accepted. - -By introducing a scheduler interface and key libraries, such as an IO library, -implemented on top of the interface, we hope that the scarce resources of the -OCaml community are not further divided into mutually incompatible ecosystems -built on top of such mutually incompatible concurrent programming libraries, -while, simultaneously, making it possible to experiment with many kinds of -concurrent programming models. - -It should be -technically[\*](https://www.youtube.com/watch?v=hou0lU8WMgo) possible -for all the previously mentioned libraries, except -[Miou](https://github.com/robur-coop/miou), to - -1. be made - [Picos compatible](https://ocaml-multicore.github.io/picos/doc/picos/index.html#picos-compatible), - i.e. to handle the Picos effects, and -2. have their elements - [implemented in Picos](https://ocaml-multicore.github.io/picos/doc/picos/index.html#implemented-in-picos), - i.e. to make them usable on other Picos-compatible schedulers. - -Please read -[the reference manual](https://ocaml-multicore.github.io/picos/doc/index.html) -for further information. diff --git a/picos_std/_doc-dir/odoc-pages/index.mld b/picos_std/_doc-dir/odoc-pages/index.mld deleted file mode 100644 index a6016200..00000000 --- a/picos_std/_doc-dir/odoc-pages/index.mld +++ /dev/null @@ -1,18 +0,0 @@ -{0 Sample libraries for Picos} - -This package contains sample scheduler agnostic libraries for {!Picos}. Many of -the modules are intentionally designed to mimic modules from the OCaml Stdlib. - -{!modules: - Picos_std_finally - Picos_std_awaitable - Picos_std_event - Picos_std_structured - Picos_std_sync -} - -{^ These libraries are both meant to serve as examples of what can be done and - to also provide practical means for programming with fibers. Hopefully there - will be many more libraries implemented in Picos like these providing - different approaches, patterns, and idioms for structuring concurrent - programs.} diff --git a/picos_std/index.html b/picos_std/index.html deleted file mode 100644 index 3174f82c..00000000 --- a/picos_std/index.html +++ /dev/null @@ -1,2 +0,0 @@ - -index (picos_std.index)

Package picos_std

This package contains sample scheduler agnostic libraries for Picos. Many of the modules are intentionally designed to mimic modules from the OCaml Stdlib.

These libraries are both meant to serve as examples of what can be done and to also provide practical means for programming with fibers. Hopefully there will be many more libraries implemented in Picos like these providing different approaches, patterns, and idioms for structuring concurrent programs.
