diff --git a/moonpool/Moonpool/Fut/index.html b/moonpool/Moonpool/Fut/index.html
index 2643037c..b83b1665 100644
--- a/moonpool/Moonpool/Fut/index.html
+++ b/moonpool/Moonpool/Fut/index.html
@@ -1,2 +1,3 @@
-Fut (moonpool.Moonpool.Fut)

Module Moonpool.Fut

Futures.

A future of type 'a t represents the result of a computation that will yield a value of type 'a.

Typically, the computation is running on a thread pool Runner.t and will proceed on some worker. Once set, a future cannot change. It either succeeds (storing Ok x with x: 'a) or fails (storing Error (exn, bt) with an exception and the corresponding backtrace).

Combinators such as map and join_array can be used to produce futures from other futures (in a monadic way). Some combinators take an on argument to specify a runner on which the intermediate computation takes place; for example, map ~on:pool ~f fut maps the value in fut using the function f, applicatively; the call to f happens on the runner pool (once fut resolves successfully with a value).

type 'a or_error = ('a, Exn_bt.t) result
type 'a t

A future with a result of type 'a.

type 'a promise

A promise, which can be fulfilled exactly once to set the corresponding future

val make : unit -> 'a t * 'a promise

Make a new future with the associated promise.

val on_result : 'a t -> ('a or_error -> unit) -> unit

on_result fut f registers f to be called in the future when fut is set, or calls f immediately if fut is already set.

val on_result_ignore : _ t -> (Exn_bt.t option -> unit) -> unit

on_result_ignore fut f registers f to be called in the future when fut is set; or calls f immediately if fut is already set. It does not pass the result, only a success/error signal.

  • since 0.7
exception Already_fulfilled
val fulfill : 'a promise -> 'a or_error -> unit

Fulfill the promise, setting the future at the same time.

val fulfill_idempotent : 'a promise -> 'a or_error -> unit

Fulfill the promise, setting the future at the same time. Does nothing if the promise is already fulfilled.

val return : 'a -> 'a t

Already settled future, with a result

val fail : exn -> Stdlib.Printexc.raw_backtrace -> _ t

Already settled future, with a failure

val fail_exn_bt : Exn_bt.t -> _ t

Fail from a bundle of exception and backtrace

  • since 0.6
val of_result : 'a or_error -> 'a t
val is_resolved : _ t -> bool

is_resolved fut is true iff fut is resolved.

val peek : 'a t -> 'a or_error option

peek fut returns Some r if fut is currently resolved with r, and None if fut is not resolved yet.

exception Not_ready
  • since 0.2
val get_or_fail : 'a t -> 'a or_error

get_or_fail fut obtains the result from fut if it's fulfilled (i.e. if peek fut returns Some res, get_or_fail fut returns res).

  • since 0.2
val get_or_fail_exn : 'a t -> 'a

get_or_fail_exn fut obtains the result from fut if it's fulfilled, like get_or_fail. If the result is an Error _, the exception inside is re-raised.

  • since 0.2
val is_done : _ t -> bool

Is the future resolved? This is the same as peek fut |> Option.is_some.

  • since 0.2
val is_success : _ t -> bool

Checks if the future is resolved with Ok _ as a result.

  • since 0.6
val is_failed : _ t -> bool

Checks if the future is resolved with Error _ as a result.

  • since 0.6
val raise_if_failed : _ t -> unit

raise_if_failed fut raises e if fut failed with e.

  • since 0.6

Combinators

val spawn : on:Runner.t -> (unit -> 'a) -> 'a t

spawn ~on f runs f () on the given runner on, and returns a future that will hold its result.

val spawn_on_current_runner : (unit -> 'a) -> 'a t

This must be run from inside a runner, and schedules the new task on it as well.

See Runner.get_current_runner to see how the runner is found.

  • since 0.5
  • raises Failure

    if run from outside a runner.

val reify_error : 'a t -> 'a or_error t

reify_error fut turns a failing future into a non-failing one that contains Error (exn, bt). A non-failing future returning x is turned into one containing Ok x.

  • since 0.4
val map : ?on:Runner.t -> f:('a -> 'b) -> 'a t -> 'b t

map ?on ~f fut returns a new future fut2 that resolves with f x if fut resolved with x; and fails with e if fut fails with e or f x raises e.

  • parameter on

    if provided, f runs on the given runner

val bind : ?on:Runner.t -> f:('a -> 'b t) -> 'a t -> 'b t

bind ?on ~f fut returns a new future fut2 that resolves like the future f x if fut resolved with x; and fails with e if fut fails with e or f x raises e.

  • parameter on

    if provided, f runs on the given runner

val bind_reify_error : ?on:Runner.t -> f:('a or_error -> 'b t) -> 'a t -> 'b t

bind_reify_error ?on ~f fut returns a new future fut2 that resolves like the future f (Ok x) if fut resolved with x; and resolves like the future f (Error (exn, bt)) if fut fails with exn and backtrace bt.

  • parameter on

    if provided, f runs on the given runner

  • since 0.4
val join : 'a t t -> 'a t

join fut is fut >>= Fun.id. It joins the inner layer of the future.

  • since 0.2
val both : 'a t -> 'b t -> ('a * 'b) t

both a b succeeds with (x, y) if a succeeds with x and b succeeds with y, and fails if either of them fails.

val choose : 'a t -> 'b t -> ('a, 'b) Either.t t

choose a b succeeds with Left x or Right y if a succeeds with x or b succeeds with y, and fails if both of them fail. If they both succeed, it is not specified which result is used.

val choose_same : 'a t -> 'a t -> 'a t

choose_same a b succeeds with the value of one of a or b if at least one of them succeeds, and fails if both fail. If they both succeed, it is not specified which result is used.

val join_array : 'a t array -> 'a array t

Wait for all the futures in the array. Fails if any future fails.

val join_list : 'a t list -> 'a list t

Wait for all the futures in the list. Fails if any future fails.

module Advanced : sig ... end
val map_list : f:('a -> 'b t) -> 'a list -> 'b list t

map_list ~f l is like join_list @@ List.map f l.

  • since 0.5.1
val wait_array : _ t array -> unit t

wait_array arr waits for all futures in arr to resolve. It discards the individual results of futures in arr. It fails if any future fails.

val wait_list : _ t list -> unit t

wait_list l waits for all futures in l to resolve. It discards the individual results of futures in l. It fails if any future fails.

val for_ : on:Runner.t -> int -> (int -> unit) -> unit t

for_ ~on n f runs f 0, f 1, …, f (n-1) on the runner, and returns a future that resolves when all the tasks have resolved, or fails as soon as one task has failed.

val for_array : on:Runner.t -> 'a array -> (int -> 'a -> unit) -> unit t

for_array ~on arr f runs f 0 arr.(0), …, f (n-1) arr.(n-1) in the runner (where n = Array.length arr), and returns a future that resolves when all the tasks are done, or fails if any of them fails.

  • since 0.2
val for_list : on:Runner.t -> 'a list -> ('a -> unit) -> unit t

for_list ~on l f is like for_array ~on (Array.of_list l) f.

  • since 0.2

Await

NOTE This is only available on OCaml 5.

val await : 'a t -> 'a

await fut suspends the current task until fut is fulfilled, then resumes the task on this same runner (but possibly on a different thread/domain).

  • since 0.3

This must only be run from inside the runner itself. The runner must support Suspend_. NOTE: only on OCaml 5.x

Blocking

val wait_block : 'a t -> 'a or_error

wait_block fut blocks the current thread until fut is resolved, and returns its value.

NOTE: this will monopolize the calling thread until the future resolves. It can also easily cause deadlocks if enough threads in a pool call wait_block on futures running on the same pool or on a pool depending on it.

A good rule to avoid deadlocks is to run this from outside of any pool, or to have an acyclic order between pools where wait_block is only called from a pool on futures evaluated in a pool that comes lower in the hierarchy. If this rule is broken, it is possible for all threads in a pool to wait for futures that can only make progress on these same threads, hence the deadlock.

val wait_block_exn : 'a t -> 'a

Same as wait_block but re-raises the exception if the future failed.

Infix operators

These combinators run on either the current pool (if present), or on the same thread that just fulfilled the previous future if not.

They were previously present as module Infix_local and val infix, but are now simplified.

module Infix : sig ... end
include module type of Infix
  • since 0.5
val (>|=) : 'a t -> ('a -> 'b) -> 'b t
val (>>=) : 'a t -> ('a -> 'b t) -> 'b t
val let+ : 'a t -> ('a -> 'b) -> 'b t
val and+ : 'a t -> 'b t -> ('a * 'b) t
val let* : 'a t -> ('a -> 'b t) -> 'b t
val and* : 'a t -> 'b t -> ('a * 'b) t
module Infix_local = Infix
+Fut (moonpool.Moonpool.Fut)

Module Moonpool.Fut

Futures.

A future of type 'a t represents the result of a computation that will yield a value of type 'a.

Typically, the computation is running on a thread pool Runner.t and will proceed on some worker. Once set, a future cannot change. It either succeeds (storing Ok x with x: 'a) or fails (storing Error (exn, bt) with an exception and the corresponding backtrace).

Combinators such as map and join_array can be used to produce futures from other futures (in a monadic way). Some combinators take an on argument to specify a runner on which the intermediate computation takes place; for example, map ~on:pool ~f fut maps the value in fut using the function f, applicatively; the call to f happens on the runner pool (once fut resolves successfully with a value).
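
For illustration, a minimal sketch of spawning a computation and mapping its result; pool is assumed to be an existing Runner.t created elsewhere (this page does not show how pools are built):

let answer (pool : Moonpool.Runner.t) : string Moonpool.Fut.t =
  let open Moonpool in
  (* the computation runs on a worker of [pool] *)
  let fut = Fut.spawn ~on:pool (fun () -> 6 * 7) in
  (* [f] runs on [pool] once [fut] resolves successfully *)
  Fut.map ~on:pool ~f:string_of_int fut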

type 'a or_error = ('a, Exn_bt.t) result
type 'a t

A future with a result of type 'a.

type 'a promise = private 'a t

A promise, which can be fulfilled exactly once to set the corresponding future. This is a private alias of 'a t since NEXT_RELEASE; previously it was opaque.

val make : unit -> 'a t * 'a promise

Make a new future with the associated promise.

val make_promise : unit -> 'a promise

Same as make but returns a single promise (which can be upcast to a future). This is mostly useful to save memory.

How to upcast to a future:

let prom = Fut.make_promise ();;
let fut : _ Fut.t = (prom :> _ Fut.t)
  • since NEXT_RELEASE
val on_result : 'a t -> ('a or_error -> unit) -> unit

on_result fut f registers f to be called in the future when fut is set, or calls f immediately if fut is already set.

val on_result_ignore : _ t -> (Exn_bt.t option -> unit) -> unit

on_result_ignore fut f registers f to be called in the future when fut is set; or calls f immediately if fut is already set. It does not pass the result, only a success/error signal.

  • since 0.7
exception Already_fulfilled
val fulfill : 'a promise -> 'a or_error -> unit

Fulfill the promise, setting the future at the same time.
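
A short sketch combining make, on_result and fulfill (the callback would run immediately if the future were already set):

let () =
  let fut, promise = Moonpool.Fut.make () in
  Moonpool.Fut.on_result fut (function
    | Ok x -> Printf.printf "got %d\n" x
    | Error _ -> print_endline "failed");
  (* raises Already_fulfilled if the promise was already set;
     use fulfill_idempotent if setting twice should be a no-op *)
  Moonpool.Fut.fulfill promise (Ok 42)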

val fulfill_idempotent : 'a promise -> 'a or_error -> unit

Fulfill the promise, setting the future at the same time. Does nothing if the promise is already fulfilled.

val return : 'a -> 'a t

Already settled future, with a result

val fail : exn -> Stdlib.Printexc.raw_backtrace -> _ t

Already settled future, with a failure

val fail_exn_bt : Exn_bt.t -> _ t

Fail from a bundle of exception and backtrace

  • since 0.6
val of_result : 'a or_error -> 'a t
val is_resolved : _ t -> bool

is_resolved fut is true iff fut is resolved.

val peek : 'a t -> 'a or_error option

peek fut returns Some r if fut is currently resolved with r, and None if fut is not resolved yet.

exception Not_ready
  • since 0.2
val get_or_fail : 'a t -> 'a or_error

get_or_fail fut obtains the result from fut if it's fulfilled (i.e. if peek fut returns Some res, get_or_fail fut returns res).

  • since 0.2
val get_or_fail_exn : 'a t -> 'a

get_or_fail_exn fut obtains the result from fut if it's fulfilled, like get_or_fail. If the result is an Error _, the exception inside is re-raised.

  • since 0.2
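
As another small sketch, inspecting an already settled future with peek and get_or_fail_exn:

let () =
  let fut = Moonpool.Fut.return 42 in
  assert (Moonpool.Fut.is_resolved fut);
  (match Moonpool.Fut.peek fut with
  | Some (Ok x) -> assert (x = 42)
  | _ -> assert false);
  assert (Moonpool.Fut.get_or_fail_exn fut = 42)
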
val is_done : _ t -> bool

Is the future resolved? This is the same as peek fut |> Option.is_some.

  • since 0.2
val is_success : _ t -> bool

Checks if the future is resolved with Ok _ as a result.

  • since 0.6
val is_failed : _ t -> bool

Checks if the future is resolved with Error _ as a result.

  • since 0.6
val raise_if_failed : _ t -> unit

raise_if_failed fut raises e if fut failed with e.

  • since 0.6

Combinators

val spawn : on:Runner.t -> (unit -> 'a) -> 'a t

spawn ~on f runs f () on the given runner on, and returns a future that will hold its result.

val spawn_on_current_runner : (unit -> 'a) -> 'a t

This must be run from inside a runner, and schedules the new task on it as well.

See Runner.get_current_runner to see how the runner is found.

  • since 0.5
  • raises Failure

    if run from outside a runner.

val reify_error : 'a t -> 'a or_error t

reify_error fut turns a failing future into a non-failing one that contains Error (exn, bt). A non-failing future returning x is turned into one containing Ok x.

  • since 0.4
val map : ?on:Runner.t -> f:('a -> 'b) -> 'a t -> 'b t

map ?on ~f fut returns a new future fut2 that resolves with f x if fut resolved with x; and fails with e if fut fails with e or f x raises e.

  • parameter on

    if provided, f runs on the given runner

val bind : ?on:Runner.t -> f:('a -> 'b t) -> 'a t -> 'b t

bind ?on ~f fut returns a new future fut2 that resolves like the future f x if fut resolved with x; and fails with e if fut fails with e or f x raises e.

  • parameter on

    if provided, f runs on the given runner
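
For illustration, a sketch of chaining two future-producing steps with bind, assuming pool is an existing Runner.t:

let two_steps (pool : Moonpool.Runner.t) : int Moonpool.Fut.t =
  let open Moonpool in
  Fut.spawn ~on:pool (fun () -> 10)
  |> Fut.bind ~on:pool ~f:(fun x ->
         (* the second step itself returns a future *)
         Fut.spawn ~on:pool (fun () -> x * 2))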

val bind_reify_error : ?on:Runner.t -> f:('a or_error -> 'b t) -> 'a t -> 'b t

bind_reify_error ?on ~f fut returns a new future fut2 that resolves like the future f (Ok x) if fut resolved with x; and resolves like the future f (Error (exn, bt)) if fut fails with exn and backtrace bt.

  • parameter on

    if provided, f runs on the given runner

  • since 0.4
val join : 'a t t -> 'a t

join fut is fut >>= Fun.id. It joins the inner layer of the future.

  • since 0.2
val both : 'a t -> 'b t -> ('a * 'b) t

both a b succeeds with (x, y) if a succeeds with x and b succeeds with y, and fails if either of them fails.

val choose : 'a t -> 'b t -> ('a, 'b) Either.t t

choose a b succeeds with Left x or Right y if a succeeds with x or b succeeds with y, and fails if both of them fail. If they both succeed, it is not specified which result is used.

val choose_same : 'a t -> 'a t -> 'a t

choose_same a b succeeds with the value of one of a or b if at least one of them succeeds, and fails if both fail. If they both succeed, it is not specified which result is used.

val join_array : 'a t array -> 'a array t

Wait for all the futures in the array. Fails if any future fails.

val join_list : 'a t list -> 'a list t

Wait for all the futures in the list. Fails if any future fails.
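
A sketch that fans out one task per list element and collects the results with join_list; pool is assumed to be an existing Runner.t:

let squares (pool : Moonpool.Runner.t) (xs : int list) : int list Moonpool.Fut.t =
  xs
  |> List.map (fun x -> Moonpool.Fut.spawn ~on:pool (fun () -> x * x))
  |> Moonpool.Fut.join_list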

module Advanced : sig ... end
val map_list : f:('a -> 'b t) -> 'a list -> 'b list t

map_list ~f l is like join_list @@ List.map f l.

  • since 0.5.1
val wait_array : _ t array -> unit t

wait_array arr waits for all futures in arr to resolve. It discards the individual results of futures in arr. It fails if any future fails.

val wait_list : _ t list -> unit t

wait_list l waits for all futures in l to resolve. It discards the individual results of futures in l. It fails if any future fails.

val for_ : on:Runner.t -> int -> (int -> unit) -> unit t

for_ ~on n f runs f 0, f 1, …, f (n-1) on the runner, and returns a future that resolves when all the tasks have resolved, or fails as soon as one task has failed.
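
For illustration, a sketch of for_ filling an array from n parallel tasks, with pool again assumed to be an existing Runner.t:

let fill_squares (pool : Moonpool.Runner.t) (arr : int array) : unit Moonpool.Fut.t =
  (* each index is handled by its own task on [pool] *)
  Moonpool.Fut.for_ ~on:pool (Array.length arr) (fun i -> arr.(i) <- i * i)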

val for_array : on:Runner.t -> 'a array -> (int -> 'a -> unit) -> unit t

for_array ~on arr f runs f 0 arr.(0), …, f (n-1) arr.(n-1) in the runner (where n = Array.length arr), and returns a future that resolves when all the tasks are done, or fails if any of them fails.

  • since 0.2
val for_list : on:Runner.t -> 'a list -> ('a -> unit) -> unit t

for_list ~on l f is like for_array ~on (Array.of_list l) f.

  • since 0.2

Await

NOTE This is only available on OCaml 5.

val await : 'a t -> 'a

await fut suspends the current task until fut is fulfilled, then resumes the task on this same runner (but possibly on a different thread/domain).

  • since 0.3

This must only be run from inside the runner itself. The runner must support Suspend_. NOTE: only on OCaml 5.x
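
A sketch of await used from inside a task running on the runner (OCaml 5 only); pool is assumed to be an existing Runner.t:

let sum_two (pool : Moonpool.Runner.t) : int Moonpool.Fut.t =
  let open Moonpool in
  Fut.spawn ~on:pool (fun () ->
      let a = Fut.spawn ~on:pool (fun () -> 1) in
      let b = Fut.spawn ~on:pool (fun () -> 2) in
      (* suspends this task until [a] and [b] are fulfilled *)
      Fut.await a + Fut.await b)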

Blocking

val wait_block : 'a t -> 'a or_error

wait_block fut blocks the current thread until fut is resolved, and returns its value.

NOTE: this will monopolize the calling thread until the future resolves. It can also easily cause deadlocks if enough threads in a pool call wait_block on futures running on the same pool or on a pool depending on it.

A good rule to avoid deadlocks is to run this from outside of any pool, or to have an acyclic order between pools where wait_block is only called from a pool on futures evaluated in a pool that comes lower in the hierarchy. If this rule is broken, it is possible for all threads in a pool to wait for futures that can only make progress on these same threads, hence the deadlock.
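
For illustration, a sketch of wait_block called from outside any pool (e.g. the main thread), which follows the rule above; pool is assumed to be an existing Runner.t:

let main (pool : Moonpool.Runner.t) : unit =
  let fut = Moonpool.Fut.spawn ~on:pool (fun () -> "hello") in
  match Moonpool.Fut.wait_block fut with
  | Ok s -> print_endline s
  | Error _ -> print_endline "computation failed"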

val wait_block_exn : 'a t -> 'a

Same as wait_block but re-raises the exception if the future failed.

Infix operators

These combinators run on either the current pool (if present), or on the same thread that just fulfilled the previous future if not.

They were previously present as module Infix_local and val infix, but are now simplified.

module Infix : sig ... end
include module type of Infix
  • since 0.5
val (>|=) : 'a t -> ('a -> 'b) -> 'b t
val (>>=) : 'a t -> ('a -> 'b t) -> 'b t
val let+ : 'a t -> ('a -> 'b) -> 'b t
val and+ : 'a t -> 'b t -> ('a * 'b) t
val let* : 'a t -> ('a -> 'b t) -> 'b t
val and* : 'a t -> 'b t -> ('a * 'b) t
module Infix_local = Infix
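
As a final sketch, the let-operators applied to two futures; the callbacks run on the current pool if there is one:

let add (a : int Moonpool.Fut.t) (b : int Moonpool.Fut.t) : int Moonpool.Fut.t =
  let open Moonpool.Fut.Infix in
  let+ x = a
  and+ y = b in
  x + y
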
diff --git a/picos/Picos/Fiber/index.html b/picos/Picos/Fiber/index.html
index 8ef2611b..b343ffbc 100644
--- a/picos/Picos/Fiber/index.html
+++ b/picos/Picos/Fiber/index.html
@@ -19,7 +19,7 @@ else 'x -> 'y -> (Trigger.t -> 'x -> 'y -> unit) ->
- bool

try_suspend fiber trigger x y resume tries to suspend the fiber to wait for the trigger to be signaled. If the result is false, then the trigger is guaranteed to be in the signaled state and the fiber should eventually be resumed. If the result is true, then the fiber was suspended, meaning that the trigger will have had the resume action attached to it and the trigger has potentially been attached to the computation of the fiber.

val unsuspend : t -> Trigger.t -> bool

unsuspend fiber trigger makes sure that the trigger will not be attached to the computation of the fiber. Returns false in case the fiber has been canceled and propagation of cancelation is not forbidden. Otherwise returns true.

⚠️ The trigger must be in the signaled state!

val resume :
+ bool

try_suspend fiber trigger x y resume tries to suspend the fiber to wait for the trigger to be signaled. If the result is false, then the trigger is guaranteed to be in the signaled state and the fiber should eventually be resumed. If the result is true, then the fiber was suspended, meaning that the trigger will have had the resume action attached to it and the trigger has potentially been attached to the computation of the fiber.

val unsuspend : t -> Trigger.t -> bool

unsuspend fiber trigger makes sure that the trigger will not be attached to the computation of the fiber. Returns false in case the fiber has been canceled and propagation of cancelation is not forbidden. Otherwise returns true.

val resume : t -> ((exn * Stdlib.Printexc.raw_backtrace) option, 'r) Stdlib.Effect.Deep.continuation ->

diff --git a/picos/_doc-dir/odoc-pages/index.mld b/picos/_doc-dir/odoc-pages/index.mld
index b06ab18a..c061e20f 100644
--- a/picos/_doc-dir/odoc-pages/index.mld
+++ b/picos/_doc-dir/odoc-pages/index.mld
@@ -1,5 +1,16 @@
 {0 Picos — Interoperable effects based concurrency}
+This packages contains the core {!Picos} interface library and auxiliary
+libraries for dealing with the OCaml multithreading architecture.
+
+{1 Libraries}
+
+{!modules:
+  Picos
+  Picos_domain
+  Picos_thread
+}
+
 {1 Introduction}
 {!Picos} is a {{:https://en.wikipedia.org/wiki/Systems_programming} systems
@@ -126,14 +137,6 @@ it is perhaps illuminating to explicitly mention some of those:
 Let's build an incredible ecosystem of interoperable concurrent programming
 libraries and frameworks!
-{1 Libraries}
-
-{!modules:
-  Picos
-  Picos_domain
-  Picos_thread
-}
-
 {1 Conventions}
 Many operation in the Picos libraries use
diff --git a/picos/index.html b/picos/index.html
index b45b2116..4a95bfaa 100644
--- a/picos/index.html
+++ b/picos/index.html
@@ -1,2 +1,2 @@
-index (picos.index)

Package picos

Introduction

Picos is a systems programming interface between effects based schedulers and concurrent abstractions. Picos is designed to enable an ecosystem of interoperable elements of effects based cooperative concurrent programming models such as

If you are the author of an application level concurrent programming library or framework, then Picos should not fundamentally be competing with your work. However, Picos and libraries built on top of Picos probably do have overlap with your work and making your work Picos compatible may offer benefits:

  • You may find it useful that the core of Picos provides parallelism safe building blocks for cancelation, which is a particularly tricky problem to get right.
  • You may find it useful that you don't have to reinvent many of the basic communication and synchronization abstractions such as mutexes and condition variables, promises, concurrent bounded queues, channels, and what not.
  • You may benefit from further non-trivial libraries, such as IO libraries, that you don't have to reimplement.
  • Potential users of your work may be reassured and benefit from the ability to mix-and-match your work with other Picos compatible libraries and frameworks.

Of course, interoperability does have some costs. It takes time to understand Picos and it takes time to implement Picos compatibility. Implementing your programming model elements in terms of the Picos interface may not always give ideal results. To address concerns such as those, a conscious effort has been made to keep Picos as minimal and unopinionated as possible.

Interoperability

Picos is essentially an interface between schedulers and concurrent abstractions. Two phrases, Picos compatible and Implemented in Picos, are used to describe the opposing sides of this contract.

Picos compatible

The idea is that schedulers provide their own handlers for the Picos effects. By handling the Picos effects a scheduler allows any libraries built on top of the Picos interface to be used with the scheduler. Such a scheduler is then said to be Picos compatible.

Implemented in Picos

A scheduler is just one element of a concurrent programming model. Separately from making a scheduler Picos compatible, one may choose to implement other elements of the programming model, e.g. a particular approach to structuring concurrency or a particular collection of communication and synchronization primitives, in terms of the Picos interface. Such scheduler agnostic elements can then be used on any Picos compatible scheduler and are said to be Implemented in Picos.

Design goals and principles

The core of Picos is designed and developed with various goals and principles in mind.

  • Simple: Picos should be kept as simple as possible.
  • Minimal: Picos should be kept minimal. The dependency footprint should be as small as possible. Convenience features should be built on top of the interface.
  • Safe: Picos should be designed with safety in mind. The implementation must be data race free. The interface should promote and always allow proper resource management.
  • Unopinionated: Picos should not make strong design choices that are controversial.
  • Flexible: Picos should allow higher level libraries as much freedom as possible to make their own design choices.

The documentation of the concepts includes design rationale for some of the specific ideas behind their detailed design.

Constraints Liberate, Liberties Constrain

Picos aims to be unopinionated and flexible enough to allow higher level libraries to provide many different kinds of concurrent programming models. While it is impossible to give a complete list of what Picos does not dictate, it is perhaps illuminating to explicitly mention some of those:

  • Picos does not implement capability-based security. Higher level libraries with or without capabilities may be built on top of Picos.
  • Picos never cancels computations implicitly. Higher level libraries may decide when cancelation should be allowed to take effect.
  • Picos does not dictate which fiber should be scheduled next after a Picos effect. Different schedulers may freely use desired data structures (queues, work-stealing deques, stacks, priority queues, ...) and, after handling any Picos effect, freely decide which fiber to run next.
  • Picos does not dictate how fibers should be managed. It is possible to implement both unstructured and structured concurrent programming models on top of Picos.
  • Picos does not dictate which mechanisms applications should use for communication and synchronization. It is possible to build many different kinds of communication and synchronization mechanisms on top of Picos including mutexes and condition variables, STMs, asynchronous and synchronous message passing, actors, and more.
  • Picos does not dictate that there should be a connection between the scheduler and other elements of the concurrent programming model. It is possible to provide those separately and mix-and-match.
  • Picos does not dictate which library to use for IO. It is possible to build direct-style asynchronous IO libraries on top of Picos that can then be used with any Picos compatible schedulers or concurrent programming models.

Let's build an incredible ecosystem of interoperable concurrent programming libraries and frameworks!

Libraries

  • Picos A systems programming interface between effects based schedulers and concurrent abstractions.
  • Picos_domain Minimalistic domain API available both on OCaml 5 and on OCaml 4.
  • Picos_thread Minimalistic thread API available with or without threads.posix.

Conventions

Many operations in the Picos libraries use non-blocking algorithms. Unless explicitly specified otherwise,

  • non-blocking operations in Picos are atomic or strictly linearizable (i.e. linearizable and serializable), and
  • lock-free operations in Picos are designed to avoid having competing operations of widely different complexities, which should make such operations much less prone to starvation.

Package info

changes-files
license-files
readme-files
+index (picos.index)

Package picos

This package contains the core Picos interface library and auxiliary libraries for dealing with the OCaml multithreading architecture.

Libraries

  • Picos A systems programming interface between effects based schedulers and concurrent abstractions.
  • Picos_domain Minimalistic domain API available both on OCaml 5 and on OCaml 4.
  • Picos_thread Minimalistic thread API available with or without threads.posix.

Introduction

Picos is a systems programming interface between effects based schedulers and concurrent abstractions. Picos is designed to enable an ecosystem of interoperable elements of effects based cooperative concurrent programming models such as

If you are the author of an application level concurrent programming library or framework, then Picos should not fundamentally be competing with your work. However, Picos and libraries built on top of Picos probably do have overlap with your work and making your work Picos compatible may offer benefits:

  • You may find it useful that the core of Picos provides parallelism safe building blocks for cancelation, which is a particularly tricky problem to get right.
  • You may find it useful that you don't have to reinvent many of the basic communication and synchronization abstractions such as mutexes and condition variables, promises, concurrent bounded queues, channels, and what not.
  • You may benefit from further non-trivial libraries, such as IO libraries, that you don't have to reimplement.
  • Potential users of your work may be reassured and benefit from the ability to mix-and-match your work with other Picos compatible libraries and frameworks.

Of course, interoperability does have some costs. It takes time to understand Picos and it takes time to implement Picos compatibility. Implementing your programming model elements in terms of the Picos interface may not always give ideal results. To address concerns such as those, a conscious effort has been made to keep Picos as minimal and unopinionated as possible.

Interoperability

Picos is essentially an interface between schedulers and concurrent abstractions. Two phrases, Picos compatible and Implemented in Picos, are used to describe the opposing sides of this contract.

Picos compatible

The idea is that schedulers provide their own handlers for the Picos effects. By handling the Picos effects a scheduler allows any libraries built on top of the Picos interface to be used with the scheduler. Such a scheduler is then said to be Picos compatible.

Implemented in Picos

A scheduler is just one element of a concurrent programming model. Separately from making a scheduler Picos compatible, one may choose to implement other elements of the programming model, e.g. a particular approach to structuring concurrency or a particular collection of communication and synchronization primitives, in terms of the Picos interface. Such scheduler agnostic elements can then be used on any Picos compatible scheduler and are said to be Implemented in Picos.

Design goals and principles

The core of Picos is designed and developed with various goals and principles in mind.

  • Simple: Picos should be kept as simple as possible.
  • Minimal: Picos should be kept minimal. The dependency footprint should be as small as possible. Convenience features should be built on top of the interface.
  • Safe: Picos should be designed with safety in mind. The implementation must be data race free. The interface should promote and always allow proper resource management.
  • Unopinionated: Picos should not make strong design choices that are controversial.
  • Flexible: Picos should allow higher level libraries as much freedom as possible to make their own design choices.

The documentation of the concepts includes design rationale for some of the specific ideas behind their detailed design.

Constraints Liberate, Liberties Constrain

Picos aims to be unopinionated and flexible enough to allow higher level libraries to provide many different kinds of concurrent programming models. While it is impossible to give a complete list of what Picos does not dictate, it is perhaps illuminating to explicitly mention some of those:

  • Picos does not implement capability-based security. Higher level libraries with or without capabilities may be built on top of Picos.
  • Picos never cancels computations implicitly. Higher level libraries may decide when cancelation should be allowed to take effect.
  • Picos does not dictate which fiber should be scheduled next after a Picos effect. Different schedulers may freely use desired data structures (queues, work-stealing deques, stacks, priority queues, ...) and, after handling any Picos effect, freely decide which fiber to run next.
  • Picos does not dictate how fibers should be managed. It is possible to implement both unstructured and structured concurrent programming models on top of Picos.
  • Picos does not dictate which mechanisms applications should use for communication and synchronization. It is possible to build many different kinds of communication and synchronization mechanisms on top of Picos including mutexes and condition variables, STMs, asynchronous and synchronous message passing, actors, and more.
  • Picos does not dictate that there should be a connection between the scheduler and other elements of the concurrent programming model. It is possible to provide those separately and mix-and-match.
  • Picos does not dictate which library to use for IO. It is possible to build direct-style asynchronous IO libraries on top of Picos that can then be used with any Picos compatible schedulers or concurrent programming models.

Let's build an incredible ecosystem of interoperable concurrent programming libraries and frameworks!

Conventions

Many operations in the Picos libraries use non-blocking algorithms. Unless explicitly specified otherwise,

  • non-blocking operations in Picos are atomic or strictly linearizable (i.e. linearizable and serializable), and
  • lock-free operations in Picos are designed to avoid having competing operations of widely different complexities, which should make such operations much less prone to starvation.

Package info

changes-files
license-files
readme-files
diff --git a/picos_std/Picos_std_sync__Awaitable/index.html b/picos_std/Picos_std_sync__Awaitable/index.html
new file mode 100644
index 00000000..25cab728
--- /dev/null
+++ b/picos_std/Picos_std_sync__Awaitable/index.html
@@ -0,0 +1,2 @@
+
+Picos_std_sync__Awaitable (picos_std.Picos_std_sync__Awaitable)

Module Picos_std_sync__Awaitable

This module is hidden.