moonpool/src/core/fifo_pool.mli
Simon Cruanes 9b3c75124e
simon/move to picos (#30)
2024-09-04 12:04:27 -04:00


(** A simple thread pool in FIFO order.

    FIFO: first-in, first-out. Tasks are put into a queue, and worker
    threads pull them out of the queue at the other end. Since this uses
    a single blocking queue to manage tasks, it's very simple and
    reliable. The number of worker threads is fixed, but they are spread
    over several domains to enable parallelism.

    This can be useful for latency-sensitive applications (e.g. as a
    pool of workers for network servers). Work-stealing pools might have
    higher throughput, but they're very unfair to some tasks; by
    contrast, here, older tasks have priority over younger tasks.

    @since 0.5 *)
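
(* A hypothetical usage sketch, not part of the original interface: with
   [~num_threads:1], tasks run strictly in submission order. It assumes
   the [Runner] API included below provides [run_async] and [shutdown]
   (the latter waiting for queued tasks to complete), as in moonpool's
   [Runner] module.
   {[
     let () =
       let pool = Fifo_pool.create ~num_threads:1 () in
       for i = 1 to 3 do
         Fifo_pool.run_async pool (fun () -> Printf.printf "task %d\n" i)
       done;
       Fifo_pool.shutdown pool
     (* prints "task 1", "task 2", "task 3", in this order *)
   ]} *)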
include module type of Runner

type ('a, 'b) create_args =
  ?on_init_thread:(dom_id:int -> t_id:int -> unit -> unit) ->
  ?on_exit_thread:(dom_id:int -> t_id:int -> unit -> unit) ->
  ?on_exn:(exn -> Printexc.raw_backtrace -> unit) ->
  ?around_task:(t -> 'b) * (t -> 'b -> unit) ->
  ?num_threads:int ->
  ?name:string ->
  'a
(** Arguments used in {!create}. See {!create} for explanations. *)
val create : (unit -> t, _) create_args
(** [create ()] makes a new thread pool.
    @param on_init_thread called at the beginning of each new thread in
      the pool.
    @param num_threads number of worker threads in the pool. See
      {!Pool.create_args}. The default is
      [Domain.recommended_domain_count ()], i.e. one worker per CPU
      core. On OCaml 4 the default is [4] (since there is only one
      domain).
    @param on_exit_thread called at the end of each worker thread in the
      pool.
    @param around_task a pair of [before, after] functions run around
      each task. See {!Pool.create_args}.
    @param name name for the pool, used in tracing (since 0.6) *)
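
(* A sketch of the [around_task] hooks (hypothetical example, not from
   the source): [before] runs on the worker just before each task, and
   its result is handed to [after] once the task finishes, which makes
   per-task timing easy. Assumes the [unix] library for
   [Unix.gettimeofday].
   {[
     let timed_pool =
       Fifo_pool.create
         ~around_task:
           ( (fun _pool -> Unix.gettimeofday ()),
             fun _pool t0 ->
               Printf.printf "task took %.3fs\n" (Unix.gettimeofday () -. t0)
           )
         ()
   ]} *)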
val with_ : (unit -> (t -> 'a) -> 'a, _) create_args
(** [with_ () f] calls [f pool], where [pool] is obtained via {!create}.
    When [f pool] returns or fails, [pool] is shut down and its
    resources are released.

    Most parameters are the same as in {!create}. *)
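
(* Sketch of scoped use (hypothetical; assumes the included [Runner] API
   provides [run_wait_block] to submit a task and block until its result
   is available):
   {[
     let () =
       let res =
         Fifo_pool.with_ ~num_threads:2 () (fun pool ->
             Fifo_pool.run_wait_block pool (fun () -> 21 * 2))
       in
       Printf.printf "result: %d\n" res
   ]} *)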
(**/**)

module Private_ : sig
  type worker_state

  val worker_ops : worker_state Worker_loop_.ops

  val create_single_threaded_state :
    thread:Thread.t ->
    ?on_exn:(exn -> Printexc.raw_backtrace -> unit) ->
    unit ->
    worker_state

  val runner_of_state : worker_state -> Runner.t
end

(**/**)